<p>Honeybees are incredible in many ways. Zipping along at over twenty kilometres per hour, they fly a whopping 90,000 kilometres, a distance equal to going around the world 2.2 times, to make half a kilogram of honey. There’s a strategy behind this success: instead of all the workers aimlessly wandering in search of pollen and nectar, a forager bee first ventures out in pursuit of food. When she finds a bounty, she returns to the hive and recruits an army of her kin to bring it all back.</p><p>But there’s a catch: honeybees can’t talk, so how does the forager tell the others the story of her expedition? She uses Tanzsprache, or ‘dance language’. By moving her body in a distinctive zig-zag and tracing a path that resembles the number 8, or sometimes a circle, she reveals all they need to know. Her moves, waggling her abdomen and orienting her body relative to the Sun’s position, convey the direction, distance and quality of the treasure she has found.</p><p>Scientists call these moves the ‘waggle dance’. The other workers grasp her signal, do the math, and set off to bring back the nectar and pollen.</p><p>The honeybee’s waggle dance has stirred the curiosity of many thinkers, from Aristotle to Karl von Frisch, the ethologist who translated the dance-speak into human-speak. It has also inspired two robotics researchers at the Indian Institute of Science, Bengaluru. In a recent study, published in the journal <span class="italic">Frontiers in Robotics and AI</span>, Prof Abhra Roy Chowdhury and his student Kaustubh Joshi propose a new method of communication between robots that uses gestures alone. They have designed robots that use robotic ‘eyes’ to identify the movements of other robots and carry out a task.</p><p>“Our research is inspired from [how] bees communicate in nature,” says Chowdhury, who has been developing nature-inspired technologies for over a decade.
In the past, his lab successfully developed fish-inspired robotic underwater vehicles and turtle-inspired water pipeline inspectors. With the current work, he says, the motivation was to “integrate waggle dance into gesture tracking”: just as bees read the forager’s gestures and head out in the needed direction for a specific distance, so do the robots.</p><p>Today, most communication between robots relies on a network, with signals transmitted and received over it, akin to Wi-Fi or Bluetooth. The robots designed by Chowdhury and Joshi, however, need no such network: one robot can ‘see’, using cameras, the gestures shown by another robot or a human, understand what they mean, and act accordingly. It’s similar to how you might wave your hands to signal your presence to a friend at a distance without uttering a word. Since they need no network, these robots are useful even in places where coverage is insufficient or intermittent, such as military, space, or rescue operations.</p><p>The researchers designed two robots that interact with each other. The package-handling robot carries packages and can detect gestures shown by humans and other robots; it is akin to the worker bees that read the signals in the waggle dance and act accordingly. The messenger robot, on the other hand, can detect only human gestures. Like the forager bee, it performs the ‘waggle dance’, moving along a particular direction and path that conveys information to the package-handling robot, which it also supervises.</p><p class="CrossHead Rag">Using pre-coded body poses</p><p>To understand human gestures, the messenger robot relies on pre-coded human body poses, like raising the right hand with an open palm to signal ‘stop’, or raising both hands with open palms to signal ‘follow’.
The robot can identify a handful of such ‘commands’ based on the position of the human’s upper body, and translates each into a different orientation and shape to move in. The package-handling robot, equipped with depth-sensing cameras that perceive distance much like 3D glasses do, recognises the messenger’s movements and responds by moving the required distance in a particular direction.</p><p>“Past works [on vision-based robots] have mostly focused on just the shape identification,” says Chowdhury. “Our study also identifies the orientation of the shape [of the messenger robot], which subsequently provides the direction and time taken to trace the shape corresponding to the distance to be travelled by the [package-handling] robot.”</p><p>The researchers tested how accurately the robots communicated, through both real-world experiments and simulations. In both settings, the robots succeeded in more than nine out of ten tries. Encouraged by these numbers, the researchers say the robots could be deployed where network servers are prone to damage from moisture, heat and dust, and are hence unreliable, such as industrial warehouses, as well as in areas that humans cannot enter but can gesture to the robots from a distance. They also claim that their communication system is scalable: it would still work with more than two robots interacting with each other.</p><p>“We are working towards making this technology more accurate and robust to handle more complex messages and tasks,” says Chowdhury, before such robots can be made available in the market. Nevertheless, he hopes that their work “enhances the usability and consumer trust in robotics technology”.</p>
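<p>The pipeline the article describes, pre-coded poses on one side and a dance whose orientation encodes direction and whose trace time encodes distance on the other, can be sketched in a few lines of Python. This is an illustrative sketch, not the authors’ code: the pose names, the command set, and the speed constant are assumptions made for the example.</p>

```python
# Illustrative sketch of gesture-based robot communication, loosely
# modelled on the bee-inspired scheme described above. All names and
# constants here are assumptions for illustration only.

# Human upper-body poses the messenger robot is assumed to recognise,
# mapped to commands.
POSE_COMMANDS = {
    "right_hand_open_palm": "stop",
    "both_hands_open_palms": "follow",
}


def decode_human_gesture(pose: str) -> str:
    """Translate a detected body pose into a command; unknown poses
    are ignored."""
    return POSE_COMMANDS.get(pose, "ignore")


def decode_dance(orientation_deg: float, trace_seconds: float,
                 speed_m_per_s: float = 0.2) -> dict:
    """Decode a messenger robot's 'dance': the orientation of the
    traced shape gives the heading, and the time taken to trace it is
    assumed proportional to the distance to travel."""
    return {
        "heading_deg": orientation_deg % 360,          # direction
        "distance_m": trace_seconds * speed_m_per_s,   # distance
    }


if __name__ == "__main__":
    print(decode_human_gesture("right_hand_open_palm"))  # stop
    print(decode_dance(450.0, 10.0))  # heading 90 degrees, 2 metres
```

<p>The point of the sketch is the division of labour: pose recognition reduces to a lookup, while the dance decoding turns two observable quantities (orientation and trace time) into a heading and a distance, which is essentially what von Frisch showed the bees’ waggle dance does.</p>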