We often look for step-by-step instructions to do something we want. If we want a food recipe, for instance, we search websites like WikiHow, which give detailed instructions. Now it's time for robots to do the same.
In Germany, a robot named PR2 is learning to cook pancakes and pizzas by reading recipes on WikiHow. It's part of a European project called RoboHow.
The RoboHow project looks for ways to teach robots to understand language. That could let people instruct robots to perform unfamiliar tasks in plain language, rather than writing a new program for each task.
With RoboHow, a person would simply tell the robot what to do.
Turning a high-level description into specific actions requires a great deal of general knowledge, and teaching robots that knowledge is a demanding task. So far, the project has succeeded in converting a few WikiHow instructions into useful behavior, both in simulation and on real robots.
If the project proves successful, robots could soon become personal helpers that listen to your commands and act on them. Michael Beetz, head of the Artificial Intelligence Institute at the University of Bremen in northern Germany, where the RoboHow project is based, says, "If you have a robot in a factory, you want to say 'take the screw and put it into the nut and fasten the nut.'" He adds, "You want the robot to generate the parameters automatically out of the semantic description of objects."
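The idea Beetz describes can be sketched in a few lines: concrete action parameters are derived from a stored semantic description of an object rather than hard-coded for each task. This is a minimal illustration, not the RoboHow system's actual representation; the object properties and parameter names here are assumptions.

```python
# Illustrative semantic descriptions of objects a robot might handle.
OBJECTS = {
    "screw": {"grasp_width_mm": 6,  "fragile": False},
    "egg":   {"grasp_width_mm": 45, "fragile": True},
}

def grasp_parameters(object_name):
    """Derive concrete grasp parameters from an object's semantic description."""
    props = OBJECTS[object_name]
    return {
        "width_mm": props["grasp_width_mm"],
        # A fragile object gets a gentler maximum grip force.
        "max_force_n": 5 if props["fragile"] else 20,
    }

print(grasp_parameters("screw"))  # firm grip suited to a rigid part
print(grasp_parameters("egg"))    # gentler grip for a fragile object
```

The point of the sketch is that the command "take the screw" never mentions grip width or force; the robot fills those in from what it knows about screws.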
In one experiment, the researchers trained PR2 robots to perform simple lab tasks, such as handling chemicals.
When a robot learns the standard procedures or specific instructions related to an activity, its understanding is added to an online database called Open Ease, where other robots can access it. The procedures and instructions are encoded in a machine-readable language similar to the one used in the Semantic Web project.
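Semantic Web formats typically represent knowledge as subject-predicate-object triples. As a rough sketch of what a machine-readable recipe step might look like in that style, here is one pouring step expressed as triples; the vocabulary is invented for illustration and is not Open Ease's actual schema.

```python
# One step of a pancake recipe, broken into subject-predicate-object
# triples in the spirit of Semantic Web knowledge representations.
step = [
    ("step1", "isA",           "PouringAction"),
    ("step1", "objectActedOn", "pancake_mix"),
    ("step1", "toLocation",    "frying_pan"),
    ("step1", "nextStep",      "step2"),
]

def describe(triples):
    """Render triples as simple human-readable statements."""
    return ["{} {} {}".format(s, p, o) for s, p, o in triples]

for line in describe(step):
    print(line)
```

Because every fact has the same three-part shape, a robot (or another robot reading the shared database) can query the structure without parsing free-form English.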
The researchers are also using other techniques to help robots learn basic tasks, including having them watch videos and study virtual-reality data of humans performing the same tasks.
Yet even a simple variation in a task can be a challenge for robots, though many organizations, including the e-commerce giant Amazon, are working to develop better robot grasping (see "Help Wanted: Robot to Fulfill Amazon Orders"). Natural language processing is also very challenging, but progress is being made here, too (see "Teaching Machines to Understand Us").
Siddhartha Srinivasa, a professor at the Robotics Institute at Carnegie Mellon University, says connecting language with action is hugely important but also very difficult. “I have a four-year-old and often face disaster when I try to instruct him to assemble a toy,” Srinivasa says. “Succeeding in this domain will require a tight integration of natural language, grounding the understanding via perception, and planning complex actions via manipulation algorithms.”