Google for Robots
Here’s an interesting article from MIT Technology Review.
The general view is that an instruction such as “make a cup of coffee” is simple for a human to understand, yet far too ambiguous for a robot. A robot would need copious instructions covering every variable: filter coffee or espresso, flat white, big cup or small cup, soya milk, and so on.
Some guys from Palo Alto in Silicon Valley are planning to build a Google for robots, called RoboBrain, which, combined with deep learning and self-repairing robots, gives humanity a large problem: we become redundant.
One of the most exciting changes influencing modern life is the ability to search and interact with information on a scale that has never been possible before. All this is thanks to a convergence of technologies that have resulted in services such as Google Now, Siri, Wikipedia and IBM’s Watson supercomputer.
These services give us answers to a wide range of questions on almost any topic simply by whispering a few words into a smartphone or typing a few characters into a laptop. Part of what makes this possible is that humans are good at coping with ambiguity. So the answer to a simple question such as “how do I make cheese on toast” can be a set of very general instructions that an ordinary person can easily follow.
For robots, the challenge is quite different. These machines require detailed instructions for even the simplest task. For example, a robot asking a search engine: “How can I bring sweet tea from the kitchen?” is unlikely to receive the detail it needs to carry out the task, because it requires all kinds of incidental knowledge, such as the idea that cups can hold liquid (but not when held upside down); that water comes from taps and can be heated in a kettle or microwave, and so on.
The truth is that if robots are ever to get useful knowledge from search engines, these databases will have to contain a much more detailed description of every task that they might need to carry out.
Enter Ashutosh Saxena at Stanford University in Palo Alto and a number of pals, who have set themselves the task of building such a knowledge engine for robots.
These guys have already begun creating a kind of Google for robots that can be freely accessed by any device wishing to carry out a task. At the same time, the database gathers new information about these tasks as robots perform them, thereby learning as it goes. They call their new knowledge engine RoboBrain.
The team has taken on a number of challenges in designing RoboBrain. For a start, robots have many different types of sensors and designs so the information has to be stored in a way that is useful for any kind of machine. The knowledge engine should be able to respond to a variety of different types of questions posed by robots in different ways. It should also be able to gather knowledge from different sources, such as the World Wide Web, and by trawling existing knowledge bases, such as WordNet, ImageNet, Freebase and OpenCyc.
What’s more, Saxena and co want RoboBrain to be a collaborative effort that links up with existing services. To that end, the team has already partnered with services such as Tell Me Dave, a start-up aiming to allow robots to understand natural language instructions, and PlanIt, a way for robots to plan paths using crowdsourced information.
“As more and more researchers contribute knowledge to RoboBrain, it will not only make their robots perform better but we also believe this will be beneficial for the robotics community at large,” say Saxena and co. They have set up a website called RoboBrain.me to act as a gateway and to promote the idea.
Setting up a knowledge engine of this kind is no easy task. Saxena and co have approached it as a problem of network theory in which the knowledge is represented as a directed graph. The nodes in this graph can be a variety of different things such as an image, text, video, haptic data or a learned concept, such as a “container”.
RoboBrain then accepts new information in the form of a set of edges that link a subset of nodes together. For example, the idea that a “sitting human can use a mug” might link the nodes for mug, cup and sitting human with concepts such as “being able to use”.
Any robot that queries the database for this term, or something like it, can then download the set of edges and nodes that represent it.
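The graph structure described above can be sketched in a few lines of code. This is a minimal illustration, not the project’s actual interface: the node names, edge labels and query API here are all assumptions made for the example.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal sketch of a RoboBrain-style directed graph.
    Node types, relation names and the query method are illustrative."""

    def __init__(self):
        self.nodes = {}                 # name -> node type ("concept", "image", ...)
        self.edges = defaultdict(list)  # source -> list of (relation, target)

    def add_edges(self, edge_set):
        # New knowledge arrives as a set of labelled edges linking nodes.
        for source, relation, target in edge_set:
            self.nodes.setdefault(source, "concept")
            self.nodes.setdefault(target, "concept")
            self.edges[source].append((relation, target))

    def query(self, name):
        # Return the subgraph (edges plus the nodes they touch) for a term,
        # which a robot could then download and act on.
        edges = [(name, rel, tgt) for rel, tgt in self.edges.get(name, [])]
        nodes = {name} | {tgt for _, _, tgt in edges}
        return nodes, edges

kg = KnowledgeGraph()
# "A sitting human can use a mug" expressed as labelled edges.
kg.add_edges([
    ("sitting_human", "can_use", "mug"),
    ("mug", "is_a", "cup"),
    ("cup", "can_hold", "liquid"),
])

nodes, edges = kg.query("sitting_human")
print(edges)  # -> [('sitting_human', 'can_use', 'mug')]
```

A robot querying “sitting_human” gets back only the relevant slice of the graph, rather than the whole knowledge base.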
This is more than just a neat idea. Saxena and co have already begun to build the database and use it to allow robots to plan certain actions, such as navigating indoors or moving cooking ingredients around.
They show how one of their own robots uses RoboBrain to move an egg carton to the other end of a table. Since eggs are fragile, they have to be handled carefully, which is something that the robot can learn by querying RoboBrain.
An important part of the project is to apply knowledge learned in one situation to other situations. So the same technique for handling eggs could also be used for handling other fragile objects, such as light bulbs.
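One way to picture this kind of transfer is to attach the handling strategy to a shared attribute such as “fragile” rather than to any single object. The object names, attribute tags and force limits below are invented for illustration; they are not from the RoboBrain system itself.

```python
# Hypothetical object attributes and learned handling strategies.
properties = {
    "egg_carton": {"fragile"},
    "light_bulb": {"fragile"},
    "brick": set(),
}

# A strategy learned once (e.g. while moving eggs) is stored against
# the attribute, so any object tagged "fragile" can reuse it.
strategies = {"fragile": {"max_grip_force_n": 5, "max_speed_m_s": 0.1}}

DEFAULT = {"max_grip_force_n": 40, "max_speed_m_s": 1.0}

def plan_handling(obj):
    # Look up the object's attributes and reuse any matching strategy.
    for attribute in properties.get(obj, set()):
        if attribute in strategies:
            return strategies[attribute]
    return DEFAULT

print(plan_handling("light_bulb"))  # reuses the egg-handling limits
print(plan_handling("brick"))       # falls back to the default
```

The point is that the careful-handling knowledge lives on the concept node, so it transfers to light bulbs for free once they are tagged fragile.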
The team has big plans for the future. For instance, they would like to expand the knowledge base to include even larger knowledge sources, such as online videos. A robot capable of querying online “how-to” videos could then learn how to perform a wide variety of household tasks.
This is interesting work with important potential to change the way that robots learn on a grand scale. Online knowledge bases have already had a remarkable impact on the way humans think about the world around them and how they interact with it.
It is certainly not beyond belief that RoboBrain might have a similar impact for our electronic cousins.