People acquire skills by interacting with the world. Artificial intelligence is now supposed to learn human patterns of action on the same principle, and virtual reality could play a vital role in this.
Artificial intelligence learns human skills with Virtual Reality
Artificial intelligence copes well with chess: the rules are clear and the number of possible moves is limited. But what happens after the game is over? We humans can get up and go for a walk, water the flowers, or read a book. The world is full of possibilities for action that overwhelm artificial intelligence.
“Many problems in AI research have to do with variability. The more variability, the harder it is to solve the problem,” Pieter Abbeel, professor of computer science at the University of California, Berkeley, told Forbes. Abbeel and three of his students founded the US startup Embodied Intelligence last September to tackle this problem and received US$7 million in seed funding from investors.
Acting instead of thinking
The startup aims, in the long term, to take artificial intelligence out of controlled learning environments and familiarize it with the complex world in which people live. As a first step, the AI must learn to perform all the supposedly simple motor activities that make up everyday life.
The AI researchers build on a popular approach in cognitive science which holds that knowledge of the world is acquired not only through abstract thinking but above all through physical interaction with the environment.
Until now, robotics has taken a different approach: action patterns are programmed by hand. That requires specialists with strong programming skills and a great deal of development time. Embodied Intelligence wants to take a completely different route to teaching artificial intelligence how to act.
The approach is modeled on human learning: people learn actions not from general instructions, but by observing other people’s movements and imitating them with their own bodies.
Learning with Virtual Reality
To achieve this effect in robots, the AI researchers wear VR glasses and use spatially tracked controllers to perform the relevant actions over and over again, each time with slight variations. The movements are transferred by remote control to a robot, which records each action sequence using machine learning and, after sufficient training, derives a uniform action pattern.
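The recording step described above can be sketched in a few lines. This is an illustrative simulation, not the startup's actual software: `get_controller_pose` and `send_to_robot` are hypothetical callbacks standing in for the VR tracking API and the robot's remote-control interface, and `fake_pose` fabricates a slightly varied reaching motion in place of real tracked data.

```python
import numpy as np

def record_demonstration(get_controller_pose, send_to_robot, steps, dt=0.01):
    """Record one teleoperated demonstration as a sequence of tracked poses.

    get_controller_pose: hypothetical callback returning the VR controller
    pose at a given time (here a 7-vector: xyz position plus quaternion).
    send_to_robot: hypothetical callback forwarding each pose to the robot.
    """
    trajectory = []
    for t in range(steps):
        pose = get_controller_pose(t * dt)   # tracked human motion
        send_to_robot(pose)                  # "puppeteer" remote control
        trajectory.append(pose)
    return np.array(trajectory)

# Stand-in for real tracking data: a reaching motion with small random
# variation, mimicking the repeated, slightly varied demonstrations.
def fake_pose(t):
    target = np.array([0.4, 0.1, 0.3, 0.0, 0.0, 0.0, 1.0])
    noise = np.random.normal(0.0, 0.002, size=7)  # millimetre-scale jitter
    return target * min(t, 1.0) + noise

demos = [record_demonstration(fake_pose, lambda p: None, steps=100)
         for _ in range(10)]
print(len(demos), demos[0].shape)  # 10 demonstrations of 100 poses each
```

Each repetition of the action yields one trajectory; the collection of slightly varied trajectories is the training set for the learning step.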
The use of an advanced VR system, in this case an HTC Vive, has two significant advantages. First, sub-millimeter motion tracking generates precise data sets from which the artificial intelligence learns actions.
Second, the machine does not have to observe and imitate the movements as a human would. Because the teacher acts as a “puppeteer,” mistakes and misinterpretations are ruled out.
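How a robot derives a uniform action pattern from many slightly varied demonstrations can be illustrated with a minimal imitation-learning (behavioral-cloning) sketch. This is a toy example under stated assumptions, not the startup's method: the demonstrations are synthetic 3-D end-effector trajectories, and the "policy" is a simple linear model fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data: 10 slightly varied repetitions of the
# same 50-step end-effector trajectory, as the VR recording would produce.
base = np.linspace(0.0, 1.0, 50)[:, None] * np.array([0.4, 0.1, 0.3])
demos = [base + rng.normal(0.0, 0.002, size=base.shape) for _ in range(10)]

# Build a supervised dataset: current state s_t -> next state s_{t+1}.
X = np.vstack([d[:-1] for d in demos])
Y = np.vstack([d[1:] for d in demos])

# Fit a linear policy with a bias term by least squares. The small
# variations across demonstrations average out into one uniform pattern.
Xb = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

# Roll the learned policy out from the start pose of one demonstration.
s = demos[0][0]
for _ in range(49):
    s = np.append(s, 1.0) @ W
print(np.round(s, 2))  # ends close to the demonstrated target [0.4 0.1 0.3]
```

In practice such policies are deep neural networks trained on camera images and joint states rather than a linear fit, but the principle is the same: supervised learning on recorded (state, action) pairs from human demonstrations.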
With this learning process, the AI researchers can teach the robot an action within a day. With conventional programming methods, that used to take weeks or months.