
Scientists train robot to learn new tasks like humans

Scientists have developed new algorithms that enable robots to learn motor tasks through trial and error - much like humans learn new tasks, marking a major milestone in artificial intelligence.

This team of UC Berkeley researchers has developed algorithms that enable their PR2 robot, nicknamed BRETT for Berkeley Robot for the Elimination of Tedious Tasks, to learn new tasks through trial and error. Shown, left to right, are Chelsea Finn, Pieter Abbeel, BRETT, Trevor Darrell and Sergey Levine. (Photo courtesy of UC Berkeley Robot Learning Lab)

Researchers demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks - putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more - without preprogrammed details about its surroundings.

"What we're reporting on here is a new approach to empowering a robot to learn," said Professor Pieter Abbeel of University of California, Berkeley's Department of Electrical Engineering and Computer Sciences. "The key is that when a robot is faced with something new, we won't have to reprogramme it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it," said Abbeel.

The researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.

In the experiments, the researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, or Berkeley Robot for the Elimination of Tedious Tasks. They presented BRETT with a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks.

The algorithm controlling BRETT's learning included a reward function that provided a score based upon how well the robot was doing with the task. BRETT takes in the scene, including the position of its own arms and hands, as viewed by the camera. The algorithm provides real-time feedback via the score based upon the robot's movements. Movements that bring the robot closer to completing the task will score higher than those that do not. The score feeds back through the neural net, so the robot can learn which movements are better for the task at hand.
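The feedback loop described above - propose a movement, score it with a reward function, keep what scores higher - can be illustrated with a toy sketch. This is a minimal hill-climbing stand-in for trial-and-error learning, not the Berkeley team's actual algorithm (which trains a deep neural network), and the goal position and movement range are made-up values:

```python
import random

# Toy trial-and-error loop: propose a movement, score it with a reward
# function, and keep movements that bring the "hand" closer to the goal.
# This is an illustrative sketch, NOT the Berkeley team's method.

GOAL = 5.0  # hypothetical target position for the robot's hand


def reward(position):
    # Higher score the closer the hand is to the goal.
    return -abs(position - GOAL)


def learn(steps=5000, seed=0):
    rng = random.Random(seed)
    position = 0.0
    best = reward(position)
    for _ in range(steps):
        candidate = position + rng.uniform(-0.5, 0.5)  # a trial movement
        score = reward(candidate)
        if score > best:  # keep only movements that score higher
            position, best = candidate, score
    return position


print(learn())  # ends up near GOAL after enough trials
```

The real system closes the same loop, but the "movement" is a full-arm motor command produced by a neural network, and the score feeds back through that network's parameters rather than a single position variable.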

Watch: BRETT learns to screw the cap on a bottle

This end-to-end training process underlies the robot's ability to learn on its own. As the PR2 moves its joints and manipulates objects, the algorithm calculates good values for the 92,000 parameters of the neural net it needs to learn. With this approach, when given the relevant coordinates for the beginning and end of the task, the PR2 could master a typical assignment in about 10 minutes. When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours. 
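To give a sense of how a neural net reaches a parameter count like the 92,000 cited above, here is a tally for a small fully connected network. The layer sizes are hypothetical, chosen only to land near that figure; they are not the actual BRETT architecture:

```python
# Each fully connected layer contributes (inputs * outputs) weights
# plus one bias per output unit. Layer sizes below are illustrative.


def count_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total


# e.g. a 200-input network with two hidden layers and 7 motor outputs:
print(count_parameters([200, 300, 100, 7]))  # → 91107
```

Even a modest network of a few hundred units per layer quickly accumulates tens of thousands of parameters, all of which the learning algorithm must tune from the reward signal alone.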

Watch: BRETT the Robot learns to put things together on his own
