Robot arm uses trial and error to solve problems.
One thing robots have traditionally been unable to do is learn from their mistakes. They follow their programming over and over, neither adapting to new situations nor correcting errors.
That could all change soon thanks to a team of Leeds University scientists, who are using artificial intelligence (AI) techniques to teach robots to conduct trial-and-error problem-solving.
If a robotic arm can’t grab its intended object in a confined or cluttered space, it has to plan a sequence of moves to reach its target. The processing power required to make that plan is often so great that the robot arm will freeze for a few minutes. And then once the robot does make a plan, it often fails to execute it properly.
By using AI techniques, the robot could draw on its own past attempts to make a better plan, faster.
The researchers are using automated planning and reinforcement learning techniques to train a robot to find and move an object in a cluttered environment. The goal is to enable the machines to have more autonomy so that they can be better problem solvers.
Automated planning allows the robot to use a vision system to “see” a problem. The software then maps out the sequence of moves that could complete the robot’s task.
But the robot’s simulated sequence of moves often doesn’t factor in real-world complexities. So, if a sequence of moves knocks an object over, the robot’s plan can’t compensate for that error.
That’s where reinforcement learning comes in. With this technique, the robot’s software runs about 10,000 simulations of its task, using trial and error to arrive at the sequence of moves most likely to complete the task successfully. The software begins by picking a move at random, then builds a progression of moves based on what works and what doesn’t. It then refers back to that sequence for its next task, building on what it has learned and, in effect, transferring its learned skills to a new challenge.
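To make the idea concrete, here is a minimal sketch of that trial-and-error loop in the form of tabular Q-learning, the textbook version of reinforcement learning. Everything in it is invented for illustration: the toy one-dimensional “reach the goal” task, the grid size, the rewards, and the hyperparameters are assumptions, not details of the Leeds system, which the article does not describe at code level.

```python
# Illustrative sketch only: tabular Q-learning on a toy 1-D "reach the
# goal" task. All task details and hyperparameters are invented; this is
# not the Leeds researchers' actual system.
import random

N_STATES = 6          # positions 0..5; state 5 is the goal
ACTIONS = [-1, +1]    # move left or right
EPISODES = 10_000     # roughly the "10,000 simulations" scale described
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q maps (state, action) to an estimated value, learned from experience.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply a move; reward 1.0 only when the goal state is reached."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    """Best-known move for this state, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(0)
for _ in range(EPISODES):
    state = 0
    for _ in range(100):  # cap episode length for safety
        # Trial and error: mostly exploit the best-known move, but
        # sometimes pick a move at random to explore alternatives.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward = step(state, action)
        # Update the value estimate for (state, action) from what happened.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if state == N_STATES - 1:
            break

# After training, the greedy policy moves right (+1) from every non-goal state.
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)
```

The key idea mirrors the article’s description: early episodes are essentially random, but each attempt refines the value table, so later episodes settle quickly on the move sequence most likely to succeed.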
As a result, the robot can plan out its task and make decisions faster.
“Our work is significant because it combines planning with reinforcement learning,” said Wissam Bejjani, a PhD student who wrote the research paper. “A lot of research to try and develop this technology focuses on just one of those approaches. Our approach has been validated by results we have seen in the university’s robotics lab.”
In fact, in lab tests, a decision that would normally take the robot 50 seconds took just five seconds.
This new approach to robot training could vastly increase the versatility of robot arms, allowing them to solve problems without input from a controller and making for a more efficient work environment.
Read more about robot learning at Robots Learn Swarm Behaviors, Aim to Escape the Lab.