Caltech Engineers Use Machine Learning to Help Robots Get Along

Multi-robot coordination achieved through a motion planning algorithm.

(Image courtesy of Caltech.)

Engineering a single adept robot is in itself a feat. When several robots in motion inhabit the same space, more challenges arise. Multi-robot scenarios are becoming more common with the advancement of autonomous vehicles (AVs), the use of drones for a growing range of applications, and the current deployment of robots to help curb the COVID-19 pandemic. The most obvious pitfall of having more than one robot navigate a space is that they could collide. In the case of self-driving cars, the vehicles must make spur-of-the-moment changes to their trajectories to stay en route, prevent an accident, or both. Such decisions must be made without fully knowing how the landscape may change after the decision has been executed. The multitude of variables in complex spaces adds to the difficulty of operating several robotic devices simultaneously.

A team of researchers at Caltech sought to deal with these multi-robot difficulties by developing a motion planning algorithm called Global-to-Local Safe Autonomy Synthesis (GLAS) and a swarm-tracking controller called Neural-Swarm. GLAS plans each robot's motion from local information, while Neural-Swarm augments a tracking controller with learned models of the complex aerodynamic interactions that arise in close-proximity flight. The work on GLAS was recently published in IEEE Robotics and Automation Letters; the work on Neural-Swarm was published in the Proceedings of the IEEE International Conference on Robotics and Automation.
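
To make the Neural-Swarm idea more concrete, the sketch below shows, in Python, how a controller might fold a learned estimate of the aerodynamic forces induced by nearby drones (such as downwash) into an otherwise standard thrust command. Everything here, from the function names to the stand-in two-layer network and the example mass, is an illustrative assumption rather than the published implementation.

```python
# Illustrative sketch only -- not the authors' Neural-Swarm code. A tiny
# two-layer network stands in for the learned aerodynamic-interaction
# model; its prediction is cancelled as a feedforward term in the
# thrust command.
import numpy as np


def learned_residual_force(relative_states, weights):
    """Stand-in for a trained network: apply a small ReLU layer to each
    neighbor's relative state, then sum so the result does not depend on
    the number or ordering of neighbors."""
    w1, b1, w2, b2 = weights
    h = np.maximum(relative_states @ w1 + b1, 0.0)   # (N, hidden)
    return (h @ w2 + b2).sum(axis=0)                 # (3,) predicted force


def control_thrust(desired_accel, relative_states, weights, mass=0.034):
    """Nominal thrust for the desired acceleration, minus the predicted
    neighbor-induced force (mass is an example value in kilograms)."""
    gravity = np.array([0.0, 0.0, 9.81])
    f_hat = learned_residual_force(relative_states, weights)
    return mass * (desired_accel + gravity) - f_hat
```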

“Our work shows some promising results to overcome the safety, robustness and scalability issues of conventional black-box artificial intelligence (AI) approaches for swarm motion planning with GLAS and close-proximity control for multiple drones using Neural-Swarm,” said Soon-Jo Chung, Bren Professor of Aerospace and research scientist at the Jet Propulsion Laboratory.

GLAS and Neural-Swarm allow robots to learn within the confines of a limited environment. They do not need to plan their full flight paths in advance, or even anticipate every possible path. Instead, the systems enable the robots to make spur-of-the-moment decisions based on new information received in flight. Using this localized information, each robot focuses on its own immediate surroundings and movement and acts accordingly.

The systems accomplish this by gathering local environmental input, such as the actions of neighboring robots and any obstacles encountered, as well as each drone's individual flight goal. This information shapes the drone's behavior in real time, and the resulting observations are encoded and backpropagated through a differentiable safety module.
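
For readers curious what such a pipeline could look like, here is a minimal, hypothetical PyTorch sketch of a decentralized policy: each robot embeds its nearby neighbors and obstacles with a permutation-invariant encoder, combines the result with its relative goal to produce a nominal action, and blends that action with a simple repulsive term through a differentiable safety gate. The class name, network sizes, and the particular safety rule are assumptions made for illustration, not the authors' GLAS architecture.

```python
# Illustrative sketch only -- not the authors' GLAS code. Assumes a 2D
# world, at least one nearby neighbor/obstacle, and a simple repulsion
# rule standing in for the real safety module.
import torch
import torch.nn as nn


class LocalPolicy(nn.Module):
    def __init__(self, obs_dim=2, hidden=32):
        super().__init__()
        # Permutation-invariant encoder: embed each neighbor/obstacle
        # independently, then sum the embeddings.
        self.embed = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden))
        # Policy head maps (summed embedding, relative goal) to a nominal action.
        self.head = nn.Sequential(nn.Linear(hidden + 2, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))

    def forward(self, neighbors, goal, safety_radius=0.5):
        # neighbors: (N, 2) relative positions; goal: (2,) relative goal position.
        z = self.embed(neighbors).sum(dim=0)
        nominal = self.head(torch.cat([z, goal]))

        # Differentiable "safety" blend: push away from the closest
        # neighbor, weighting the push more heavily as it gets close.
        dists = neighbors.norm(dim=1)
        nearest = neighbors[dists.argmin()]
        repulse = -nearest / (dists.min() ** 2 + 1e-6)
        alpha = torch.sigmoid((safety_radius - dists.min()) * 10.0)
        return (1 - alpha) * nominal + alpha * repulse
```

Because every step in this toy pipeline is differentiable, a training loss on the final blended action can be backpropagated through the safety blend into the encoder, mirroring the end-to-end training described above.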

“These projects demonstrate the potential of integrating modern machine-learning methods into multi-agent planning and control and also reveal exciting new directions for machine-learning research,” said Yisong Yue, Professor of Computing and Mathematical Sciences.

The researchers put the GLAS and Neural-Swarm systems on display at an open-air drone arena at Caltech’s Center for Autonomous Systems and Technologies. They flew 16 drones and demonstrated how the robots could perform simultaneous flight in a small area. The test showed that GLAS performed 20 percent better than the next-best multi-robot motion planning algorithm, and Neural-Swarm performed significantly better than a standard controller that did not take aerodynamic interactions into account.