Getting Physical with Collaborative Robots to Program for Adaptability

Thanks to new programming, cobots will no longer treat human interaction as an interruption.

Rice University researchers led by graduate student Dylan Losey want to help humans and robots collaborate by enabling interactive tasks like rehabilitation, surgery and training programs in which environments are less predictable. In early studies, Losey and colleagues at the University of California, Berkeley, used gentle feedback to train a robot arm to manipulate a coffee cup in real time. (Image courtesy Andrea Bajcsy)

Collaborative robots, or cobots, are designed to operate safely in close proximity to humans, and they do that job splendidly. But they are not intelligent enough to adapt to anything outside their programmed sequence of movements.

As a result, if a human bumps into them, they pause until it is safe to resume their task or until they are reactivated. Cobots get the job done this way, but what could be accomplished if they were smarter and able to adapt to a human's guiding touch?

Researchers at Rice University have developed a new way to train cobots with “gentle feedback” as they perform tasks, physically adjusting a robot’s trajectory in real time.

The ultimate goal is to simplify training of robots that are expected to work side by side with humans.

“Historically, the role of robots was to take over the mundane tasks we don’t want to do: manufacturing, assembly lines, welding and painting,” said Marcia O’Malley, a professor of mechanical engineering, electrical and computer engineering and computer science. “As we become more willing to share personal information with technology, like the way my watch records how many steps I take, that technology moves into embodied hardware as well.”

According to O’Malley and graduate student Dylan Losey, today’s robots treat physical human-robot interaction (pHRI) as a disturbance, resuming their programmed behaviors once the interaction ends.

The algorithm the Rice researchers developed allows the robot to recalculate its path to its goal whenever it is interrupted or taught, much as a GPS system calculates an alternative route after a missed turn.
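The article doesn’t show the algorithm itself, but the rerouting idea can be sketched in a few lines. The Python below is a hypothetical illustration, not the Rice team’s code: `get_force` and `move_to` stand in for whatever sensing and motion interfaces the real robot exposes, and the straight-line `replan` is a deliberately crude stand-in for the published method.

```python
import numpy as np

FORCE_THRESHOLD = 2.0  # newtons; smaller readings are treated as sensor noise

def replan(waypoints, push, goal, gain=0.05):
    """Crude reroute: step in the direction of the push, then re-interpolate
    a straight line from that detour point back to the goal."""
    detour = waypoints[0] + gain * np.asarray(push)
    alphas = np.linspace(0.0, 1.0, len(waypoints))[:, None]
    return (1.0 - alphas) * detour + alphas * np.asarray(goal)

def follow(get_force, move_to, waypoints, goal):
    """Follow a waypoint path, rerouting on contact instead of pausing."""
    i = 0
    while i < len(waypoints):
        force = get_force()  # e.g., read from a wrist force/torque sensor
        if np.linalg.norm(force) > FORCE_THRESHOLD:
            # Treat the push as a correction: recompute the remaining path
            # rather than stopping and waiting to be reactivated.
            waypoints = replan(waypoints[i:], force, goal)
            i = 0
            continue
        move_to(waypoints[i])
        i += 1
```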

Losey spent a summer with other students training a cobot to deliver a coffee cup across a desktop, using pHRI to teach it to steer around a computer keyboard while keeping the cup low enough that a drop wouldn’t break it.

The “trajectory deformation” effectively altered how the cobot proceeded to its goal, Losey said. “By our replanning the robot’s desired trajectory after each new observation, the robot was able to generate behavior that matches the human’s preference.”
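Mechanically, “trajectory deformation” means a push doesn’t just move the robot at the instant of contact; it reshapes a stretch of the upcoming path. The sketch below is a minimal, assumed illustration of that idea: the raised-cosine taper is a simplification chosen for readability, standing in for the optimized deformation shape in the published work.

```python
import numpy as np

def deform(waypoints, force, mu=0.1, horizon=20):
    """Smoothly propagate a human push along the next `horizon` waypoints.

    The shift is largest at the point of contact and tapers to zero, so
    the endpoint (and thus the goal) is preserved.
    """
    deformed = np.array(waypoints, dtype=float)
    n = min(horizon, len(deformed))
    # Weights run from 1 at the contact waypoint down to 0 at the horizon.
    weights = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, n)))
    deformed[:n] += mu * weights[:, None] * np.asarray(force)
    return deformed

# Example: a level, straight-line cup path nudged upward by a 5 N push.
path = np.linspace([0.0, 0.0, 0.2], [0.5, 0.0, 0.2], 50)
nudged = deform(path, force=[0.0, 0.0, 5.0])
```

Because the taper returns to zero, the deformed path still ends at the original goal; only the segment near the human’s correction changes.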

These and other tests showed that trajectory deformation made subsequent attempts at the task physically easier and required less interaction to achieve the robot’s goals. The experiments thus demonstrated that impromptu physical interactions can program autonomous robots.

“The paradigm shift in this work is that instead of treating a human as a random disturbance, the robot should treat the human as a rational being who has a reason to interact and is trying to convey something important,” Losey said. “The robot shouldn’t just try to get out of the way. It should learn what’s going on and do its job better.”
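Losey and his Berkeley collaborators formalize this by treating each push as information about an objective the human is optimizing. One common form of that idea, shown here as an illustration rather than the paper’s exact algorithm, is an online update that nudges feature weights toward the corrected trajectory; the feature names are hypothetical.

```python
import numpy as np

def update_weights(theta, phi_corrected, phi_planned, lr=0.1):
    """Treat a physical correction as evidence about the human's objective.

    `theta` weighs task features (hypothetically: cup height, distance
    from the keyboard).  If the corrected trajectory scores differently
    on a feature than the planned one, the human evidently cares about
    that feature, so its weight shifts in that direction.
    """
    return theta + lr * (np.asarray(phi_corrected) - np.asarray(phi_planned))

# Example: the deformed path kept the cup lower and farther from the
# keyboard, so those weights shift and future plans reflect the preference.
theta = np.array([0.0, 0.0])  # [cup height, keyboard distance]
theta = update_weights(theta, phi_corrected=[0.1, 0.4], phi_planned=[0.3, 0.2])
```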

The researchers plan to further refine the algorithm to help cobots optimize the time it takes to complete a task.

For more information, visit the IEEE Xplore website.