Researchers train robots on data from a wide net of sources.
Pick-and-place robotics applications are an important part of manufacturing and logistics operations in a wide variety of industries, wherever goods are produced, sorted, stored or packed. The technology itself is old news, so why is it that every time a new pick-and-place robot is set up, integrators must reinvent the wheel and program the robot to perform such a familiar process all over again?
That’s the question researchers at the Karlsruhe Institute of Technology (KIT) set out to answer, along with partners from Canada and Germany. The researchers are investigating how to use distributed AI to train robots using data collected from multiple stations, multiple plants, and even across multiple companies. By compiling this robust data set from so many diverse sources, they hope to train AI algorithms that can work in a wide variety of pick-and-place applications.
During the project, a total of four autonomous picking stations will be set up for training the robots: two at the KIT Institute for Material Handling and Logistics (IFL) and two at the Festo SE company based in Esslingen am Neckar.
“We are investigating how training data that is as diverse as possible, collected from multiple locations, can be used by artificial intelligence algorithms to develop solutions that are more robust and efficient than those trained on data from just one robot,” said Jonathan Auberle from IFL at KIT.
During the project, autonomous robots at several picking stations process items by gripping and transferring them. The robots at the various stations are trained on very different articles; in the end, they should be able to grasp articles from other stations that they have not yet encountered.
“Through the approach of federated learning, we balance data diversity and data security in an industrial environment,” said Auberle.
Federated Learning for Industry and Logistics 4.0
“Until now, federated learning has been used predominantly in the medical sector for image analysis, where the protection of patient data is a particularly high priority,” explained Auberle.
Consequently, there is no exchange of training data such as images or grasp points for training the artificial neural network. Instead, only the local weights of the neural network (i.e., parts of the stored knowledge) are transferred to a central server.
“There, the weights from all stations are collected and optimized using various criteria,” said Auberle. “Then the improved version is played back to the local stations and the process repeats.”
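The round-trip Auberle describes can be sketched in a few lines. This is a minimal illustration of plain federated averaging, not the FLAIROP project's actual code: the toy weight vectors, learning rate and station count are assumptions made for the example, and real systems aggregate full neural-network parameter tensors rather than short lists.

```python
# Toy sketch of one federated-learning communication round:
# each station updates its local copy of the weights on its own
# data, and the server averages the weights -- never the data.

def local_update(weights, gradient, lr=0.1):
    """A picking station refines its copy of the weights using
    gradients computed from its own (never shared) grasp data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(station_weights):
    """Central server: combine the weights from all stations
    without ever seeing the underlying images or grasp points."""
    n = len(station_weights)
    return [sum(ws) / n for ws in zip(*station_weights)]

# One round with four stations (e.g. two at IFL, two at Festo).
# The gradients here are made-up numbers for illustration.
global_weights = [0.0, 0.0]
local_gradients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]

updated = [local_update(global_weights, g) for g in local_gradients]
global_weights = federated_average(updated)
# The improved global weights are then "played back" to the
# stations and the process repeats for the next round.
```

In a real deployment the server may also weight each station's contribution by its dataset size or apply other optimization criteria, as the quote above suggests.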
The goal is to develop new, more powerful algorithms for the robust use of artificial intelligence for Industry and Logistics 4.0 while complying with data protection guidelines.
“In the FLAIROP research project, we are developing new ways for robots to learn from each other without sharing sensitive data and company secrets,” said Jan Seyler, Head of Advanced Development, Analytics and Control at Festo. “This brings two major benefits: we protect our customers’ data and we gain speed because the robots can take over many tasks more quickly. In this way, the collaborative robots can, for example, support production workers with repetitive, heavy and tiring tasks.”
“The University of Waterloo is ecstatic to be working with Karlsruhe Institute of Technology and a global industrial automation leader like Festo to bring the next generation of trustworthy artificial intelligence to manufacturing,” said Dr. Alexander Wong, Co-director of the Vision and Image Processing Research Group, University of Waterloo, and Chief Scientist at DarwinAI. “By harnessing DarwinAI’s Explainable AI (XAI) and Federated Learning, we can enable AI solutions to help support factory workers in their daily production tasks to maximize efficiency, productivity and safety.”
The Advent of a Federated Approach?
Machine learning has been making inroads into manufacturing and logistics for several years, especially as vendors like PTC, Siemens and IBM develop more user-friendly and plug-and-play dashboards, sensor hardware and software applications for monitoring, tracking and analyzing data. Quality monitoring and predictive maintenance have been two dominant applications for AI in industry so far, but training the algorithm and avoiding bad or poisoned data have been roadblocks for many customers looking to adopt these technologies. Can this wider, federated approach to AI training data in industry be a solution?
Of course, collecting anonymized data from a wide base of users is nothing new in the tech world: maybe Google, Netflix or Facebook will be the next big players in the industrial AI solutions market.