New communication system could improve collaboration between robots and humans in emergency response teams.
Autonomous robots collaborate by continuously sending each other updates, but bombarding human brains with that much data at once would be intolerable.
To facilitate human-robot collaboration, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have devised a simplified system that requires 60 percent less communication.
Multi-agent systems involve collaboration among autonomous agents (human or robot), each of which must adjust its behavior in response to the others’ models of the surrounding environment as well as its own. Collaboration between humans and robots therefore normally requires a lot of information processing.
The Cost of Robot Communication
Currently, the best way to model multi-agent systems is with a decentralized, partially observable Markov decision process (Dec-POMDP). A Dec-POMDP factors in uncertainty: it considers whether an agent’s view of the world is correct, whether its estimates of its fellows’ worldviews are correct, and whether their actions will be successful.
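To make those three sources of uncertainty concrete, here is a minimal Python sketch of the quantities such an agent might track; the class and field names are illustrative assumptions, not the researchers’ implementation.

```python
from dataclasses import dataclass, field


@dataclass
class AgentBelief:
    # Probability the agent assigns to each possible world state
    world_belief: dict = field(default_factory=dict)
    # The agent's estimate of each teammate's belief about the world
    teammate_beliefs: dict = field(default_factory=dict)
    # Estimated probability that each of the agent's actions will succeed
    action_success: dict = field(default_factory=dict)
```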
Dec-POMDPs assume some prior knowledge about the environment in which the agents will be operating. This was problematic for the collaborative application the researchers had in mind: human-robot emergency response teams.
Emergency-response teams enter unfamiliar environments where prior information is often irrelevant to the situation. Moreover, recording the agents’ surroundings in real time is time-consuming and computationally intensive.
To address these issues, the MIT researchers designed a system that simply ignores the uncertainty associated with whether or not an action will be successful. Instead, it assumes that the agent will succeed in whatever it’s trying to do.
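As a rough illustration of that simplification (an assumption about the general idea, not the published algorithm), the sketch below contrasts a planner that samples over uncertain action outcomes with one that simply assumes the intended outcome occurs.

```python
import random


def stochastic_step(state, action, transition_probs):
    """Full Dec-POMDP style: sample the next state from a distribution
    over the possible outcomes of the action."""
    outcomes, weights = zip(*transition_probs[(state, action)].items())
    return random.choices(outcomes, weights=weights)[0]


def deterministic_step(state, action, intended_effect):
    """Simplified model: assume the action simply succeeds and produces
    its intended effect."""
    return intended_effect[(state, action)]
```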
The Human Factor in Robot Communication
Agents presented with new information have three choices: they can ignore it, use it without broadcasting it, or use it and broadcast it.
Each choice has benefits as well as costs. The researchers took this into account by incorporating a cost-benefit analysis into their system, based on the agent’s model of the world, its expectations of its fellows’ actions, and the likelihood of accomplishing the joint goal more efficiently.
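The sketch below is one hedged way to picture that decision; value_of_team_plan and comm_cost are hypothetical stand-ins for however the system scores expected team performance and the price of a message, not names from the paper.

```python
def choose_action_on_new_info(my_belief, teammate_belief, new_info,
                              value_of_team_plan, comm_cost):
    """Return 'ignore', 'use', or 'use_and_broadcast' for a new observation."""
    # Expected team value if the agent simply ignores the information
    baseline = value_of_team_plan(my_belief, teammate_belief)
    # Expected team value if only this agent acts on the new information
    use_only = value_of_team_plan({**my_belief, **new_info}, teammate_belief)
    # Expected team value if the teammate incorporates it too,
    # minus the cost of sending the message
    broadcast = value_of_team_plan({**my_belief, **new_info},
                                   {**teammate_belief, **new_info}) - comm_cost
    best = max(baseline, use_only, broadcast)
    if best == broadcast:
        return "use_and_broadcast"
    return "use" if best == use_only else "ignore"
```

In effect, the agent broadcasts only when the expected gain in team performance outweighs the cost of interrupting its teammate.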
The researchers tested their system with electronic agents in over 300 computer simulations of rescue tasks in unfamiliar environments. The version of their system that permitted extensive communication between agents completed the tasks at a rate only two to ten percent higher than the version that reduced communication by 60 percent.
“What I’d be willing to bet, although we have to wait until we do the human-subject experiments, is that the human-robot team will fail miserably if the system is just telling the person all sorts of spurious information all the time,” said Julie Shah, associate professor of aeronautics and astronautics at MIT.
In a separate project, Shah and her team conducted experiments with only human subjects completing similar virtual rescue missions. By studying the subjects’ communication patterns using machine-learning algorithms, the team hopes to incorporate those patterns into their new model to further improve human-robot collaboration.
“We haven’t implemented it yet in human-robot teams,” said Shah. “But it’s very exciting, because you can imagine: You’ve just reduced the number of communications by 60 percent, and presumably those other communications weren’t really necessary toward the person achieving their part of the task in that team.”
The research was presented at the 30th Annual AAAI Conference and published under the title “ConTaCT: Deciding to Communicate during Time-Critical Collaborative Tasks in Unknown, Deterministic Domains.”