Why the US Requires Ethical Standards on the Use and Design of Robotics

Dehumanisation and over-dependence on robots are ethical hazards according to a new British standard.

Artificial Intelligence (AI) has been an increasing subject of interest for engineers, scientists and tech geeks everywhere — and for good reason.

In the past year or so, AI systems have fooled human ears into mistaking synthetic sounds for real ones, been tested for use in keeping firefighters safe, accomplished something no physicist has done before and fooled an entire class of computer science students.

As AI rapidly becomes more intelligent and eventually combines with advanced robotics, machinery and computers, we’re going to need some form of guidelines like Isaac Asimov’s Three Laws of Robotics:

  • “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
  • “A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.”
  • “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

These laws sound all well and good, but they’re not without their loopholes. This is where a new document from the British Standards Institution, titled “BS 8611:2016 Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems,” comes in.

The document was written by a committee of scientists, academics, ethicists, philosophers and users of robots to provide guidance on possible hazards and protective measures when dealing with robotics and AI.

BS 8611 provides guidelines to help robotics developers identify potential sources of ethical harm and sets out the significant ethical hazards posed by robot applications. Readers of the document will also find guidance on minimizing the risks associated with these hazards.

The document also recognizes that not all ethical hazards are physical in nature (e.g. unsafe design or weaponization). Psychological hazards, such as fear and stress, are also taken into consideration.

The document builds on existing safety requirements for different types of robots (industrial, personal care and medical) by examining safe design, protective measures and other information for the design and application of robots.

BS 8611 was recently unveiled at the Social Robotics and AI conference in Oxford, England.

Alan Winfield, professor of robotics at the University of the West of England, was quoted by The Guardian as describing the document as “the first step towards embedding ethical values into robotics and AI.”

“As far as I know, this is the first published standard for the ethical design of robots,” Winfield added. “It’s a bit more sophisticated than Asimov’s Laws – it basically sets out how to do an ethical risk assessment of a robot.”
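To make the phrase “ethical risk assessment” a little more concrete, here is a minimal sketch of what a hazard register might look like under that identify, assess and mitigate framing. It is written in Python purely for illustration; the EthicalHazard fields, the 1-to-5 scales and the severity-times-likelihood score are my own assumptions, not a structure defined by BS 8611.

```python
# Illustrative sketch of an ethical hazard register. Field names, scales and
# the scoring rule are assumptions for this example, not taken from BS 8611.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EthicalHazard:
    description: str
    category: str            # "physical", "psychological", etc.
    severity: int            # assumed scale: 1 (negligible) to 5 (severe)
    likelihood: int          # assumed scale: 1 (rare) to 5 (frequent)
    mitigations: List[str] = field(default_factory=list)

    def risk_score(self) -> int:
        """Rank hazards by a simple severity x likelihood product."""
        return self.severity * self.likelihood

# Example register: one psychological and one physical hazard, each with
# candidate protective measures, printed from highest to lowest risk.
register = [
    EthicalHazard("Worker anxiety over job replacement", "psychological", 3, 4,
                  ["communicate redeployment plans", "involve operators early"]),
    EthicalHazard("Crush injury near an unfenced robot arm", "physical", 5, 2,
                  ["fencing or speed/separation monitoring", "emergency stop"]),
]

for hazard in sorted(register, key=EthicalHazard.risk_score, reverse=True):
    print(f"{hazard.risk_score():>2}  {hazard.category:<13} {hazard.description}")
```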

Ethical Hazards of Robotics in Manufacturing

When discussing ethical hazards inherent to the design and use of robots, the first thing that comes to my mind is manufacturing.

When designing robotics for the purpose of industrial automation, design can take two paths:

  • Replacing human roles (e.g. conventional industrial robots)
  • Assisting human roles (e.g. collaborative robots, or “cobots” for short)

Neither of these design paths is inherently malevolent, but both can present the kinds of ethical hazards laid out in BS 8611.

Conventional industrial robots, and even some commercial robots, have stoked fears among many workers that a robot might take their jobs. They can also be physically dangerous, which is why they require safeguards such as fencing and emergency stop features. Together, these create both physical and psychological ethical hazards.

Cobots are designed to counter these hazards in specific applications, but as they continue to develop and incorporate AI, Industry 4.0 and IIoT technologies, they could shift the balance of the working relationship and create an over-dependence on robotics.

“Using robots and automation techniques to make processes more efficient, flexible and adaptable is an essential part of manufacturing growth,” said Dan Palmer, head of manufacturing for BSI, as quoted by The Guardian.

“For this to be acceptable, it is essential that ethical issues and hazards such as dehumanisation of humans or over-dependence on robots, are identified and addressed…. This new guidance on how to deal with various robot applications will help designers and users of robots and autonomous systems to establish this new area of work,” Palmer said.

If the BS 8611 document gains wide acceptance across industry and academia in the UK, will we see a US-based equivalent?

As North America turns to more advanced technologies to embrace Industry 4.0, we may need it.

Leave your thoughts in the comments below. To learn more about the BS 8611 document, click here.