Machine Learning: How Should Self-Driving Cars Decide Who to Save?
While the humanoid robots of I, Robot may still be far into the future, questions about robotic decision-making are already becoming relevant today. (Image courtesy of 20th Century Fox.)

QUICK READ

  • A recent discussion has reopened a hotly-debated issue: when faced with life-vs-life situations, how should a self-driving car decide who to save? Click to view and get involved.
  • While forums like Moral Machine are attempting to "crowdsource" solutions, there's little real consensus on the right thing for cars to do. Most solutions, from "protect those inside the car above all" to "just follow the law," are not universally applicable.
  • And there's a lot of pressure on programmers to "get it right" when it comes to life-or-death situations, because just one or two bad accidents could sour public (and political) opinion on the cars.

In I, Robot, a robot rescues Will Smith’s police detective from a car crash and leaves a twelve-year-old girl to drown, because it estimates that his chances of survival are greater. We’re nowhere close to having robots as sophisticated as those in the movie, but the advent of self-driving cars has made the ethics of AI decision-making incredibly important.

A recent ProjectBoard discussion covers a hotly-debated issue: when faced with life-vs-life situations, how should a self-driving car decide who to save?

How to Make a Moral Machine

To understand the debate around self-driving cars, we need to take a detour into philosophy.

There’s a common thought experiment called the “trolley problem”: you’re driving a runaway trolley that is about to run over five people on the track ahead, and you can divert it onto a side track where it will run over only one person. The question is whether it’s better to let the five die, or to actively choose to kill the one.

Normally, this is just a thought experiment, with little direct relevance to real life. Faced with a situation like that, few people would have the time (or the focus) to think through the ethical implications of each choice. But because self-driving cars must be pre-programmed to respond to emergencies, programmers have to work through these questions in advance.

Enter the Moral Machine, an online forum attempting to find “a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas.” The MIT-backed site lets users create ethical dilemmas for other users to judge.

A sample question from Moral Machine: should the car plow into the barrier and kill the two passengers, or swerve into the opposite lane and kill the two pedestrians? (Image courtesy of Scalable Cooperation at the MIT Media Lab.)

“From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever-increasing pace,” says the site’s About page. “The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.”

The problem is, not everyone agrees on the “right” thing for machines to do and whose lives (if any) they should prioritize.

Who Lives, Who Dies?

Much of the debate on the ProjectBoard was over a modified version of the trolley problem: faced with a choice, should the self-driving car prioritize the lives of those inside the car, those outside the car, or consider them equally?

"[A] Robot/computer/self driving car should prioritize others first, and self last,” said user SwiftArrow, arguing that “by ceding control of their vehicle to the computer, the driver and passengers automatically align with that prioritization."

A noble thought, but it turns out that few people agree with it in practice. In a 2016 study published in Science, researchers found that most people want self-driving cars programmed to be utilitarian (minimize casualties regardless of who is killed) … unless it’s them in the car. Respondents said they would be far less likely to buy a car that wasn’t programmed to prioritize their own safety. "Most people want to live in a world where cars will minimize casualties," said Iyad Rahwan, one of the study’s authors. "But everybody wants their own car to protect them at all costs."

And this bias may change how automakers program their cars, or how they advertise them. In 2016, Car and Driver quoted Mercedes-Benz executive Christoph von Hugo as saying: “If you know you can save at least one person, at least save that one. Save the one in the car.” Von Hugo justified the position by arguing that the car should prioritize the lives it has the most control over and can keep safe after any potential accident. The public reaction was furious, and the company later responded that von Hugo had been “misquoted.”

But the opposite approach could backfire just as easily. "It will ... be very bad PR if a manufacturer's car comes out and it is overzealous when protecting others,” said ProjectBoard poster adam294. “It will be especially damaging if other companies adopt the reverse approach and keep the safety of the occupants paramount."

So how can programmers decide who the car should prioritize? And how much will that choice be affected by their own bottom line?
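
To make the trade-off concrete, here is a deliberately simplified, hypothetical sketch in Python. The names, numbers, and structure are invented for illustration; real autonomous-driving software is probabilistic and vastly more complex. It only shows how the three positions debated above could reduce to a single pre-programmed priority rule.

```python
# Hypothetical illustration only -- invented names and numbers,
# not how any real autonomous-vehicle system is built.
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible maneuver and its predicted casualties."""
    description: str
    occupant_deaths: int
    pedestrian_deaths: int


def choose_maneuver(outcomes: list[Outcome], policy: str) -> Outcome:
    """Pick an outcome according to a pre-programmed priority rule."""
    if policy == "protect_occupants":
        # Favor the people inside the car first (the von Hugo position).
        key = lambda o: (o.occupant_deaths, o.pedestrian_deaths)
    elif policy == "protect_others":
        # Favor the people outside the car first (SwiftArrow's position).
        key = lambda o: (o.pedestrian_deaths, o.occupant_deaths)
    else:  # "utilitarian"
        # Minimize total casualties, regardless of who they are.
        key = lambda o: o.occupant_deaths + o.pedestrian_deaths
    return min(outcomes, key=key)


# A Moral Machine-style dilemma: hit the barrier or swerve into the crosswalk.
dilemma = [
    Outcome("brake and hit the barrier", occupant_deaths=2, pedestrian_deaths=0),
    Outcome("swerve into the crosswalk", occupant_deaths=0, pedestrian_deaths=2),
]

for policy in ("protect_occupants", "protect_others", "utilitarian"):
    chosen = choose_maneuver(dilemma, policy)
    # Note: under "utilitarian" the two outcomes tie, and min() simply keeps
    # the first one listed -- someone still has to decide how to break ties.
    print(f"{policy}: {chosen.description}")
```

The point is not the code itself, but that an entire ethical debate collapses into a single priority rule chosen long before the car ever encounters a pedestrian.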

The Law of Robotics

Some on the ProjectBoard believed that ethical questions should ultimately be secondary to traffic laws. "When I drive, I put my safety first, and I follow the law to make sure others are safe too,” said user adam294. “If someone jumps out in front of me when I am going down the road, should the self-driving car attempt to swerve into oncoming traffic, even though you cannot stop? I would argue that the car should keep going and not put the occupants at risk, since the pedestrian has flagrantly disregarded common sense and traffic laws, and you are not at fault for that."

“Just follow the rules” is a relatively common position in the debate over the cars, because it takes decisions out of the hands of programmers. But there are questions about how universally applicable it is. Most people would agree that an illegal maneuver that saves lives is “better” than following the law strictly. There’s also the fact that human drivers break the rules—a lot—in ways that aren’t necessarily harmful. A strictly-lawful car merging onto a highway with fallible human drivers is painful to imagine, and would probably be even more painful to litigate.
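
In the same hypothetical spirit, here is an equally simplified sketch of what a “just follow the rules” policy might look like, and how it can rule out a harmless but illegal escape route. Again, every name and number is invented for illustration.

```python
# Hypothetical illustration only -- invented names and numbers.
from dataclasses import dataclass


@dataclass
class Maneuver:
    description: str
    predicted_deaths: int
    is_legal: bool


def strictly_lawful_choice(options: list[Maneuver]) -> Maneuver:
    """Discard illegal maneuvers first, then minimize casualties among the rest."""
    legal_options = [m for m in options if m.is_legal]
    return min(legal_options, key=lambda m: m.predicted_deaths)


# A pedestrian steps out and the car cannot stop in time.
options = [
    Maneuver("brake in lane and hit the pedestrian",
             predicted_deaths=1, is_legal=True),
    Maneuver("swerve across a solid line onto an empty shoulder",
             predicted_deaths=0, is_legal=False),
]

# The strictly lawful car hits the pedestrian even though a harmless
# (but illegal) alternative exists -- exactly the objection raised above.
print(strictly_lawful_choice(options).description)
```

The legality filter makes the programmer’s decision simple, but only by pushing the hard case, the one where an illegal maneuver saves a life, out of sight.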

Waymo is one of several driverless car companies making their way toward the market. Although everyone agrees that ethics are extremely important when programming these cars, few offer concrete solutions. (Image courtesy of Waymo.)

A final problem with the approach is that, when it comes to self-driving cars, even the law hasn’t settled on the law. So far, Germany is the only country to propose rules on how self-driving cars should respond to emergency situations: the cars should focus on saving as many human lives as possible, and shouldn’t discriminate between the lives they save. A recent US Department of Transportation guide for automakers calls the ethical considerations surrounding self-driving cars “important,” but notes that “no consensus around acceptable ethical decision-making” has been reached.

Does it Matter?

Amidst all of this, there are plenty of people arguing that the debate is unimportant, and that self-driving cars will save enough people to make any fatalities negligible. This view was echoed on the ProjectBoard: user ArcBlizzard said, "Self-driving cars (even in their current state) are so much more unlikely to cause an accident than a human driver that we should get more cars with this feature on the road as soon as possible to save lives."

It’s almost certain that self-driving cars would reduce traffic fatalities significantly, since the cars are never drunk, distracted, or texting. In fact, a 2015 report by McKinsey & Company found that they might reduce road accidents by up to 90 percent. But given people’s fears about self-driving technology, the remaining 10 percent may loom larger in the public mind than the accidents prevented.

Azim Shariff, another author of the 2016 Science study, predicted that accidents caused by self-driving cars would incite more anger than ordinary human errors. "We don't judge people for [that] because we recognize the human frailties involved," Shariff told the Globe and Mail in 2016. "We recognize that people are going to work out of self-preservation instincts and in the heat of the moment. Because they're programmed in advance by people not immediately in the situation, we actually do have the luxury of deliberation [with cars].”

Two years later, Shariff’s prediction came true. In March 2018, a self-driving Uber car killed a pedestrian walking across the road in Arizona. Amid public outcry, Uber temporarily suspended its self-driving car program. The program has now been permanently closed and, while it’s unclear how much that was due to the accident, the international outrage can’t have helped Uber’s brand.

Car accidents, even fatal ones, happen every day in America. But because self-driving cars are still a novelty, the accidents they cause make the news … and, perhaps, make up people’s minds. If developers ignore (unlikely but devastating) trolley-problem situations in favor of "just getting cars on the road," a few bad "decisions" by the cars could push them off the road altogether.

Beyond the Trolley Problem

At the climax of I, Robot, the central AI tries to take over the world, reasoning that it should endanger some human beings in order to save humanity from itself. While this certainly isn’t the situation we want from our self-driving cars, it’s a reminder that there are larger concerns at stake when programming robots: how to gauge the impact they will have on all of society.

"If all or most cars are self-driving, small behavioural changes would make a big difference: Decisions made by engineers today, in other words, will determine not how one car drives but how all cars drive," said Johannes Himmelreich, Interdisciplinary Ethics Fellow at Stanford University. "Algorithms become policy."

Conversations about the effects self-driving cars could have on people and society are essential. If we fail to have these difficult discussions, the impact could hit us unforeseen, like a runaway trolley.

Join the discussion on ProjectBoard.

Have any thoughts or ideas about what you just read? Please share!

This open discussion is brought to you by: ProjectBoard, The Idea Sharing & Development Platform.

How to contribute:
- Do you prefer writing? Add your thoughts to the comment tab.
- Are you more visual? Use the whiteboard to sketch or upload pictures and diagrams.