
Practical argument: A text and anthology - Laurie G. Kirszner, Stephen R. Mandell 2019

DEBATE: Should we embrace self-driving cars?
Debates, casebooks, and classic arguments


When writers and filmmakers imagine the future, they often include self-driving cars as part of their creations. The fictional worlds of Isaac Asimov, Philip K. Dick, Ray Bradbury, and many others contain detailed accounts of these vehicles. Such animate machines may be a convenience, as in the film Total Recall (1990), or they may be sinister, as in Maximum Overdrive (1986). But despite their futuristic connotations, driverless cars have a long pedigree in the history of technology and transportation. For example, in 1926, the electrical engineer Francis Houdina demonstrated a driverless automobile on the streets of New York: his radio-controlled car (The American Wonder) dazzled onlookers on Broadway and Fifth Avenue—before crashing into another car full of photographers. Since then, Japan’s Tsukuba Mechanical Engineering Laboratory, Carnegie Mellon University, Mercedes-Benz, and other organizations have developed and refined various prototypes. Now, however, these cars are on the verge of becoming a practical means of transportation rather than mere novelties. Google and Tesla have invested heavily in autonomous vehicles, and Uber has put these cars into service in a limited number of cities.

Supporters of self-driving cars tout their advantages: the elimination of human error from driving, fewer accidents, decreased traffic jams, and additional mobility for the elderly. Of course, there are potential disadvantages too. Uber’s tests recently resulted in the first pedestrian death caused by a self-driving car: in 2018, a forty-nine-year-old woman was struck and killed in Arizona. In addition, autonomous vehicles raise numerous questions. Americans have long associated cars with individualism and freedom: are they willing to give up their autonomy to self-driving machines? How will government agencies and insurance underwriters treat these cars? What about the taxi drivers, truckers, and others whose jobs autonomous vehicles may eliminate?

The two essays that follow explore the ethical challenges presented by self-driving cars. For Karl Iagnemma, who supports these vehicles, abstract philosophical thought experiments are of little use when examining the practical effects of this evolving technology. In contrast, Olivia Goldhill says that driverless cars will allow philosophers to see their theories tested in a “very real way.”

WHY WE HAVE THE ETHICS OF SELF-DRIVING CARS ALL WRONG

KARL IAGNEMMA

This piece was posted on the website of the World Economic Forum on January 21, 2018.

A trolley barrels down a track at high speed. Suddenly, the driver sees five people crossing the track up ahead. There’s not enough time to brake. If the driver does nothing, all five will die. But there is enough time to switch onto a side track, killing one person instead. Should the driver pull the switch?

Philosophers have debated trolley problems like this for decades. It’s a useful thought experiment for testing our intuitions about the moral difference between doing and allowing harm. The artificial scenario allows us to ignore empirical questions that might cloud the ethical issue: Could the trolley stop in time? Could the collision be avoided in another way?

Recently the trolley problem has been invoked in the real-world policy debate about regulating autonomous vehicles (AVs). The issue at hand is how AVs will choose between harming one set of people and harming another.

In September 2016, the National Highway Traffic Safety Administration (NHTSA) asked companies developing AVs to certify that they have taken ethical considerations into account in assessing the safety of their vehicles.

Engineers and lawyers actually working on AV technology, however, largely agree that the trolley problem is at best a distraction and at worst a dangerously misleading model for public policy.

The trolley problem is the wrong guide for regulating AVs for three reasons:

1. Trolley Problem Scenarios Are Extremely Rare

Even in a world of human-driven vehicles, for a driver to encounter a real-world trolley problem, he or she must 1) perceive an imminent collision in time to consider alternative paths; 2) have only one viable alternative path, which just happens to involve another fatal collision; and yet 3) be able to react in time to steer the car into the alternative collision. The combination of these three circumstances is vanishingly unlikely. It’s not surprising, then, that we never see trolley problem-like collisions in the news, let alone in judicial decisions.
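
Iagnemma’s rarity claim follows from the fact that all three conditions must hold at once, so their individual probabilities multiply. The sketch below makes that multiplication concrete; the numbers are invented purely for illustration and assume the three conditions are independent.

```python
# Hypothetical illustration of why trolley-problem collisions are so rare:
# all three conditions must hold at once, so (assuming independence)
# their probabilities multiply. These numbers are invented, not measured.

p_perceive_in_time = 0.01   # driver perceives the imminent collision in time
p_one_fatal_option = 0.001  # exactly one alternative path, also fatal
p_can_react = 0.1           # driver can actually steer into the alternative

p_trolley_scenario = p_perceive_in_time * p_one_fatal_option * p_can_react
print(f"{p_trolley_scenario:.7f}")  # 0.0000010 -- about one in a million
```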

“A NHTSA study concluded that driver error is the critical reason for 94 percent of crashes.”

But sadly, unlike trolley problems, fatal collisions are not rare. The National Safety Council estimates that about 40,200 Americans died on the highway in 2016, a 6 percent increase over the previous year. By comparison, about 40,610 women in the US will die from breast cancer this year, as estimated by the American Cancer Society. A NHTSA study concluded that driver error is the critical reason for 94 percent of crashes. Policymakers need to keep the real causes of preventable highway deaths, like alcohol and texting, in mind to save lives.

2. Autonomous Vehicles Will Make Them Even Rarer

To the extent that trolley problem scenarios exist in the real world, AVs will make them rarer, not more frequent. One might think that, since AVs will have superior perception and decision-making capacities and faster reaction times, an AV might be able to make trolley problem-like choices in situations where a human driver wouldn’t. But those same advantages will also enable an AV to avoid a collision entirely—or reduce the speed and severity of impact—when a human driver wouldn’t.

Unlike the track-bound trolley, an AV will almost never be restricted to two discrete paths, both of which involve a collision. AVs are equipped with sensors that provide a continuously updated, three-dimensional, 360-degree representation of the world around the vehicle, enabling them to identify, and act on, many alternative paths. More importantly, since AVs are never drunk, drowsy, or distracted, they are less likely to be in near-collision situations in the first place.
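
To make the contrast with the two-track trolley concrete, consider a minimal sketch of the kind of selection an AV planner performs: scoring many candidate paths rather than choosing between two. Everything here is hypothetical; the maneuvers and risk numbers are invented and do not reflect any real AV system.

```python
# Illustrative only: an AV weighs many candidate paths, not a binary
# trolley-style choice. Maneuvers and risk values are hypothetical.

def choose_path(candidate_paths):
    """Return the candidate path with the lowest estimated collision risk."""
    return min(candidate_paths, key=lambda p: p["collision_risk"])

# A continuously updated world model can yield many options at once;
# only rarely would every one of them involve a collision.
paths = [
    {"maneuver": "brake hard in lane", "collision_risk": 0.02},
    {"maneuver": "swerve left onto shoulder", "collision_risk": 0.00},
    {"maneuver": "swerve right", "collision_risk": 0.40},
]

print(choose_path(paths)["maneuver"])  # -> swerve left onto shoulder
```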

3. There Is Not Much Regulators Can Do about Them

Even if trolley problems were a realistic concern for AVs, it is not clear what, if anything, regulators or companies developing AVs should do about them. The trolley problem is an intensely debated thought experiment precisely because there isn’t a consensus on what should be done.

Generally, if commentators applying the trolley problem to AVs offer any conclusions at all, they propose that AVs should not distinguish among different types of people based on age, sex, or other characteristics. But it doesn’t take a trolley problem to reach that commonsense conclusion.

Focusing on the trolley problem could distract regulators from the important task of ensuring a safe transition to the deployment of AVs, or mislead the public into thinking either that AVs are programmed to target certain types of people or simply that AVs are dangerous.

We are all vulnerable to the tendency to overestimate the likelihood of vivid, cognitively available risks rather than statistically likelier, but less salient, risks. We often neglect the base rate of conventional traffic accidents, even though the statistical risk is high. Associating AVs with deadly trolley collisions could only exacerbate this problem.

Conflating thought experiments with reality could slow the deployment of AVs that are reliably safer than human drivers. Let’s not go down that wrong track when it comes to regulating self-driving cars.

READING ARGUMENTS

1. Iagnemma opens his essay with a “thought experiment.” What is a thought experiment? What does this particular thought experiment allow us to do?

2. According to Iagnemma, the “trolley problem” is the wrong way to think about the safety benefits and risks of self-driving cars. Why?

3. Iagnemma argues that autonomous vehicles will make “trolley problem scenarios” (to the degree that they exist in the real world) rarer, not more frequent. What evidence does he use to support this claim? Do you find it convincing? Why or why not?

4. In paragraph 14, Iagnemma claims, “We are all vulnerable to the tendency to overestimate the likelihood of vivid, cognitively available risks rather than statistically likelier, but less salient, risks.” Do you agree? Explain.

5. Iagnemma concludes by saying, “Conflating thought experiments with reality could slow the deployment of AVs that are reliably safer than human drivers” (para. 15). What does he mean? Is this an effective conclusion? Why or why not?

SHOULD DRIVERLESS CARS KILL THEIR OWN PASSENGERS TO SAVE A PEDESTRIAN?

OLIVIA GOLDHILL

This piece was posted on the website Quartz on November 1, 2015.

Imagine you’re in a self-driving car, heading towards a collision with a group of pedestrians. The only other option is to drive off a cliff. What should the car do?

Philosophers have been debating a similar moral conundrum for years, but the discussion has a new practical application with the advent of self-driving cars, which are expected to be commonplace on the road in the coming years.

Specifically, self-driving cars from Google, Tesla, and others will need to address a much-debated thought experiment called the Trolley Problem. In the original set-up, a trolley is headed towards five people. You can pull a lever to switch to a different track, where just one person will be in the trolley’s path. Should you kill the one to save five?

Many people believe they should, but this moral instinct is complicated by other scenarios. For example: You’re standing on a footbridge above the track and can see a trolley hurtling towards five people. There’s a fat man standing next to you, and you know that his weight would be enough to stop the trolley. Is it moral to push him off the bridge to save five people?

Go Off the Cliff

When non-philosophers were asked how driverless cars should handle a situation where the death of either passenger or pedestrian is inevitable, most believed that cars should be programmed to avoid hurting bystanders, according to a paper uploaded to the scientific research site arXiv this month.

The researchers, led by psychologist Jean-François Bonnefon from the Toulouse School of Economics, presented a series of collision scenarios to around 900 participants in total. They found that 75 percent of people thought the car should always swerve and kill the passenger, even to save just one pedestrian.

Among the philosophers debating moral theory, this solution is complicated by various arguments that appeal to our moral intuitions but point to different answers. The Trolley Problem is fiercely debated precisely because it is a clear example of the tension between our moral duty not to cause harm and our moral duty not to allow bad things to happen.

One school of thought argues that the moral action is the one that produces the maximum happiness for the maximum number of people, a theory known as utilitarianism. Based on this reasoning, a driverless car should take whatever action would save the greatest number of people, regardless of whether they are passengers or pedestrians. If five people inside the car would be killed in a collision with a wall, then the driverless car should continue on even if it means hitting an innocent pedestrian. The reasoning may sound simplistic, but the details of utilitarian theory, as set out by John Stuart Mill, are difficult to dispute.
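
Stated as a decision rule, the utilitarian position is easy to express. The following sketch illustrates only the rule Goldhill describes; the action names and casualty counts are invented, and no manufacturer is known to program cars this way.

```python
# Hypothetical illustration of the utilitarian rule: pick whichever action
# results in the fewest total deaths, passenger or pedestrian alike.
# Action names and counts are invented for illustration.

def utilitarian_choice(actions):
    """Return the action that minimizes total deaths, ignoring who dies."""
    return min(actions, key=lambda a: a["passenger_deaths"] + a["pedestrian_deaths"])

actions = [
    {"name": "stay on course", "passenger_deaths": 0, "pedestrian_deaths": 1},
    {"name": "swerve into wall", "passenger_deaths": 5, "pedestrian_deaths": 0},
]

print(utilitarian_choice(actions)["name"])  # -> stay on course (1 death vs. 5)
```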

Who Is Responsible?

However, other philosophers who have weighed in on the Trolley Problem argue that utilitarianism is a crude approach, and that the correct moral action doesn’t just evaluate the consequences of the action, but also considers who is morally responsible.

Helen Frowe, a professor of practical philosophy at Stockholm University, who has given a series of lectures on the Trolley Problem, says self-driving car manufacturers should program vehicles to protect innocent bystanders, as those in the car have more responsibility for any danger.

“We have pretty stringent obligations not to kill people,” she tells Quartz. “If you decided to get into a self-driving car, then that’s imposing the risk.”

The ethics are particularly complicated when Frowe’s argument points to a different moral action than utilitarian theory does. For example, a self-driving car could contain four passengers, or perhaps two children in the backseat. How does the moral calculus change?

If the car’s passengers are all adults, Frowe believes that they should die to avoid hitting one pedestrian, because the adults have chosen to be in the car and so have more moral responsibility.

Although Frowe believes that children are not morally responsible, she still argues that it’s not morally permissible to kill one person in order to save the lives of two children.

“But with enough driverless cars on the road, it’s far from implausible that software will someday have to make a choice between causing harm to a pedestrian or passenger.”

“As you increase the number of children, it will be easier to justify killing the one. But in cases where there are just adults in the car, you’d need to be able to save a lot of them—more than ten, maybe a busload—to make it moral to kill one.”

It’s Better to Do Nothing

Pity the poor software designers (and, undoubtedly, lawyers) who are trying to figure this out, because it can get much more complicated. What if a pedestrian acted recklessly, or even stepped out in front of the car with the intention of making it swerve, thereby killing the passenger? (Hollywood screenwriters, start your engines.) Since driverless cars cannot judge pedestrians’ intentions, this ethical wrinkle is very difficult to take into account in practice.

Philosophers are far from a solution despite the scores of papers that debate every tiny ethical detail. For example, is it more immoral to actively swerve the car into a lone pedestrian than to simply do nothing and allow the vehicle to hit someone? Former UCLA philosophy professor Warren Quinn explicitly rejected the utilitarian idea that morality should maximize happiness. Instead, he argued that humans have a duty to respect other persons, and so an action that directly and intentionally causes harm is ethically worse than an indirect action that happens to lead to harm.

Of course, cars will very rarely be in a situation where there are only two courses of action, and the car can compute, with 100 percent certainty, that either decision will lead to death. But with enough driverless cars on the road, it’s far from implausible that software will someday have to choose between causing harm to a pedestrian and causing harm to a passenger. Any safe driverless car should be able to recognize and balance these risks.
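
Goldhill’s point that software must “recognize and balance these risks” can be pictured as weighing probabilities rather than certainties. The sketch below is a hypothetical expected-harm calculation; the probabilities and severity weights are invented for illustration and do not describe any real system.

```python
# Hypothetical sketch of balancing risks under uncertainty: each action has
# possible outcomes given as (probability, severity) pairs, and the software
# picks the action with the lowest expected harm. All numbers are invented.

def expected_harm(outcomes):
    """Sum of probability x severity over an action's possible outcomes."""
    return sum(p * severity for p, severity in outcomes)

actions = {
    "brake in lane": [(0.30, 0.2), (0.02, 1.0)],  # likely minor harm, rare fatality
    "swerve": [(0.10, 1.0)],                      # small chance of a fatality
}

safest = min(actions, key=lambda name: expected_harm(actions[name]))
print(safest)  # -> brake in lane (expected harm 0.08 vs. 0.10)
```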

Self-driving car manufacturers have yet to reveal their stance on the issue. But, given the lack of philosophical unanimity, it seems unlikely they’ll find a universally acceptable solution. As for philosophers, time will tell if they enjoy having their theories tested in a very real way.

READING ARGUMENTS

1. In her essay, Goldhill cites a 2015 study that asked nonphilosophers whether driverless cars should be programmed to protect passengers or to protect pedestrians. How did they respond? How would you respond?

2. What is the ethical theory known as utilitarianism? What results when you apply utilitarian theory to the trolley problem? Why do some philosophers “argue that utilitarianism is a crude approach” (para. 9)?

3. What is this essay’s thesis? How would you state it in your own words?

4. According to Goldhill, car manufacturers are not likely to find a solution to the trolley problem. What real-world issues complicate this problem?

5. What is the purpose of paragraph 18 in this essay? How does it further the writer’s argument?

AT ISSUE: SHOULD WE EMBRACE SELF-DRIVING CARS?

1. Both Iagnemma and Goldhill write about the ethical dilemmas associated with self-driving cars. Which writer’s argument do you find more convincing? Does either (or do both) of these articles change your view of autonomous vehicles? Why or why not?

2. Self-driving cars are already in use in some places. Do you think that car manufacturers, government regulators, insurance companies, and others need to work out most (or all) of the ethical implications of driverless cars before these vehicles become widespread? Do you think they will be able to resolve these issues? What regulations would you institute to make sure these cars are safe?

3. These two essays discuss the ethical implications of driverless cars with regard to safety. What other issues, problems, or difficult questions do autonomous vehicles raise for you—and for society as a whole?

WRITING ARGUMENTS: SHOULD WE EMBRACE SELF-DRIVING CARS?

How do you view the future of self-driving cars? Do you think they are a good idea? Do their advantages outweigh their disadvantages? What do you see as their most serious problem? Write an essay in which you argue for or against the need for self-driving cars, making sure that you answer these questions.