Abstract
Self-driving cars are no longer confined to the realm of sci-fi; a variety of autonomous vehicles are under development by companies around the world. Before they hit consumer markets, though, manufacturers, lawmakers, and society as a whole must decide how cars should behave ethically in the worst-case scenario: a possibly fatal crash.
Introduction
Autonomous vehicles (AVs), commonly known as self-driving cars, are no longer a dream from the realm of science fiction. Partially automated vehicles, which include features such as advanced cruise control and self-parking, already made up 1.3% of car sales in 2016, a share estimated to balloon to 15% by 2025 [1]. Fully autonomous vehicles, powered by a combination of software and sensors that eliminates the need for any passenger intervention in the driving process, are currently being tested across the country by Google, Uber, Tesla, Nissan, and numerous start-ups [2]. If development succeeds, integrating fully autonomous vehicles onto roadways is predicted to increase traffic efficiency, reduce pollution, and eliminate the up to 90% of traffic accidents caused by human error, saving more than 29,000 lives per year in the United States alone [3, 4].
However, at this stage, AVs are far from the levels of safety and sophistication necessary for smooth incorporation into the vehicle fleet. The first passenger fatality in a self-driving car occurred in May 2016, when a Tesla in “autopilot mode” slammed into a large truck it failed to register as an obstacle; the first pedestrian fatality followed two years later, when an Uber vehicle failed to stop as a pedestrian unexpectedly stepped off a median into the street [5]. The extensive, perhaps disproportionate, media coverage of these accidents, along with the public’s typical distrust of new technology, has dampened enthusiasm for AVs: in a recent survey, 78% of Americans said they would “fear” traveling in an autonomous vehicle, and only 19% said they would “trust” one [6].
Clearing the hurdle of lukewarm public opinion will likely fall mostly to marketing departments, but another important factor in public acceptance of AVs is their ethical implications, something engineers and company leaders must consider. According to Karl Iagnemma of self-driving car startup nuTonomy, as of 2016 the company did not have “any procedure for what we would commonly think of as ethical decision making,” and he was not “aware of any other group that [did] either” [7]. While companies may not yet be giving any consideration to the ethics of AVs (a fact that is concerning in and of itself and must almost inevitably change before fully autonomous cars reach the consumer market), plenty of philosophers have begun to weigh in on the morality of the split-second decisions that self-driving cars must make, especially in crash situations.
AVs and the Trolley Problem
A classic ethical thought experiment known as the Trolley Problem, introduced by Philippa Foot and later extended by Judith Jarvis Thomson, can serve as a relatively simple stand-in for autonomous vehicle crashes. In essence, a runaway trolley is about to hit five people tied to the tracks, and the decision maker can throw a switch to divert the trolley onto a parallel track, thereby causing the death of the one person trapped there [8]. This is analogous to a situation in which an AV traveling toward a group of pedestrians on the road must decide either to swerve off the road into some kind of barrier, killing the passenger, or to ram into the pedestrians, killing them instead [9].
The general consensus among both philosophers and laypeople on the Trolley Problem is that it is morally permissible to flip the switch, consciously killing one person to save five, and most people reach the same conclusion on the roughly equivalent decision to swerve in the AV version of the scenario [8, 9]. But as the number and nature of the people in the car and on the road are varied, the majority opinion begins to splinter. An online survey from MIT, dubbed the Moral Machine, went viral a few years ago and collected responses from 2.3 million people around the world on 13 different AV crash scenarios involving pedestrians and passengers with varying characteristics, including age and social class [10]. The survey creators found trends when they divided the responses into three groups of nations sharing certain religious or cultural traditions. For example, respondents from North America and Europe tended to sacrifice older people to save younger ones much more frequently than respondents from Asian countries with heavily Confucian or Islamic cultures [10].
The only decision a majority of respondents could agree upon was saving humans over animals; otherwise, the results indicate that participants each had a slightly different sense of morality in these sorts of crash scenarios, suggesting it would be very difficult to settle on a single moral code for AVs that satisfies the whole world [10]. Additionally, some philosophers believe the “Trolley Problem” approach used in this study is too simplistic to effectively mimic the more complex real-world scenarios AVs will inevitably encounter, and they point out that it does not consider the legal and moral responsibility of the decision makers, a vital component of any feasible AV lawmaking [5]. However, scenarios like those in the Moral Machine are accessible and understandable to those with little formal ethics background, making them useful both as an introduction to the complex web of autonomous vehicle ethics and as a method for gathering data on differences in beliefs about those ethics.
Utilitarian vs. Rights Approach to Accidents
At its core, the main ethical conflict regarding autonomous vehicles is between the interests of the passenger (arriving quickly, cheaply, and safely at their destination) and those of the community as a whole (making sure roads are safe for everyone using them) [4]. One significant ethical approach, which sides more with the community interest, is utilitarianism, famously summarized as “the greatest amount of good for the greatest number of people.” In the case of AVs, philosophers generally agree that a utilitarian approach to crashes would mean minimizing overall casualties by any means necessary, including sacrificing the passenger of the AV to save a larger number of pedestrians, as in the Trolley Problem example above [3, 11]. A few studies have found that laypeople overwhelmingly believe this approach to be the most moral; one in particular concluded that 76% of participants thought it more moral to sacrifice one passenger to save ten pedestrians than vice versa [3, 7]. Despite this, the same studies found that participants, when offered the hypothetical choice between a utilitarian car and one programmed to save its own passengers over any number of pedestrians, were more likely to buy the latter.
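To make this rule concrete, here is a minimal sketch of a utilitarian crash-decision function. The maneuver names and casualty estimates are hypothetical placeholders, not drawn from any real AV system; in practice these numbers would have to come from the vehicle’s perception and prediction software:

```python
# Minimal sketch of a utilitarian crash-decision rule: pick the maneuver
# that minimizes total expected casualties, counting passengers and
# pedestrians equally. All names and numbers here are illustrative.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_passenger_casualties: float
    expected_pedestrian_casualties: float

    @property
    def total_casualties(self) -> float:
        return (self.expected_passenger_casualties
                + self.expected_pedestrian_casualties)

def utilitarian_choice(actions: list[Action]) -> Action:
    """Return the action with the fewest total expected casualties,
    giving no special weight to the vehicle's own passengers."""
    return min(actions, key=lambda a: a.total_casualties)

# Trolley-style example: swerving kills the one passenger,
# staying on course kills five pedestrians.
actions = [
    Action("stay_on_course", 0.0, 5.0),
    Action("swerve_into_barrier", 1.0, 0.0),
]
print(utilitarian_choice(actions).name)  # -> "swerve_into_barrier"
```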
So, it seems that this ethical dilemma is simultaneously a social one: every person wants those around them to behave in a way that would lead to the best global outcome, but they are not motivated to practice that behavior themselves [3]. And while the utilitarian approach to accidents would theoretically save the maximum number of people, consumers would be less likely to actually buy AVs programmed with utilitarianism in mind. Counterintuitively, this may mean that non-utilitarian AVs would be ethically better: since people would be more likely to buy them, more AVs would be on the road, resulting in a greater number of lives saved overall [5].
Additionally, a utilitarian approach to crashes does not take into account more complex factors, including the social value of victims (say they are children, or pregnant women, or Nazis) and whether it is morally worse to actively kill than to kill through inaction [4]. Creating AVs that act on the former may seem more moral at first glance, but any algorithm that makes decisions on the basis of social value must be supported by a complete, quantified scale of social value, and, as the Moral Machine results above demonstrate, not everyone shares the same idea about the relative value of different lives. Furthermore, to implement this sort of decision-making, AVs would have to identify and categorize the people involved in each crash based on social factors that may well not be visible on their person, which is well beyond the capability of current sensor technology.
The question of killing by inaction is slightly more straightforward and can be addressed with the Principle of Double-Effect, developed by Christian philosophers. This principle would advise never swerving an AV, regardless of the number of pedestrians that would otherwise be killed, because swerving would inflict intentional harm on others, while keeping the AV on its original path would produce only “merely foreseen harm,” which is more ethically permissible [9]. Realistically, however, it would likely be difficult for AV companies to justify their cars killing, say, thirty pedestrians to avoid swerving and hitting one, regardless of the theoretical morality.
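For contrast with the utilitarian sketch above, the Double-Effect rule reduces to an even simpler decision function. Again, the maneuver names are hypothetical and this makes no claim to reflect any real system:

```python
# Sketch of the Principle of Double-Effect as a decision rule: the AV never
# trades its planned path for a maneuver that intentionally harms someone,
# no matter the casualty counts. Maneuver names are hypothetical.
def double_effect_choice(maneuvers: list[str],
                         planned: str = "stay_on_course") -> str:
    """Keep the originally planned trajectory if it is available; harm
    along that path counts only as 'merely foreseen', not intended."""
    if planned not in maneuvers:
        raise ValueError("planned trajectory not among feasible maneuvers")
    return planned

print(double_effect_choice(["stay_on_course", "swerve_into_barrier"]))
# -> "stay_on_course", even if staying on course kills more pedestrians
```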
Another ethical framework, the Rights Approach, skews more toward the interests of the individual, in this case the passenger(s) of an AV or each pedestrian on the road. One interpretation of this approach holds that since the main goal of an autonomous vehicle is to move passengers around safely, the passenger has a right to life [12]. But one must also consider the right to life of everyone else on the road: since AV manufacturers have chosen to produce a “dangerous machine,” and passengers have chosen to ride in one, both owe pedestrians a duty of care for their safety [11]. Because passengers know the risk they are taking by riding in an AV, it may be morally right to prioritize the right to life of others on the road, who have taken no such risk. However, if autonomous vehicle manufacturers programmed their cars this way, consumer interest would likely drop immensely, and fewer AVs on the road might mean a net loss of life from the additional casualties caused by human-driven vehicles, the same drawback the utilitarian approach faces [4]. So both approaches run into the same catch-22: programming AVs with the more morally correct approach would end in more loss of life overall, due to consumers’ selfish (though understandable) desire to protect themselves above others.
A Theoretical Alternative
An alternative to the utilitarian and rights approaches to AV accidents can be found in contractarianism, in which people are considered inherently self-interested, and it is presumed that their rational assessment of strategies to maximize their self-interest will result in morally good actions and consent to governmental authority [9]. Using an offshoot of this approach, dubbed a “Rawlsian algorithm” after contractarian philosopher John Rawls, an AV’s software would estimate the probability of survival of each person about to be involved in the crash, then calculate which course of action each person would agree to if they did not know who they would be in the scenario. For instance, consider a situation in which an AV could either hit and kill a pedestrian on one side of the street or swerve and severely injure three on the other side. If the four pedestrians involved were told ahead of time that they would be in the accident but not where on the street they would be standing, they would all rationally choose the second scenario, in which they would have no chance of death. Essentially, this algorithm minimizes the harm for whoever would be worst off in the crash; in this case, the worst outcome, death, would be replaced with the less harmful one of serious injury, thereby “maximizing the minimum” chance of survival [9].
The Rawlsian approach differs from utilitarianism in that a utilitarian approach to this scenario might mean killing the one pedestrian, as this would leave the rest uninjured and therefore result in the maximum amount of good overall for the people involved. Since contractarianism emphasizes “respect for persons as equals” and an “unwillingness to sacrifice the interests of one person for the interests of others,” the outcome of the accident becomes more morally fair for each person in the crash than under either of the approaches discussed earlier [9]. With the rights approach, a pedestrian crash puts the rights of passengers and pedestrians at odds, and a utilitarian approach may violate the right to life of individuals in favor of fewer deaths overall. Thus, the Rawlsian algorithm increases moral fairness beyond these two approaches by attempting to spread the harm more equally, so to speak.
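The maximin rule at the heart of the Rawlsian algorithm can be sketched in a few lines. The maneuver names and survival probabilities below are illustrative assumptions; reference [9] describes the principle, not this particular code:

```python
# Minimal sketch of the "Rawlsian algorithm": for each feasible maneuver,
# estimate every affected person's probability of survival, then pick the
# maneuver that maximizes the worst-off person's chance (the maximin rule).
def rawlsian_choice(outcomes: dict[str, list[float]]) -> str:
    """outcomes maps each maneuver to the survival probabilities of
    everyone involved; return the maneuver whose minimum is highest."""
    return max(outcomes, key=lambda maneuver: min(outcomes[maneuver]))

# Example from the text: hitting one pedestrian kills them outright,
# while swerving severely injures three (assumed 0.7 survival each)
# but leaves no one with zero chance of survival.
outcomes = {
    "hit_one_pedestrian": [0.0, 1.0, 1.0, 1.0],
    "swerve_into_three":  [1.0, 0.7, 0.7, 0.7],
}
print(rawlsian_choice(outcomes))  # -> "swerve_into_three"
```

Note how the same numbers would lead a utilitarian rule to the opposite answer if expected total harm were lower on the other path; maximin cares only about the worst-off individual, never trading one person’s fate for aggregate good.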
However, the Rawlsian algorithm must be weighed against the sophistication of current AV sensor technology. Generally, AVs use GPS for relatively precise position information, accurate to within one meter, supplemented by gyroscopes and accelerometers on all three axes of motion to estimate position when the GPS signal is blocked. For detecting objects in the AV’s surroundings, cameras can be difficult to keep clean and in working order, so cars rely on Light Detection and Ranging (LIDAR) for three-dimensional data about the environment. In essence, LIDAR scans 360 degrees around the car with a series of high-speed, high-power laser pulses, which return a “cloud” of data points that onboard software analyzes to direct movement [13]. For functions requiring more precise control, like parking, lane changing, and gridlock traffic, AVs use radar systems embedded in their sides and bumpers [14].
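As a toy illustration of the GPS-blocked case above, the following sketch propagates a position estimate by integrating accelerometer samples. Real AVs fuse all of their sensors with far more sophisticated estimators (Kalman filters and the like), so this shows only the underlying principle:

```python
# Toy dead-reckoning sketch: propagate the last known (x, y) position and
# velocity from accelerometer samples taken every dt seconds, using simple
# first-order (Euler) integration. Illustrative only; production systems
# fuse gyroscope, accelerometer, LIDAR, and radar with proper filtering.
def dead_reckon(position, velocity, accel_samples, dt):
    """position, velocity: (x, y) tuples; accel_samples: list of (ax, ay)."""
    x, y = position
    vx, vy = velocity
    for ax, ay in accel_samples:
        vx += ax * dt   # v = v0 + a*dt
        vy += ay * dt
        x += vx * dt    # x = x0 + v*dt
        y += vy * dt
    return (x, y), (vx, vy)

# One second of constant 1 m/s^2 forward acceleration from rest, at 10 Hz:
pos, vel = dead_reckon((0.0, 0.0), (0.0, 0.0), [(1.0, 0.0)] * 10, 0.1)
print(pos, vel)  # roughly ((0.55, 0.0), (1.0, 0.0))
```

The weakness of pure dead reckoning is that small accelerometer errors accumulate over time, which is exactly why GPS dropouts can only be bridged briefly.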
With current technology, which is likely still not sophisticated enough for consumer cars (as evidenced by the Uber and Tesla crashes mentioned earlier), it is doubtful that the Rawlsian approach is feasible, as it requires the AV to know the precise location of each person involved in the accident and to accurately estimate the harm each would suffer under any given choice of movement. However, if engineers can develop the necessary sensors and software, the approach seems to strike the best ethical balance between protecting the passenger and ensuring the safety of everyone else on the road. As such, selling fully autonomous vehicles programmed with any other algorithm before sensor technology is advanced enough to enable the Rawlsian approach may be considered morally wrong, as it could result in preventable deaths.
Who’s Liable?
Another essential consideration among the ethical issues surrounding autonomous vehicle crashes is liability. When an AV crashes and kills a pedestrian, who is legally and morally responsible for the death? The two main candidates are the owner/passenger of the AV and the vehicle manufacturer. However, holding a passenger liable for a death caused by inattention would arguably be unjust, since they likely had no chance to intervene before the accident occurred [15]. And if AV passengers were expected to pay attention and intervene to prevent accidents, that would practically defeat the purpose of fully autonomous vehicles in the first place; even then, not everyone possesses the reaction speed to respond in time. The other option is holding the car manufacturer responsible, which seems reasonable, given that the manufacturer would be the source of any flaws in the vehicle’s programming or sensors [15]. But holding manufacturers completely responsible may not be ethically correct either, as doing so would likely slow AV development and thereby result in fewer lives saved. Furthermore, it may be argued that AV passengers still bear some moral responsibility in an accident, as they consciously took the risk of riding in an AV. Therefore, while passengers should not have a moral duty to intervene, the cost of crashes could be shared among all AV owners through mandatory insurance [15]. In essence, this system would split liability between passengers and manufacturers: passengers would pay enough insurance to cover part of the cost of accidents, and manufacturers would be held responsible for the remaining damages.
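As a rough sketch of how such a split might be computed: the 50/50 division and the dollar figures below are arbitrary assumptions for illustration, not a scheme specified in [15]:

```python
# Hedged sketch of the shared-liability scheme: the total cost of AV
# accidents over some period is split between a mandatory per-owner
# insurance pool and the manufacturers. The owner_share fraction is a
# policy choice, assumed here, not prescribed by any source.
def split_liability(total_accident_cost: float,
                    num_av_owners: int,
                    owner_share: float = 0.5):
    """Return (per-owner premium, manufacturers' share) for a given split."""
    owners_total = total_accident_cost * owner_share
    per_owner_premium = owners_total / num_av_owners
    manufacturer_total = total_accident_cost - owners_total
    return per_owner_premium, manufacturer_total

# Example: $50M in annual accident costs spread over 1M AV owners:
premium, maker_share = split_liability(50_000_000, 1_000_000)
print(premium, maker_share)  # -> 25.0 (dollars per owner), 25000000.0
```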
Conclusion
Since the ethical issues of autonomous vehicles involve life-and-death decisions, they are understandably complex and controversial, and opinions are heavily influenced by individuals’ personal moral codes. While those codes differ greatly from person to person and place to place, it is essential to have a uniform system of ethics and laws in place for AVs, at least within a single country. Cars cross city and state borders constantly, so inconsistent laws governing the decision-making of AVs would be confusing for autonomous vehicle owners and challenging for law enforcement. Before fully autonomous vehicles reach consumers, then, the companies developing them should collaborate on a code of ethics for their vehicles, ideally some form of the Rawlsian algorithm (implemented once the necessary sensors and software exist), and the national government should develop the appropriate regulations and laws. Otherwise, accidents will likely cause a moral and legal morass that decreases AVs’ popularity, leads to restrictions on their use, and prevents them from reaching their full life-saving potential.
By Teagan Ampe, Viterbi School of Engineering, University of Southern California
About the Author
At the time of writing this paper, Teagan Ampe was a sophomore Computer Science student with an interest in web and mobile app development.
References
[1] “15% of new cars sold worldwide in 2025 will be autonomous vehicles,” Canalys, 07-Dec-2016. [Online]. Available: https://www.canalys.com/newsroom/15-new-cars-sold-worldwide-2025-will-be-autonomous-vehicles. [Accessed: 27-Apr-2019].
[2] “Self-Driving Cars Explained,” Union of Concerned Scientists, 21-Feb-2018. [Online]. Available: https://www.ucsusa.org/clean-vehicles/how-self-driving-cars-work. [Accessed: 07-Mar-2019].
[3] J.-F. Bonnefon, A. Shariff, and I. Rahwan, “The social dilemma of autonomous vehicles,” Science, vol. 352, no. 6293, pp. 1573–1576, Jun. 2016.
[4] J. Fleetwood, “Public Health, Ethics, and Autonomous Vehicles,” American Journal of Public Health, vol. 107, pp. 532–537, Mar. 2017.
[5] S. Nyholm, “The ethics of crashes with self-driving cars: A roadmap, I,” Philosophy Compass, vol. 13, no. 7, May 2018.
[6] A. Shariff, J.-F. Bonnefon, and I. Rahwan, “Psychological roadblocks to the adoption of self-driving vehicles,” Nature Human Behaviour, vol. 1, no. 10, pp. 694–696, Sep. 2017.
[7] E. Ackerman, “People Want Driverless Cars with Utilitarian Ethics, Unless They’re a Passenger,” IEEE Spectrum, 23-Jun-2016. [Online]. Available: https://spectrum.ieee.org/cars-that-think/transportation/self-driving/people-want-driverless-cars-with-utilitarian-ethics-unless-theyre-a-passenger. [Accessed: 07-Mar-2019].
[8] H. M. Roff, “The folly of trolleys: Ethical challenges and autonomous vehicles,” Brookings, 17-Dec-2018. [Online]. Available: https://www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/. [Accessed: 07-Mar-2019].
[9] D. Leben, “A Rawlsian algorithm for autonomous vehicles,” Ethics and Information Technology, vol. 19, no. 2, pp. 107–115, Mar. 2017.
[10] A. Maxmen, “Self-driving car dilemmas reveal that moral choices are not universal,” Nature News, 24-Oct-2018. [Online]. Available: https://www.nature.com/articles/d41586-018-07135-0. [Accessed: 06-Mar-2019].
[11] F. S. D. Sio, “Killing by Autonomous Vehicles and the Legal Doctrine of Necessity,” Ethical Theory and Moral Practice, vol. 20, no. 2, pp. 411–429, Feb. 2017.
[12] M. Drury, J. Lucia, and V. Caruso, “Autonomous Vehicles: An Ethical Theory to Guide Their Future,” The Lehigh Review, vol. 25, pp. 38–49, 2017.
[13] B. Schweber, “The Autonomous Car: A Diverse Array of Sensors Drives Navigation, Driving, and Performance,” Mouser Electronics. [Online]. Available: https://www.mouser.com/applications/autonomous-car-sensors-drive-performance/. [Accessed: 06-Mar-2019].
[14] B. Marshall, “Lidar, Radar & Digital Cameras: the Eyes of Autonomous Vehicles,” DesignSpark, 21-Feb-2018. [Online]. Available: https://www.rs-online.com/designspark/lidar-radar-digital-cameras-the-eyes-of-autonomous-vehicles. [Accessed: 09-Mar-2019].
[15] A. Hevelke and J. Nida-Rümelin, “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis,” Science and Engineering Ethics, vol. 21, no. 3, pp. 619–630, Jun. 2014.
Related Links
https://www.nature.com/articles/s41586-018-0637-6
https://www.ted.com/talks/patrick_lin_the_ethical_dilemma_of_self_driving_cars?language=en