The Ethics of Self-Driving Cars

Abstract

Self-driving cars process vast amounts of sensory information in fractions of a second. This processing speed allows them to make an informed decision about how to act in the moments before an accident. In scenarios where casualties are unavoidable, this capability produces an ethical dilemma in determining who should survive, raising questions about how the value of a life should be calculated. Ultimately, because all lives are equal and no individual should have the power to decide the fate of others’ lives, self-driving cars are unethical.


Introduction

Imagine you’re driving home after a long day of work and approach an intersection displaying a green light. As you get closer, however, a family runs across the intersection illegally, and you judge that you will not be able to stop in time. You’re left with two choices: swerve to the side and hit a barrier, likely ending your life, or run into the family, killing two children and four people in total. What do you do? Given only a split second, you likely don’t have the chance to make an informed judgment about whose life is more valuable and how you should act in the moment. Now imagine instead that a computer controlled your car and could evaluate all conditions in a fraction of a second and make a calculated decision on how to respond. Should the car swerve into the barrier in order to save the children’s lives? What if a homeless man were crossing the street instead of a family? These questions reveal the ethical dilemma that arises from the car’s ability to collect and process data in the moments before an accident. Ultimately, self-driving cars are unethical because accident-avoidance programming inherently places a higher value on some lives over others and gives programmers the power to decide whose lives should be considered more valuable.

About Self-Driving Cars

A self-driving car is a vehicle that doesn’t require human operation to function safely and travel to its destination. It uses a variety of sensors to observe the conditions surrounding the car and analyzes that information with software algorithms that determine the best course of action for the car to take. The car then repeats this process continually, as self-driving cars are constantly taking in new information from their changing surroundings [1]. The technology behind self-driving vehicles is not completely new, however: it has already been used for years in blind-spot monitoring, lane-keep assistance, and forward collision warning.
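To make that continuous sense-and-respond cycle concrete, the sketch below outlines the kind of control loop described above. It is a minimal illustration only; every name in it (read_sensors, interpret, plan, act) is a hypothetical placeholder, not any manufacturer’s actual software.

```python
# A minimal, hypothetical sketch of the continuous sense-plan-act cycle
# described above. Every name below is an illustrative placeholder, not a
# real vehicle API.

import time

CYCLE_SECONDS = 0.05  # assumed rate: re-evaluate the surroundings 20x per second

class StubVehicle:
    """Stand-in for a real vehicle, so the loop below can actually run."""
    def read_sensors(self):  return {"camera": [], "lidar": [], "radar": []}
    def interpret(self, readings):  return {"pedestrians": [], "lanes": []}
    def plan(self, scene):  return {"steer": 0.0, "throttle": 0.1, "brake": 0.0}
    def act(self, action):  pass  # a real car would command its actuators here

def drive_loop(vehicle, cycles=20):
    """Repeat the sense -> interpret -> decide -> act cycle with fresh data."""
    for _ in range(cycles):
        readings = vehicle.read_sensors()    # cameras, lidar, radar, GPS, ...
        scene = vehicle.interpret(readings)  # detect lanes, vehicles, pedestrians
        action = vehicle.plan(scene)         # choose steering, throttle, brake
        vehicle.act(action)                  # send commands to the actuators
        time.sleep(CYCLE_SECONDS)            # then start over with new data

drive_loop(StubVehicle())
```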

Companies such as Alphabet have been working to produce completely autonomous self-driving vehicles ready for the consumer market. In December 2018, Waymo, Alphabet’s self-driving technology company, launched the first commercial self-driving car service in Phoenix, Arizona. Waymo’s autonomous vehicles are currently being tested with a group of 400 riders and still require a Waymo-trained driver to oversee the car’s functions. Once Waymo has ensured the feasibility of the program, it hopes to expand the service to more people and remove the Waymo-trained driver. Throughout this process, however, Waymo has been unclear about how it has handled the clear ethical dilemma in cases surrounding accidents [2]. Given the rapid rate at which autonomous technology is advancing and completely self-driving vehicles are emerging in commercial markets, it’s imperative to delve into the ethical dilemma of accident avoidance on the road.

The Dilemma: Calculating the Value of a Life

Because self-driving cars collect and process data so quickly, new predicaments arise from their ability to consider more decision-making factors, especially in calculating the value of a life. A study by the University of Michigan found that, in addition to registering the age, sex, and appearance of the individuals involved, self-driving cars can even predict future pedestrian movements with machine learning technology. For example, if a pedestrian is playing with their phone, the car recognizes that the pedestrian is distracted and can predict that they may make a mistake such as stepping into moving traffic [3]. Additionally, self-driving cars can react in 0.5 seconds, much faster than the approximately 1.6 seconds humans require while driving [4]. Together, these features mean that self-driving cars can detect possible accidents before they occur and react to them much more quickly than humans can. While these features make self-driving cars safer, they also force self-driving cars to make accident-avoidance decisions that humans otherwise couldn’t make. When a car can’t stop in time to avoid an accident and casualties are inevitable, it must be programmed to decide who should be saved and who should be struck. In turn, this dilemma suggests an inherent need to calculate the relative value of a life, which could be based on a variety of factors such as youth or fitness. Should a self-driving car even act on this value to determine who will be the victim of an accident?
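The practical weight of that 1.1-second difference is easy to quantify. The short calculation below shows how far a car travels before any braking even begins under each reaction time; the 20 m/s speed (about 45 mph) is an assumed value chosen purely for illustration.

```python
# Worked example of the reaction-time figures cited above: the distance a car
# covers before braking even begins. The travel speed is an assumed value.

HUMAN_REACTION_S = 1.6     # approximate human reaction time while driving [4]
COMPUTER_REACTION_S = 0.5  # reported self-driving system reaction time [4]
SPEED_MPS = 20.0           # assumed speed: 20 m/s, roughly 45 mph

human_dist = SPEED_MPS * HUMAN_REACTION_S        # 32.0 m traveled before reacting
computer_dist = SPEED_MPS * COMPUTER_REACTION_S  # 10.0 m traveled before reacting

print(f"Human driver:     {human_dist:.1f} m before any braking")
print(f"Automated system: {computer_dist:.1f} m before any braking")
print(f"Difference:       {human_dist - computer_dist:.1f} m of extra margin")
```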

According to the fairness and justice approaches to ethics, all people in this crash scenario should be treated equally. Philosopher John Rawls explains the fairness approach by claiming that no individual deserves to be born into a certain socioeconomic group, have a certain sex or race, or be naturally gifted at something. Therefore, he declares, these personal features are morally arbitrary; Rawls’s first principle of justice states that “each person has the same indefeasible claim to a fully adequate scheme of equal basic liberties” [5]. According to this theory of fairness, no life is more valuable than another: everyone is equally deserving of life. By extension, it is ethically wrong for self-driving cars to consider extrinsic qualities when determining how to act in a situation where death is inevitable. A self-driving car can neither judge one life to be more valuable than another based on the information it collects nor act on such a judgment in determining the victim of an accident.

However, while individual lives can’t be valued based on their background, another ethical approach suggests they can be valued by quantity. Utilitarian ethics evaluates the consequences of a situation in order to minimize negative outcomes and maximize positive ones [6]. Philosopher Jeremy Bentham built utilitarianism around the basic principle of greatest welfare, claiming that the most ethical act is the one that does the greatest amount of good and the least amount of harm [7]. From a utilitarian view, self-driving cars should act to save the most lives, reducing harm and producing the best outcome for the greatest number of people. On this reasoning, if given a choice between steering into a path with multiple people and a path with a single pedestrian, the lives in the former path would outweigh the life in the latter. However, in a scenario where the same number of lives is at stake in each possible outcome, counting casualties offers no answer, and it is ethically wrong to determine who should be killed. This forces the same dilemma of using individuals’ features to choose a victim, making self-driving cars unethical.
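To make the argument concrete, the sketch below shows a minimal, hypothetical utilitarian selection rule and the tie case that breaks it. It illustrates the reasoning above only; it is not drawn from any actual vehicle’s accident-avoidance code.

```python
# A hypothetical sketch of the utilitarian rule discussed above: choose the
# path expected to harm the fewest people. The interesting case is the tie,
# where casualty counts alone cannot pick a victim.

def choose_path(paths):
    """Given {path_name: expected_casualties}, return the least-harm path."""
    least_harm = min(paths.values())
    candidates = [name for name, harm in paths.items() if harm == least_harm]
    if len(candidates) == 1:
        return candidates[0]  # the utilitarian answer is unambiguous
    # Tie: every remaining path harms the same number of people. Any rule
    # applied here (age, fitness, a coin flip, ...) is exactly the dilemma
    # described in the text.
    raise ValueError(f"no utilitarian answer: {candidates} harm equally")

print(choose_path({"swerve": 1, "straight": 4}))  # -> swerve
try:
    choose_path({"swerve": 1, "straight": 1})
except ValueError as err:
    print(err)  # the tie forces a choice that casualty counts cannot make
```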

Who Makes the Decision?

The heart of the issue with self-driving cars lies in the programming of the cars themselves. Instructions on how a given company’s cars should react in the case of an accident are written into the vehicles by a small group of staff. In other words, the fates of the different parties in an accident rest entirely in these programmers’ hands. Millions of people can be affected by the moral convention that a small group within a large corporation believes is right. So what makes these individuals qualified to calculate the value of lives for an entire society and dictate how millions of cars will operate in a scenario where a split-second decision could end people’s lives?

The answer is that these individuals have no right to decide on a moral code that dictates the lives of all members of society. According to the rights approach to ethics, humans have inalienable rights that no individual may infringe upon. The Declaration of Independence outlines these basic rights as central to the moral code of the United States, proclaiming that “all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness” [8]. John Locke, a famed philosopher who favored the rights approach, further affirmed that denying others the right to life is unethical: “There cannot be supposed any such subordination among us that may authorize us to destroy one another, as if we were made to one another’s uses” [9]. Under this rights approach, every person has the basic right to life; in other words, it is ethically wrong for one person to take that right away from another. When a programmer writes self-driving software that intentionally makes a decision with a fatal outcome in a crash, regardless of whether the killing could have been avoided, they commit an act that violates the right to life. Programmers of self-driving cars therefore act unethically by writing code that provides for an option to end another’s life.

Conclusion

Unlike humans, autonomous vehicles can act in an instant and evaluate all of the sensory information available before a car accident. They can be programmed to act a certain way based on this information, naturally producing an ethical dilemma: should cars be programmed to save certain lives based on a predetermined value? The answer is no: all people are created equal, and such programming could ultimately result in groups of individuals being unfairly targeted, which is both unethical and illegal. While self-driving cars may initially seem like the future of driving, programming them in this way ultimately compromises the ethical obligations that engineers hold.

By Isabel Yarwood-Perez, Viterbi School of Engineering, University of Southern California


About the Author

At the time of writing this paper, Isabel Yarwood-Perez was a sophomore studying Mechanical Engineering and involved in a variety of clubs at USC, including Science Outreach, the AUV Design Team, and the Society of Women Engineers.

References

[1] “How do Self-Driving Cars Work?,” IoT For All, 05-Oct-2018. [Online]. Available: https://www.iotforall.com/how-do-self-driving-cars-work/. [Accessed: 14-Mar-2019].

[2] M. DeBord, “Waymo has launched its commercial self-driving service in Phoenix – and it’s called ‘Waymo One’,” Business Insider, 05-Dec-2018. [Online]. Available: https://www.businessinsider.com/waymo-one-driverless-car-service-launches-in-phoenix-arizona-2018-12. [Accessed: 19-Mar-2019].

[3] K. Beukema, “Teaching self-driving cars to predict pedestrian movement,” University of Michigan News, 14-Feb-2019. [Online]. Available: https://news.umich.edu/teaching-self-driving-cars-to-predict-pedestrian-movement/. [Accessed: 26-Mar-2019].

[4] A. Marshall, “Puny Humans Still See the World Better Than Self-Driving Cars,” Wired, 06-Mar-2018. [Online]. Available: https://www.wired.com/story/self-driving-cars-perception-humans/. [Accessed: 26-Mar-2019].

[5] L. Wenar, “John Rawls,” Stanford Encyclopedia of Philosophy, Apr. 2017.

[6] Santa Clara University, “A Framework for Ethical Decision Making,” Markkula Center for Applied Ethics. [Online]. Available: https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/a-framework-for-ethical-decision-making/. [Accessed: 27-Mar-2019].

[7] “Jeremy Bentham,” The Basics of Philosophy, 2019. [Online]. Available: https://www.philosophybasics.com/philosophers_bentham.html. [Accessed: 26-Mar-2019].

[8] The Declaration of Independence, 1776. Literal print. Washington: Govt. Print. Off., 1921.

[9] J. J. Jenkins, “Locke and Natural Rights,” Philosophy, vol. 42, no. 160, p. 149, 1967.

Related Links

https://www.ted.com/talks/patrick_lin_the_ethical_dilemma_of_self_driving_cars/up-next

https://www.vox.com/future-perfect/2018/11/9/18072678/self-driving-cars-philosophy-safety-trolley-problem-mit

https://www.washingtonpost.com/science/2018/10/24/self-driving-cars-will-have-decide-who-should-live-who-should-die-heres-who-humans-would-kill/