Abstract
Deepfakes are a form of AI-generated media that can simulate a person’s likeness and words in video and audio. The videos of Tom Cruise performing magic tricks that have surfaced on TikTok are prevalent examples of deepfakes; the actor has never posted such content on any of his official social media accounts and seems to be unaware of their existence. The possible uses of deepfakes are endless, as long as video and/or audio data of a subject can be obtained. This reveals ethical concerns surrounding the lack of regulation of deepfake technology. Engineers have a duty to address these concerns and improve the practices involved in deepfake research.
Introduction
Did you know that Tom Cruise has posted videos of himself performing magic tricks on the social media platform TikTok? If not, don’t be alarmed; neither does he. “DeepTomCruise” is an account dedicated to creating deepfake videos of the actor doing things he himself may never have done. The account successfully displays what deepfakes are: synthetic or “fake” media developed using existing video or audio files and deep learning algorithms.
However, the applications of deepfakes surpass simple imitations of actors on social media. Deepfakes can be used to animate portraits, digitally revive the dead, and much more. The unchecked power of deepfakes reveals the need to establish limitations. The digital resurrection of the dead, the misattribution of words to individuals, and the manipulation of visual and audio data pose serious ethical concerns about identity, privacy, and ownership. Due to the intricate nature of these issues, it is essential to examine deepfakes through two ethical frameworks: consequentialism and deontology. Together, these frameworks help highlight the unethical nature of deepfakes.
Consequentialist Approach
Consequentialism assesses whether deepfakes are morally “good” based on their overall impact on society. When beneficial outcomes outweigh the adverse ones, an action is deemed moral [1]. Owing to their misleading nature, deepfakes have been used for a multitude of morally corrupt actions [2]. For example, Russia harnessed deepfake technology to falsify a video of the Ukrainian president declaring that Ukraine was surrendering [3]. The video was clearly manipulated, but had it been more carefully crafted, it could have deceived an entire country into surrendering during wartime.
Furthermore, deepfakes can undermine the credibility and authority of judicial institutions [4]. For example, in a March 2021 custody dispute in the UK, a parent attempted to submit deepfaked video evidence of the other parent making violent threats. In that case, the video was exposed as a deepfake, but as the technology continues to evolve, such fabrications are becoming harder to distinguish from reality. In the near future, deepfaked evidence could easily be accepted as fact. This is concerning; in more extreme situations such as murder or capital punishment trials, deepfakes could be used to frame someone for a crime they didn’t commit, to induce false witness testimony, or even to wrongly sentence someone to death [5].
In all of these cases, the lack of restrictions on deepfake technology presents catastrophic risks to society. Thus, under the consequentialist approach, deepfakes are unethical. Public opinion data support this notion. According to a 2019 survey conducted by the Pew Research Center, about 63% of U.S. adults believed that deepfakes generated a “great deal of confusion” [6]. Although confusion alone isn’t directly harmful, it can lead individuals to make poor decisions with damaging outcomes. Further, 77% of those surveyed supported restricting the access to and publication of deepfakes [6].
However, the ethicality of deepfakes cannot be determined by consequentialism alone, because consequentialism fails to account for the moral context of an action. Consider deepfake pornography, which makes up 96% of all existing deepfakes [7]. If someone created deepfake pornography and kept it private, no party would be directly harmed [8]. Under a consequentialist lens, there would then be no ethical objection, because the action leads to no negative consequence. Does this mean that creating non-consensual deepfake pornography is acceptable? Pornographic content created without the consent of the involved parties is widely recognized as wrong, and in many jurisdictions illegal, which reveals a large oversight in the consequentialist approach. Therefore, another ethical viewpoint is necessary to truly understand why deepfakes are unethical.
Deontological Approach
Deontology determines the morality of an action based on existing rules and duties, regardless of the action’s consequences [9]. For instance, stealing is wrong under deontology because it violates the rule that one must not take what belongs to another. Even if someone were to steal food for their starving family, the theft would be considered immoral under this ethical approach. When applying deontology to deepfakes, the morality of deception must be decided, because the technology uses existing video and audio files to create media that differs from real events. Since deepfakes distort reality, they can be deceptive under certain conditions.
Although no law explicitly prohibits deception, it is generally accepted as an immoral action. Immanuel Kant, the German philosopher who pioneered deontological ethics, believed that deception is always wrong because it goes against one’s moral duty to respect others as rational beings [10]. Thus, according to deontology, a deepfake created with the intention of deceiving or spreading falsehoods is morally suspect. A deepfake is only ethical when its deceptive qualities are removed, which can only be achieved when all involved parties consent to being in the deepfake and viewers understand that the media they are watching is not real. For instance, using deepfakes to grant speech to people who have permanently lost their voices would not be seen as immoral. As long as it was known that the voice in the video did not belong to the individual and the person whose voice was being used had consented, there would be no deceptive qualities in the deepfake. Unfortunately, current deepfake regulation does not mandate the transparency needed to ensure that the technology is ethical, and most deepfakes fail to meet these criteria.
Value Neutrality of Technology
Value neutrality holds that a given piece of technology is neither good nor bad in itself. However, both consequentialism and deontology stipulate that deepfakes are unethical, which opposes the theory of technology’s neutral value. To understand this concept, think of an ax. Axes were designed to cut down trees for wood, a resource that is in high demand. Suppose that instead of its normal use, someone used an ax to murder an innocent person. Value neutrality says that only the murderer is in the wrong; the ax itself is not liable. While this principle holds up when applied to simpler technology, it falls apart when accounting for AI-based technology. This can be seen in the case of autonomous vehicles (AVs). Imagine that a person was killed in an accident caused solely by a malfunction in the AV. In this scenario, the person cannot be blamed because they had no control over the situation; they were merely a trusting passenger. Yet, when assuming value neutrality, the car cannot be blamed either. It’s simply a tool.
This is alarming because, clearly, the AV is at fault in this situation. More specifically, the error in the AV’s code would be responsible for the person’s death, which makes the engineers who created the car liable. This emphasizes the dangers of assigning value neutrality to AI. Doing so overlooks engineers’ crucial role in creating technology and fails to consider that AI is easily influenced by the biases of engineers [11]. This is problematic because it prevents engineers from taking accountability for the harmful implications of their creations [12]. This is already happening within the deepfake industry. One of the engineers behind “FakeApp” has stated that it’s not “right to condemn the technology itself which can of course be used for many purposes, good and bad” [13]. This response violates the NSPE Code of Ethics, which states that engineers are to prioritize the “…safety, health and welfare of the public…” [14]. By leaving it to the user to decide how to wield deepfake technology, the engineer creates the potential for harm to be brought upon others, even if that harm was never intended.
Another drawback of adopting a value-neutral mindset in engineering is its potential to discourage engineers from asking moral questions about their work [15]. Without having to take accountability for their projects, engineers don’t have to think critically about them. This can be especially problematic in a fast-paced working environment where the pressure to deliver products quickly may override considerations about the potential negative consequences of a technology. By avoiding responsibility and resisting discussions about values, engineers may fail to recognize the harm that deepfakes can cause. Instead, engineers should take a more active role in considering the ethical implications of their work and hold themselves accountable for the potentially dangerous misuse of deepfakes.
It can be argued that engineers cannot bear the sole moral responsibility for the dangers presented by deepfakes. Even when the most thorough cost-benefit analysis has been conducted, it is impossible to predict every negative consequence of a product. Given that societal values are dynamic, it would be challenging to ensure that deepfake technology is not misused while also being accepted by society. The unpredictability of the technology and the dynamic nature of societal values make it challenging to assign full accountability to engineers [14]. If engineers were to bear the sole responsibility for malicious deepfakes, it could discourage them from continuing research and impede their pursuit of innovative ways to revolutionize deepfakes [12].
Even so, engineers have a moral obligation to consider the impact of their work on society. It is not necessary for them to anticipate every possible misuse, but they should recognize the potential harm that their technology could cause to the general public. For example, Nazi death camps are considered to be technology designed with a set of evil purposes [15]. Yet their head architect said that he “…was not concerned with the political and moral meaning of the things he produced” [15]. Not only did he adopt a value-neutral stance toward his work, but he also completely ignored its evil and grotesque consequences. To avoid a similar moral failure, engineers must wholeheartedly attempt to understand any threats that their research may pose to the public. Otherwise, they have broken the NSPE Code of Ethics, which calls on engineers to prioritize the quality of life for all people [14].
Conclusion
Engineers play a crucial role in the development and deployment of deepfake technology. While the idea of value neutrality may seem appealing, it is ultimately a weak stance that fails to consider the potential negative consequences of deepfakes. The unpredictability of deepfakes and dynamic nature of societal values make it challenging to assign sole blame to engineers for any misuse of the technology. However, engineers must take responsibility for their work to mitigate these risks. By prioritizing the safety of life and upholding ethical standards, engineers can ensure that deepfake and AI technology are used safely. It is essential for engineers to recognize the deceptive nature of deepfakes and take a proactive approach to ensure that this technology is used in a responsible and ethical manner.
By Cameron Gomez, Viterbi School of Engineering, University of Southern California
About the Author
At the time of writing this, Cameron Gomez was a Junior majoring in Electrical and Computer Engineering at Viterbi. While she is interested in a variety of research stemming from robotics to VR, she always seeks to understand the perspective of non-engineers and apply those views to her research.
References
[1] “Ethics Explainer: What is Consequentialism?”, The Ethics Centre, 2016. [Online]. Available: https://ethics.org.au/ethics-explainer-consequentialism/#:~:text=Consequentialism%20is%20a%20theory%20that.
[2] Y. Mirsky and W. Lee, “The Creation and Detection of Deepfakes: A Survey,” ACM Computing Surveys, vol. 54, no. 1, pp. 1–41, Jan. 2021, doi: 10.1145/3425780. [Online]. Available: https://doi-org.libproxy1.usc.edu/10.1145/3425780.
[3] B. Allyn, “Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn,” NPR, 2022. [Online]. Available: https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia.
[4] T. Brooks et al., “Increasing Threat of DeepFake Identities,” Homeland Security. [Online]. Available: https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf.
[5] F. Dauer, “Law Enforcement in the Era of Deepfakes,” IACP Police Chief, 2022. [Online]. Available: https://www.policechiefmagazine.org/law-enforcement-era-deepfakes/.
[6] J. Gottfried, “About three-quarters of Americans favor steps to restrict altered videos and images,” Pew Research Center, 2019. [Online]. Available: https://www.pewresearch.org/fact-tank/2019/06/14/about-three-quarters-of-americans-favor-steps-to-restrict-altered-videos-and-images/.
[7] K. Goodwine, “Ethical Considerations of Deepfakes,” Prindle Institute, 2020. [Online]. Available: https://www.prindleinstitute.org/2020/12/ethical-considerations-of-deepfakes/.
[8] C. Öhman, “Introducing the pervert’s dilemma: a contribution to the critique of Deepfake Pornography,” Ethics and Information Technology, vol. 22, no. 2, pp. 133–140, Nov. 2019, doi: 10.1007/s10676-019-09522-1. [Online]. Available: https://doi.org/10.1007/s10676-019-09522-1.
[9] “Ethics Explainer: What is Deontology?,” The Ethics Centre, 2016. [Online]. Available: https://ethics.org.au/ethics-explainer-deontology/.
[10] A. de Ruiter, “The Distinct Wrong of Deepfakes,” Philosophy & Technology, Jun. 2021, doi: 10.1007/s13347-021-00459-2. [Online]. Available: http://dx.doi.org.libproxy2.usc.edu/10.1007/s13347-021-00459-2.
[11] “2.3E: Value Neutrality in Sociological Research,” Social Sci LibreTexts, 2018. [Online]. Available: https://socialsci.libretexts.org/Bookshelves/Sociology/Introduction_to_Sociology/Book%3A_Sociology_(Boundless)/02%3A_Sociological_Research/2.03%3A_Ethics_in_Sociological_Research/2.3E%3A_Value_Neutrality_in_Sociological_Research#:~:text=Value%20neutrality%2C%20as%20described%20by.
[12] D. R. Morrow, “When Technologies Makes Good People Do Bad Things: Another Argument Against the Value-Neutrality of Technologies,” Science and Engineering Ethics, vol. 20, no. 2, pp. 329–343, Aug. 2013, doi: 10.1007/s11948-013-9464-1. [Online]. Available: https://doi.org/10.1007/s11948-013-9464-1.
[13] “Viewpoints – Ethics and Transformative Technologies,” Seattle University, 2020. [Online]. Available: https://www.seattleu.edu/ethics-and-technology/viewpoints/deepfakes-and-the-value-neutrality-thesis.html.
[14] “NSPE Code of Ethics for Engineers,” National Society of Professional Engineers, Jul. 2019. [Online]. Available: https://www.nspe.org/resources/ethics/code-ethics.
[15] B. Miller, “Is Technology Value-Neutral?,” Science, Technology, & Human Values, vol. 46, no. 1, pp. 53–80, Jan. 2020, doi: 10.1177/0162243919900965. [Online]. Available: https://doi-org.libproxy1.usc.edu/10.1177/0162243919900965.
For Further Reading:
Media Literacy in the Age of DeepFakes
Useful knowledge that will keep you safe from falling victim to deepfakes.
Detect DeepFakes: How to counteract misinformation created by AI
An introduction to DetectFakes, an MIT-founded website that can help differentiate deepfakes from real videos.