Killing on Instinct: A Defense of Autonomous Weapon Systems for Offensive Combat

ABSTRACT

Autonomous weapon systems (AWS) are machines that can undertake lethal action in a combat scenario without the direct input of a human controller. This paper justifies the use of AWS through a rule utilitarian lens. Such weapons are more effective than human-operated systems, reduce collateral damage, and are superior at abiding by international law. Common counterarguments against AWS are also presented and refuted.


In July 2018, thousands of artificial intelligence (AI) researchers at the International Joint Conference on Artificial Intelligence came together to call for an international ban on lethal autonomous weapons. Consensus was reached that these weapons are unethical because they reduce accountability when a civilian is killed or property is unjustly damaged [1]. Google researchers took this argument one step further: four thousand employees petitioned the company to eliminate all United States defense projects involving AI, even when those projects did not involve lethal action [2]. The use of robotics and AI in weapon systems is certainly controversial. Intuitively, technologies designed to take, or assist in taking, a human life should not be created without considering the ethical ramifications. However, the cries of “killer robots” conjure scenes from sci-fi movies in which robots kill indiscriminately. This vision does not reflect reality. Autonomous weapon systems (AWS) are major technological advancements that are more effective than human-operated alternatives at performing many of the tasks required of a 21st-century military. At the same time, AWS are also more likely to protect innocent civilians. AWS have not yet advanced to the point where they can replace human soldiers, but the gradual adoption of these systems is necessary to make modern warfare more just, because automated machines are superior to humans at discerning when lethal action should be taken.

AWS are relatively new, and there is no agreed-upon definition of an “autonomous weapon system.” However, most authors use the term to refer to any machine or system of machines that can identify an enemy target and take lethal action without human input [3]. Examples of such machines include aerial vehicles, submersible vehicles, and ground-based vehicles with an attached lethal weapon. Many systems, such as drones, can be operated either autonomously or by a human controller; that is, they can switch between an autonomous and a non-autonomous state. For simplicity, this paper uses the term AWS inclusively to refer to any system that is capable of autonomous lethal action, even if it does not always operate in an autonomous state. Fully automated systems, though theoretically possible, are not currently used by any military. Even partially autonomous weapon systems do not yet have the capacity to execute lethal action without human input. The AWS debate is thus forward-looking; most of the arguments for or against a ban on these weapons are preemptive, since completely automated weapons will not be available for at least another decade [4].

Even though fully autonomous AWS are not yet available, now is the time to have moral discussions about these weapons in order to ensure that they are developed ethically. While there are many different ethical theories, one widely held theory is utilitarianism. Utilitarianism claims that the most moral decision is the one that maximizes positive outcomes, as measured by metrics such as lives saved. Utilitarians justify this approach by arguing that most moral choices have far-reaching impacts, and it is only fair to choose the action that benefits the most people. This rationale is especially applicable to decisions in war: every military operation, especially one involving lethal action, has drastic consequences for multiple international actors. Thus, a utilitarian approach is needed to minimize harm to civilians and maximize military outcomes.

However, utilitarianism does not give militaries free rein to do whatever they please, because they are still bound by international law. The philosopher Michael Walzer argues that war should be conducted under “rule utilitarianism” [5]. Rule utilitarianism is a nuanced approach that seeks to create rules and institutions which, when followed, tend to maximize overall outcomes. For instance, speed limits are rules designed to minimize traffic accidents, which is a moral good according to utilitarianism. In the case of war, the rules are international laws enforced by treaties. International laws may require certain drone strikes to be planned far in advance, giving a military enough time to ensure that civilians will not be harmed and that the sovereignty of neighboring countries is respected.

In order for AWS to be ethical under rule utilitarianism, three criteria must be met. First, AWS must make military action more effective than existing options. Second, AWS must minimize unintended harms (namely civilian casualties) more effectively than existing military options. Finally, AWS must be able to abide by international military laws when making decisions. If all three criteria are met, then AWS will maximize benefits while maintaining legality, which makes the use of such weapon systems ethical under rule utilitarianism.

Regarding the first criterion, AWS are far more capable than humans at performing select tasks, making them crucial for the national security of a state. Robotic systems operated by humans are already used by most militaries, but newer autonomous systems have a major advantage over existing technologies. These systems will eventually perform actions that are difficult for humans, such as landing an aircraft on a carrier in rough seas [6]. Although human pilots can do this, it takes months of training to properly perform the maneuver. An AWS such as an unmanned aerial vehicle (UAV) simply needs to download the appropriate software, once it exists, to do the same thing. AWS will also reduce the number of humans required to execute a mission. Whereas traditional military technology might require several humans to operate one machine, multiple autonomous machines in a single system can easily be operated by one human [7]. The human operator assigns targets and objectives, and the system of machines then autonomously works to carry out the mission. One can imagine a swarm of aerial drones replacing the several humans it takes to operate a single bomber aircraft. Eliminating the need for several humans to fly aircraft will free human operators to make higher-level decisions that AWS cannot make, leading to more effective military action. In addition, the cost of AWS is typically lower than that of older military technologies: one manned F-35 fighter jet costs roughly as much as 20 autonomous Predator drones, and these lower costs make militaries more effective given a limited budget [8].

Though their ability to perform difficult tasks is impressive, the most important moral ramification of AWS is their ability to reduce the external harms of war. Civilian casualties are often the result of imprecise targeting during a bomb strike. However, artificial intelligence decreases the likelihood that targeting is incorrect because AI algorithms have access to massive sets of data [8]. Although humans still surpass machines at general problem solving, AI algorithms can perform specific tasks far better than any human. Computer programs can interpret thousands of pieces of data at once and can process blurry images (such as those provided by drone cameras) more reliably than the human eye. This precision reduces the likelihood that a target will be misidentified, which in turn reduces the overall risk of collateral damage. In fact, AWS even have the potential to stop human-induced collateral damage before it occurs: an autonomous vehicle might sense the heat signatures of civilians in a given area and refuse a call from a human operator to strike that location [9]. Currently, humans act as the final barrier between AWS and lethal force, but the reverse scenario, in which the machine checks the human, is increasingly plausible because computer systems can process sensor data far faster than human operators. Inevitably, autonomous vehicles will lead to some civilian casualties; no military technology is 100% accurate. However, AWS will lead to a net reduction in casualties, which is preferable under a rule utilitarian calculus to any human-controlled alternative currently utilized by militaries around the globe.
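To make this veto scenario concrete, the snippet below is a minimal conceptual sketch, not a description of any fielded system. The sensing function, data structure, and field names are hypothetical assumptions introduced purely for illustration; the sketch simply shows a strike request from a human operator being refused whenever civilian heat signatures are detected near the requested location.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class StrikeRequest:
    """Hypothetical strike request issued by a human operator."""
    location: Tuple[float, float]  # (latitude, longitude) of the requested strike
    operator_id: str               # who issued the request, kept for audit logging

def civilian_heat_signatures_detected(location: Tuple[float, float]) -> bool:
    """Placeholder for onboard sensing. A real system would fuse infrared,
    radar, and other sensor data; this stub conservatively reports that
    civilians are present whenever no sensing result is available."""
    return True

def review_strike_request(request: StrikeRequest) -> bool:
    """Return True only if the requested strike may proceed.

    The machine acts as a second check on the human operator: if civilian
    heat signatures are detected near the requested location, the request
    is refused rather than executed."""
    if civilian_heat_signatures_detected(request.location):
        return False  # veto the operator's call to avoid collateral damage
    return True
```

The essential design choice is the fail-safe default: when the system cannot rule out civilian presence, it withholds force rather than deferring to the operator's request.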

Although robotic technology is nowhere near the point where AWS could be viewed as moral agents, autonomous vehicles are still superior to humans at following moral rules and international laws in the context of war. International military laws are complex, and even if human military personnel understand these laws perfectly, there is no guarantee that they will follow them at all times. Consider the case of US Army Staff Sergeant Robert Bales, who deliberately murdered 16 Afghan civilians while on duty [10]. To be clear, such behavior is the exception, not the norm, among military personnel, especially those from the United States. However, all human beings, whatever their intentions, are biologically limited by their own psychology. Emotions such as fear and aggression can influence the decisions of human soldiers [11]. These emotions are certainly useful for saving one’s life, but they can also prevent a soldier from examining a situation objectively. While humans have the potential to excel at moral reasoning, they are not consistent in exercising such judgment under duress.

By contrast, the computer systems that power AWS are exact, and they can execute moral judgments with great consistency. Robots do not kill at random or for pleasure, and computers do not have emotions to cloud their decisions. Instead of emotions, AWS have international laws hard-coded into their software. These rules act as a final check on any lethal action the AWS is about to undertake, ensuring that the act is within legal bounds [12]. Encoding morals that are intuitive to humans into software is difficult, but it is not impossible. AWS can calculate the probability that a given target is a combatant and execute lethal force only if that probability surpasses a certain threshold [13]. Programming these decisions with precision will be crucial to ensure that AWS properly follow international laws and ethical norms on the battlefield, and the development of such software will likely take at least another decade. However, this is not a reason to ban AWS, but rather a reason to increase investment in the technology to ensure it works properly by the time the use of such weapons becomes widespread.
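To illustrate the kind of gating logic described above, the following is a minimal sketch under stated assumptions: the combatant probability is assumed to come from some upstream classifier, the legal constraints are reduced to two simple boolean checks, and the threshold value is purely illustrative. Real constraint sets, and the classifier itself, are far more complex and are not modeled here.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative threshold only; choosing an actual value is itself an
# ethical and legal question that this sketch does not address.
COMBATANT_PROBABILITY_THRESHOLD = 0.95

@dataclass
class EngagementContext:
    """Hypothetical summary of a potential engagement."""
    combatant_probability: float  # supplied by some upstream classifier
    target_is_surrendering: bool  # example legal consideration (hors de combat)
    protected_site_nearby: bool   # e.g., hospitals or cultural sites

# Each constraint returns True when the proposed action is permissible.
LegalConstraint = Callable[[EngagementContext], bool]

LEGAL_CONSTRAINTS: List[LegalConstraint] = [
    lambda ctx: not ctx.target_is_surrendering,  # persons hors de combat may not be attacked
    lambda ctx: not ctx.protected_site_nearby,   # protected locations must be avoided
]

def lethal_action_permitted(ctx: EngagementContext) -> bool:
    """Final check before any lethal action.

    Force is withheld unless the combatant probability clears the threshold
    AND every hard-coded constraint is satisfied; the default is to refrain."""
    if ctx.combatant_probability < COMBATANT_PROBABILITY_THRESHOLD:
        return False
    return all(constraint(ctx) for constraint in LEGAL_CONSTRAINTS)
```

The key point is that the hard-coded constraints act as a veto layer on top of the probabilistic judgment: a high classifier score alone is never sufficient for lethal action.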

AWS are superior to human beings at maximizing military effectiveness, minimizing collateral damage, and following international law, and they should thus be adopted under a rule utilitarian calculus. However, this argument is not without critics. A common criticism is that weapon systems should be merely semi-autonomous: they can operate independently until a lethal decision must be made, at which point they require human approval before the lethal action is taken. The semi-autonomous approach falls apart, though, because human operators are too slow to respond to certain scenarios. One non-lethal example of such a situation is the Phalanx close-in weapons system (CIWS). The Phalanx CIWS is mounted on United States Navy ships and can shoot down incoming enemy missiles. The process is completely automated; requiring human approval before shooting down an incoming missile could be disastrous [3]. One could imagine a similar example with a lethal AWS, perhaps a ground-based vehicle designed to protect troops from enemy forces. If a human operator were required to approve lethal force, the AWS might not defend against enemy combatants in time, which undermines the very function it was meant to perform.

The second main criticism of AWS is that if an AWS does act without human approval and makes a mistake leading to property damage or casualties, it becomes difficult to determine who is at fault for the accident. Some may blame the mistake on the commanding officer of the operation, while others claim that no one is to blame since the machine is autonomous. Some people even believe that the machine itself could be held liable! There is no legal precedent for holding autonomous machines accountable, and this dilemma is referred to as the “responsibility gap” [14]. This criticism is valid; currently, no legal framework exists to deal with these issues. However, the response is not to ban AWS. Rather, new legal frameworks need to be established prior to the widespread adoption of these weapons. International conventions governing new technologies and weapons are commonplace, and as long as organizations like the United Nations deliberate these issues now, there need not be any legal confusion once AWS are adopted on a large scale.

Technologies that have the power to take a human life should not be adopted without careful thought. However, if developed carefully, AWS have the power to make war safer and more effective. The automation of military technology is inevitable: several of the United States’ adversaries, including China, Russia, and Iran, are heavily investing in AWS [15]. These weapons already exist in primitive form, and it is only a matter of time before they are consistently used in combat. Ultimately, human beings are the ones who will program AWS, and human moral reasoning cannot be discounted. However, the evidence indicates that AWS will make war a safer and more ethical endeavor, which is why the United States should increase its research into and development of this technology.

By Drew Charters, Viterbi School of Engineering, University of Southern California


ABOUT THE AUTHOR

At the time of writing this paper, Drew Charters was an undergraduate junior at the University of Southern California studying Computer Science and Computer Engineering. He is interested in astronautics and hopes to pursue a career in this field.

REFERENCES

[1] C. Jenkins, “AI Innovators Take Pledge Against Autonomous Killer Weapons”, Npr.org, 2018. [Online]. Available: https://www.npr.org/2018/07/18/630146884/ai-innovators-take-pledge-against-autonomous-killer-weapons. [Accessed: Oct- 2018].

[2] M. Spencer, “Google Supports Autonomous Weapons as Employees Resign Over Ethical Concerns”, Medium, 2018. [Online]. Available: https://medium.com/futuresin/google-supports-autonomous-weapons-as-employees-resign-over-ethical-concerns-68209f4f3422. [Accessed: Oct- 2018].

[3] N. Leys, “Autonomous Weapon Systems and International Crises”, Strategic Studies Quarterly, vol. 12, no. 1, 2018. [Online]. JSTOR. [Accessed: Oct- 2018].

[4] S. Goose and M. Wareham, “The Growing International Movement Against Killer Robots”, Harvard International Review, vol. 37, no. 4, 2016. [Online]. JSTOR. [Accessed: Oct- 2018].

[5] B. Orend, “Just and Lawful Conduct in War: Reflections on Michael Walzer”, Law and Philosophy, vol. 20, no. 1, 2001. [Online]. JSTOR. [Accessed: Oct- 2018].

[6] V. Ma, “The Ethics and Implications of Modern Warfare: Robotic Systems and Human Optimization”, Harvard International Review, vol. 37, no. 4, 2016. [Online]. JSTOR. [Accessed: Oct- 2018].

[7] Y. Cheung, “Semi-autonomous collaborative control for multi-weapon multi-target pairing”, International Conference on Control, Automation and Systems 2010, 2010. [Online]. Engineering Village. [Accessed: Oct- 2018].

[8] J. Rabkin and J. Yoo, “‘Killer Robots’ Can Make War Less Awful”, The Wall Street Journal, 2018. [Online]. Available: https://www.wsj.com/articles/killer-robots-can-make-war-less-awful-1504284282. [Accessed: Oct- 2018].

[9] S. Tzafestas, “War Roboethics” in Roboethics: A Navigating Overview, pp. 139-153, Springer. 2015. [Online]. [Accessed: Oct- 2018].

[10] J. Healy, “Soldier Sentenced to Life Without Parole for Killing 16 Afghans”, Nytimes.com, 2013. [Online]. Available: https://www.nytimes.com/2013/08/24/us/soldier-gets-life-without-parole-in-deaths-of-afghan-civilians.html. [Accessed: Oct- 2018].

[11] R. Arkin, “Human Failings in the Battlefield” in Governing Lethal Behavior in Autonomous Robots, pp. 29-36, 1st ed, Taylor and Francis Group. 2009. [Online]. [Accessed: Oct- 2018].

[12] B. Deng, “The robot’s dilemma: working out how to build ethical robots is one of the thorniest challenges in artificial intelligence,” Nature, vol. 523, no. 7558, 2015. [Online]. Available: http://link.galegroup.com.libproxy1.usc.edu/apps/doc/A420781862/AONE?u=usocal_main&sid=AONE&xid=34defdd1. [Accessed: Oct- 2018].

[13] R. Arkin, “Formalization for Ethical Control” in Governing Lethal Behavior in Autonomous Robots, pp. 57-67, 1st ed, Taylor and Francis Group. 2009. [Online]. [Accessed: Oct- 2018].

[14] A. Bianchi and D. Hayim, “Unmanned Warfare Devices and the Laws of War: The Challenge of Regulation”, Sicherheit und Frieden (S+F) / Security and Peace, vol. 31, no. 2, p. 97, 2013. [Online]. JSTOR. [Accessed: Oct- 2018].

[15] M. Horowitz, “The Looming Robotics Gap”, Foreign Policy, no. 206, pp. 62-67, 2014. [Online]. JSTOR. [Accessed: Oct- 2018].