Autonomous Weapon Systems: Our New Soldiers, or a Disaster Waiting to Happen?


Morality in the process of war, or “jus in bello,” is one of the two principles of Just War Theory, a doctrine created by ethicists in an attempt to reduce the immorality of war. Autonomous weapons, which function without human control and have been deployed by the United States military since the Obama administration, may be unable to fulfill the requirements of jus in bello. So, is it possible for fully autonomous weaponry to take human life ethically in the context of war? Because autonomous weapons cannot be held liable for their actions, lack the capacity to make crucial judgments on their own, and would have a profoundly negative effect on civilian populations, this paper argues that they cannot be used ethically and should not be participants in war.


Central to ethics is the principle that humans have “intrinsic value”—in other words, human life is priceless. War entails harm to and the death of human beings, and is thus an ethical violation. However, many argue that war is essential to resolving large-scale conflicts, which likely explains why it still exists today. In order to reduce the immorality of war, ethicists have outlined criteria to justify a war in a doctrine called Just War Theory. This doctrine rests on two principles: jus ad bellum (morality in going to war) and jus in bello (morality in the process of war). There are three main clauses under the jus in bello principle. First, the distinction clause mandates that individuals not involved in a war (“non-combatants”) not be terrorized or attacked. Second, the proportionality clause requires that the benefits of a war outweigh its damage. Third, the responsibility clause requires that agents participating in war be capable of feeling remorse and take responsibility for their actions [1]. If an agent fails to satisfy any of these conditions, that agent violates the ethical code of war.

Introduced under the Obama administration during the U.S. war against terrorism, autonomous weapons (AW) are human-independent systems that apply lethal force to a targeted opponent during war [2]. By the definition of the U.S. Department of Defense, these weapon systems can “select and engage targets without further intervention by a human operator” once activated [3]. The autonomous weapon systems most commonly in testing today are Armed Robotic Vehicles (ARVs), Future Combat Systems (FCSs), and Unmanned Aerial Vehicles (UAVs) [4]. As currently designed, AW systems simply cannot satisfy the ethical requirements of war in any predictable manner. This essay will analyze the ethical restrictions associated with AW systems by contemplating the following question: In the context of war, can fully autonomous weaponry ethically take human life? In weighing both sides of this debate, it is clear that we should not allow AW systems to make this fatal decision, as they cannot be held liable for their actions, lack the capacity to make crucial decisions on their own, and would have a profoundly negative effect on the civilian population.


Human beings often have the laudable characteristic of ethical intuition. While the path actually taken is up to the individual, it is important to acknowledge that humans have at least the capacity to judge the morality of situations. But no matter how many sensors scientists build into AW systems, these robots simply cannot make judgment calls with the innate sense of right and wrong that humans have. Instead, autonomous weapons distinguish between combatants and noncombatants primarily through Artificial Intelligence (AI) sensor technology. AW systems typically search for known weaponry, such as firearms, planes, or tanks; however, their capacity to accurately detect assailants remains dubious, especially compared to the ability of a trained soldier [5].

Consider a situation in which a soldier walks through a battlefield, about to surrender but still wielding a gun. While a trained soldier might recognize a stance of surrender and choose to refrain from using their weapon, an autonomous weapon would likely be programmed to fire at the sight of any armed human. Stance is highly variable and cannot easily be judged by a robot’s programming; an autonomous robot or drone might exercise poor judgment and unjustly maim or kill in this circumstance. Political science professor Michael Horowitz agrees that “a robotic soldier would follow its order, killing the combatant” [3]. Humans commit such errors from time to time as well, but AW systems, unlike humans, cannot be punished; punishment is central to the third clause of Just War Theory, which prioritizes accountability [6].

It is not difficult to imagine other problematic cases. Consider, for example, a child soldier forced to carry an empty rifle. Would an autonomous drone, completely reliant on its programmed instincts, recognize a rifle and open fire? A human soldier, on the other hand, might question the child’s capacity to do harm or factor in other extenuating circumstances. In this case, we would prefer the human’s common sense to the autonomous entity’s adherence to orders. An autonomous weapon may follow its orders too well and shoot the child, as the system lacks the common sense characteristic of most humans.

By no means are these autonomous systems completely ineffective; many pieces of AW technology can recognize the more obvious distinctions between civilians and soldiers. Unfortunately, the ability of AW technology to judge subtle differences is simply lacking, making its use dangerous, especially given the growing dominance of covert operations in warfare today. Many critics maintain that the systems lack an inbuilt “sensibility” for judging a situation, an inadequacy that remains a serious impediment [7]. Therefore, as today’s autonomous warfare technology falls short of the “distinction” clause mandated by Just War Theory, it is unlikely that AW systems can be ethically integrated into warfare.


One can interpret the proportionality clause in two different ways. First, its reference to balancing benefits and damage can be compared to utilitarianism, which similarly focuses on maximizing positive impact. Second, the clause could relate to the Geneva Conventions’ regulation of force and collateral damage [5].

Autonomous systems fail to maximize the balance of benefits over harms that utilitarianism demands. According to the Geneva Conventions, a series of wartime humanitarian agreements made in the mid-1900s, any “incidental harm” inflicted upon civilians is considered unjust [6]. Physical harm dominates the discussion, but the negative psychological effects of deployment into civilian areas can also cause significant harm to noncombatant residents. Anyone, even civilians of the opposing side, would likely find more comfort in a trained human holding a gun than in a lethal, unpredictable machine still in testing. Human soldiers are also less likely to make fatal mistakes, which reduces how threatening they appear. In his 2010 article on autonomous warfare, philosophy professor John P. Sullins notes that humans can become “overwhelmed when attempting to deduce all the possible implications of an act” and produce less than optimal “ethical outcomes” [8]. Still, military officers are highly trained; a slip-up by an overwhelmed soldier or a first-time combatant would likely be a one-time occurrence, unlike the systematic mistakes AW systems might make. Moreover, the constant presence of an autonomous drone’s guns, like terrorization by human soldiers, can seriously traumatize civilians. For example, the Obama administration faced substantial outcry over the ethicality of the U.S. military’s “near-constant” presence in Yemen, which included autonomous technology [7]. Autonomous weapons thus have significant capacity to do psychological harm, and therefore poorly uphold utilitarian principles and fail to fulfill the proportionality clause.

We must also be concerned with whether AW systems can take proportionality into account as war progresses. The Geneva Conventions dictate that parties must administer only as much force as their objective requires, and nothing more, in order to minimize destruction [7]. In order to comply, militaries regularly adjust their use of lethal force to the intensity of the conflict [5]. But as of today, there exists no “sensing capability” for autonomous systems to determine the amount of force needed to accomplish a task, nor a metric to measure needless suffering [6]. The primary goal of an autonomous system is to aim and fire; as with any machine, the system performs only as programmed, nothing more and nothing less. Restraint on the battlefield, though, requires an instinct for when to deviate from the norm, and numerous ethicists and engineers caution that autonomous systems will not have the capacity for this instinct for some time [6]. Though AW systems may become more advanced in the future, they cannot currently be introduced into battle ethically by the standards of both the Geneva Conventions and Just War Theory.


Humans may make errors––more often than AW systems, the systems’ proponents argue––but they can also be held accountable. In the past, soldiers have stood trial and, in extreme cases, paid for their crimes; drones and other technology lack any similar precedent. In 2003, an American autonomous missile battery wrongfully fired at an allied aircraft. The event and all its casualties were promptly swept under the rug, and no one was held accountable for this war crime [3]. Surely, the autonomous weapon cannot hold all of the blame, as it is an insentient being. The issue could have stemmed from any point in production, from the manufacturer’s faulty assembly to the programmers’ bug-riddled code to the “commanders and the operator,” or anyone in between [6]. Behind the creation of any machine lies a long chain of command and creation, so it remains difficult to pinpoint any one person as the culprit. Even if a specific incident could be traced back to a single person, the law typically allows for leeway if the party could not have “reasonably predicted” the situation [3]. Autonomous weaponry does not have the sentience to feel remorse or “pay” for its crimes, nor can its responsibility fairly be projected onto any one person. In this way, any mishap caused by an autonomous weapon will result in a violation of the third clause of jus in bello.

However, AW systems do have a reputation for efficiency and precision, while humans often fall victim to emotion or general human error. Whatever their exact degree of efficiency, autonomous weapons trade accountability for performance. Militaries may understandably become absorbed in completing a mission, but it is equally important to remain within the bounds of ethics; AW systems, unfortunately, tread outside those bounds.

Future Implications

The U.S. is a global leader. With every action it takes, it sets precedents for countries all over the world––and American AW systems could precipitate a worldwide proliferation of robotic weapons. John Canning, head engineer at the Naval Surface Warfare Center, succinctly summarizes the issue: “What happens when another country sees what we’ve been doing [and] realizes it’s not that hard?” Other countries will likely adopt the same technology soon, even if only as a protective measure against the U.S., flooding the battlefield with systems that carry no guarantee of the “same level of safeguards” as today’s AW systems [6]. The scale of this technology will likely increase, as the American military has already pumped over $1 billion into its current autonomous weapons project [9]. Just as atomic bombs escalated war to astronomical proportions during the last century, the world may face a massively destructive robot war in this one.

Another important consideration is that autonomous weapons may lower the barriers of entry to war, with potentially terrible consequences. Since human soldiers have historically been a major part of military action, a certain hesitation has always accompanied the initiation of war. Known as “body-bag politics,” the prospect of humans risking their lives on the battlefield has long inhibited leaders and soldiers alike [6]. The use of AW systems reduces the number of humans on the battlefield and, in turn, the number of military casualties; as a result, there will be less hesitation, if any, in starting a war. While majority-human wars usually necessitate significant weighing of costs and benefits, lower numbers of casualties may lead to wars waged with reckless, unethical abandon.


Just War Theory requires that agents of war distinguish between combatants and noncombatants, cause minimal harm such that the benefits of a war outweigh its damage, and assume responsibility for their actions. For all intents and purposes, autonomous weaponry can do none of these things sufficiently. With so many lives at stake, society must be confident that a technology with this scope and capability is practically flawless; present AW systems therefore pose a considerable risk. Governments must decide to place drones, ARVs, and other autonomous technology on hold; thankfully, this is a decision that will be made by humans, and only humans.

By Sameeksha Agrawal, Viterbi School of Engineering, University of Southern California

About the Author

At the time of writing this paper, Sameeksha Agrawal was a sophomore studying Biomedical Engineering at the University of Southern California, with a minor in Connected Devices and Making. She hoped to do her Master’s at USC through the PDP program and then work at a bio-device tech company. In her free time, she loves to bake, play tennis, or hang out with friends.


[1] A. Moseley, “Just War Theory,” in The Internet Encyclopedia of Philosophy [Online]. Available: [Accessed: 09-Mar-2020].

[2] G. A. Knoops, “Legal, Political and Ethical Dimensions of Drone Warfare under International Law: A Preliminary Survey,” International Criminal Law Review, vol. 12, no. 4, pp. 697-720, Jan. 2012.

[3] M. C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus, vol. 145, no. 4, pp. 25-36, Fall 2016.

[4] R. C. Arkin, “Ethical robots in warfare,” IEEE Technology and Society Magazine, vol. 28, no. 1, pp. 30-33, Spring 2009.

[5] M. T. Klare. (2019, March). Autonomous Weapons Systems and the Laws of War [Online]. Available: [Accessed: 09-Mar-2020].

[6] N. Sharkey, “Cassandra or False Prophet of Doom: AI Robots and War,” IEEE Intelligent Systems, vol. 23, no. 4, pp. 14-17, July-Aug. 2008.

[7] M. J. Boyle, “The legal and ethical implications of drone warfare,” The International Journal of Human Rights, vol. 19, no. 2, pp. 105-126, Feb. 2015.

[8] J. P. Sullins, “RoboWarfare: can robots be more ethical than humans on the battlefield?” Ethics and Information Technology, vol. 12, pp. 263-275, July 2010.

[9] R. Tonkens, “The Case Against Robotic Warfare: A Response to Arkin,” Journal of Military Ethics, vol. 11, no. 2, pp. 149-168, Sep. 2012.
