The Ethics of Stockpiling Zero-Day Vulnerabilities

Abstract

The development of new technology has enabled new methods of warfare. The use of zero-day exploits has propelled this evolution, empowering governments to remotely attack the software systems of their adversaries. Despite the potential military benefits of this weapon, government officials still question whether they should stockpile zero-days, especially when a vulnerability is found in software that everyday citizens use. Analyses of previous zero-day attacks and the US government’s overreaching national security stance reveal that the consequences of stockpiling zero-days outweigh the benefits, and that the practice is a clear violation of the rights of citizens.


Introduction

Imagine a scenario in which tensions between the United States and an adversary have increased due to the threatening growth of nuclear weapons in the adversary’s country. While the US is not yet willing to respond with military intervention, it would like a nonviolent offensive countermeasure. Luckily, the NSA has a solution. A group of NSA cybersecurity experts have found a vulnerability, or ‘zero-day’ exploit, in the software that controls the critical infrastructure of the adversary’s nuclear weapons facility. This means that, if needed, the US can use the zero-day exploit to disrupt and degrade the systems that control the adversary’s production of nuclear weapons. While this seems like an effective tool, there is a catch. The same software that powers the nuclear weapons systems also powers almost every electric car in the US. If another US adversary or malicious group were to find the same vulnerability, it could use that weakness to harm American citizens. This leaves the US government with a tough question: should it hold on to the zero-day exploit in the interest of national security, or should it notify the software company of the vulnerability and lose its leverage in the interest of potentially saving American lives?

Over the past decade, countries have been grappling with the potential benefits and harms of stockpiling cybersecurity vulnerabilities. Even though these vulnerabilities could put a state at an advantage in a national security crisis, the harm done when they are stolen exceeds any benefit gained by accumulating them. Additionally, when the government withholds knowledge that could protect citizens in exchange for security leverage, it violates citizens’ rights and exceeds the bounds of its obligation to uphold national security.

To understand zero-days, it is important to understand their history and how they are used today. Zero-days are a relatively new phenomenon, having existed for only about 30 years. Their growth offers the US an effective cyberweapon, but also poses a potential threat to the safety of American citizens.

Background

The term zero-day was first coined in the 1990s to describe security holes in software [1]. Today, zero-day exploits leverage vulnerabilities in software that the software’s creator is completely unaware of. The term stems from the idea that the developer of the vulnerable software has had zero days to prepare for an attack targeting the vulnerability: programmers cannot produce a patch until the zero-day has been exploited and the damage has already been done.

Zero-days can target any type of software for multiple purposes: hackers can expose or alter data, control devices on a network, or, in the worst scenarios, damage critical infrastructure such as a country’s power grid.

The practice of using zero-days to target systems did not gain traction until the mid-2000s as technology improved and interest in cybersecurity grew. At first, finding zero-days was incredibly rare, with malware security firms only finding a few zero-day exploits per month [1]. As hackers became more skilled, more and more zero-days were discovered. That is when a market for buying and selling zero-days was created, as countries saw their use in gaining a military advantage.

As people began entrusting their lives to their digital devices in the mid to late 2000s, vulnerabilities in the software of those devices became extremely valuable, creating a black market for those assets. Instead of acting as a regulator to disrupt the market for zero-days, the United States became its largest client [2]. Over the past two decades, the United States has purchased millions of dollars’ worth of cyberweapons to build stockpiles that could potentially be used against other countries [3]. Today, the NSA reportedly has a backdoor into almost every major application, social media platform, server, firewall, phone, and laptop.

When a government acquires a zero-day, its options are twofold. The first is to disclose the vulnerability to the owner of the software so it can be patched. The second is to retain the zero-day in a stockpile of weapons to be exploited should national security require it. For example, if the NSA finds a vulnerability in Apple’s iMessage software, it must decide whether to keep that knowledge to itself or inform Apple. The government might withhold disclosure if it needed the vulnerability to tap into the iPhones of adversaries to gather intelligence. But since the many people who use Apple products could be affected by a potential hack, the government risks Americans becoming the victims of an attack. This creates a dilemma in which governments knowingly expose their citizens to the risk of cyberattacks in order to keep the vulnerability alive for leverage.

Stockpiling From a Utilitarian Perspective

Utilitarian ethics is a consequentialist framework concerned with the outcomes of a particular ethical decision. It holds that the consequences of an act are the most important determinant of whether that act is ethical or not.

Jeremy Bentham first developed the principle of utility, defining it as a measure of minimizing pain and maximizing pleasure [4]. Based on the belief that everyone prefers pleasure over pain, Bentham founded the moral principles of utilitarianism. The two main types of utilitarianism are act utilitarianism and rule utilitarianism [4]. Where act utilitarianism focuses on the consequences of a particular action, rule utilitarianism checks utilitarian principles against the notion of universality, evaluating not just the consequences of a single act but also what the consequences would be if everyone were to engage in that same act (i.e., follow the same rule).

The utilitarian approach fits the ethics of stockpiling zero-days well, since the morality of stockpiling is directly determined by its consequences. Either stockpiling is moral because it can serve as a tool for national security, or it is immoral because those same stockpiled vulnerabilities can be used against citizens.

The government stockpiles zero-days for both defensive and offensive national security purposes. Cybersecurity weapons are a uniquely powerful tool since they can be used to target specific systems without the user’s knowledge. Additionally, with the resources that the US has, it has been able to corner the zero-day market, potentially keeping the security vulnerabilities away from malicious groups.

Zero-day exploits are used to gain leverage over other countries or groups. Should the US find itself threatened, a zero-day exploit is a relatively inexpensive and effective way to mitigate that threat. Zero-days serve the best interest of citizens if an exploit gives the country offensive capabilities or prevents attacks against its people. However, the government must weigh several probabilistic metrics when deciding whether to stockpile an exploit [5].

The first metric in deciding whether to store a zero-day is its expected lifetime, or longevity [5]. How long the zero-day is expected to go unpatched by the software vendor determines the potential value of keeping the vulnerability. The second, more important metric is the probability of a collision [5]. A collision occurs when two or more groups other than the software vendor find the same exploit. This metric is critical because if analysts deem the exploit likely to be found by another group, the best option is to disclose the vulnerability to prevent potential harm to citizens using that software. However, a study by the RAND Corporation found that for a given stockpile of zero-days, only 5.7% had been discovered by an outside entity after one year [6].
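The trade-off these two metrics describe can be framed as a rough expected-value comparison. The sketch below is a hypothetical illustration, not any agency's actual model: the function name and the benefit and harm figures are invented for the example, and only the roughly 5.7% one-year collision rate is drawn from the RAND study [6].

```python
# Hypothetical expected-value sketch of the stockpile-vs-disclose decision.
# All figures are illustrative assumptions, not real agency numbers.

def expected_stockpile_value(annual_collision_prob: float,
                             expected_lifetime_years: int,
                             annual_benefit: float,
                             collision_harm: float) -> float:
    """Net expected value of stockpiling over the exploit's expected lifetime.

    Each year the exploit survives unpatched, the stockpiler gains
    annual_benefit but risks collision_harm with probability
    annual_collision_prob (another party independently finding the bug).
    """
    value = 0.0
    p_still_exclusive = 1.0  # probability no one else has found the bug yet
    for _ in range(expected_lifetime_years):
        value += p_still_exclusive * annual_benefit
        value -= p_still_exclusive * annual_collision_prob * collision_harm
        p_still_exclusive *= 1.0 - annual_collision_prob
    return value

# RAND's ~5.7% one-year collision rate [6], with made-up benefit/harm units
# (a WannaCry-scale harm dwarfs the yearly intelligence benefit):
net = expected_stockpile_value(annual_collision_prob=0.057,
                               expected_lifetime_years=5,
                               annual_benefit=10.0,
                               collision_harm=500.0)
decision = "stockpile" if net > 0 else "disclose"
```

Under these assumed numbers the expected-harm term dominates and the rule says to disclose; with a much smaller harm estimate the same arithmetic favors stockpiling, which is exactly why the collision-rate estimate matters so much.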

Using the metrics above to decide whether to stockpile zero-days can benefit citizens, as the calculation minimizes personal risk while mitigating potential security threats. In the eyes of the NSA, given that the overall collision rate is so low, the advantage granted by stockpiling vulnerabilities for national security purposes outweighs the risk that citizens face if other groups or countries were to find those vulnerabilities. With this low risk, the NSA gains a cybersecurity weapon without a high probability of a software attack on its own citizens.

When the US buys zero-day exploits, it keeps them out of the hands of malicious groups. Even though such groups can attempt to find the same vulnerabilities, doing so is incredibly challenging: the first hurdle is knowing where to look, and the second is the many hours of scanning source code required to spot an error. The RAND Corporation study’s results illustrate how improbable it is for two groups to find the same vulnerability. With the economic power of the US, buying zero-days is not too costly. Dan Geer, chief information security officer of In-Q-Tel, the CIA’s venture capital arm, said in 2014 that “there is no doubt that the US could openly corner the world vulnerability market, that is we buy them all and make them all public” [7]. While the United States does not disclose all of its vulnerabilities, increased participation in the market yields greater control over who has access to zero-days. Given that the US is currently the largest single buyer of zero-days, decreasing its participation would leave openings for malicious groups to buy them instead. Buying known zero-days is thus an immediate benefit, since it lowers the risk of malicious groups obtaining those same zero-days.

Stockpiling zero-days may seem like a sound defensive option if a balance can be struck between gaining national security advantages and keeping citizens safe, but that assumes such a balance is possible. It assumes that some risk to the public is worthwhile given the offensive capabilities received in return. Yet if a collision were to occur, with a malicious group finding the same exploit that the US stockpiled, there would be no meaningful distinction between failing to disclose the vulnerability and actively doing harm.

Historical Examples of Zero-Day Uses

The WannaCry ransomware attack is a prime example of the consequences of keeping a software vulnerability for offensive purposes rather than disclosing it. Before 2017, the NSA discovered a vulnerability in the Microsoft Windows implementation of the SMB file-sharing protocol that allowed attackers to run code remotely on unpatched machines [8]. Instead of reporting the vulnerability to Microsoft, the NSA stored it and built an exploit called EternalBlue [8].

In April 2017, EternalBlue was stolen and leaked by a hacker group called the Shadow Brokers [8]. The following month, the WannaCry ransomware used the exploit to encrypt the data of more than 200,000 computers across 150 countries [8]. Affected users were prompted to pay a ransom in Bitcoin in return for the decryption of their data. Microsoft had issued a patch for the vulnerability shortly before the leak, but unpatched systems around the world still suffered billions of dollars’ worth of damage [8].

The WannaCry attack exemplifies the fine line between gaining a military advantage and causing global turmoil. The harm done when software vulnerabilities are stolen greatly exceeds the benefits states accrue by having that information. The NSA held onto EternalBlue for five years, only notifying Microsoft of the vulnerability after learning that it had been stolen [9]. Because the NSA failed to disclose what it knew, over 230,000 computers were compromised in the attack [9]. Even though the NSA may have enjoyed a brief advantage as the sole owner of the exploit, the damage done by the ransomware outweighed that small and temporary edge. But does the deployment of zero-days against other countries effectively mitigate threats and promote the greatest good for citizens?

In the late 2000s, the United States and Israel developed the most sophisticated and powerful cyberweapon ever used. The cyberweapon, known as Stuxnet, was built to derail the Iranian government’s attempt to enrich uranium and build nuclear weapons [10]. Stuxnet operated by exploiting vulnerabilities in the Windows systems that ran the nuclear facility’s centrifuge controls [10]. Stuxnet was so complex and advanced that when deployed against the facility in 2008, it gained control of the centrifuges and spun them so quickly that they destroyed themselves [10].

Despite its complexity and power, Stuxnet’s effects were ultimately limited. Only one-fifth of the facility’s centrifuges were damaged, delaying Iran’s nuclear program by just a few months [10]. Additionally, in retaliation for the attack, Iran boosted its production of centrifuges tenfold between 2008 and 2013. Even though Stuxnet had a short-term impact in delaying enrichment, its long-term impact was an accelerated Iranian nuclear program and a deeper diplomatic dilemma for the US [10]. Stuxnet remains the most prominent known offensive deployment of zero-days, and its result shows that even a well-planned and complex zero-day attack can ultimately be ineffective.

The WannaCry attack and Stuxnet are two examples that show the consequences of both holding on to a vulnerability for too long and launching a zero-day attack. Where the WannaCry attack resulted in billions of dollars in damages, the Stuxnet virus led to heightened diplomatic tensions between the United States and Iran. The damages done by WannaCry and the unsuccessful deployment of Stuxnet demonstrate that the poor returns on zero-day attacks are not worth the potential global crisis. The question then becomes whether the US can hold on to zero-days while minimizing the risk to citizens through proper calculations of longevity and collision rate.

Zero-days would be a practical tool if their longevity and collision rate could be properly calculated. Unfortunately, properly calculating these metrics requires deep knowledge of the software that typically only the software vendors themselves possess. Estimating the collision rate also involves guessing at the knowledge and resources of outside parties and whether they can find the same vulnerability. Given that hackers are constantly improving their skills, any estimate of the probability that a particular group will find the same vulnerability is little more than a guess. These calculations involve too much guesswork to provide a solid foundation for decision-making, especially when lives are at stake.

In evaluating zero-days, the consequences of holding on to vulnerabilities outweigh the benefits gained. The best way to maximize pleasure for the greatest number of citizens is to ensure that their devices are secure and protected from potential attack. Properly calculated metrics might yield the greatest benefit to the greatest number of people, but given their current flaws, the potential harms outweigh the benefits, as the WannaCry attack and the Stuxnet virus show. This ultimately leads to the question of whether citizens have a right to the information the government holds, and whether the state has a moral obligation to disclose known harms that could affect its citizens.

Stockpiling from the Rights Perspective

The stockpiling of zero-days raises questions not only about its potential consequences but also about whether the state has an obligation to disclose known dangers to its citizens. There is a conflict between the government’s obligation to preserve national security and citizens’ right to know about the potential harms they may face. At what point do actions “in the name of national security” infringe upon a citizen’s right to know about harms that could damage them?

The government was formed with the inherent obligation to protect its citizens from outside interference and to promote their well-being and happiness. While certain means of protection require secrecy, conflict arises when the government fails to disclose actions that directly affect citizens, violating their right to know [11]. Right-to-know laws today protect people from hazards in the workplace and inform consumers about the potential harms of certain products [12]. In the last decade, however, this idea has been applied to government action and national security.

The primary moral obligation of a government is to protect its citizens, and the central method of affording this protection is preserving national security. This is done through economic power, diplomacy, power projection, and political power. While secrecy surrounding national security can be necessary, especially in military operations, many people have lost faith in US national security organizations, viewing their justifications as blanket excuses to withhold information from the public [13]. Most notably, in 2013 the exposure of the NSA’s mass surveillance of citizens through their electronic devices sparked outrage over how much power the United States may wield in the name of national security [14]. Citizens have been demanding more transparency over how government security organizations handle their information.

This has created a conflict between citizens’ right to know information that could guide their decision-making and the government’s obligation to protect them. In the context of zero-days, the government faces a double bind: either it discloses the vulnerability and fails to uphold its obligation to preserve national security, or it holds on to the vulnerability and violates citizens’ rights by withholding information that directly affects them. The dilemma then becomes a question of which side should be valued more.

Unfortunately, since 9/11 the government has abused the justification of national security to grant itself nearly unlimited power. Whether secretly accessing the digital information of US citizens prior to 2013, committing human rights abuses at Guantanamo Bay under the Bush administration, or misusing the Patriot Act throughout the 2000s, invoking national security as an excuse has led to repeated abuses of power. Stockpiling the vulnerabilities of software that everyday citizens use follows directly in this pattern.

The public’s right to know is legally backed by the Freedom of Information Act (FOIA), and the failure to disclose zero-days violates its very principles. The roots of the FOIA date back to the Cold War, an era of increasing government secrecy [15]. To make the government more transparent and accountable for its actions, the FOIA was enacted to expose misconduct and threats to public safety [15]. The FOIA is a vehicle for satisfying the public’s right to know, operating as an enforcement mechanism to compel the disclosure of non-exempt information [15]. It now appears that national security justifications are a way to work around the FOIA, leaving the public blind to information it deserves to know. Despite the FOIA’s purpose of promoting government transparency, threats to public safety are still being concealed. Even though zero-days are exempt from disclosure on national security grounds, they are a clear and present danger to the public, and their exemption violates the principles on which the FOIA was founded.

Violating the principles on which the FOIA was founded is, by extension, a clear violation of citizens’ rights, one that goes beyond the bounds of the obligation to preserve national security. The use of zero-days shows how military preparedness has become a permanent condition, one that places a burden on citizens. Even though there is a need to safeguard national security information from improper disclosure, increased transparency is critical when that information directly involves the lives of citizens.

Conclusion

The story behind zero-days is all too reminiscent of cautionary tales of the past. Stockpiling zero-days is an unnecessary power grab by government officials at the expense of everyday people. It recalls the housing market crash of 2008, when Wall Street traders squeezing out extra profits pushed the market to its brink until it finally collapsed. At the end of the day, everyday people lost their jobs and savings while the banks were bailed out and Wall Street executives made billions. If the use of zero-days were carefully measured and could eliminate risk to citizens, resolving the ethical dilemma of stockpiling would be more difficult. Instead, a clear imbalance exists: the consequences severely outweigh the benefits, and citizens are left unaware of potential harms to them. Zero-days are one of many examples of the government overreaching at the expense of its citizens’ rights. Governments are not just stockpiling the vulnerabilities of software, but the vulnerabilities of their own people.

By Matthew Wilson, Viterbi School of Engineering, University of Southern California


About the Author

At the time of writing this paper, Matthew Wilson was a sophomore studying Computer Science in the Viterbi School of Engineering. He is from New York, New York, and is a candidate for the National Academy of Engineering’s Grand Challenges Scholars Program.

References

[1] K. Zetter, “Hacker Lexicon: What Is a Zero Day?,” WIRED, Nov. 11, 2014. [Online]. Available: https://www.wired.com/2014/11/what-is-a-zero-day/. [Accessed: May 7, 2021].

[2] J. Healey, “The U.S. Government and Zero-Day Vulnerabilities: From Pre-Heartbleed to Shadow Brokers,” Journal of International Affairs, Nov. 1, 2016. [Online]. Available: jia.sipa.columbia.edu/online-articles/healey_vulnerability_equities_process. [Accessed: Apr. 4, 2021].

[3] J. Lepore, “The next Cyberattack Is Already under Way,” The New Yorker, Feb. 1, 2021. [Online]. Available: www.newyorker.com/magazine/2021/02/08/the-next-cyberattack-is-already-under-way. [Accessed: Apr. 4, 2021].

[4] S. Bonde and P. Firenze, “A Framework for Making Ethical Decisions | Science and Technology Studies,” Brown University, 2011. [Online]. Available: www.brown.edu/academics/science-and-technology-studies/framework-making-ethical-decisions. [Accessed: Apr. 4, 2021].

[5] S. Wicker, “The Ethics of Zero-Day Exploits: The NSA Meets the Trolley Car,” Communications of the ACM, Jan., 2021. [Online]. Available: cacm.acm.org/magazines/2021/1/249460-the-ethics-of-zero-day-exploits/fulltext. [Accessed: Apr. 4, 2021].

[6] L. Ablon and A. Bogart, Zero Days, Thousands of Nights: The Life and Times of Zero-Day Vulnerabilities and Their Exploits, Santa Monica, CA: RAND Corporation, 2017.

[7] K. Zetter, “CIA Insider: U.S. Should Buy All Security Exploits, Then Disclose Them,” WIRED, Aug. 6, 2014. [Online]. Available: www.wired.com/2014/08/cia-0day-bounty/. [Accessed: Apr. 4, 2021].

[8] J. Fruhlinger, “What Is WannaCry Ransomware, How Does It Infect, and Who Was Responsible?,” CSO, Aug. 30, 2018. [Online]. Available: www.csoonline.com/article/3227906/what-is-wannacry-ransomware-how-does-it-infect-and-who-was-responsible.html. [Accessed: Apr. 4, 2021].

[9] R. McCormick, “Microsoft Says Governments Should Stop ‘Hoarding’ Security Vulnerabilities after WannaCry Attack,” The Verge, May 15, 2017. [Online]. Available: www.theverge.com/2017/5/15/15639890/microsoft-wannacry-security-vulnerabilities-ransomware. [Accessed: Apr. 4, 2021].

[10] J. Glaser, “Cyberwar on Iran Won’t Work. Here’s Why,” Cato Institute, Aug. 21, 2017. [Online]. Available: https://www.cato.org/commentary/cyberwar-iran-wont-work-heres-why. [Accessed: May 7, 2021].

[11] “Right-to-know Definition & Meaning | Dictionary.com,” Dictionary.com. [Online]. Available: www.dictionary.com/browse/right-to-know. [Accessed: Apr. 4, 2021].

[12] C. Rechtschaffen, “CPR Perspective: The Public Right to Know,” Progressive Reform. [Online]. Available: progressivereform.org/our-work/energy-environment/perspright/. [Accessed: Apr. 4, 2021].

[13] D. Banisar, B. Barr, and J. Podesta, Decisions without Democracy, People for the American Way Foundation, July, 2007.

[14] B. Emmerson, “Two years after Snowden: protecting human rights in an age of mass surveillance,” Amnesty International, June 4, 2015. [Online]. Available: https://www.amnestyusa.org/wp-content/uploads/2017/04/ai-pi_two_years_on_from_snowden_final_final_clean.pdf. [Accessed: May 7, 2021].

[15] B. Snow, “Freedom of Information Act of 1966,” MTSU, 2009. [Online]. Available: https://mtsu.edu/first-amendment/article/1081/freedom-of-information-act-of-1966. [Accessed: May 7, 2021].

Links for Further Reading

https://www.law.upenn.edu/institutes/cerl/conferences/cyberwar/Cyberwar%20Details.pdf

https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-83/jfq-83_19-26_Hughes-Colarik.pdf

http://users.umiacs.umd.edu/~tdumitra/papers/CCS-2012.pdf