Pitfalls of Predictive Policing: An Ethical Analysis

Abstract

Predictive policing is a law enforcement tactic that uses computer algorithms to predict where crime is likely to occur. This tactic, which has been used in cities like Los Angeles, allows police to deploy more officers to “high-risk locations.” However, predictive policing violates the ethical frameworks of consequentialism and of justice and fairness by disproportionately targeting low-income and high-minority neighborhoods with increased police activity. Although increased police patrols can deter crime in some cases, they also make residents wary and frightened. Predictive policing is an unethical police tactic and should be further regulated or applied differently. Crime should not be prevented through police-generated fear.


Introduction

In the film Minority Report, Tom Cruise's character is accused of a murder he has yet to commit by three people who can see into the future. Although the film is science fiction, technology designed to predict where crime will occur exists today. Predictive policing has been deployed in some of the largest metropolitan centers in the United States since the early 2010s, and these cities have witnessed both positive and negative impacts. The tactic has garnered support from the police departments that use it and criticism from the communities it is used on. Ultimately, prevention is better than a cure, but the programs that generate the “predictions” for predictive policing are flawed, and the technology's use on the public is unethical and unfair.

Background

Predictive policing is a law enforcement practice in which computer algorithms are used to foresee where crime is more likely to happen. It produces a forecast rather than a prediction: certain areas are deemed more likely to see crime, but crime is not guaranteed to occur there. There are two types of predictive policing: place-based and person-based. Place-based policing occurs when a program analyzes the locations of past crimes and determines which areas are more likely to see crime in the present. Police departments then deploy more units to patrol the neighborhoods that are considered high-risk. Ideally, with increased surveillance, police can respond to crime scenes more quickly or deter crime altogether with their presence. In person-based predictive policing, on the other hand, the system identifies which individuals are most likely to reoffend by referring to their arrest records. Person-based policing may also identify who is likely to become the victim of a future crime by analyzing risk factors and common victimization traits and patterns [1].
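To make the place-based approach concrete, the following sketch in Python illustrates the general logic under simplifying assumptions: the map is divided into a plain grid, and each cell is scored by a recency-weighted count of past incidents. The incident records, cell size, and decay rate here are invented for illustration; commercial systems such as PredPol use more sophisticated statistical models, so this is only a sketch of the idea, not the actual software.

from collections import Counter
from dataclasses import dataclass
from datetime import date
from math import exp

# Hypothetical incident record: the location and date of a reported crime.
@dataclass
class Incident:
    lat: float
    lon: float
    day: date

def cell_of(inc: Incident, cell_size: float = 0.005) -> tuple:
    """Map an incident to a grid cell roughly half a kilometer on a side."""
    return (round(inc.lat / cell_size), round(inc.lon / cell_size))

def hotspot_scores(incidents: list, today: date, half_life_days: float = 30.0) -> Counter:
    """Score each grid cell by a recency-weighted count of past incidents.
    Older incidents contribute less (exponential decay with the given half-life)."""
    scores = Counter()
    for inc in incidents:
        age = (today - inc.day).days
        scores[cell_of(inc)] += exp(-age * 0.693 / half_life_days)
    return scores

def high_risk_cells(incidents: list, today: date, top_k: int = 3) -> list:
    """Return the top_k cells a department might flag for extra patrols."""
    return [cell for cell, _ in hotspot_scores(incidents, today).most_common(top_k)]

if __name__ == "__main__":
    history = [
        Incident(34.05, -118.25, date(2020, 10, 1)),
        Incident(34.05, -118.25, date(2020, 10, 20)),
        Incident(34.10, -118.30, date(2020, 6, 1)),
    ]
    print(high_risk_cells(history, today=date(2020, 10, 28)))

The key point of the sketch is that the forecast is nothing more than a summary of where crime has already been recorded, which is why the quality of the historical data matters so much in the sections that follow.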

History and Results

The idea of predictive policing was first developed in 2008 at the Los Angeles Police Department (LAPD). The police chief at the time, William J. Bratton, worked with the Bureau of Justice Assistance and the National Institute of Justice to host symposiums for experts, researchers, law enforcement leaders, and government officials to gather support for a predictive policing program. PredPol, predictive policing software from the company of the same name, was first deployed in Santa Cruz, California in 2010 [2]. With extensive media coverage, predictive policing became massively popular, and TIME magazine named it one of the fifty best inventions of 2011 [3]. Today, the tactic is used in many major cities in the United States.

Many different predictive policing systems have been used by police departments across the United States. For example, in 2011 the LAPD launched the Los Angeles Strategic Extraction and Restoration (LASER) program, which helps the department predict hot spots of gun violence. Police officials reported that in the year before LASER was put in place, there were 39 gun-related homicides; in the following year, with LASER in use, the number dropped to 14. In addition, of the 124 people the program identified as possible repeat offenders, 87 were arrested at least once for similar crimes [4].

Ethical Problems

The current uses of predictive policing violate the ethical framework of justice and fairness because they perpetuate systemic racism through the use of biased data. Many cities, including Los Angeles, New York, and Chicago, have stopped using predictive policing because of the issues that arose. For instance, LA's LASER program was dismantled in 2019 after an internal audit revealed “significant problems with the program, including inconsistencies in how individuals were selected and kept in the system” [1]. Because the program was operating with inconsistent data, the officers who used that data to police certain people and neighborhoods may not have had reasonable cause to make arrests. This is just one example of how errors in predictive systems can be overlooked and ignored when they should instead raise alarms about how police departments use predictive policing.

The justice and fairness framework, according to the Markkula Center at Santa Clara University, holds that “individuals should be treated the same, unless they differ in ways that are relevant to the situation in which they are involved.” On the topic of fairness, the Markkula Center adds that “it isn’t fair when a person is punished for something over which he or she had no control” [5]. This framework is one relevant way to judge whether predictive policing is effective and ethically justified. Predictive policing relies on a large database of previous crime data to forecast where crime is likely to occur. Because the program relies on historical data, the previous arrests it draws on must be unbiased for the forecasts to be unbiased. One way crime data can be biased is by lacking necessary context. For example, a program may report that a large number of car burglaries occur between 7:00 and 8:00 AM without specifying whether the cars were actually burglarized during that window or whether that is simply when the owners noticed and reported the break-ins [2].

Racial bias can also be found in historical crime data. People of color are arrested far more often than white people for committing the same crimes. An ABC News analysis of 2018 arrest data found that across 800 jurisdictions in the U.S., Black people were five times more likely than white people to be arrested; in 250 of those jurisdictions, Black people were ten times more likely to be arrested [6]. Racially biased arrest data creates biased forecasts for the neighborhoods where more people of color are arrested. For instance, a simulation study of predictive policing in Oakland, California found that the program would have directed more officers to predominantly Black and Latinx neighborhoods, even though “rates of drug use are essentially the same across Oakland neighborhoods” [7]. Predictive policing reinforces the intrinsically unjust systemic racism that plagues this country by promoting over-policing in neighborhoods of color.
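The self-reinforcing nature of biased arrest data can be illustrated with a toy simulation in Python. The neighborhood names, rates, and patrol numbers below are invented for illustration and are not drawn from the Oakland study. Two neighborhoods have identical underlying offense rates, but one begins with more recorded arrests because it was historically policed more heavily. A forecaster that allocates patrols in proportion to recorded arrests keeps sending most officers to the over-policed neighborhood, and those extra patrols generate the extra arrest records that justify the next round of deployments, so the initial bias never corrects itself.

import random

random.seed(0)

# Two neighborhoods with IDENTICAL true offense rates, but neighborhood A
# starts with more recorded arrests because it was historically over-policed.
true_rate = {"A": 0.05, "B": 0.05}   # chance an offense is encountered per patrol-hour
recorded = {"A": 30, "B": 10}        # historical arrest counts (the biased starting point)
TOTAL_PATROL_HOURS = 100

for week in range(20):
    # Naive forecaster: allocate patrol hours in proportion to recorded arrests.
    total = recorded["A"] + recorded["B"]
    patrols = {n: TOTAL_PATROL_HOURS * recorded[n] / total for n in recorded}

    # Police only record offenses where they patrol, so more patrols produce more
    # records, even though the underlying rates are the same in both neighborhoods.
    for n in recorded:
        observed = sum(random.random() < true_rate[n] for _ in range(int(patrols[n])))
        recorded[n] += observed

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"Share of arrests recorded in neighborhood A after 20 weeks: {share_A:.0%}")

In this sketch, the forecaster never learns that the two neighborhoods are identical, because the only data it sees is data its own deployments produced.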

If predictive policing programs are biased against certain areas or specific groups of people, then they are making flawed predictions from inadequate information. If a predictive policing algorithm uses biased data to divert more police toward less affluent neighborhoods and neighborhoods of color, then those neighborhoods are not receiving the same treatment as others. By increasing law enforcement presence in impoverished neighborhoods and “subjecting criminal defendants to money bail and other fees, [the system] effectively punishes people for being poor” [8]. The additional arrests generated by predictive policing subject more people to “countless fines, fees, and other costs” they often do not have the means to pay [8]. On top of that, a criminal record can prevent offenders from voting, getting a driver’s license, and finding employment [9]. The justice system in general unequally restricts impoverished people and people of color from reentering society after committing a crime, and predictive policing allows law enforcement to increasingly target those who are already struggling.

The targets of predictive policing are selected through biased data, and the tactic can also instill a dangerous bias in the officers who use it. An officer might perceive an activity that is not generally suspicious as threatening simply because they know they are in a high-risk neighborhood. For example, in a low-risk area, a 16-year-old cutting through a backyard might be seen as a child taking a shortcut to school; in a high-risk area, the same officer could assume the teenager is committing a crime and cutting through alleyways to escape, thereby justifying police intervention [7]. The effect can also run the opposite way: suspicious activity may be dismissed as benign by an officer in a low-risk neighborhood. By labeling neighborhoods as high-risk or low-risk, predictive policing can bias police officers, putting them on unnecessarily high alert or giving them a false sense of security.

Beyond suspects and police officers, the other residents of a neighborhood are also affected by predictive policing. Some may argue that people in high-risk neighborhoods willingly commit more crimes and that the increased police activity is therefore justified. However, this point of view generalizes a whole neighborhood or community based on the few who do commit crimes. From the justice and fairness standpoint, it is unfair for innocent people who happen to live in a certain neighborhood to become the victims of predictive policing, because they have no control over who commits crimes or where those crimes occur. They should not have to endure increased law enforcement activity and heightened suspicion of their actions simply because a computer program used biased data to recommend that police patrol their neighborhood.

Predictive policing can also be analyzed through the lens of consequentialism. According to the Stanford Encyclopedia of Philosophy, consequentialism is the view that “whether an act is morally right depends only on consequences (as opposed to the circumstances or the intrinsic nature of the act or anything that happens before the act)” [10]. To evaluate predictive policing through this framework, it is essential to analyze the consequences of the technology. From the perspective of the police, predictive policing assists in identifying where crime could happen. But while programs like LASER might show positive results, predictive policing may not be the direct cause of any given decrease in crime rates [2]. If there are more police officers on the streets, people will be less likely to commit crimes regardless of where those officers are placed, so the positive results cannot be directly attributed to predictive policing itself.

At the same time, as law enforcement activity increases in a neighborhood, it may produce the opposite of its intended effect. Increased police patrols can generate fear and, ironically, a diminished sense of safety [11]. In research reported by the British Psychological Society, researchers surveyed whether people felt safer or less safe around the police and concluded that “the sight of a police officer could act as a warning signal, directing people’s attention to potential danger in the vicinity” [12]. This becomes a negative consequence of predictive policing: the increased patrols may stop some criminals, but they make many more people feel unsafe. When police patrols swamp a community, they produce a direct negative consequence while possibly not even being the cause of any crime reduction. All in all, because police presence driven by predictive policing leaves residents feeling unsafe in their own communities, its use is unethical under consequentialism.

Conclusion

The negative consequences of predictive policing and the biased data it relies on have led many police departments to discontinue programs such as PredPol and LASER. The grassroots organization Stop LAPD Spying Coalition contends that predictive policing enables the police to continue targeting poor people and people of color under a “veneer of science” [13]. But is there a way predictive policing can be used properly? Isn’t crime prevention good? Prevention is good, but it should not be achieved through fear-mongering. According to the BBC, most crimes stem from the criminal’s circumstances and upbringing, and rarely from innate nature [14]. Patrolling the streets to stop criminals in the act is only a short-term solution: the crime may be stopped, but the root cause of why it was committed persists. Scaring criminals away should not be the only way police deter crime. Police should therefore stop using surveillance tactics under the guise of improved science and technology to over-police those who are already over-policed.

By Jonathan Li, Dornsife College of Letters, Arts and Sciences, University of Southern California


About the Author

At the time of writing, Jonathan was a third-year student at USC studying Physics and Astronomy. In his spare time, he likes to play video games, watch TV, and learn more about Python, a programming language.

References

[1] T. Lau, “Predictive Policing Explained”, Brennan Center for Justice, 2020. [Online]. Available: https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained. [Accessed: 28- Oct- 2020]

[2] W. L. Perry, B. McInnis, C. C. Price, S. Smith and J. S. Hollywood, “Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations”, RAND Corporation, 2013. [Online]. Available: https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf

[3] L. Grossman, M. Thompson, J. Kluger, A. Park, B. Walsh, C. Suddath, E. Dodds, K. Webley, N. Rawlings, F. Sun, C. Brock-Abraham and N. Carbone, “The 50 Best Inventions”, TIME.com, 2011. [Online]. Available: http://content.time.com/time/subscriber/article/0,33009,2099708-13,00.html. [Accessed: 28- Oct- 2020]

[4] “LAPD uses big data to target criminals”, Cbsnews.com, 2014. [Online]. Available: https://www.cbsnews.com/news/lapd-uses-big-data-to-target-criminals/. [Accessed: 28- Oct- 2020]

[5] M. Velasquez, C. Andre, T. Shanks, S.J., and M. Meyer, “Justice and Fairness”, Scu.edu, 2018. [Online]. Available: https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/justice-and-fairness/. [Accessed: 28- Oct- 2020]

[6] P. Thomas, J. Kelly and T. Simpson, “ABC News analysis of police arrests nationwide reveals stark racial disparity”, ABC News, 2020. [Online]. Available: https://abcnews.go.com/US/abc-news-analysis-police-arrests-nationwide-reveals-stark/story?id=71188546. [Accessed: 13- Nov- 2020]

[7] D. Munro, “The ethics of police using technology to predict future crimes – Macleans.ca”, Macleans.ca, 2017. [Online]. Available: https://www.macleans.ca/society/the-ethical-risks-in-police-using-technology-to-predict-future-crimes/. [Accessed: 28- Oct- 2020]

[8] “Poverty and debt”, Prisonpolicy.org. [Online]. Available: https://www.prisonpolicy.org/poverty.html. [Accessed: 28- Oct- 2020]

[9] “Collateral consequences”, Prisonpolicy.org. [Online]. Available: https://www.prisonpolicy.org/collateral.html. [Accessed: 28- Oct- 2020]

[10] W. Sinnott-Armstrong, “Consequentialism”, Plato.stanford.edu, 2019. [Online]. Available: https://plato.stanford.edu/entries/consequentialism/#WhaCon. [Accessed: 28- Oct- 2020]

[11] F. Patel, A. Papachristos, K. Chavis, S. Young, A. Francois and S. Gangadharan, “Can Predictive Policing Be Ethical and Effective?”, Nytimes.com, 2015. [Online]. Available: https://www.nytimes.com/roomfordebate/2015/11/18/can-predictive-policing-be-ethical-and-effective. [Accessed: 28- Oct- 2020]

[12] C. Jarrett, “It’s not just criminals who feel unsafe when the police are around”, The British Psychological Society, 2013. [Online]. Available: https://digest.bps.org.uk/2013/02/19/its-not-just-criminals-who-feel-unsafe-when-the-police-are-around/. [Accessed: 28- Oct- 2020]

[13] Wired, “How Cops Are Using Algorithms to Predict Crimes”, YouTube, 2018. [Online]. Available: https://www.youtube.com/watch?v=7lpCWxlRFAw. [Accessed: 28- Oct- 2020]

[14] “Causes of Crime”, BBC. [Online]. Available: https://www.bbc.co.uk/bitesize/guides/zqb2pv4/revision/2. [Accessed: 28- Oct- 2020]

Links for Further Reading

https://www.sciencemag.org/news/2016/09/can-predictive-policing-prevent-crime-it-happens

https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ 

https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/