Artificial Intelligence in the Courtroom: Friend or Foe?

Abstract

Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is the primary predictive analytics tool in US courts. When artificial intelligence (AI) is deployed in high-impact human environments such as courtrooms, a volcano of ethical dilemmas erupts. AI improves consistency and efficiency and reduces human error. However, it lacks transparency and moral judgment, and it serves as a vessel for biases that slip into algorithms and damage human lives. AI’s consequences generally fall into two categories: broad human rights violations, affecting privacy, education, life, and due process; and systemic bias, exemplified by the unfairness of the COMPAS sentencing algorithm. Defendants also face unequal treatment and violations of their liberty. Nevertheless, with algorithmic transparency, there is a brighter path forward. To ensure the safe, widespread use of artificial justice, interdisciplinary collaboration is required.

Introduction

Determining where to draw the line on AI remains a complex and debated issue. The appropriate boundary varies by industry and by person, but drawing it demands particular caution in high-impact environments such as courtrooms. The idea of machines deciding human fate is controversial yet already widely implemented, raising concerns that are not just legal or philosophical but also engineering-related. Developing the code that powers a model is not unlike raising a child: the amount of balance and transparency built in determines whether the child turns out to be a hero or a villain.

AI in US Courts

In 46 of 50 US states, courtrooms use trained algorithms to assist in decision-making. The most widely used assistant is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) [1, 2]. It aids decision-making in two ways: informing bail and release decisions and guiding sentencing, often through a numerical score or percentage. However, because the algorithm is a trade secret, little is known about how it produces its scores beyond its inputs and outputs.

The input is a 137-question form filled out by the defendant [2]. The questions generate a risk assessment score representing the probability that the offender will reoffend: the higher the score, the higher the risk [2]. The first set of questions digs deep into criminal history, including the defendant’s as well as their friends’ and family’s histories [3]. The next set concerns stability and environment; the algorithm assigns a higher risk to defendants who come from more unstable, crime-filled communities [3]. Subsequent questions build on this by asking about the defendant’s habits and social network: a defendant with few regular routines or few people around them is deemed more susceptible to a life of crime. The following section, “education and work,” attempts to identify any trouble the defendant was involved in previously, as well as their financial situation [3]. It can thus be inferred that a troubled past, suboptimal skills, and financial struggles are associated with higher scores. The last few sections fall under the umbrella of psychology: covering criminal personality, anger, and criminal attitudes, these questions gauge the stability of the defendant’s mind [3], and the algorithm assigns higher risk scores to defendants it judges unstable. Nevertheless, all of these inferences are merely predictions of how COMPAS operates.
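
Because the model itself is sealed, any concrete picture of the scoring step is necessarily a guess. The sketch below is a minimal illustration, in Python, of how a questionnaire-based tool could turn answers into the 1-10 decile score COMPAS is known to report [2]. Every feature name and weight here is invented for illustration; none is taken from COMPAS.

```python
# Hypothetical sketch of a questionnaire-based risk score. COMPAS's actual
# model and weights are trade secrets; every feature name and weight below
# is invented purely for illustration.
import math

# Invented answers mirroring the question categories described above
# (criminal history, stability, habits and network, education/work, psychology).
answers = {
    "prior_arrests": 3,
    "family_criminal_history": 1,   # 1 = yes, 0 = no
    "neighborhood_crime": 2,        # 0 = low ... 3 = high
    "social_isolation": 1,
    "unemployed": 1,
    "financial_problems": 2,
    "anger_score": 2,
}

# Invented weights standing in for whatever a proprietary model has learned.
weights = {
    "prior_arrests": 0.40,
    "family_criminal_history": 0.30,
    "neighborhood_crime": 0.25,
    "social_isolation": 0.20,
    "unemployed": 0.15,
    "financial_problems": 0.20,
    "anger_score": 0.35,
}
INTERCEPT = -2.0

def risk_decile(answers, weights, intercept):
    """Map weighted answers through a logistic function to a 1-10 decile."""
    z = intercept + sum(weights[k] * v for k, v in answers.items())
    p = 1.0 / (1.0 + math.exp(-z))       # probability-like score in (0, 1)
    return min(10, int(p * 10) + 1)      # 1 = lowest risk, 10 = highest

print(risk_decile(answers, weights, INTERCEPT))  # -> 9 for these invented answers
```

Even in this toy form, the design choice is visible: circumstances such as neighborhood and unemployment raise the score just as prior arrests do, which is precisely what the fairness criticisms later in this paper target.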

Benefits of COMPAS

Despite the controversy, the COMPAS algorithm has many benefits that ultimately outweigh its pitfalls, justifying its use in modernizing the courtroom.

AI improves court operations in three areas: consistency, efficiency, and the reduction of human error. It helps bypass the subjectivity of individual judges [4] and creates more predictability across similar court cases, since it considers similar prior cases when determining innocence and punishment [4]. With consistency and greater accuracy, AI tends to get the majority of its decisions right. Court proceedings can therefore be expected to become more efficient and “could be partly or even largely automated using AI, precisely because the outcome is largely or entirely certain” [4]. This efficiency maximizes the “happiness” AI produces, measurable in time and money: in many settings, AI is a cheaper alternative that allows courts to complete more trials in a day and to spend more time improving the system rather than merely maintaining it.

For example, German courts use the Oberlandesgerichtsassistent (OLGA) system, meaning “Higher Regional Court Assistant” [5]. With OLGA, court officials can browse thousands of legal documents using keywords [6]. OLGA also summarizes the key points of a lawsuit to maximize court efficiency [6], and its categorization of cases creates a storyline, resulting in “a comprehensive view of all the information for the case and where it originated” [6]. Together, these features reduce case processing time by over 50%, resulting in fewer hours worked or more time spent on complex issues [6].
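
OLGA’s internals are not public, but the keyword browsing it offers can be understood through a classic inverted index. The Python sketch below is purely illustrative: the documents are hypothetical stand-ins, and the indexing is far simpler than what a production system like OLGA would use.

```python
# Purely illustrative inverted-index keyword search, standing in for the
# kind of keyword browsing OLGA provides. The documents are hypothetical.
from collections import defaultdict

documents = {
    "case_001.txt": "Plaintiff alleges diesel emissions defect in vehicle purchase.",
    "case_002.txt": "Appeal concerning rental contract termination and damages.",
    "case_003.txt": "Diesel emissions claim; damages sought for diminished value.",
}

# Build the index: each keyword maps to the set of documents containing it.
index = defaultdict(set)
for name, text in documents.items():
    for word in text.lower().replace(";", " ").replace(".", " ").split():
        index[word].add(name)

def search(*keywords):
    """Return the documents that contain every given keyword."""
    hits = [index[k.lower()] for k in keywords]
    return set.intersection(*hits) if hits else set()

print(sorted(search("diesel", "damages")))  # -> ['case_003.txt']
```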

AI also reduces human error. As in many jobs, mistakes and emotions seep into the cracks of professionalism as the day wears on. An algorithm, by contrast, applies the same baseline to every case and can flag unusual ones for closer review.

Still, there are flaws. While the mountains are high, the valleys are low: transparency, bias, and the erosion of social norms. Because Equivant, the owner of COMPAS, keeps the code a trade secret, the tool is a black box, making it hard to determine the algorithm’s ethos. While the defense can question an expert’s opinion on the score, it can never read the code itself. COMPAS can thus be used as a tool of prosecution rather than a means of balancing the scales. The increased use of AI is turning “innocent until proven guilty” from a social norm into an increasingly difficult standard to uphold [7].

The black-box nature of these algorithms also invites automation bias, in which judges are more likely to accept the score as truth without questioning it [8]. With each such act, courts remove human judgment one decision at a time. The more heavily AI is relied on, the greater the distance between court officials and defendants, and when people start thinking of each other as numbers instead of human beings, all sense of moral judgment is lost.

Currently, predictive justice algorithms like COMPAS perform only slightly better than a coin toss [2]. However, there is room for improvement: a group of researchers analyzed the Supreme Court and developed algorithms that predict the Court’s decisions with 70.2% accuracy and individual justices’ votes with 71.9% accuracy [4]. They achieved this by feeding a machine learning application each justice’s political preferences and past voting behavior [4]. As accuracy and precision increase over time, AI shows promise. But if courts wish to use COMPAS confidently, they must address one key problem: bias.
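
As a rough illustration of the approach those researchers describe, the sketch below trains an off-the-shelf classifier on synthetic judge-level features (political leaning and past voting behavior). The features, data, and model here are invented stand-ins; the cited study’s actual feature set and methodology are far richer.

```python
# Illustrative stand-in for the cited approach: train a classifier on a
# judge's political leaning and past voting behavior to predict votes.
# All features and data here are synthetic inventions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

political_leaning = rng.uniform(-1, 1, n)   # -1 = liberal ... +1 = conservative
past_affirm_rate = rng.uniform(0, 1, n)     # fraction of past votes to affirm
case_type = rng.integers(0, 5, n)           # coarse case-category code
X = np.column_stack([political_leaning, past_affirm_rate, case_type])

# Synthetic target (vote to affirm?) loosely tied to the features plus noise,
# so the model has a real but imperfect signal to learn from.
signal = 0.5 * past_affirm_rate + 0.2 * political_leaning
y = (signal + rng.normal(0, 0.15, n) > 0.35).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```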

Bias

AI is not a neutral third party operating on objective information. It not only carries bias but also brings previously hidden human biases into the spotlight [9]. Human bias hitches a ride on the data used to train the AI, leading to false negatives and false positives [8]. Erring in either direction poses significant risks: a false negative could send a dangerous person free into society, while a false positive leaves an innocent person spending unwarranted time behind bars. The algorithms are trained on previous cases and instructed to act as a human would in the situation [10]; hence, they are “easily tainted by racism, sexism, or other biases” [10].
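
The disparity ProPublica documented [2] can be made concrete with a simple audit: compare false positive and false negative rates across groups. The sketch below runs such an audit on a handful of invented toy records; real audits, like ProPublica’s, use thousands of actual cases.

```python
# Minimal sketch of a ProPublica-style error-rate audit [2]: compare false
# positive and false negative rates across groups. The records are toy data.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended) -- invented records
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("A", True, False), ("A", True, True),
    ("B", False, True), ("B", False, False), ("B", True, True),
    ("B", False, True), ("B", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
for group, predicted_high, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        c["fn"] += not predicted_high   # labeled low risk, yet reoffended
    else:
        c["neg"] += 1
        c["fp"] += predicted_high       # labeled high risk, yet did not reoffend

for group, c in sorted(counts.items()):
    fpr = c["fp"] / c["neg"]
    fnr = c["fn"] / c["pos"]
    print(f"group {group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

On these toy records, group A absorbs the false positives while group B absorbs the false negatives, even though overall accuracy is the same: exactly the kind of asymmetry a single accuracy number can hide.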

Furthermore, automation bias leads officials to blindly accept whatever the AI concludes [11]. Acknowledging these biases is necessary: while AI can help eliminate bias, “if used unthinkingly, it also has the potential to perpetuate existing biases” [12]. And bias is only part of the issue, as the ethical impacts grow increasingly severe.

Violation of Rights 

The design of COMPAS inevitably violates defendants’ right to privacy. Many predictive justice tools begin by capturing vast amounts of data about the defendant [4]. Although the defendant is on trial, the court should not weigh every part of their life; it should consider only the actions that brought them into court.

Because the algorithm is closed to the public, defendants must battle an invisible and seemingly unstoppable force [13], unable to contest the score against them. For example, Wisconsin police arrested Eric Loomis in 2013 for taking part in a drive-by shooting, and Mr. Loomis pleaded guilty to minor charges to get a reduced sentence [8]. While many cases would end there, Loomis’s high COMPAS score, indicating a high probability of reoffending, completely altered the outcome of the trial and led to a six-year prison sentence [8]. Unable to examine the protected code, Mr. Loomis appealed to the Wisconsin Supreme Court [8]. The court concluded that “defendants do not hold a ‘right to explanation’, that is, a detailed knowledge of the information included in their Presentence Investigation Report (PIR) and the process that produces this information, but a mere ‘right to information’ of the means used to produce it” [14]. Many would argue that the reasoning is far more important than the answer, and in Eric Loomis’s case, the COMPAS system clearly violated his human rights.

Justice and Fairness

Since AI is deeply biased, just and fair trials become impossible. Due process is the legal term for fairness, protected by the Fifth and Fourteenth Amendments of the US Constitution [15]. It is the only command stated twice in the Constitution, once for the federal courts and once for the state courts, highlighting its importance [15]. The Fourteenth Amendment protects “life, liberty, or property” [13]. While COMPAS operates at both the state and federal levels, almost all cases start in state courts, which consequently handle more cases with less experienced officials. By depriving defendants of liberty on the basis of an opaque and biased score, COMPAS violates the Fourteenth Amendment. Federal courts will often take a COMPAS score into account upon receiving cases, but most of the time they rely on sentencing guidelines rather than AI [16].

Liberty is the state of being free from the control or power of another [17]. COMPAS also erodes liberty by treating defendants as numbers rather than human beings. There is a noticeable racial discrepancy in COMPAS risk classifications, even among individuals with similar criminal backgrounds, and because the tool is trained on historical data, past injustices play a significant role in the decision [2]. COMPAS thus places an unequal burden on marginalized groups, restricting their liberty.

People are jailed or denied bail based on COMPAS scores. In cases of false positives, when bail is set too high or denied outright, many defendants are coerced into pleading guilty [18]. This can partly be attributed to the system’s novelty: there is a significant legal and technological knowledge gap between court officials and defendants [19]. This gap directly violates procedural due process, a core component of the Due Process Clauses [20]. In addition, over-reliance on the code creates a system in which machines indirectly control defendants’ verdicts, thereby violating their liberty [11].

A Clear Path Forward

The main challenge is finding the right balance of AI in the justice system. If AI presents cases in a style similar to the TV show The Voice, where judges evaluate contestants without seeing them, there is a possibility of ethical use [21]. The goal of this style is to decide on the objective merit of the case, as presenting information neutrally could reduce a judge’s inherent bias. However, this approach requires programmers to decide, behind closed doors, what counts as an unbiased representation of the facts, and entrusting that judgment to hidden code runs against the very transparency that human rights protections demand.

An AI assistant comes very close to the ideal, but it is not perfect. The key to due process is a straightforward process, and the paramount factor in courtroom AI is transparency. Clarity in action is how trust is built in society; a community without trust descends into anarchy, and trust is a key part of the judicial system.

A major difference in artificial justice is that in ordinary trials, the defense can cross-examine the prosecution’s key witness, whereas in algorithmic justice, the “witness” is hidden behind developers’ intellectual property. Intellectual property rightly allows inventors to profit from their work, but that protection should be limited when the work affects someone’s life. Achieving this requires the intervention of engineers. The National Society of Professional Engineers (NSPE) states that “Engineers shall treat all persons with dignity, respect, fairness, and without discrimination” [22]. The main goal should be to minimize the effects of biases; such codes exist to protect society. To uphold their duties, engineers should open more of the code to scrutiny.

Utopia in Reach

Of all the applications of artificial intelligence, an AI-integrated judicial system could be the most impactful. An optimistic future for courtroom AI lies in balancing utilitarianism with rights-based ethics: AI should be a supplemental advisor to the courts, helping defense and prosecution equally. A fair trial is achievable with AI; courts can become more accurate and efficient while maintaining a humane and equitable final say, thanks to constant oversight of individual rights. Ultimately, courts must treat AI as a tool rather than a replacement, drawing a clear line before AI influences judicial decision-making. The call to action extends past the walls of the courts: to harness the potential of AI, interdisciplinary collaboration across technology, law, and politics is needed, along with thorough oversight of rights and fairness.

By Julius Hultsch, Dornsife College of Letters, Arts and Sciences, University of Southern California


About the Author

At the time of writing this paper, Julius Hultsch was a junior majoring in Applied and Computational Mathematics with an emphasis in Economics. Despite this, he has always had an interest in law and was looking toward a career in Economic Consulting, where he could use math in the judicial system.             

References

[1] N. Mesa, “Can the criminal justice system’s artificial intelligence ever be truly fair?,” Massive Science, May 13, 2021. https://massivesci.com/articles/machine-learning-compas-racism-policing-fairness/

[2] J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine Bias,” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[3] J. Angwin, “Sample-COMPAS-Risk-Assessment-COMPAS-‘CORE’,” DocumentCloud, 2016. https://embed.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE/

[4] A. D. Reiling, “Courts and Artificial Intelligence,” International Journal for Court Administration, vol. 11, no. 2, Aug. 2020, doi: https://doi.org/10.36745/ijca.343.

[5] “The Evolving Role of AI in German Dispute Resolution,” Hengeler Mueller News, 2025. https://hengeler-news.com/en/articles/the-evolving-role-of-ai-in-german-dispute-resolution.

[6] E. Schindler, “Judicial systems are turning to AI to help manage vast quantities of data and expedite case resolution,” Ibm.com, 2025. https://www.ibm.com/case-studies/blog/judicial-systems-are-turning-to-ai-to-help-manage-its-vast-quantities-of-data-and-expedite-case-resolution

[7] P. W. Grimm, M.R. Grossman, M. Hildebrandt, and S. Gless, “Artificial Justice: The Quandary of AI in the Courtroom,” Judicature, Sep. 13, 2022. https://judicature.duke.edu/articles/artificial-justice-the-quandary-of-ai-in-the-courtroom/

[8] A. Taylor, “Data and Discretion: Why We Should Exercise Caution Around Using the COMPAS Algorithm in Court,” stanfordrewired.com. https://stanfordrewired.com/post/data-and-discretion

[9] C. Pazzanese, “Ethical Concerns mount as AI takes bigger decision-making role in more industries,” Harvard Gazette, Oct. 26, 2020. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[10] M. Livermore and D. Rockmore, “France Kicks Data Scientists Out of Its Courts,” Slate Magazine, Jun. 21, 2019. https://slate.com/technology/2019/06/france-has-banned-judicial-analytics-to-analyze-the-courts.html

[11] E. Yong, “A Popular Algorithm Is No Better at Predicting Crimes than Random People,” The Atlantic, Jan. 17, 2018. https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/

[12] C. Coglianese, M. Grossman, and P. Grimm, “AI in the Courts: How Worried Should We Be?,” Judicature, Mar. 06, 2024. https://judicature.duke.edu/articles/ai-in-the-courts-how-worried-should-we-be/

[13] A. L. Koh and D. V. Sanker, “Artificial Intelligence,” Globallegalpost.com, 2022. https://www.globallegalpost.com/lawoverborders/artificial-intelligence-1272919708/united-states-1303442596

[14] I. D. M. Beriain, “Does the use of risk assessments in sentences respect the right to due process? A critical analysis of the Wisconsin v. Loomis ruling,” Law, Probability and Risk, vol. 17, no. 1, pp. 45–53, Feb. 2018, doi: https://doi.org/10.1093/lpr/mgy001.

[15] P. Strauss, “Due Process,” Legal Information Institute, 2022. https://www.law.cornell.edu/wex/due_process

[16] M. Donohue, “A Replacement for Justitia’s Scales?: Machine Learning’s Role in Sentencing,” Harvard Journal of Law & Technology, vol. 32, no. 2, 2019, Available: https://jolt.law.harvard.edu/assets/articlePDFs/v32/32HarvJLTech657.pdf

[17] “Thesaurus results for LIBERTY,” www.merriam-webster.com. https://www.merriam-webster.com/thesaurus/liberty

[18] “‘Not in it for Justice’: How California’s Pretrial Detention and Bail System Unfairly Punishes Poor People,” Human Rights Watch, Jun. 06, 2017. https://www.hrw.org/report/2017/04/11/not-it-justice/how-californias-pretrial-detention-and-bail-system-unfairly

[19] N. Runyon, “How to navigate ethics for common AI use cases in courts,” Thomson Reuters Institute, Oct. 30, 2024. https://www.thomsonreuters.com/en-us/posts/ai-in-courts/navigating-ethics/

[20] J. C. Busby, “Procedural due process,” LII / Legal Information Institute, Sep. 12, 2018. https://www.law.cornell.edu/wex/procedural_due_process

[21] K. Weivoda, “AI for Justice: Tackling Racial Bias in the Criminal Justice System,” Justice Trends Magazine, Aug. 14, 2024. https://justice-trends.press/ai-for-justice-tackling-racial-bias-in-the-criminal-justice-system/

[22] “NSPE Code of Ethics for Engineers,” National Society of Professional Engineers, 2019. https://www.nspe.org/career-growth/nspe-code-ethics-engineers

Further Reading Links

https://www.mastersinai.org/industries/criminal-justice

https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm

https://www.law.com/legaltechnews/2020/07/13/the-most-widely-used-risk-assessment-tool-in-each-u-s-state/