The Ethics of Machine Learning and Discrimination

Humans carry inherent biases. We are influenced by how we are raised, whom we interact with, and what information is provided to us. These biases, especially when they are formed through prejudice, can have extremely negative impacts. In an effort to make important decisions efficiently and without the influence of personal bias, many institutions have turned to computer algorithms as impartial decision-makers. This tactic could take human subjectivity out of the decision-making equation. Recent advances have improved algorithms with machine learning, meaning the computer itself can make new connections based on the historical data it is given. This allows systems to continually improve and grow more complex, but it also creates ample opportunity for problems to arise. Algorithms can only learn from the information they are provided; if that information contains biases, the algorithms will learn to be biased. Any connections they develop from prejudiced information will perpetuate those prejudices, treating them as fact. Because of this, it is essential for engineers to be mindful of the information they provide when creating machine learning-based software. To build bias into what is believed to be an objective system is unethical, especially when the biases directly impact the lives of those involved, whether they are consumers or third parties.

In March of 2017, the video sharing platform YouTube came under scrutiny for an issue with its restricted mode programming. Restricted mode was designed to block videos with content inappropriate for children, mainly violence, nudity, and graphic language. It is automatically enabled on the accounts of all users under eighteen, and schools, libraries, and other businesses can also opt in, applying restricted mode to every device on their network. The mode blocked certain content based on an algorithm that drew on videos users had flagged as inappropriate. The issue arose when users realized that any video regarding LGBT+ issues was automatically flagged as inappropriate and blocked. These were not just videos that discussed sexual situations or other mature content, but any video with keywords such as “Gay,” “Queer,” or “Lesbian” in the title or description. Flagged videos included creator Tyler Oakley’s “8 Black LGBTQ+ Trailblazers Who Inspire Me” and YouTube’s own video series for Pride Month [1]. The platform-wide block occurred because, as the algorithm evolved, the information it was given was heavily influenced by subjective human decisions. The historical data showed that videos addressing LGBT+ issues were often flagged as inappropriate, most likely because of individual viewers’ personal prejudices. According to YouTube, the blocking does not reflect the company’s intentions or the views of the majority of the userbase; however, because the algorithm took those patterns as fact, anything LGBT+ became unquestionably inappropriate [1].
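To make the mechanism concrete, the following is a minimal, hypothetical sketch in Python using scikit-learn; the titles, labels, and model are invented for illustration and are not YouTube’s actual system. It shows how a filter trained on user flag reports simply reproduces whatever prejudice those reports contain.

# Hypothetical sketch: a text filter trained on user flag reports will
# reproduce whatever bias those reports contain. Toy data, not YouTube's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented titles and whether users flagged them as inappropriate (1) or not (0).
# If viewers disproportionately flag LGBT+ titles, that prejudice becomes the label.
titles = [
    "graphic violence compilation",                 # flagged
    "explicit language rant",                       # flagged
    "8 black lgbtq+ trailblazers who inspire me",   # flagged by prejudiced viewers
    "coming out as gay, my story",                  # flagged by prejudiced viewers
    "cooking pasta at home",                        # not flagged
    "math tutorial for beginners",                  # not flagged
]
flagged = [1, 1, 1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(titles)
model = LogisticRegression().fit(X, flagged)

# The model now treats the word "gay" itself as evidence of inappropriate content.
test = vectorizer.transform(["gay rights explained for kids"])
print(model.predict(test))  # likely [1]: blocked despite being harmless

Nothing in the sketch encodes an intent to discriminate; the bias enters entirely through the labels the classifier learns from, which is precisely the failure mode described above.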

This platform-wide block negatively impacted users and content creators alike. Videos normalizing LGBT+ issues provide insight and support to LGBT+ youth who might otherwise be isolated in their immediate environment. Youth who previously could find a community on the platform were now told that an integral part of who they are had been deemed inappropriate. At the content production end, creators who primarily post LGBT+ content were also harmed financially, on top of having their freedom of speech limited. Under YouTube’s policy, any video that is age restricted cannot be monetized, so once restricted mode was in place, these creators lost the ability to make money on any video related to LGBT+ topics. In 2018, YouTube still had not entirely fixed the problem, and creators had to individually request that a video be reviewed to remove the block [1]. Lawsuits against YouTube over this issue have persisted into 2019.

According to the Association for Computing Machinery’s code of ethics, there are twenty-four responsibilities that computer engineers must uphold to be ethical in their field. Introducing bias into systems valued for their perceived impartiality violates four of these responsibilities: to contribute to society and human wellbeing, to avoid harm to others, to be fair and take action not to discriminate, and to give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks [7]. In the case of YouTube and the example that follows, the engineers were most likely attempting to contribute to society; however, their failure to evaluate the impacts of their systems led them to harm many people and perpetuate discrimination, violating these responsibilities. These actions must therefore be deemed unethical. To act ethically in the rising world of machine learning and artificial intelligence, it is essential to fully evaluate the information a system learns from and to limit potential learned biases as much as possible. In the case of YouTube, this would mean reworking the algorithm from scratch while monitoring the information that is fed in, so that the software does not learn from prejudiced information.

The same incorporation of bias via machine learning found at YouTube can also be seen in the American court system. Algorithms are now used to create risk assessments of convicted criminals. These risk assessments estimate how likely a person is to reoffend and can be used to determine the need for mental health support, bond size, sentence length, and parole eligibility. Currently, forty-nine states use a risk assessment at some point in the justice process, and judges in nine states are given the results during sentencing to consider in their decisions [2]. Risk assessment software is thought to take biases out of the decision-making process by giving an objective estimate of risk. The issue is that these algorithms have been found to contain racial biases. A study by ProPublica found that COMPAS, the risk assessment software used by the Florida judicial system, was wrongly assessing people in a way that correlated with race. Specifically, the program wrongly labeled African American defendants as future criminals at nearly twice the rate of white defendants, while white defendants were wrongly labeled as low risk more often than African Americans. Northpointe, the company that owns COMPAS, disputes claims of racial bias, citing that the algorithm is correct at the same rate across all demographics [3]. This is true; however, the errors fall differently for each group. A wrongful prediction for a white defendant is more likely to mean they were rated low risk and then reoffended, while a wrongful prediction for a black defendant more often means they were rated high risk and then did not reoffend, leaving them to suffer the consequences of lengthened sentencing and a diminished chance of parole.
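The distinction Northpointe’s defense glosses over, between overall accuracy and the kind of error, is easy to miss, so the following sketch works through it with invented counts (chosen purely for illustration; they are not ProPublica’s figures). Two groups can be scored correctly at exactly the same rate while one group’s errors are overwhelmingly false alarms and the other’s are overwhelmingly missed cases.

# Illustrative only: invented counts showing equal accuracy but unequal error types.

def error_profile(tp, fp, tn, fn):
    """Return accuracy, false positive rate, and false negative rate."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    false_positive_rate = fp / (fp + tn)  # share of non-reoffenders labeled high risk
    false_negative_rate = fn / (fn + tp)  # share of reoffenders labeled low risk
    return accuracy, false_positive_rate, false_negative_rate

# Hypothetical counts: both groups are classified correctly 80% of the time.
group_a = error_profile(tp=300, fp=150, tn=500, fn=50)   # errors skew toward false alarms
group_b = error_profile(tp=300, fp=50, tn=500, fn=150)   # errors skew toward missed cases

for name, (acc, fpr, fnr) in [("group A", group_a), ("group B", group_b)]:
    print(f"{name}: accuracy={acc:.2f}, "
          f"false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")

# Both lines report accuracy=0.80, yet group A is wrongly branded high risk far
# more often, while group B is wrongly trusted as low risk far more often.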

Specifically, according to the study conducted by ProPublica, among defendants who went on to reoffend, 47.7% of white defendants had been labeled low risk, compared with 28.0% of African American defendants [3]. In an ideal system, the maximum score of ten would logically be reserved for the riskiest offenders and therefore occur less often than other scores. Yet black defendants received the maximum risk score at roughly the same rate they received any other score, while white defendants were very rarely assigned a score of ten. Similarly, it is suspicious that the vast majority of white defendants received the minimum score of one while black defendants received it about as often as they received a score of ten.

On the individual level, the bias becomes even more drastic. Take Dylan Fugett, who is white, and Bernard Parker, who is black. Both were charged with drug possession: Fugett for cocaine and marijuana, Parker for marijuana. Fugett had a prior arrest for burglary; Parker had a prior charge of resisting arrest without violence. Despite the similarities in their cases, and with Fugett’s past and current crimes arguably more extensive, Fugett received a risk score of 3 while Parker received a score of 10. Although his low score predicted little chance of committing another crime, Fugett went on to be charged with drug possession three more times. Parker has not reoffended despite having the highest risk level [4]. This case is not an anomaly; many similar cases have resulted in drastically different outcomes with race as the only apparent determining factor.

COMPAS determines risk by asking defendants 137 questions on topics ranging from their home life growing up to philosophical views on when stealing is justified. While race is not explicitly asked for, some of the questions are thought to correlate with race, such as whether an individual has faced systemic disadvantages that stem from being part of a marginalized group [5]. With that information, the program compares the defendant’s data to the historical data of past convictions. While the algorithm cannot create its own biases, these past convictions are shaped by disproportionate arrest rates of African American men and other inequities that have been consistently present in the justice system. Because of these factors, questions that correlate with race can be interpreted as correlating with crime, presenting black defendants as higher risk and further perpetuating the system of inequality [6]. It should therefore be the responsibility of the engineers to evaluate racial biases and remove them from the system’s learning. This could entail consulting with experts on prison inequities and systemic racism who could clarify which questions might lead to racial targeting. Engineers should not need to be experts on the multifaceted issues that surround every technological endeavor, but when creating a system that drastically affects lives and is perceived as impartial, they should make the effort to inform themselves so that the information they use to develop software is applied ethically. At this point, the COMPAS software is past the point of fixing: the way the algorithm weighs its inputs is proprietary, so outside entities cannot properly audit it, and because the algorithm keeps creating connections and learning as it gains information, even the engineers who built it cannot know the exact rationale for a given decision. This is why it is essential for engineers to consider bias while building the algorithm, not after the algorithm has learned and impacted lives.
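To make the proxy problem described above concrete, here is a hypothetical sketch in Python; the population, the “prior police contact” answer, and all of the probabilities are invented for illustration and are not drawn from COMPAS. It shows that even when the race field is dropped, a single strongly correlated answer can let a model effectively reconstruct race most of the time.

# Hypothetical sketch: dropping the race column does not remove racial signal
# when another answer is strongly correlated with race. All data here is invented.
import random

random.seed(0)
population = []
for _ in range(1000):
    race = random.choice(["black", "white"])
    # Invented proxy: an answer shaped by unequal policing rather than by the individual.
    prior_police_contact = random.random() < (0.7 if race == "black" else 0.2)
    population.append((race, prior_police_contact))

# Even without the race field, the proxy alone recovers race most of the time,
# so any weight the model places on it acts, in part, as a weight on race.
correct_guesses = sum(
    (contact and race == "black") or (not contact and race == "white")
    for race, contact in population
)
print(f"race recoverable from the proxy alone: {correct_guesses / len(population):.0%}")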

Due to the ethical implications, it is essential that engineers fully consider the effects of their products and the information they are using. Machine learning and artificial intelligence are constantly evolving, along with the ways they interpret information and make connections. Because of this, engineers may never fully understand how a machine arrived at a certain conclusion or what connections spawned integral decisions. With that in mind, the factors engineers can control must be heavily monitored so that biases do not become part of the fundamental framework.

By Jenna Dethlefsen, Marshall School of Business, University of Southern California


References

[1] A. Ohlheiser, “YouTube is ‘looking into’ complaints that it unfairly censors LGBT videos”, The Washington Post, 2017. [Online]. Available: https://www.washingtonpost.com/news/the-intersect/wp/2017/03/20/youtube-is-looking-into-complaints-that-it-unfairly-censors-lgbt-videos/?utm_term=.7e8abd5f991d. [Accessed: 20- Feb- 2018].

[2] Electronic Privacy Information Center, “EPIC – Algorithms in the Criminal Justice System”, Epic.org, 2018. [Online]. Available: https://epic.org/algorithmic-transparency/crim-justice/. [Accessed: 20- Feb- 2018].

[3] J. Angwin and S. Mattu, “Machine Bias”, ProPublica, 2018. [Online]. Available: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. [Accessed: 20- Feb- 2018].

[4] J. Angwin and S. Mattu, “What Algorithmic Injustice Looks Like in Real Life”, ProPublica, 2018. [Online]. Available: https://www.propublica.org/article/what-algorithmic-injustice-looks-like-in-real-life. [Accessed: 20- Feb- 2018].

[5] K. Kirkpatrick, “Battling algorithmic bias”, Communications of the ACM, vol. 59, no. 10, pp. 16-17, 2016.

[6] J. Angwin and J. Larson, “Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say”, ProPublica, 2018. [Online]. Available: https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say?utm_source=suggestedarticle&utm_medium=referral&utm_campaign=readnext&utm_content=https%3A%2F%2Fwww.propublica.org%2Farticle%2Fbias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say. [Accessed: 20- Feb- 2018].

[7] “Code of Ethics”, ACM Ethics, 2018. [Online]. Available: http://ethics.acm.org/code-of-ethics/. [Accessed: 20- Feb- 2018].