7/27/2020
AI decision-making is everywhere these days. Getting a loan, getting an apartment, getting bail: in all three cases, there could be an algorithm taking in your data, crunching the numbers, and spitting out whether you qualify for the loan, pass the background check, or are granted bail. And oftentimes there’s no explanation for the decision, no person able to tell you what exactly about your life or situation turned the algorithm against you. Without an explanation, the people on the receiving end of those decisions have little basis to object or appeal at all.
In the European Union, a recent law called the General Data Protection Regulation (GDPR) gives people the right to “meaningful information” about why algorithms made the decisions they did. But the vague wording in the part of the law that covers automated decision-making leaves a loophole: the “meaningful information” can amount to nothing more than how the algorithm is used and its “basic design,” not an actual explanation of why or how a particular decision was made. This has catalyzed a call for what is known as the “right to an explanation” in the EU and elsewhere.
The concept of the right to an explanation has drawn criticism from several directions. Critics question the feasibility of explaining complex algorithms that are hard to follow even for people with considerable technical knowledge, let alone the average person. Companies and others who use these algorithms also worry that being forced to reveal their data or code would infringe on their intellectual property, and that making the system more transparent makes it easier to exploit. These are valid concerns, but they can likely be alleviated by a technique called “counterfactual explanations” developed by Dr. Sandra Wachter, a Turing Research Fellow in London, and a few of her colleagues. A counterfactual explanation would, in essence, show each person what the algorithm’s decision would have been if certain relevant characteristics of their situation had been different. That reveals why the decision was made without exposing any actual code and without having to walk a layperson through the complex workings of the algorithm.
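To make that concrete, here is a minimal sketch of the idea in Python. The loan model, feature names, weights, and thresholds below are all invented for illustration; this is not Wachter’s actual method or any real lender’s system, just the flavor of explanation it would produce.

```python
# Toy illustration of a counterfactual explanation.
# The loan model, features, and thresholds below are invented for this sketch;
# they are not from Wachter et al.'s work or any real lender.

def loan_model(applicant):
    """A stand-in 'black box': approve if a simple weighted score clears a threshold."""
    score = (
        0.5 * applicant["income"] / 1000
        + 0.3 * applicant["credit_score"] / 10
        - 0.4 * applicant["debt"] / 1000
    )
    return score >= 40  # True = approved


def counterfactual(applicant, feature, step, limit):
    """Find the smallest change to one feature that flips a denial into an approval."""
    candidate = dict(applicant)
    for _ in range(limit):
        candidate[feature] += step
        if loan_model(candidate):
            return candidate[feature]
    return None  # no flip found within the search limit


applicant = {"income": 45_000, "credit_score": 620, "debt": 30_000}
print("Approved?", loan_model(applicant))  # False for this applicant

needed_income = counterfactual(applicant, "income", step=1_000, limit=100)
print(f"You would have been approved if your income were ${needed_income:,} instead.")
```

The resulting statement, “you would have been approved if your income were $67,000 instead of $45,000,” tells the applicant what mattered and what they could change, without revealing the model’s code or weights.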
Transparency in decisions made by artificial intelligence matters mainly because of bias. There has been, and still is, a great deal of discussion about how to prevent human bias from ending up encoded in AI, whether through biased training data or the biases of the programmers themselves. One example is the oft-referenced Amazon hiring algorithm that favored men’s resumes over women’s because of certain keywords that appeared more frequently in the former. Another is the repeated finding that facial recognition algorithms have higher error rates for people of color than for white people. A right to an explanation could help pinpoint if and when an algorithm has made a biased decision, so that it could be remedied. And even before an algorithm is created, the pressure of having to explain its decisions might push companies and their programmers to take preventing biases like those mentioned above more seriously in the first place.
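The same counterfactual idea can also be turned inward to probe a model for bias. The sketch below uses a deliberately biased, entirely invented resume-screening model to show the kind of check an explanation requirement might prompt: if changing only a protected attribute flips the decision, something is wrong.

```python
# Sketch of using counterfactual-style probing to spot a biased decision.
# The resume-screening model below is deliberately biased and entirely invented;
# it only illustrates the kind of check an explanation requirement could prompt.

def screening_model(candidate):
    """A stand-in hiring filter that (wrongly) rewards being male."""
    score = 2 * candidate["years_experience"] + 5 * candidate["skills_matched"]
    if candidate["gender"] == "male":
        score += 6  # the encoded bias we want to catch
    return score >= 30  # True = advance to interview


def decision_depends_on(candidate, protected_attribute, alternatives):
    """Return True if changing only the protected attribute changes the decision."""
    original = screening_model(candidate)
    for value in alternatives:
        probe = dict(candidate, **{protected_attribute: value})
        if screening_model(probe) != original:
            return True
    return False


candidate = {"years_experience": 4, "skills_matched": 4, "gender": "female"}
print("Advanced?", screening_model(candidate))                        # False
print("Biased?", decision_depends_on(candidate, "gender", ["male"]))  # True
```

Here the probe catches exactly the kind of problem the Amazon system had: an otherwise identical candidate gets a different outcome because of gender alone.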
So, to what extent should the creators of artificially intelligent decision-makers be responsible for explaining each decision’s rationale? And where should we draw the line on which decisions need explanations?