Training Gender Bias into Resumé Readers


Profiled article

J. Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters, Oct. 09, 2018. [Online]. Available:


This week we profile Jeffrey Dastin's report on Amazon's use of a machine learning model in its hiring process. Given the enormous promise of artificial intelligence, it is easy for companies to get swept up in the buzz around big data and machine learning, and Amazon is no exception. It is important, however, for every company to step back and think critically about how these models reach their conclusions.

According to Dastin's report, Amazon's hiring model had learned a gender bias. The problem arose because the model was trained on the past ten years of hiring data, which was heavily dominated by men. Dastin writes, "In effect, Amazon's system taught itself that male candidates were preferable. It penalized resumes that included the word 'women's,' as in 'women's chess club captain.'" Amazon denied that the automated model was a deciding factor in its hiring process and acknowledged only that recruiters may have looked at the model's results.

This raises important questions, not only in the tech industry but in all industries: Is it ethical to use machine learning models in the hiring process? How can we measure the bias in these automated processes? Can any machine learning model be impartial?
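The mechanism Dastin describes can be seen in a toy sketch. The code below is not Amazon's system; it is a hypothetical, deliberately simplified word-scoring model trained on made-up historical data in which hires are male-dominated. It illustrates how a model that only tracks correlations in past outcomes ends up penalizing a word like "women's" even though the word says nothing about a candidate's ability.

```python
# Toy illustration (hypothetical data, not Amazon's model): a scorer
# trained on biased historical hiring outcomes learns to penalize
# words that merely co-occur with past rejections.
from collections import defaultdict

# Hypothetical history: (resume tokens, 1 = hired, 0 = rejected).
# Because past hires were male-dominated, "women's" happens to
# appear only in rejected resumes.
history = [
    (["python", "chess", "club"], 1),
    (["java", "robotics", "captain"], 1),
    (["python", "women's", "chess", "club", "captain"], 0),
    (["java", "women's", "robotics", "team"], 0),
    (["python", "debate", "team"], 1),
]

def train_word_scores(data):
    """Score each word by its hire rate minus the overall hire rate."""
    hired = defaultdict(int)
    seen = defaultdict(int)
    for tokens, label in data:
        for token in set(tokens):
            seen[token] += 1
            hired[token] += label
    base_rate = sum(label for _, label in data) / len(data)
    return {t: hired[t] / seen[t] - base_rate for t in seen}

scores = train_word_scores(history)
# "women's" never co-occurs with a hire, so it receives the lowest
# score of any word: the model has absorbed the historical bias,
# not any genuine signal about candidate quality.
```

Nothing in this sketch inspects gender directly; the penalty emerges purely from the skewed training data, which is exactly why such bias is easy to introduce and hard to notice.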

Related Links

For an introduction to machine learning:

For more information on bias in machine learning:

For further information on the Amazon report: