Facial Recognition in Schools Might Do More Harm Than Good

2/24/2020

“Being spied on like dissidents is not part of the high school experience that any of us would want for our children,” argued Jim Shultz, the parent of a teenager enrolled in the Lockport, NY school district, in 2018. The New York State Education Department had approved a project to install facial recognition cameras and software in Lockport schools that would scan for objects resembling weapons, as well as for registered sex offenders, subjects of active restraining orders, and anyone else the district deemed a “credible threat.” Concerned about the lack of information surrounding the $1.4 million program and the invasion of privacy it could entail, Shultz filed a petition with the district superintendent to stop its implementation. About a month ago, he lost.

Including Lockport, eight public school systems in the United States have implemented facial recognition software that scans for weapons and persons of interest, primarily with the goal of preventing shootings. Their concern is not unwarranted: in 2019 alone, the United States suffered 70 school shootings, and over 400 have occurred in the past decade. The student responsible for the Parkland shooting had been forced to leave Marjory Stoneman Douglas High School a year earlier for disciplinary reasons; had he been documented as a person of interest, a facial recognition system might have flagged his return and mitigated the attack. Although none of these systems has yet stopped an identified shooting threat, a school in Oklahoma did use facial recognition to identify and locate a student who had reportedly run away from home.

Despite its potential to improve student safety, two notable issues remain with implementing facial recognition. First, little to no standard legal regulation exists for the software; while cities like San Francisco and Somerville have banned facial recognition, neither the federal government nor state boards of education have established policies. RealNetworks, one company that develops such recognition software, recommends a set of “best practices” to the schools that use it, including obtaining consent and keeping the process transparent, but no administration is required to follow them. At one Texas school, for example, a volunteer who got into an argument with a security official was placed on a watch list using a photo pulled from social media. She was never notified that her face had been added to the list and insists that the discipline was a sign of “authoritarianism.”

Second, these facial recognition systems might enable discrimination against certain groups of students. Black students are already three times as likely to be suspended as white students, despite being no more likely to misbehave; depending on how schools assemble their persons-of-interest lists, facial recognition software may disproportionately monitor black students. Moreover, a study published by the National Institute of Standards and Technology found that facial recognition software misidentifies people of color at higher rates, making schools more likely to discipline innocent students who are non-white.

In a climate where dozens of gunmen threaten the safety of schoolchildren each year, school administrations rightly feel pressure to tighten security and keep their students safe. Given the lack of regulation surrounding facial recognition, however, is implementing the software worth the breach of privacy and the potential for adverse effects on students?