by Christopher Koopman, Executive Director; Harith Khawaja
Posted August 22, 2019 In Scholar Commentary

This article was originally posted on the Medium publication The Benchmark

This week, Senator Bernie Sanders (I-VT) called for a ban on police use of facial recognition as part of his presidential campaign’s criminal justice reform platform. He’s not alone. Earlier this year, San Francisco became the first city in the US to ban local agencies’ use of facial recognition technology. Since then, officials in Oakland, CA and Somerville, MA also voted to limit how police departments use this technology.

This is the latest part of a broader conversation about the costs and benefits of facial recognition technology — a complex puzzle worth unpacking.

From open letters calling on companies like Amazon to stop selling facial recognition to law enforcement, to recent unsuccessful efforts by Amazon shareholders to prohibit the company from selling the technology to government customers, a serious conversation is taking place in the United States about when it is appropriate for governments to use emerging technology that may not be ready for broad deployment.

The Risks of Facial Recognition Technology
Many have raised serious concerns about whether these algorithms are biased against people with darker skin. In 2018, the American Civil Liberties Union (ACLU) found that Amazon’s software, Rekognition, incorrectly matched 28 members of Congress to mugshots of people who had been arrested. Many of those misidentified were people of color, including six members of the Congressional Black Caucus, among them civil rights leader Rep. John Lewis (D-Ga.).

Moreover, other researchers have found evidence of both gender and skin-type bias. A 2018 study co-authored by Joy Buolamwini of the MIT Media Lab and Timnit Gebru of Microsoft Research compared gender-detection software built by Microsoft, IBM, and the Chinese AI company Megvii. It found that the algorithms were, on average, more than 99 percent accurate when fed images of white men, but when shown images of darker-skinned women, the systems were wrong as much as 35 percent of the time. A more recent study found that Amazon’s Rekognition software made no mistakes when identifying the gender of lighter-skinned men, yet it mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time.

Beyond the risk of algorithmic bias, there are fears that the technology will be deliberately used in biased and rights-violating ways. For example, skeptics fear that governments could use facial recognition as a tool for mass surveillance of minority or dissident groups, pointing to its extensive use by China to track the movement of the country’s Uighur population. Facial recognition has also been at the center of the recent pro-democracy demonstrations in Hong Kong. There, protestors are using a variety of techniques to subvert it: smashing street cameras, wearing face masks, carrying umbrellas, and shining laser pointers at police to avoid detection and identification.

Skeptics also claim that the technology could have a chilling effect on free speech. In 2015, for example, the Baltimore Police Department used it to identify and apprehend protestors from their social media profiles during the demonstrations after the death of Freddie Gray. The technology also raises privacy concerns. In July, a Washington Post report detailed how the FBI and ICE conduct facial searches on driver’s license databases, scanning photos of millions of Americans without their consent.

The Future of Facial Recognition and Law Enforcement
While results like these are troubling, the issues are not insurmountable. Biases in these systems are likely the result of unrepresentative training data sets: research has shown that systems trained mostly on images of white men reflect the biases of that training set. This should be encouraging for those concerned, because problems rooted in the data are relatively fixable.

Companies are already taking steps to correct these errors by building more comprehensive and representative data sets that could reduce algorithmic bias. Earlier this year, IBM released Diversity in Faces (DiF), a data set of one million images. It is large and diverse, and it is a step toward teaching algorithms to handle the full range of features, skin color, and gender in the population as a whole.
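To make concrete what a “more representative” training set means in practice, here is a minimal, hypothetical Python sketch. The group labels and counts are invented for illustration and do not come from any real data set; the idea is simply that a skewed collection can be rebalanced by oversampling underrepresented groups before training:

```python
import random

def rebalance(samples):
    """Oversample so every demographic group appears equally often.

    `samples` is a list of (image_id, group) pairs; the group labels
    are illustrative placeholders, not from any real data set.
    """
    by_group = {}
    for image_id, group in samples:
        by_group.setdefault(group, []).append(image_id)
    target = max(len(ids) for ids in by_group.values())
    balanced = []
    for group, ids in by_group.items():
        # Resample each group's images until it matches the largest group.
        balanced += [(i, group) for i in random.choices(ids, k=target)]
    return balanced

# Skewed toy set: 6 images of group A, only 2 of group B.
toy = [(f"img{n}", "A") for n in range(6)] + [(f"img{n}", "B") for n in range(6, 8)]
balanced = rebalance(toy)  # each group now contributes 6 samples
```

Real debiasing work is far more involved than this, but the sketch shows why the problem is considered tractable: the fix lives in the data pipeline, not in the nature of the technology.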

Researchers at MIT have also developed an algorithm that reduces bias even when trained on unrepresentative data sets. By controlling for variations in posture, angle, and lighting that add noise to predictions, developers can reduce the number of false positives.

Facial recognition technology could also be used to reduce human error in policing. Mistaken eyewitness identifications, for example, contributed to approximately 71% of the more than 360 wrongful convictions in the United States overturned by DNA evidence. When used appropriately, facial recognition systems could help exonerate suspects, increase fairness in the system, and improve current policing practices.

Let’s put this into a real example. Juan Catalan, the subject of the recent Netflix documentary “Long Shot,” was arrested in 2003 for a murder he didn’t commit. To prove his alibi, and save his life, his lawyers pored over hundreds of hours of footage to show he was one of 56,000 people at a Dodgers game the night HBO filmed an episode of Larry David’s show Curb Your Enthusiasm.

Imagine, instead, that law enforcement, prosecutors, or Catalan’s defense attorneys had access to facial recognition software. A high-stakes murder trial and potential death sentence could have been avoided with a simple computer program.

Where do we go from here?
An outright ban on facial recognition technology is unlikely to be as fruitful as other, more narrowly targeted public policies. For example, law enforcement agencies using this technology could be required to open their systems to audits for bias, or to use only systems that have demonstrated a certain degree of accuracy.
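What might such a bias audit measure? One common approach is to compare error rates across demographic groups. The sketch below is a simplified illustration, with entirely synthetic records (the groups, labels, and numbers are invented, not drawn from any real system): it computes, for each group, how often the system wrongly flagged a non-match as a match.

```python
def false_positive_rates(records):
    """Per-group false-positive rate: the fraction of true non-matches
    the system wrongly flagged as matches.

    Each record is (group, predicted_match, actually_match). All data
    fed to this function here is synthetic, for illustration only.
    """
    stats = {}  # group -> (false positives, total non-matches)
    for group, predicted, actual in records:
        if not actual:  # only non-matches can yield false positives
            fp, total = stats.get(group, (0, 0))
            stats[group] = (fp + int(predicted), total + 1)
    return {g: fp / total for g, (fp, total) in stats.items()}

# Synthetic audit log: group A has 1 false positive in 10 non-matches,
# group B has 3 in 10 -- the kind of disparity an auditor would flag.
log = (
    [("A", True, False)] + [("A", False, False)] * 9
    + [("B", True, False)] * 3 + [("B", False, False)] * 7
)
rates = false_positive_rates(log)
# rates -> {"A": 0.1, "B": 0.3}
```

An accuracy requirement could then be expressed as a threshold on both the overall rate and the gap between groups, which is one way a “proven degree of accuracy” standard might be made auditable.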

Another possibility is to restrict the technology to certain domains until we are more confident in the results. Real-time monitoring of public areas, for example, could be prohibited because of its invasiveness and potential for false positives. Law enforcement agencies could also be limited to using facial recognition on a case-by-case basis, with a warrant from a judge.

While facial recognition could transform our society, some are raising legitimate concerns about how to ensure that these systems are safe and unbiased. When misused, this technology has the potential to cause real and lasting harm. But we can also ensure that these systems are built to limit misuse. With any technology, improvements will come. To create space for those continued improvements, we need to think carefully about how we measure both the risks and the potential benefits, and to ensure that public policies enable the technology’s development over time.

CGO scholars and fellows frequently comment on a variety of topics for the popular press. The views expressed therein are those of the authors and do not necessarily reflect the views of the Center for Growth and Opportunity or the views of Utah State University.