28 Apr 2020

Discrimination laws must encompass facial recognition models

Facebook's Emer Cassidy argues that not only facial recognition’s use but also its creation and accuracy must be regulated

karelnoppe/Shutterstock
Research has found that people of colour are more likely to be misidentified by facial recognition software

In addition to the privacy issues associated with the large-scale use of facial recognition, there are serious concerns about the maturity of this technology and its ability to accurately identify people, particularly people of colour. Legislators need to look at the creation and accuracy of the models used in facial recognition software, not simply at its use.

We know that AI systems, and in particular machine learning models, are only as good as the data they are trained on. Facial recognition technology is trained on pictures of people, after which it can identify specific individuals by mapping their facial characteristics. When creating a model, the data used to train, validate and test it is typically “perfect” data - clear and unambiguous. For facial recognition, that might mean full-frontal images showing someone’s face straight on, in good light.
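To make that dependence on training data concrete, here is a minimal sketch of the standard practice of holding out test data when building a recognition model. It uses randomly generated stand-in "embeddings" rather than real photographs, and it illustrates the general workflow only, not any vendor's actual system.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: 600 random "face embeddings" for 10 hypothetical people.
# In a real system these vectors would be derived from photographs.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(600, 128))
person_ids = rng.integers(0, 10, size=600)

# Standard practice: hold out a test set so accuracy is measured on
# faces the model has never seen during training.
train_x, test_x, train_y, test_y = train_test_split(
    embeddings, person_ids, test_size=0.3, random_state=0)

model = KNeighborsClassifier(n_neighbors=3).fit(train_x, train_y)

# With meaningless stand-in data, held-out accuracy is no better than
# chance - the model can only learn what the data actually contains.
print("held-out accuracy:", model.score(test_x, test_y))

The same logic cuts both ways: if the training set under-represents certain faces, the model's accuracy on those faces will suffer, however good its headline numbers look.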

The issue for facial recognition technology in real-world use is that you do not get such clear, detailed images. Whether from body cameras on police officers or surveillance cameras on lamp posts, you generally will not get the same clarity of image - the lighting may be bad, the image may be moving, or it may capture only part of a face. All of this makes the technology less accurate and more prone to mistakes in public use. These risks need to be acknowledged and mitigated before cities and governments adopt facial recognition in public life.

This issue becomes more pronounced for people of colour: a United States study by the National Institute of Standards and Technology found that people of colour are more likely to be misidentified by facial recognition software. This is not an issue with the artificial intelligence itself; it is an issue with the data used to train the model.

The same US study found that facial recognition technology created by Asian companies was less likely to misidentify Asian faces, which indicates that more extensive training data can mitigate this bias. Some Chinese companies claim 95% accuracy when identifying people through facial recognition. It is not that certain ethnicities are inherently easier to identify; it is that without a strong data set incorporating people of colour, misidentification of those people is more likely.

Regulation should not focus simply on banning or allowing this technology; it should identify what the technology is intended to be used for and set a suitable accuracy standard for that use. Not all uses require the same accuracy - for example, unlocking a phone might warrant a lower accuracy standard than a law enforcement use.
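One way a regulator might express such use-specific standards is as minimum performance thresholds per application. The sketch below is purely illustrative: the use cases, the false-match-rate figures and the check itself are hypothetical, not drawn from any existing regulation.

# Hypothetical use-specific accuracy standards; the thresholds below
# are invented for illustration, not taken from any real regulation.
REQUIRED_MAX_FALSE_MATCH_RATE = {
    "phone_unlock": 1e-3,        # a false match unlocks one phone
    "law_enforcement_id": 1e-6,  # a false match could implicate an innocent person
}

def deployment_permitted(use_case: str, measured_false_match_rate: float) -> bool:
    """Allow deployment only if the system meets the standard for its intended use."""
    return measured_false_match_rate <= REQUIRED_MAX_FALSE_MATCH_RATE[use_case]

print(deployment_permitted("phone_unlock", 5e-4))        # True
print(deployment_permitted("law_enforcement_id", 5e-4))  # False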

The security consequences of a phone unlocking incorrectly do not carry the same weight as incorrectly identifying the perpetrator of a crime. A striking example of this was borne out in California during the passage of the Body Camera Accountability Act, a Bill placing a moratorium on the use of facial recognition in police officers' body cameras. While the Bill was moving through the California legislature, the American Civil Liberties Union fed images of the legislators themselves into facial recognition software. When those images were compared with mugshots, twenty-six lawmakers were incorrectly identified as matches. This shows the importance of ensuring the accuracy of this technology to prevent serious legal issues or miscarriages of justice.

Requiring models to be trained on data drawn from many ethnic and racial backgrounds could remove some of the bias these models have been shown to have. In the same way that we do not allow discrimination on the basis of gender or race in employment, we should not allow discrimination in the creation of facial recognition models. Diversity should be a key tenet of the creation of this technology. Regulation should require that the data used to train the models consists of comparable volumes from different backgrounds. Creating and training a model on predominantly Caucasian data should not be considered appropriate for large-scale use in diverse societies.
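A simple first step towards such a requirement would be auditing the demographic make-up of a training set before a model is approved. The sketch below assumes each training image carries a group label in the dataset's metadata; the group names, the counts and the under-representation threshold are all invented for illustration.

from collections import Counter

# Hypothetical metadata: one group label per training image. In a real
# audit these labels would come from the dataset's own records.
training_groups = ["group_a"] * 700 + ["group_b"] * 200 + ["group_c"] * 100

counts = Counter(training_groups)
total = sum(counts.values())
equal_share = 1 / len(counts)

for group, n in sorted(counts.items()):
    share = n / total
    print(f"{group}: {share:.0%} of training data")
    # Flag any group with less than half of an equal share (threshold is arbitrary).
    if share < equal_share / 2:
        print(f"  warning: {group} is under-represented")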

Facial recognition is undoubtedly a useful technological advance. However, before we embrace it in public life across society, we need to regulate not only its use but also its creation and accuracy.

Emer Cassidy is policy manager, business products at Facebook

