09 Nov 2018

How to get the most from facial recognition technology without risking personal freedoms

The use of Automated Facial Recognition (AFR) within a CCTV landscape has the potential to offer many benefits to society: supporting healthcare, helping city authorities look after our public spaces, providing biometric authentication for international travel, managing controlled, safe access to workplaces and leisure venues, and curbing anti-social behaviour and criminal activity.

Some difficult challenges have to be overcome before we can reap the greatest rewards. Arguably the biggest of these lie not in the technology's capability to deliver its many and varied possibilities, but in how we manage its use and abuse through regulation and policy. The public debate around the development and use of CCTV/AFR solutions is part of the framework for the creation of this regulation and policy.

In both the UK and the US we see signs of unease around early uses of AFR, including concerns voiced by civil liberties movements and interest groups. Amazon has entered the fray in the US, calling for facial identification systems used for law enforcement to apply a 99% confidence threshold (that is, to accept only the highest-confidence matches), and even then to be used only as one input alongside others [1]. Amazon has a reason to get involved in this debate: its image analysis service, Rekognition, is already widely used for a range of purposes, and in Orlando the city police department recently extended its trial of the service for facial recognition. Meanwhile, in the UK, campaigning groups such as Big Brother Watch produce reports like Face Off [2], which specifically characterise the use of AFR by state agencies as an abuse of civil liberties.
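
To make the threshold point concrete, the sketch below shows, purely as an illustration rather than a description of any deployed system, how a 99% similarity threshold can be applied when comparing two images with the Amazon Rekognition API via the standard boto3 Python client; the image file names and AWS region are hypothetical placeholders.

    import boto3

    # Illustrative only: compare a probe image against a reference image and
    # keep only matches at or above the 99% similarity level discussed above.
    rekognition = boto3.client("rekognition", region_name="eu-west-1")

    with open("reference_photo.jpg", "rb") as ref, open("probe_frame.jpg", "rb") as probe:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": ref.read()},
            TargetImage={"Bytes": probe.read()},
            SimilarityThreshold=99,  # suppress any match below 99% similarity
        )

    # Even a match above the threshold should be treated as one signal among
    # several, not as a positive identification in its own right.
    for match in response["FaceMatches"]:
        print(f"Candidate match at {match['Similarity']:.1f}% similarity")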

 

We already have a strong legal framework

There is no doubt that there is potential for the technology to be misused, but the legal framework in the UK already provides robust safeguards against this. When it comes to data relating to specific individuals, including computerised data such as facial maps, protection comes both from the EU General Data Protection Regulation (GDPR), which took effect in May 2018, and from the UK Data Protection Act 2018 (DPA 2018). The DPA 2018, in particular, brings both law enforcement and intelligence service processing of personal data formally within the data protection regime.

The new data protection legislation has been specifically designed to be forward looking, so that it can cater for technological developments into the future without the need for continual revision. It places stringent requirements on “data controllers” and “data processors” who utilise personal data, and grants rights including those of access, correction and erasure to identifiable individuals whose personal data is processed – “data subjects”. Data protection law sets a base level of protection for an individual’s personal data, but national regulators will expect controllers and processors to constantly seek to improve the protection they provide as new privacy enhancing and data security technologies become available. An organisation’s data privacy measures that are acceptable and lawful in 2018 may fall out of compliance in the future if they slip behind evolving sectoral data privacy standards.

Data privacy legislation thus treats all personally identifying data as worthy of protection. There is an understanding that, as the algorithms used to process the data become ever more sophisticated, it can be difficult to foresee how that data might be used – for example, how elements of your Facebook data might be used to assess your voting intentions [3]. The focus is on data in general, rather than on specific types of data.

The law does, however, recognise that certain types of personal data carry more significance for individuals, and the GDPR refers to these as special category data. They include data about racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, sex life and sexual orientation, and health. Unlike prior legislation, the GDPR also includes genetic data, and biometric data processed for the purpose of uniquely identifying a natural person. Use of such data requires controllers and processors to observe a further layer of protections.

This differentiation between “personal data” and “special category personal data” has been criticised, and may be difficult to sustain in a technological environment where “personal data” can be used to make accurate predictions about “special category personal data” – for example, Facebook friendships may be used to predict our sexual orientation [4]. Appropriate protection of all personal data is important. In the context of modern data analytics, face mapping data is just another data point, no different in essence from our home addresses, our bank account details or whether and how we vote in elections. All of these types of data should be treated with the same stringent controls, the same checks and balances, and the same regard for personal privacy.

Personal privacy is also protected in the common law, through legal principles such as the duty of confidentiality and the protection of personal information. In the UK, the use of CCTV and AFR must respect the right to private and family life under Article 8 of the European Convention on Human Rights (ECHR), and any restrictions on that right must be “in accordance with the law” and “necessary in a democratic society”. Meanwhile, Articles 7 and 8 of the EU Charter of Fundamental Rights (CFR) are clear that personal data must be processed fairly for specified purposes, and only on a legitimate basis laid down by law.

 

Having a rounded debate that includes the industry

Despite these strong foundations in law, it is not uncommon for campaigning groups to lobby against the use of CCTV and AFR as if they were an automatic infringement of personal liberty. Somehow, it seems, data relating to our face takes on a different quality from data about our addresses, bank accounts and so on. We may overemphasise the relative importance of facial data as a single element of our personal “data shadow” – the sum of the data about us. In doing so we risk losing touch with the bigger picture of the technology's potential value by focusing only on the possibility of negative outcomes.

The work of interest groups in the area of AFR raises some genuinely important concerns about the legitimacy, quality and security of the AFR datasets currently in use by agencies like the police. Getting to grips with these concerns is a crucial part of ensuring appropriate checks and balances are put in place. But interest group campaigns can cause difficulties if government policy is created principally in reaction to unsupported assertions, misplaced criticisms and calls for bans, rather than as a result of rounded thinking, discussion and debate.

Rather than rely on research and reports developed by interest groups, the UK government would do well to set up an independent review into the use of CCTV, AFR and associated AI and analytics technologies. Interest groups would take their place in this framework, as would government, academics and the industry itself, allowing the topic to be addressed and analysed from all sides, and considering multiple viewpoints.

Indeed, including the industry in discussion is vitally important. Legislation is just one of several ways to ensure that the use of CCTV and AFR does not infringe on public rights and freedoms. Encouraging the industry to produce and work to its own guidelines could be of significant comfort to those members of the public with concerns, and could contribute to the competitive landscape too. For example, if public authorities work with the industry to develop a code of practice, and then require public purchasers of CCTV/AFR/analytics solutions to select providers who sign up to the code, they could influence the market, and contribute to the raising of standards by consensus.

 

Where policy leads, the industry will follow

A wider policy debate, involving a broad scope of interests, should be able to support the creation of an environment which allows CCTV and AFR to reach their maximum potential – in the service of civil society. Such a debate would avoid the binary situation of criticism by interest groups and reaction by policy makers, which is so stifling to wider discussion.

Instead, the debate would include discussion of the checks and balances needed in the protection of civil liberties and personal privacy, and render explicit the compromises society decides to make in the interests of the greater good, such as where it extends specific powers to the police. It could also provide the scope for industry to openly take the lead in some areas, such as through the already noted joint codes of practice, and identifying measures that ensure the technologies that are developed are robust and secure.

Without opening up the debate in this way, we will continue to be mired in an essentially unproductive scenario in which one report after another criticises the sociological, technological and political aspects of CCTV and AFR, government and policy makers react to those criticisms, and there is no concerted attempt to engage in innovative thinking around the effective regulation of the technology. Regulation of new technologies solely through legal and administrative measures is increasingly recognised as unsophisticated, inflexible and often unsatisfactory in outcome. The “regulatory toolkit” available today encompasses a much broader range of techniques that can be brought to bear. In this area they might include seeking to change the culture and practices around CCTV and AFR use by the state and the private sector, designing privacy protections into the technology’s architecture to prevent its inadvertent or deliberate misuse, or requiring greater disclosure of policies and processes by organisations utilising it. Yet, in heavily polarised debates, these kinds of regulatory experiments can often be swamped by polemic.
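
As one concrete illustration of what designing privacy protections into the technology’s architecture might mean in practice, the sketch below (an assumption-laden example rather than a recommended design) uses the open-source OpenCV library to detect and blur faces in a CCTV frame before it is written to storage, so that raw facial data need never leave the camera or recorder; the file names are hypothetical placeholders.

    import cv2

    # Illustrative privacy-by-design sketch: redact faces at the point of
    # capture so that stored footage contains no usable facial data.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    frame = cv2.imread("cctv_frame.jpg")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect candidate face regions and blur each one before the frame is saved.
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], (51, 51), 0)

    cv2.imwrite("cctv_frame_redacted.jpg", frame)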

As the technology improves, AFR and other CCTV data analytics are poised to enter our lives in ways far beyond the policing uses that underpin the “Big Brother” concerns of the campaigning groups. The use of AFR for authorising access, whether to a mobile phone, a secured location, or a country, is increasingly common. Retailers are examining AFR for use not just in security, but in areas like marketing and client management.

It is implausible, given the business and security imperatives, that a putative ban on CCTV analytics would be effective or garner much support. What is needed is an open conversation that focuses clearly on the proposed uses of the technology and the data it generates, and a holistic regulatory regime that optimises the balance between the social utility to be gained from controlled deployment of CCTV analytics and individual and community privacy interests.


References: 

  1. Dr Matt Wood, Thoughts on Machine Learning Accuracy, AWS News Blog, July 2018. https://aws.amazon.com/blogs/aws/thoughts-on-machine-learning-accuracy/
  2. Big Brother Watch, Face Off: The lawless growth of facial recognition in UK policing, May 2018. https://bigbrotherwatch.org.uk/wp-content/uploads/2018/05/Face-Off-final-digital-1.pdf
  3. Alex Hern, Cambridge Analytica: how did it turn clicks into votes?, The Guardian, 6 May 2018. https://www.theguardian.com/news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie
  4. Carter Jernigan and Behram F.T. Mistree, Gaydar: Facebook friendships expose sexual orientation, First Monday, 2009. http://www.firstmonday.dk/ojs/index.php/fm/article/view/2611/2302

By James Wickes, co-founder and chief executive of Cloudview, and Andrew Charlesworth, Professor of Law, Innovation and Society, University of Bristol.