Over the last four years police have been trialling facial recognition technology across England and Wales, but critics claim it’s more than 90% inaccurate and studies of similar software found it to be racially biased. So why are police continuing to use it?
There are no laws underpinning or regulating the use of live facial recognition technology for law enforcement in England and Wales. It somehow falls outside the remit of the respective governmental commissioners on surveillance, information, and biometrics. Despite this, since 2015 police forces from Leicestershire to South Wales have been deploying facial recognition at their discretion, including the Metropolitan Police’s highly contested use of the technology at the Notting Hill Carnival in 2016 and 2017.
How does it threaten human rights?
The current unregulated use of facial recognition technology in England and Wales contravenes the European Convention on Human Rights (specifically articles 8, 10 and 11, which protect the rights to privacy, expression and association).
The technology used by police in England scans large groups of people in public places and builds a biometric data profile for every individual, assigning numerical values to your features to form a “facial signature” that is designed to be unique in the same way a fingerprint or DNA swab would be. However, you probably don’t know it’s happening, and you almost certainly won’t have consented to it.
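To make the “facial signature” idea concrete, here is a minimal sketch in Python. The `extract_signature` function is a hypothetical stand-in for the learned models real systems use; the point is only that a face image becomes a fixed-length list of numbers, so two faces can be compared by distance.

```python
import numpy as np

def extract_signature(face_image: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder for a real feature extractor: derives
    # 128 numbers (the "facial signature") deterministically from the pixels.
    rng = np.random.default_rng(int(face_image.sum()) % (2**32))
    return rng.standard_normal(128)

def similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    # Cosine similarity: 1.0 means the two signatures are identical,
    # values near 0 mean the faces look nothing alike to the system.
    return float(np.dot(sig_a, sig_b) /
                 (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))
```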
This is a significant privacy issue, in that the technology breaches our right to respect for private life and very nearly amounts to covert surveillance.
In June and July 2018, the Metropolitan Police used the technology in Stratford, London. Although leaflets were handed out to alert the public, there were reports of non-transparency and deployment by stealth, with claims that most individuals were not notified as the police had purportedly intended (which the Metropolitan Police denies).
Silkie Carlo, the director of Big Brother Watch (a civil liberties campaigning organisation), said: “Technically police can say this is overt, but our experience was that people didn’t know it was going on and didn’t even know what ‘automatic facial recognition’ is as a concept. The total absence of a public debate and information about the technology means a poster will not suffice.”
Comments from witnesses at the scene echoed this sentiment, with complaints that the police – though officially armed with informative leaflets – were not seen to be issuing them consistently, even to those citizens questioning the trial, let alone distributing them proactively to the general public being subjected to it.
In its Face Off report, Big Brother Watch warns that such trials jeopardise freedom of expression (Article 10): ‘[t]he right to go about your daily activity undisturbed by state authorities, to go where you want and with whom, and to attend events, festivals and demonstrations, is a core principle of a democratic society protected by Article 10 of the Human Rights Act 1998.’
It states:
“We are concerned that the use of automated facial recognition with CCTV has a chilling effect on people’s attendance of public spaces and events, and therefore their ability to express ideas and opinions and communicate with others in those spaces.”
This leads naturally into Article 11, which protects freedom of assembly and association.
The deployment of the technology by police at protests in particular threatens the freedom of association. For example, at a peaceful anti-arms protest in Cardiff in 2018, South Wales Police used the technology on protesters who were exercising both their freedom of expression and their freedom of association (that is, their right to express their opinions and to assemble with others who share them). Civil liberties campaigners claim the use of the technology is likely to have a chilling effect on people’s ability and desire to attend public demonstrations, which are traditionally a means of holding power to account through whistle-blowing, political campaigning and the like.
Does facial recognition work?
Freedom of information responses published in Big Brother Watch’s May 2018 report revealed that the use of this technology is over 90% ineffective. That is, over 90% of the people flagged by facial recognition used by UK police had been incorrectly identified or misidentified. The statistics for each police force differ: South Wales Police’s matches were over 91% incorrect, while the Metropolitan Police’s were over 98% incorrect.
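For clarity, the “ineffectiveness” figure is simply the share of the system’s alerts that turned out to be the wrong person. A minimal sketch of the calculation, using hypothetical counts chosen only to mirror the Metropolitan Police’s 98% figure quoted above:

```python
# Hypothetical counts, chosen only to reproduce the 98% figure above.
alerts = 104          # faces the system flagged as watchlist matches
correct_matches = 2   # alerts that actually identified the right person

misidentification_rate = (alerts - correct_matches) / alerts
print(f"{misidentification_rate:.1%} of alerts were misidentifications")
# -> 98.1% of alerts were misidentifications
```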
However, though ineffective in terms of arrests, the police’s use of the technology is far from innocuous. When an individual is flagged by the technology, police will stop the person and ask them to prove their identity. In January this year, during the tenth Metropolitan Police trial of the technology, it was reported that anybody covering their face from the cameras was approached by police, including one man from whom officers then demanded identification. When the individual complied but became angry, he was fined for public disorder.
The police do not generally have the power to force members of the public to identify themselves without “reasonable cause”, and the technology is too ineffective for one of its alerts to be regarded as such, so its use is creating a major and unregulated elevation of police power.
Currently, when a person is misidentified their image is stored for anywhere between one and three months, and possibly for up to a year. Innocent individuals are thus having their biometric data stored on police databases without their consent.
It’s not only ineffective – it’s racist
In 2018 a study by the academics Joy Buolamwini (of MIT) and Timnit Gebru (of Microsoft Research) exposed the racial bias of similar technology created by IBM, Microsoft and Face++, the largest providers in the USA and China. The research revealed that a black or minority ethnic woman was over three times more likely to be misidentified by the technology than a white man.
The technology used by the police in England and Wales is software called NeoFace, supplied by the technology corporation NEC. Though this software has been validated as the best in the industry by the US National Institute of Standards and Technology, it has not undergone any demographic testing.
In England we don’t have a study comparable to Buolamwini’s for the specific technology used by our police forces, and therefore we cannot assess its potential for algorithmic bias (though there are plenty of articles about similar technologies that do).
At this stage, without demographic testing of the NeoFace software, the problem of racial bias has less to do with the algorithms underpinning the technology and more to do with the way the police are using it. The software works by running the biometric profile it has created for any given individual against a database of pre-existing (and pre-identified) biometric profiles. This database is the custody image database – i.e. a collection of mugshots of every person arrested by the police in England and Wales.
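A minimal sketch of that matching step, reusing the hypothetical signature functions above. The identity names and the 0.6 alert threshold are illustrative assumptions, not details of NEC’s system:

```python
import numpy as np

def find_match(probe: np.ndarray,
               watchlist: dict[str, np.ndarray],
               threshold: float = 0.6) -> str | None:
    """Compare a live 'probe' signature against every stored custody-image
    signature and return the closest identity above the alert threshold."""
    best_name, best_score = None, threshold
    for name, stored in watchlist.items():
        score = float(np.dot(probe, stored) /
                      (np.linalg.norm(probe) * np.linalg.norm(stored)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None means no alert is raised
```

Everything therefore hinges on what is in the watchlist: every stored profile is another chance for a passer-by to be flagged.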
In its Biometrics Strategy and Forensic Services report, published in May 2018, the House of Commons Science and Technology Committee condemned the attitude of both the Home Office and the police towards the custody image database, which holds around 21 million images of individuals and includes (unlike the databases for fingerprints and DNA) images of people who were never charged with any crime.
In 2012 the High Court ruled that the police’s retention of innocent individuals’ images was unlawful, but it remains the case that those who go unconvicted must request that their image be deleted (as opposed to the automatic erasure applied to DNA and fingerprints).
In 2016-2017, according to the government’s official statistics, black people were over three times more likely to be arrested than white people: black women were more than twice as likely to be arrested as white women, and black men three-and-a-half times more likely than white men. People of mixed ethnicity were more than twice as likely to be arrested as white people. These figures, however, were not reflected in a higher number of convictions for people of colour.
This means that the custody image database contains a disproportionate number of images of innocent people of colour, which in turn makes it far more likely that a person of colour scanned by this ineffective technology will be misidentified, stopped and searched, forced to prove their identity, or arrested.
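A toy calculation of that compounding effect, with entirely hypothetical rates chosen only to mirror the roughly threefold arrest disparity in the official statistics quoted above (and convictions assumed equal):

```python
# Hypothetical annual arrest rates per person, mirroring the roughly
# threefold disparity in the official statistics quoted above.
arrest_rate = {"white": 0.010, "black": 0.030}
conviction_given_arrest = 0.50  # assumed equal for both groups

for group, rate in arrest_rate.items():
    # Share of the group gaining a retained custody image each year
    # despite never being convicted of anything.
    unconvicted_retained = rate * (1 - conviction_given_arrest)
    print(f"{group}: {unconvicted_retained:.2%} per year")
# The threefold arrest disparity flows straight through into a threefold
# disparity in innocent people's images on the watchlist.
```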
So, the current unregulated use of facial recognition technology by police in England and Wales perpetuates, and is perpetuated by, pre-existing racism within law enforcement.
What can be done?
In June 2018, the Home Office published its long-awaited and disappointingly sparse Biometrics Strategy. In it, the department pledges to “establish a new oversight and advisory board to coordinate consideration of law enforcement’s use of facial images and facial recognition systems” – though nothing has yet materialised.
In July 2018, Big Brother Watch and Baroness Jones filed a legal challenge in the High Court against the police’s lawless use of facial recognition technology, demanding that “the Metropolitan Police and Home Secretary…end their lawless use of dangerously authoritarian facial recognition cameras.” The case was dropped earlier this year after the Metropolitan Police announced that their trial of the technology was over. However, a similar challenge has been brought against South Wales Police over their deployment of the technology, and will soon proceed to a three-day hearing; the Cardiff resident bringing it is running a CrowdJustice campaign here.
At present (May 2019), the Metropolitan Police are considering the permanent deployment of the technology, while other forces across the country continue their trials. But with such high rates of inaccuracy, undeniable racial bias and no regulation, how can the police continue to justify its use?
- The campaign against the Metropolitan Police’s plans to permanently deploy facial recognition technology can be found here.
What is the police response?
The Metropolitan Police Press Office said: “Technology is advancing at a fast pace and Live Facial Recognition (LFR) is a new policing tool that cannot be ignored. LFR has the potential to catch wanted, dangerous individuals in a similar way to Automatic Number Plate Recognition (ANPR). Big Brother Watch and Liberty have highlighted the number of ‘false positives’ after submitting FOIs. The Met does not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts.”
Further, the Met says officers have made eight arrests as a direct result of the flagging system during the 10 deployments:
- Two arrests were made during the deployment in Westminster on 17 and 18 December, for malicious communications (threatening rape) and assault on police.
- Three arrests were made during the deployment in Romford on 31 January, on suspicion of robbery, false imprisonment and kidnapping, and breach of a non-molestation order. The man arrested for breach of a non-molestation order was subsequently charged and sentenced to 11 weeks’ imprisonment.
- A further three arrests were made during the second deployment in Romford Town Centre on Thursday 14 February: two were on suspicion of robbery and theft, and a third man was arrested on suspicion of common assault, breach of a restraining order and harassment.
Information about the Met’s use of facial recognition technology can be found here.
Main image by Akram Shehadi.