Facial Recognition: An Intrusion On The Right To A Private Life?
We are living in an era in which technology is advancing exponentially. From robots delivering takeaways to the Metaverse, no idea seems too extreme at this point. But at what expense? Life as we know it is changing rapidly but does this make us more susceptible to normalising what was once deemed to be unthinkable? In some ways, this is a positive thing – the move from gas to electric, advances in medical technology, our ability to work remotely during the pandemic…
However, negative opinions are arguably just as widespread, particularly concerns around privacy. The idea that our smartphones, and the companies providing them, know our every move is an uncomfortable one, albeit one that many people now accept. Similarly, the UK was the third most-watched country in the world, with approximately 5.2 million CCTV cameras in 2020 according to IFSEC, and London is the most surveilled city in the world outside of China. Some may remain unfazed by these statistics on the basis that they will remain anonymous, provided they are law-abiding citizens. However, developments in facial recognition technology seem to challenge this perception.
Facial recognition technology uses biometric information to identify a human face in a video or image by comparing facial features with faces stored in a database, thus involving the processing of special category personal data. This can be either live, i.e. real-time automated processing, or retrospective. There is currently no legislation specifically governing the use of facial recognition technology; however, data protection legislation, alongside other relevant laws, must still be complied with.
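To make the matching step above concrete, here is a minimal, purely illustrative sketch of how a system might compare a "probe" face against a database. Real systems derive face embeddings from trained neural networks; the vectors, the `match_face` function, and the similarity threshold below are all hypothetical stand-ins for demonstration only.

```python
import numpy as np

def match_face(probe, database, threshold=0.8):
    """Return the database identity whose embedding is most similar to the
    probe (by cosine similarity), or None if nothing clears the threshold.

    Illustrative only: real face embeddings come from trained models,
    not hand-written vectors, and thresholds are tuned empirically.
    """
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        # Cosine similarity between the probe and each stored embedding.
        score = np.dot(probe, embedding) / (
            np.linalg.norm(probe) * np.linalg.norm(embedding))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Made-up 4-dimensional embeddings for two enrolled identities.
db = {
    "alice": np.array([0.9, 0.1, 0.0, 0.1]),
    "bob": np.array([0.0, 0.8, 0.5, 0.1]),
}

print(match_face(np.array([0.88, 0.12, 0.02, 0.1]), db))  # close to "alice"
print(match_face(np.array([0.5, 0.5, 0.5, 0.5]), db))     # no confident match
```

The threshold is the crux of the accuracy debate discussed later: set it too low and the system produces false matches; set it too high and persons of interest are missed.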
The ICO views the use of facial recognition technology as a regulatory priority and opened an investigation in 2018 focusing particularly on how the police, in this case the Metropolitan Police Service (MPS) and South Wales Police (SWP), use this technology in public places. The investigation involved market research into public awareness of and attitudes towards police use of live facial recognition technology, and analysis of the extent to which data protection legislation applies and is considered by police forces, particularly in relation to the case of R (Bridges) v The Chief Constable of South Wales. This is not to say, however, that there have not also been cases of non-compliant use of such technology by private companies, as seen at King's Cross.
The ICO’s market research suggested that there is public support for police use of live facial recognition technology, with 82% of 2,202 surveyed adults indicating that this was acceptable. Further, 72% agreed that it should be used permanently in areas considered to have high crime, and 65% believed the technology is necessary to prevent low-level crime. 60% agreed that it is acceptable to use the technology in crowds even if it is only intended to find one person of interest. However, concerns over privacy were still prevalent.
Other research conducted by the London Policing Ethics Panel, purposefully aimed at specific demographics, found varying results. In fact, majorities of those surveyed from black (63%) and Asian (56%) backgrounds were opposed to police use of this technology. Similar results were found among younger respondents.
In another national survey, conducted by the Ada Lovelace Institute, the majority were in favour of police use of the technology, yet 55% of respondents believed that the government should limit that use. Concerns were also expressed about breaches of privacy rights, normalisation of surveillance, the lack of consent or opt-outs, and a lack of trust in police use of the technology.
These concerns do not appear unreasonable when considering the ICO’s findings in its investigation. The ICO highlighted numerous instances of non-compliance and areas for improvement, particularly regarding transparency, consideration of proportionality and necessity, and the accuracy of the technology. However, the ICO also sees the benefits of its use and found evidence of good practice and some commitment to data protection compliance, including, for example, the completion and revision of data protection impact assessments (DPIAs). It nevertheless concluded that a binding statutory code of conduct should be drawn up for the deployment of facial recognition technology, to ensure consistent and compliant practices across the nation.
UK police forces are gradually moving towards widespread reliance on facial recognition technology in their investigations, as evidenced by an increased number of deployments and the MPS’ expansion into retrospective facial recognition, but there are numerous obstacles to tackle first. For example, in order to comply with data protection legislation, use of the technology must be justifiably necessary and proportionate to the purposes for which it is deployed. For now, what counts as ‘effective’ has not been clearly articulated by the police, which arguably undermines the necessity and proportionality of their use of facial recognition technology. Yet, considering the statistics for CCTV, it is not unreasonable to assume that, in time, facial recognition could take a similar route. On top of this, talk of post-Brexit reform, and possible weakening, of UK data protection legislation could allow mass facial recognition processing to inch closer to ‘the new norm’.
There is a chance that we may follow a route similar to that of San Francisco, for example, where government use of facial recognition technology is banned. For now, however, deployment of the technology remains on an upward trajectory, and it will likely take mass campaigning to challenge this.