The London Metropolitan Police Service has announced it will begin operational use of Live Facial Recognition (LFR) technology, despite persistent criticism and concerns.
The technology itself has come under criticism not only for poor performance in identifying individuals; critics have also argued it should be deemed a violation of the privacy rights afforded to individuals in democratic societies. Despite the ongoing controversy, the London police force seems to think it has all the bases covered.
“This is an important development for the Met and one which is vital in assisting us in bearing down on violence,” said Assistant Commissioner Nick Ephgrave. “As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London.
“We are using a tried-and-tested technology and have taken a considered and transparent approach in order to arrive at this point. Similar technology is already widely used across the UK, in the private sector. Ours has been trialled by our technology teams for use in an operational policing environment.”
The initiative will start at various London locations where the Met believes it will help locate the most serious offenders. The primary focus will be on knife and violent crime. It is unclear whether these deployments will be permanent at a given location, or whether officers will be free to move around to other parts of the city.
As individuals pass the relevant cameras, facial maps will be compared against ‘watchlists’ created for specific areas. Should a match be confirmed, the officer will be prompted (not ordered) to approach the individual.
What Ephgrave seems to be conveniently leaving out of the above statements is that private use of facial recognition technology is either (a) still largely in its trial period, or (b) itself highly controversial.
In August, privacy advocacy group Big Brother Watch unveiled a report which suggested shopping centres, casinos and even publicly owned museums had implemented the technology without public consultation, and had even been sharing data with local police forces without consent. This shows a worrying disregard for the UK’s vitally important privacy principles.
At the European level, the European Commission has been considering new rules which would extend consumer rights to cover facial recognition technologies. In the US, lawsuits have been filed against its implementation in Illinois, while the City of San Francisco has effectively banned the technology except in the most serious of circumstances.
The London Metropolitan Police Force has said it will delete images which are not matched to individuals on record, though considering police databases hold more than 20 million records, this leaves wiggle room. If an arrest is made, the data will be kept for 31 days. Although this is a concession by the Met, human rights organisations and privacy advocacy groups have continued to argue that such technologies are an intrusion, over-stepping the privileges afforded to the police and eroding the concept of privacy.
Interestingly enough, the same underlying issues persist in London; the police force seems to have pushed forward with the introduction of the technology without a comprehensive public consultation. While there is good to be taken from this technology, there are also grave risks of abuse unless it is managed very effectively; the general public should be afforded the opportunity to contribute to the debate.
This does seem to be a similar case to the boiling frog. The premise of this fable is that if a frog is put suddenly into boiling water, it will jump out, but if the frog is put in tepid water which is then brought to a boil slowly, it will not perceive the danger and will be cooked to death. The same could be said about facial recognition technology.
Eight trials were conducted by the London Metropolitan Police Force between 2016 and 2018, some with disastrously poor results, though few were widely reported on. In September, the UK High Court ruled facial recognition technologies could be implemented in ‘appropriate and non-arbitrary’ cases. As this is quite a nuanced and subjective standard, authorities must be prevented from creeping beyond it.
Ultimately, this does seem like a very brash decision, and one authorised by the UK’s political leadership. This is not to say facial recognition will not benefit society or have a positive impact on security, but there is an impact on privacy and a risk of abuse. When a decision carries such pros and cons, it should be opened up to public debate; we should be allowed to choose whether to sacrifice privacy in the pursuit of security.
The general public should be allowed to have their voice heard before such impactful decisions are made, but it seems the London Metropolitan Police Force does not agree.