The Rise Of Facial Recognition Technology – What Does It Mean For Our Security?


Oxford Circus is one of the capital’s locations that recently adopted the new technology, built to track down potential offenders by comparing facial features. 

On February 20th 2020, the Met Police announced on Twitter that it had used live facial recognition technology at busy locations in Westminster. Individuals seen by the cameras would be compared to the faces of suspected criminals and, if there was a match, the police would stop them to verify it. 

The police reassured worried citizens that “This technology helps keep Londoners safe”. Opponents such as Big Brother Watch, on the other hand, highlight that “It’s alarming to see biometric mass surveillance being rolled out in London”. 

Big Brother Watch is a non-profit organisation that focuses on protecting privacy and freedoms in an increasingly virtual world. It claims to “work to roll back the surveillance state and protect rights in parliament, the media and the courts”. 

During the RUSI (Royal United Services Institute) Annual Security Lecture, Dame Cressida Dick, London’s Police Commissioner, addressed misconceptions regarding the use of Live Facial Recognition (LFR). She explained that it won’t “store your biometric data”, that human professionals will ultimately decide whether to stop a suspect and that LFR “is proven not to have an ethnic bias”. 

The latter claim came after the National Institute of Standards and Technology (NIST) published a 2019 study concluding that LFR algorithms are up to 100 times more likely to falsely identify Asian and African American faces in comparison to Caucasian ones. 

These specific claims relate to the task of ‘one-to-one’ matching, which verifies whether the technology can pair two different photos of the same person. The most common error was false positives, where two different people are incorrectly recognised as the same person. 

The study tested 189 algorithms from almost 100 companies with photos supplied “by the State Department, the Departments of Homeland Security and the FBI”. NIST is part of the US Department of Commerce, and its mission is to further American innovation to ensure a high quality of life and financial security. 

Dame Cressida claims that critics of the technology are “highly ill informed” and that, thanks to LFR, the police have arrested eight suspects. Moreover, she reiterates its importance by saying that the worry of images being stored “feels much, much smaller than my and the public’s vital expectation to be kept safe from a knife through the chest”. 

Not only is LFR being used to identify criminals, but various governments are also deploying it to contain the coronavirus outbreak. For example, Moscow is using it to monitor whether individuals defy quarantine rules, limiting the risk of them infecting other people. This comes after airplane passengers returning from China were ordered to isolate themselves. 

The use of this technology has attracted both supporters and opponents following the case of a woman who escaped quarantine at a St. Petersburg hospital after three medical tests showed that she did not have coronavirus. 

Controversy surrounding this advanced technology has also been revived by Clearview AI’s work with government departments, mostly in the US. The company has been criticised for collecting photos of individuals from social media platforms, such as YouTube, Instagram and Facebook, without their knowledge or consent. 

Clearview AI raised several concerns when its client list was accessed by a hacker, as the Daily Beast reported. The company’s lawyer, however, reassured the paper, saying that “[they] patched the flaw, and continue to work to strengthen [their] security”. 

Yasmine Moro Virion

Image Credit: Daily Mail