What is facial recognition technology?

Over the past few years, facial recognition technologies have benefited from greater access to big data, increased computing power, and advances in machine learning algorithms.

Facial recognition technology enables the automatic detection and identification of an individual by matching two or more faces from digital images. It does this by detecting and measuring various facial features to create a ‘biometric template’ and, in a second step, comparing that template with features taken from other faces.

Facial recognition technology can be further categorized by task: biometric verification, identification, or categorization.

What is Biometric Verification?

Biometric verification compares two biometric templates to determine whether the individual shown in the two images is the same person. This type of technology is often used at airports, especially at border control checks: a person scans his or her passport image and a live image is taken on the spot. The facial recognition technology compares the two facial images and, if the likelihood that they show the same person is above a certain threshold, the identity is verified.
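To make this one-to-one comparison concrete, here is a minimal Python sketch, assuming each face has already been converted into a fixed-length embedding vector (the ‘biometric template’). The embedding values and the threshold below are illustrative placeholders, not any real system’s parameters.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two biometric templates (embedding vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(passport_template, live_template, threshold=0.8):
    """One-to-one verification: do the two templates show the same person?"""
    return cosine_similarity(passport_template, live_template) >= threshold

# Toy templates; real systems use high-dimensional embeddings.
passport = [0.1, 0.9, 0.3]
live = [0.12, 0.88, 0.33]
print(verify(passport, live))  # True: similarity is above the threshold
```

The threshold is the operational knob mentioned above: raising it reduces false matches but rejects more genuine ones.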

Verification does not require biometric data to be stored in a database. Instead, it is stored on a card or in an identity or travel document. This means the individual is responsible for the security of his or her own biometric data.

What is Biometric Identification?

Biometric identification means that the biometric template of an individual’s facial image is compared against many other biometric templates stored in a database. The system then returns a score for each comparison, indicating the likelihood that the two images refer to the same individual.
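The one-to-many case can be sketched the same way: the probe template is scored against every enrolled template and the results are ranked. The database contents and scoring function here are illustrative only.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two biometric templates (embedding vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database):
    """One-to-many identification: score the probe against every
    enrolled template and return (name, score) pairs, best first."""
    scores = {name: cosine_similarity(probe, tmpl) for name, tmpl in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy enrolment database with made-up names and templates.
database = {
    "alice": [0.9, 0.1, 0.2],
    "bob": [0.1, 0.9, 0.4],
}
probe = [0.88, 0.12, 0.25]
print(identify(probe, database))  # "alice" ranks first with the highest score
```

Note that the system outputs likelihood scores for every comparison, not a single yes/no answer; deciding what score counts as a match is a separate policy choice.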

Sometimes an image is checked against a database in which the individual is known to be enrolled. In other cases, where enrolment is unknown, the individual is checked against watchlists.

Identifying Faces on CCTV: Live Facial Recognition Technology

Facial images can also be pulled from video cameras. This method of comparing footage obtained from video cameras and identifying individuals is also known as ‘live facial recognition technology’.

The system first detects whether there is a face in the video footage, much as a smartphone camera automatically draws rectangles around faces. The detected faces are then extracted and compared against the facial images in the database to determine whether the person in the footage is among the enrolled images.
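The detect–extract–compare pipeline described above can be sketched as three stages. The face detector and template extractor below are hypothetical stubs standing in for whatever models a real system would run; the frame and watchlist data are made up.

```python
import math

def similarity(a, b):
    """Cosine similarity between two templates."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def detect_faces(frame):
    """Stage 1 (stub): return face crops found in a video frame.
    A real system would run a face-detection model here."""
    return frame.get("faces", [])

def extract_template(face_crop):
    """Stage 2 (stub): turn a face crop into a biometric template.
    A real system would compute an embedding from the pixels."""
    return face_crop["embedding"]

def screen_frame(frame, watchlist, threshold=0.8):
    """Stage 3: compare each detected face against every watchlist
    entry and return (name, score) hits above the threshold."""
    hits = []
    for face in detect_faces(frame):
        template = extract_template(face)
        for name, enrolled in watchlist.items():
            score = similarity(template, enrolled)
            if score >= threshold:
                hits.append((name, score))
    return hits

frame = {"faces": [{"embedding": [0.9, 0.1]}]}  # toy frame with one detected face
watchlist = {"suspect": [0.89, 0.12]}           # toy enrolled template
print(screen_frame(frame, watchlist))
```

In practice the threshold has to be tuned carefully, because the poor image quality discussed below pushes scores for genuine matches down toward the scores of false ones.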

As you might suspect, the quality of facial images extracted from video cameras cannot be controlled. Lighting, distance, and the position of the person captured in the footage limit how clearly facial features can be captured. Consequently, live facial recognition technologies are more likely to produce false matches than systems using facial images taken in a controlled environment, such as a border crossing point or a police station.

What is Biometric Categorization?

Unlike biometric verification and identification, biometric categorization deduces whether an individual belongs to a specific group based on his or her biometric characteristics – for example, sex, age, or race. It can therefore also be used to profile individuals based on their personal characteristics.

Categorization means that the technology is not used to identify or match individuals, but only characteristics of individuals, which do not necessarily allow for identification. However, if several characteristics are inferred from a face, and potentially linked to other data (e.g. location data), it could de facto enable the identification of an individual.

What is facial recognition technology used for?

In the private sector, facial recognition technology is widely used for advertising and marketing, with customers profiled on the basis of their facial expressions to predict their future buying preferences.

In sports, football clubs use it in their stadiums to identify individuals who have been banned from attending the club’s matches. 

In human resources, some companies use facial recognition technology to analyze the facial expressions of job candidates during interviews.

Major social media companies, such as Facebook, started tagging faces to improve their facial recognition technology.

Recent improvements in AI-powered facial recognition have attracted law enforcement agencies as well as private businesses across the world, which have begun using, testing, or planning the use of these technologies. This has sparked an intense debate about the potential impact on fundamental rights.

China’s large scale use of facial recognition technology in combination with surveillance cameras has led to many discussions and concerns about potential human rights violations, particularly with respect to detecting members of certain ethnic minorities. 

Where is facial recognition being used?

Following the increased use of facial recognition in the US, a 2019 Pew Research Center survey found that more than half of Americans (56%) trusted law enforcement agencies to use facial recognition technologies responsibly. The public is less trusting of advertisers and technology companies (36%).

As of late, however, US law enforcement agencies have come under scrutiny after facial recognition software misidentified individuals for crimes they did not commit.

In Europe, there are few examples of national law enforcement authorities using live facial recognition technology. London’s police department said that it would begin using facial recognition cameras to spot criminal suspects as they walk the streets.

In Hungary, a project called ‘Szitakötő’ (‘dragonfly’ in English) deployed 35,000 cameras with facial recognition capabilities in Budapest and across the country. The cameras can capture drivers’ license plates and facial images for maintaining public order, including road safety. 

The Czech government approved a plan to expand the number of facial recognition cameras at Prague International Airport from 100 to 145. Meanwhile, the police in Germany and France have carried out extensive testing.

Sweden’s data protection authority authorised the use of facial recognition technology by the police to help identify criminal suspects, which allows the police to compare facial images from CCTV footage to a watchlist containing over 40,000 pictures.

Once the legal and technical steps are approved and completed, we can expect more of these EU-wide systems to process even more facial images. These images will be taken in controlled environments – for example, at police stations or border-crossing points, where the quality of the images is higher compared to that of CCTV cameras. 

That naturally leads to a valid concern: is facial recognition technology accurate?

How accurate is facial recognition software?

Last year, Axon, the world’s largest corporate supplier of police body cameras, announced that it would not deploy facial recognition technology in any of its products because it was too unreliable for law enforcement work and “could exacerbate existing inequities in policing, for example by penalising black or LGBTQ communities.”

In a similar vein, San Francisco has banned the use of the technology because of its excessively intrusive nature with respect to people’s privacy and to avoid possible abuse by law enforcement agencies. Most recently, after the unjust killing of George Floyd, Boston also banned the use of facial recognition technology by police and city agencies.

Against this backdrop, a number of questions arise from a human rights perspective: is this technology appropriate for law enforcement and border management use – for example, when it is used to identify people who are wanted by law enforcement? Which fundamental human rights are most affected when this technology is deployed – and what measures should public authorities take to guarantee that these rights are not violated?

The fundamental human rights most at risk include: human dignity, the right to respect for private life, the protection of personal data, non-discrimination, the rights of the child and the elderly, the rights of people with disabilities, the freedom of assembly and association, the freedom of expression, the right to good administration, and the right to an effective remedy and to a fair trial.

Misidentifying faces is the most frequently raised concern, and rightfully so: the human rights stakes are higher for minorities whose facial images are captured and processed. Facial recognition technology has higher error rates when used on women and people of colour, producing biased results that can ultimately lead to discrimination.

Biometric scanners are also never 100% accurate. When circumstances change and a face is altered, whether by a mask or skin damage, the quality of the captured image is affected. Consequently, no two scans of the same face are ever identical.

Facial recognition technology can also have a chilling effect on peaceful protest if people fear it is being used to identify them. During the most recent Black Lives Matter (BLM) protests, the Dallas Police Department asked people to send in “video of illegal activity” from the BLM protests in the city through the iWatch Dallas app. Aware of this intention, protestors flooded the app with photos and videos of K-pop stars instead.

There are also long-term implications. Processing large amounts of personal data and targeting certain faces curtails privacy rights and ultimately affects the functioning of democracy. Privacy is a core value inherent to a liberal democratic and pluralist society, and a cornerstone of fundamental human rights.

Civil society and private companies have advocated for a clear regulatory framework for facial recognition technology. The European Commission’s High-Level Expert Group on Artificial Intelligence specifically recommends three requirements for building trust in human-centric AI:

(1) it should be lawful,

(2) it should be ethical, and

(3) it should be robust, both from a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm.

Meanwhile, case law is still virtually non-existent, with one recent exception adjudicated in the United Kingdom.

What is Biometric Data?

People’s facial images constitute biometric data: they are more or less unique, cannot be changed, and cannot easily be hidden. 

Legally, biometric data is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic [fingerprint] data.”

In layman’s terms, biometric data is a defining human quality, such as a fingerprint, facial image, or gait, that can be used for automated recognition or authentication.

Biometric data is sensitive data

Unlike fingerprints or DNA, facial images are easy to capture. In public spaces, an individual is typically unable to avoid having their facial image captured and monitored. However, just like a fingerprint, if your face gets compromised, you won’t be getting a new face. 

Facial images are biometric data, and EU law regulates their processing as sensitive personal data.