13 August 2020, Gateway House

We own your Face!

Facial recognition technology has emerged as an important identification tool. Big tech, social media companies and governments around the world use it, and hold unprecedented power over individuals and communities. Its use for surveillance has brought it under public scrutiny. The technology has still not been perfected. Is it really ready for adoption?


In June 2020, IBM, Microsoft and Amazon decided to stop selling facial recognition technology to law enforcement agencies, in light of potential misuse of the technology. IBM went a step further and stopped the development of the technology altogether.

No other company or entity, however, is taking the high moral ground on facial technology. The use of facial recognition by governments around the world has become quite prevalent. China has used it to enforce quarantine in Beijing[1] and to crack down on protestors in Hong Kong.[2] Dubai’s Oyoon project uses facial recognition for smart policing.[3] Russia has used it against protestors[4] as well as to track citizens’ movements during the coronavirus lockdown.[5]

It is now so prevalent that ordinary citizens know they are being watched from all angles, public and private. For example, when Facebook prompts a user to tag a name in a photo, it is using a facial recognition algorithm to determine that person’s presence in the image.

Facial recognition technology analyses facial images and interprets the identity of a person. The surface and features of a face are broken down into several data points, with the precision of a plastic surgeon: the distance between the nose and the lips, the width of the lips, the protrusion of the cheekbones and several other such measurements. Facial recognition is an Artificial Intelligence-driven system: when one capture of a person’s face is compared against his many photographs taken at different times, the system makes an intelligent judgement on whether they match with the person’s face.
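The comparison described above can be sketched as feature-vector matching. In this minimal illustration, the landmark measurements and the similarity threshold are invented for the example; production systems use learned embeddings with hundreds of dimensions.

```python
# A minimal sketch of face matching as feature-vector comparison.
# The measurements and the 0.99 threshold are illustrative assumptions,
# not values from any production system.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical measurements: nose-to-lip distance, lip width,
# cheekbone protrusion, inter-eye distance (normalised units).
enrolled_face = [0.42, 0.61, 0.33, 0.55]
new_capture = [0.43, 0.60, 0.35, 0.54]  # same person, slightly different pose

def is_match(a, b, threshold=0.99):
    """Intelligent judgement reduced to a threshold on similarity."""
    return cosine_similarity(a, b) >= threshold

print(is_match(enrolled_face, new_capture))
```

Two captures of the same face yield nearly parallel vectors, so the similarity stays close to 1.0 even though the raw numbers differ slightly.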

The source of a facial image can be both physical – such as street cameras – and digital, such as social media. The captured image is then compared with thousands of images in a database built from social media profiles and photographs provided for identity documents such as Social Security cards, driver’s licenses and passports. Technology companies can scrape the internet for images and videos containing faces, group them and map them to a single person. More data helps train the facial recognition algorithm better, as every feature and expression is captured: it can recognise, for instance, both a straight face and a laughing image of the same person. The more posts uploaded on Instagram, the more the algorithm learns about a face.

The primary purpose of this technology is authentication and identification of a person. Authentication is a one-to-one verification process, where the authenticating agency knows what it is searching for: at an immigration counter, for instance, the face of a traveller is scanned against the stored image of the person he claims to be. Identification is a one-to-many process, where the identifier does not know who is being searched for – such as matching a suspect against a database of criminals.
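The two modes can be sketched as follows. The similarity function, names and scores here are fabricated for illustration; a real system would compare learned face embeddings.

```python
# Sketch of the two modes: 1:1 authentication vs 1:N identification.
# The similarity function and all data are invented for illustration.

def similarity(a, b):
    # Placeholder metric: 1.0 means identical feature vectors.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def authenticate(claimed_template, live_capture, threshold=0.95):
    """One-to-one: does the traveller match the record he claims is his?"""
    return similarity(claimed_template, live_capture) >= threshold

def identify(live_capture, database, threshold=0.95):
    """One-to-many: which record, if any, best matches the capture?"""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = similarity(template, live_capture)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

db = {"alice": [0.4, 0.6, 0.3], "bob": [0.9, 0.1, 0.7]}
capture = [0.41, 0.59, 0.31]

print(authenticate(db["alice"], capture))  # 1:1 check against a claimed identity
print(identify(capture, db))               # 1:N search over the whole database
```

The distinction matters operationally: authentication needs one comparison, while identification scales with the size of the database and therefore compounds any per-comparison error rate.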

There are two categories of players in this game: big tech and social media companies, and the state.

Tech and social media giants like Facebook, Amazon and Microsoft have a huge advantage: massive volumes of voluntarily uploaded facial images have given them a global database. Facebook’s facial recognition system, DeepFace,[6] uses the tech to enrich the user experience on its platform. The accuracy with which Facebook prompts the user with the name of the person in an image has increased significantly over the years; trained on large datasets of faces, it reaches 97.35% today.

Streaming services use it too: Amazon Prime Video leverages an in-house Amazon product called X-Ray[7] to help the viewer identify the celebrity on screen. It uses the movie-cataloguing website IMDb, also owned by Amazon, as the backend database for this process.

Amazon sells its facial recognition software, Rekognition.[8] Amazon Rekognition claims to identify objects, people and scenes; it can also detect emotions such as happiness or sadness on people’s faces. Microsoft provides an API (Application Programming Interface) called Face,[9] re-usable code which a customer can use for facial analysis. Age, emotion, pose, smile, facial hair and other features can be determined with Microsoft Face.
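To give a flavour of what such an API returns, the sketch below parses a response shaped like Amazon Rekognition’s DetectFaces output. The sample dictionary is hand-written and heavily abbreviated for illustration; a real response carries many more fields and confidence scores.

```python
# Hedged sketch: reading the kind of per-face attributes a commercial
# face-analysis API returns. The sample response below is abbreviated
# and fabricated for illustration.

sample_response = {
    "FaceDetails": [
        {
            "AgeRange": {"Low": 25, "High": 35},
            "Smile": {"Value": True, "Confidence": 97.1},
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 95.2},
                {"Type": "CALM", "Confidence": 3.1},
            ],
        }
    ]
}

def dominant_emotion(face):
    """Return the emotion label the service is most confident about."""
    return max(face["Emotions"], key=lambda e: e["Confidence"])["Type"]

for face in sample_response["FaceDetails"]:
    age = face["AgeRange"]
    print(dominant_emotion(face), f"age {age['Low']}-{age['High']}")
```

Note that these attributes are probabilistic estimates, not ground truth – the confidence figures are the service’s own self-assessment, which is exactly why independent accuracy audits matter.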

These companies have major customers: governments across the world.[10] [11] Their many agencies use facial recognition to provide government services and ensure national security. The recent controversy surrounding U.S. police departments using software from Clearview AI, a U.S.-based surveillance company, to quell public protests[12] [13] has brought the wide usage of such technology to the forefront.

Facial recognition tech is a favourite policing tool because other identification systems require close proximity between the person and the device: fingerprints require a person to touch a scanner; an iris scanner, though contactless, requires a person to stand near it. Facial recognition allows identification from afar, eliminating the need for person-device proximity.

This gives the state unprecedented power over its citizens. The virtuous use of tracking and identifying criminals can easily convert into profiling citizens taking part in protests. For this very reason, the Chinese government banned the use of masks in the Hong Kong protests.[14] It is easier to pick out protestors once their facial images from a protest are captured and mapped against a database of citizens’ photos.

This problem multiplies manifold when tech companies and government are intertwined. The Chinese government uses Face++, a product of Megvii,[15] a facial recognition AI company based in Beijing. Other Chinese companies, such as Yitu[16] and SenseTime,[17] also produce this tech, which can easily be used for mass surveillance. China is accused of using it for the religious profiling of its Uighur population.[18] The fear of such abuse has led some U.S. states, such as California, to ban the technology in police body cameras.[19] Regional bodies like the European Union are still debating whether to adopt it.[20]

Others are moving ahead. In March 2020, a Russian court[21] struck down a challenge to the use of facial recognition tech.[22] [23] Malaysia, Uganda and Zimbabwe have tied up with Chinese companies to implement facial recognition.[24] India’s National Crime Records Bureau plans a countrywide Automated Facial Recognition System (AFRS) and has released a tender inviting proposals from tech companies.[25]

The technology has still not been perfected, though. A major challenge is misidentification, especially of women and individuals with darker skin tones. Amazon’s Rekognition, for instance, falsely matched 28 U.S. lawmakers – disproportionately people of colour – with mugshots of criminals. A study by MIT and Microsoft researchers found an error rate of nearly 35% for darker-skinned women.[26] [27]
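Such disparities are surfaced by per-group accuracy audits, which can be sketched simply. The records below are fabricated toy data; real audits, like the MIT study cited above, use large benchmark datasets.

```python
# Toy audit of misidentification rates by demographic group, in the
# spirit of the Gender Shades study. All records here are fabricated
# for illustration.
from collections import defaultdict

# (group, correctly_identified) pairs from a hypothetical test run
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", True),
]

def error_rates(records):
    """Fraction of misidentifications per demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

for group, rate in error_rates(results).items():
    print(f"{group}: {rate:.0%} error rate")
```

An aggregate accuracy figure would hide exactly this gap, which is why audits that break results down by group are essential before such systems are deployed.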

The fear of a dystopian world is legitimate, especially if the technology is not fool-proof. Though facial images are considered sensitive biometric data under Europe’s General Data Protection Regulation (GDPR) and India’s Personal Data Protection Bill, 2019, both regulations exempt government agencies from their purview on grounds of public safety and national security. India needs to be particularly careful because it is a heterogeneous country, with varied ethnicities and facial types. Until the technology matures, facial recognition should be used only to screen individuals, alongside other identity-verification methods.

Sagnik Chakraborty is Researcher, Cybersecurity Studies, and Manager, Management Office, Gateway House.

This article was exclusively written for Gateway House: Indian Council on Global Relations. You can read more exclusive content here.

For interview requests with the author, or for permission to republish, please contact

© Copyright 2020 Gateway House: Indian Council on Global Relations. All rights reserved. Any unauthorized copying or reproduction is strictly prohibited.


[1] Borak Masha, ‘Beijing considers using facial recognition to fight a new Covid-19 outbreak’, South China Morning Post, 17 June 2020,

[2] Doffman Zak, ‘Hong Kong Exposes Both Sides of China’s Relentless Facial Recognition Machine’, Forbes, 26 Aug 2019,

[3] ‘Dubai Police Launch “Oyoon” AI Surveillance Programme’, 28 Jan 2018,

[4] ‘Russia: Intrusive facial recognition technology must not be used to crackdown on protests’, Amnesty International, 31 January 2020,

[5] Marrow Alexander, ‘Russia’s lockdown surveillance measures need regulating, rights group say’, Reuters, 24 April 2020,

[6] Taigman Yaniv, Yang Ming, Ranzato Marc’Aurelio, Wolf Lior, ‘DeepFace: Closing the Gap to Human-Level Performance in Face Verification’, Facebook AI Research, Tel Aviv University,

[7] Day One Staff, ‘Behind the scenes with X-Ray’, The Amazon Blog, 27 June 2018,

[8] Amazon Rekognition,

[9] Microsoft Azure,

[10] AI Global Surveillance (AIGS) Index, Carnegie Endowment for International Peace,

[11] Feldstein Steven, ‘The Global Expansion of AI Surveillance’, Carnegie Endowment for International Peace, 17 September 2019,

[12] Markey J Edward, ‘United States Senate’, 8 June 2020,

[13] Haskins Caroline, Mac Ryan, ‘Here Are The Minneapolis Police’s Tools To Identify Protesters’, Buzzfeed News, 29 May 2020,

[14] Doffman Zak, ‘Hong Kong Exposes Both Sides of China’s Relentless Facial Recognition Machine’, Forbes, 26 August 2019,

[15] Simonite Tom, ‘Behind the Rise of China’s Facial Recognition Giants’, Wired, 9 March 2019,

[16] Yitu,

[17] Sense Time,

[18] Doffman Zak, ‘China Is Using Facial Recognition To Track Ethnic Minorities, Even In Beijing’, Forbes, 3 May 2019,

[19] California Legislative Information, United States Government,

[20] Wiewiorowski, Wojciech, ‘AI and Facial Recognition: Challenges and Opportunities’, European Data Protection Supervisor, 21 February 2020,

[21] Courts of General Jurisdiction Moscow City,

[22] ‘Russian court rejects call to ban facial recognition technology’, DW

[23] Society, ‘In Moscow, a court refused to declare illegal the use of facial recognition cameras at rallies’ [in Russian], Novaya Gazeta, 3 March 2020

[24] Feldstein Steven, ‘The Global Expansion of AI Surveillance’, Carnegie Endowment for International Peace, 17 September 2019,

[25] National Crime Records Bureau,

[26] Buolamwini Joy, Gebru Timnit, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, Conference on Fairness, Accountability and Transparency,

[27] Crumpler William, ‘The Problem of Bias in Facial Recognition’, Center for Strategic & International Studies, 1 May 2020,