13 August 2020, Gateway House

We own your Face!

Facial recognition technology has emerged as an important identification tool. Big tech, social media companies and governments around the world use it, and hold unprecedented power over individuals and communities. Its use for surveillance has brought it under public scrutiny. The technology has still not been perfected. Is it really ready for adoption?



In June 2020, IBM, Microsoft and Amazon decided to stop selling facial recognition technology to law enforcement agencies, citing the potential for its misuse. IBM went a step further and halted development of the technology altogether.

No other company or entity, however, is taking the high moral ground on facial recognition. Its use by governments around the world has become quite prevalent. China has used it to enforce quarantine in Beijing[1] and to crack down on protestors in Hong Kong.[2] Dubai’s Oyoon project uses facial recognition for smart policing.[3] Russia has used it against protestors[4] as well as to track citizens’ movements during the coronavirus lockdown.[5]

It is now so prevalent that ordinary citizens know they are being watched from all angles, public and private. For example, even when Facebook prompts a name to be tagged in a photo, it is using a facial recognition algorithm to determine who is present in the image.

Facial recognition technology analyses facial images and interprets the identity of a person. The surface and features of a face are broken down into several data points, with the precision of a plastic surgeon, to derive the output: the distance between the nose and lips, the width and thickness of the lips, the protrusion of the cheekbones, and several other such measurements. Facial recognition is an Artificial Intelligence-driven system: when a single capture of a person’s face is compared against many photographs of that person taken at different times and with different expressions, the system makes an intelligent judgement on whether these unequal entities belong to the same person.
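To illustrate how a face is reduced to measurable data points, the following is a minimal sketch that assumes the open-source face_recognition Python library and a placeholder photo file; commercial systems run their own proprietary models, but the principle is the same. It locates facial landmarks and computes the kind of geometric measurements described above.

import face_recognition
import numpy as np

# Minimal sketch: reduce a face to a few measurable data points.
# Assumes the open-source face_recognition library and a placeholder photo file.
image = face_recognition.load_image_file("person.jpg")
landmarks = face_recognition.face_landmarks(image)[0]  # landmarks of the first face found

nose_tip = np.mean(landmarks["nose_tip"], axis=0)   # centre of the nose tip
top_lip = np.mean(landmarks["top_lip"], axis=0)     # centre of the upper lip

# Two of the many geometric features a system might use, measured in pixels:
nose_to_lip = np.linalg.norm(nose_tip - top_lip)    # distance between nose and lips
xs = [x for x, y in landmarks["top_lip"]]
lip_width = max(xs) - min(xs)                       # horizontal span of the upper lip
print(f"nose-to-lip: {nose_to_lip:.1f}px, lip width: {lip_width:.1f}px")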

The source of a facial image can be both physical – such as street cameras – and digital, such as social media. The captured image is then compared with thousands of images collected in a database built from social media profiles and photographs provided for identification documents such as Social Security cards, driver’s licences and passports. Technology companies can scrape the internet for images and videos containing faces, group them and map them to a single person. More data helps to train the facial recognition algorithm better, as every feature and expression is captured: it can recognise, for instance, both a straight face and a laughing image of the same person. The more posts that are uploaded on Instagram, the more the algorithm learns about a face.

The primary purpose of this technology is the authentication and identification of a person. Authentication is a one-to-one verification process, where the authenticating agency knows what it is searching for: at an immigration counter, for instance, the face of a traveller is scanned against the image on file for the person he claims to be. Identification is a one-to-many process, where the identifier does not know the person being searched for, such as matching a suspect against a database of criminals.
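A minimal sketch of the two modes, again assuming the open-source face_recognition library, hypothetical image file names and the distance threshold of 0.6 that the library itself uses by default:

import face_recognition
import numpy as np

def encode(path):
    """Return the 128-dimensional encoding of the first face found in an image file."""
    image = face_recognition.load_image_file(path)
    return face_recognition.face_encodings(image)[0]

THRESHOLD = 0.6  # distance below which two encodings are treated as the same person

# One-to-one authentication (e.g. an immigration counter): probe vs. the claimed identity only.
probe = encode("traveller_at_counter.jpg")
claimed = encode("passport_photo_on_file.jpg")
print("identity verified:", np.linalg.norm(probe - claimed) < THRESHOLD)

# One-to-many identification (e.g. a suspect search): probe vs. every face in a gallery.
gallery = {name: encode(name + ".jpg") for name in ["alice", "bob", "carol"]}
distances = {name: np.linalg.norm(probe - enc) for name, enc in gallery.items()}
best = min(distances, key=distances.get)
print("best match:", best if distances[best] < THRESHOLD else "no match found")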

There are two categories of players in this game: big tech and social media companies, and the State.

Being a tech and social media giant like Facebook, Amazon and Microsoft is a huge advantage: massive amounts of voluntarily uploaded facial images have created a global database. Facebook’s facial recognition system, known as DeepFace,[6] uses the technology to enrich the user experience on its platform. Trained on large datasets of faces, it has become significantly more accurate over the years at prompting the user with the name of the person in an image, reaching an accuracy of 97.35% today.

Streaming services benefit too: Amazon Prime Video leverages an in-house Amazon product called X-Ray[7] to help the viewer identify the celebrity on screen. It uses the movie cataloguing website IMDb, also owned by Amazon, as the backend database for this process.

Amazon also sells its facial recognition software, Rekognition.[8] Amazon Rekognition claims to identify objects, people and scenes, and can detect emotions such as happiness or sadness on people’s faces. Microsoft provides an API (Application Programming Interface) called Face,[9] reusable code that customers can call from their own applications for facial analysis. Age, emotion, pose, smile, facial hair and other features can be determined with Microsoft Face.
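As an example of how a customer consumes such a service, here is a minimal sketch of calling Amazon Rekognition’s face analysis through the AWS boto3 SDK; the photo file name is a placeholder and AWS credentials are assumed to be configured separately.

import boto3

# Minimal sketch: ask Amazon Rekognition to analyse the faces in a local photo.
client = boto3.client("rekognition", region_name="us-east-1")

with open("crowd_photo.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request age range, emotions, facial hair, etc.
    )

for face in response["FaceDetails"]:
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    age = face["AgeRange"]
    print(f"age {age['Low']}-{age['High']}, "
          f"dominant emotion: {top_emotion['Type']} ({top_emotion['Confidence']:.0f}%)")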

These companies have major customers: governments across the world.[10][11] Their many agencies use facial recognition to deliver government services and for national security. The recent controversy over police departments in the U.S. using software from Clearview AI, a U.S.-based surveillance company, to quell public protests[12][13] has brought the wide usage of such technology to the forefront.

Facial recognition is a favourite policing tool because other identification systems require close proximity between the person and the device. Fingerprints, for instance, require a person to touch a scanner; an iris scanner, though contactless, requires the person to stand near it. Facial recognition allows identification from afar, eliminating the need for person-device proximity.

This gives the state unprecedented power over its citizens. The virtuous use of tracking and identifying criminals can easily convert into profiling citizens taking part in protests. For this very reason, the authorities banned the use of masks during the Hong Kong protests.[14] It is easy to pick out protestors once their facial images, captured at a protest, are mapped against a database of citizens’ photos.

This problem multiplies manifold when tech companies and government are intertwined. The Chinese government uses Face++, a product from Megvii,[15] a facial recognition AI software company based in Beijing. Other Chinese companies such as Yitu[16] and SenseTime[17] also produce this technology, which can easily be used for mass surveillance. China is accused of using it for the religious profiling of its Uighur population.[18] The fear of such abuse has led some U.S. states, such as California, to ban the technology in police body cameras.[19] Regional bodies like the European Union are still debating whether or not to adopt it.[20]

Others are moving ahead. In March 2020, a Russian court struck down a challenge to the use of facial recognition technology.[21][22][23] Malaysia, Uganda and Zimbabwe have tied up with Chinese companies to implement facial recognition.[24] India’s National Crime Records Bureau plans to implement a country-wide Automated Facial Recognition System (AFRS) and has released a tender inviting proposals from tech companies.[25]

The technology has still not been perfected, though. A major challenge is misidentification, especially of women and individuals with darker skin tones. In one test, Amazon’s Rekognition falsely matched 28 U.S. lawmakers with mugshots of arrested persons, with non-white lawmakers disproportionately represented among the false matches. A study by researchers at MIT and Microsoft found error rates of up to 35% for darker-skinned women.[26][27]

The fear of a dystopian world is legitimate, especially if the technology is not fool-proof. Though facial images are considered sensitive biometric data under Europe’s General Data Protection Regulation (GDPR) and India’s Personal Data Protection Bill, 2019, these regulations exempt government agencies from their purview on grounds of public safety and national security. India needs to be particularly careful because it is a heterogeneous country, with varied ethnicities and facial types. Till the technology matures, facial recognition should be used only as one means of screening individuals, alongside other identity-verification methods.

Sagnik Chakraborty is Researcher, Cybersecurity Studies, and Manager, Management Office, Gateway House.

This article was exclusively written for Gateway House: Indian Council on Global Relations. You can read more exclusive content here.

For interview requests with the author, or for permission to republish, please contact outreach@gatewayhouse.in.

© Copyright 2020 Gateway House: Indian Council on Global Relations. All rights reserved. Any unauthorized copying or reproduction is strictly prohibited.

References

[1] Borak Masha, ‘Beijing considers using facial recognition to fight a new Covid-19 outbreak’, South China Morning Post, 17 June 2020,

https://www.scmp.com/abacus/tech/article/3089378/beijing-considers-using-facial-recognition-fight-new-covid-19-outbreak

[2] Doffman Zak, ‘Hong Kong Exposes Both Sides of China’s Relentless Facial Recognition Machine’, Forbes, 26 Aug 2019,

https://www.forbes.com/sites/zakdoffman/2019/08/26/hong-kong-exposes-both-sides-of-chinas-relentless-facial-recognition-machine/#1995dd1b42b7

[3] Dubai Police, ‘Dubai Police Launch “Oyoon” AI Surveillance Programme’, 28 January 2018,

https://www.dubaipolice.gov.ae/wps/portal/home/mediacenter/news/details/A70

[4] ‘Russia: Intrusive facial recognition technology must not be used to crackdown on protests’, Amnesty International, 31 January 2020,

https://www.amnesty.org/en/latest/news/2020/01/russia-intrusive-facial-recognition-technology-must-not-be-used-to-crackdown-on-protests/

[5] Marrow Alexander, ‘Russia’s lockdown surveillance measures need regulating, rights groups say’, Reuters, 24 April 2020,

https://in.reuters.com/article/health-coronavirus-russia-facial-recogni/russias-lockdown-surveillance-measures-need-regulating-rights-groups-say-idINKCN2260CF

[6] Taigman Yaniv, Yang Ming, Ranzato Marc’Aurelio, Wolf Lior, ‘DeepFace: Closing the Gap to Human-Level Performance in Face Verification’, Facebook AI Research, Tel Aviv University,

https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf

[7] Day One Staff, ‘Behind the scenes with X-Ray’, The Amazon Blog, 27 June 2018,

https://blog.aboutamazon.com/entertainment/behind-the-scenes-with-x-ray

[8] Amazon Rekognition, https://aws.amazon.com/rekognition/?blog-cards.sort-by=item.additionalFields.createdDate&blog-cards.sort-order=desc

[9] Microsoft Azure, https://azure.microsoft.com/en-in/services/cognitive-services/face/#demo

[10] AI Global Surveillance (AIGS) Index, Carnegie Endowment for International Peace,

https://carnegieendowment.org/files/AI_Global_Surveillance_Index1.pdf

[11] Feldstein Steven, ‘The Global Expansion of AI Surveillance’, Carnegie Endowment for International Peace, 17 September 2019,

https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847

[12] Markey Edward J., United States Senate, 8 June 2020,

https://www.markey.senate.gov/imo/media/doc/Clearview%20protests%2006.08.20.pdf

[13] Haskins Caroline, Mac Ryan, ‘Here Are The Minneapolis Police’s Tools To Identify Protesters’, Buzzfeed News, 29 May 2020,

https://www.buzzfeednews.com/article/carolinehaskins1/george-floyd-protests-surveillance-technology

[14] Doffman Zak, ‘Hong Kong Exposes Both Sides of China’s Relentless Facial Recognition Machine’, Forbes, 26 August 2019,

https://www.forbes.com/sites/zakdoffman/2019/08/26/hong-kong-exposes-both-sides-of-chinas-relentless-facial-recognition-machine/#3856916042b7

[15] Simonite Tom, ‘Behind the Rise of China’s Facial Recognition Giants’, Wired, 9 March 2019,

https://www.wired.com/story/behind-rise-chinas-facial-recognition-giants/

[16] Yitu, https://www.yitutech.com/en

[17] Sense Time, https://www.sensetime.com/en/technology-detail?categoryId=1030

[18] Doffman Zak, ‘China Is Using Facial Recognition To Track Ethnic Minorities, Even In Beijing’, Forbes, 3 May 2019,

https://www.forbes.com/sites/zakdoffman/2019/05/03/china-new-data-breach-exposes-facial-recognition-and-ethnicity-tracking-in-beijing/#7fc2455234a7

[19] California Legislative Information, California State Legislature,

http://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200AB1215

[20] Wiewiorowski, Wojciech, ‘AI and Facial Recognition: Challenges and Opportunities’, European Data Protection Supervisor, 21 February 2020,

https://edps.europa.eu/press-publications/press-news/blog/ai-and-facial-recognition-challenges-and-opportunities_en

[21] Courts of General Jurisdiction Moscow City,

https://www.mos-gorsud.ru/rs/tverskoj/services/cases/kas/details/8f0ad27b-ba67-4e50-84eb-c3c5d788ef6c?participants=%D0%9C%D0%B8%D0%BB%D0%BE%D0%B2

[22] ‘Russian court rejects call to ban facial recognition technology’, DW

https://www.dw.com/en/russian-court-rejects-call-to-ban-facial-recognition-technology/a-51135814

[23] Society, ‘Moscow court refuses to declare the use of facial recognition cameras at rallies illegal’, Novaya Gazeta, 3 March 2020,

https://novayagazeta.ru/news/2020/03/03/159500-v-moskve-sud-otkazalsya-priznat-nezakonnym-ispolzovanie-na-mitingah-kamer-s-raspoznavaniem-lits

[24] Feldstein Steven, ‘The Global Expansion of AI Surveillance’, Carnegie Endowment for International Peace, 17 September 2019,

https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847

[25] National Crime Records Bureau, http://www.ncrb.gov.in/TENDERS/AFRS/RFP_NAFRS.pdf

[26] Buolamwini Joy, Gebru Timnit, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, Conference on Fairness, Accountability and Transparency,

http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

[27] Crumpler William, ‘The Problem of Bias in Facial Recognition’, Center for Strategic & International Studies, 1 May 2020,

https://www.csis.org/blogs/technology-policy-blog/problem-bias-facial-recognition
