The Digital Identification Parade

Posted by Aayush Rathi and Ambika Tandon at Jul 30, 2019 12:19 AM
NCRB’s proposed Automated Facial Recognition System impinges on the right to privacy, and is likely to target certain groups.

The article by Aayush Rathi and Ambika Tandon was published in the Indian Express on July 29, 2019. The authors acknowledge Sumandro Chattapadhyay, Amber Sinha and Arindrajit Basu for their edits and Karan Saini for his inputs.


The National Crime Records Bureau (NCRB) recently issued a request for proposals for the procurement of an Automated Facial Recognition System (AFRS). The stated objective of the AFRS is to “identify criminals, missing persons/children, unidentified dead bodies and unknown traced children/persons”. It will be designed to compare images against a “watchlist” curated using images from “any […] image database available with police/other entity”, and “newspapers, raids, sent by people, sketches, etc.” The integration of such diverse databases indicates the lack of a specific purpose, with potential for ad hoc use at later stages. Data-sharing arrangements with the vendor are also unclear, raising privacy concerns around corporate access to the sensitive information of crores of individuals.

While a senior government official clarified that the AFRS will only be used against the integrated police database in India, the Crime and Criminal Tracking Network and Systems (CCTNS), the tender explicitly provides for the integration of several other databases, including the passport database and the National Automated Fingerprint Identification System. This is hardly reassuring. Even a targeted database like the CCTNS risks over-representation of marginalised communities, as has already been witnessed in other countries. The databases that the CCTNS links together have racial and colonial origins, recording details of unconvicted persons who are found to be “suspicious” based on their tribe, caste or appearance. Integrating other databases, moreover, puts millions of innocent individuals on the AFRS’s watchlist. The objective then becomes to identify “potential criminals”: instead of being “presumed innocent”, we are all persons-who-haven’t-been-convicted-yet.

The AFRS may allow indiscriminate searching by tapping into publicly and privately installed CCTVs pan-India. While facial recognition technology (FRT) has proliferated globally, only a few countries have systems that use footage from CCTVs installed in public areas. This is the most excessive use of FRT, building on its more common implementation as a border technology. CCTV cameras are already rife with cybersecurity issues, and their integration with the AFRS will expand the “attack surface” through which vulnerabilities in the system can be exploited. Additionally, the AFRS will allow real-time querying, enabling “continuous” mass surveillance. The misuse of such continuous surveillance has already been seen in China, where it has been deployed to persecute the Uighur ethnic minority.

FRT differs from other biometric forms of identification (such as fingerprints and DNA samples) in the degree and pervasiveness of surveillance that it enables. It is designed to operate at a distance, without the knowledge of the targeted individual(s). Preventing an image of one’s face from being captured is far more difficult than withholding a fingerprint or DNA sample, and FRT allows for the targeting of multiple persons at a time. By its very nature, it is a non-consensual and covert surveillance technology.

Potential infringements on the right to privacy, a fundamental right, could be enormous, as FRT allows for continuous and ongoing identification. Further, by deploying constant surveillance as a strategy for crime detection, the AFRS fails the legal test of proportionality articulated in the landmark Puttaswamy judgment. Other civil liberties, such as free speech and the right to assemble peacefully, could be implicated as well, as specific groups such as dissidents and protesters can be targeted.

Moreover, facial recognition has not performed well as a crime detection technology. Challenges arise at the stage of input itself. Variations in pose, illumination and expression, among other factors, adversely impact the accuracy of automated facial analysis. In the US, law enforcement has been using images from low-quality surveillance feeds as probe photos, leading to erroneous matches. Worryingly, several arrests have been made solely on the basis of likely matches returned by FRT.

Research indicates that default camera settings better expose light skin than dark, which affects results for FRT across racial groups. Moreover, the software could be tested on certain groups more often than others, and could consequently be more accurate in identifying individuals from those groups. The AFRS is envisioned as having both functionalities of FRT, identification of an individual and social classification, with the latter holding significant potential to misclassify minority communities.

In the UK, after considering many of the issues outlined above, the Science and Technology Committee, comprising 14 sitting MPs, recently called for a moratorium on deploying live FRT. It would be prudent for India to pay heed to this recommendation, given the absence of any framework governing data protection or the use of biometric technologies by law enforcement.

The global experience of law enforcement’s use of FRT, and the unique challenges posed by live FRT in particular, demand closer scrutiny of how it can be regulated. One approach may be a technology-neutral regulatory framework that identifies gradations of harm. However, given the history of political surveillance by the Indian state, a complete prohibition on FRT may not be too far-fetched.