
India is falling down the facial recognition rabbit hole

Posted by Prem Sylvester and Karan Saini on Jul 25, 2019, 01:40 PM
Its use as an effective law enforcement tool is overstated, while the underlying technology is deeply flawed.

This article by Prem Sylvester and Karan Saini was published in The Wire on July 23, 2019.

In a discomfiting reminder of the extent to which technology can be used to intrude on the lives of individuals in the name of security, the Ministry of Home Affairs, through the National Crime Records Bureau, recently put out a tender for a new Automated Facial Recognition System (AFRS).

The stated objective of this system is to “act as a foundation for a national level searchable platform of facial images,” and to “[improve] outcomes in the area of criminal identification and verification by facilitating easy recording, analysis, retrieval and sharing of Information between different organizations.” 

The system will pull facial image data from CCTV feeds and compare these images with existing records in a number of databases, including (but not limited to) the Crime and Criminal Tracking Networks and Systems (or CCTNS), Interoperable Criminal Justice System (or ICJS), Immigration Visa Foreigner Registration Tracking (or IVFRT), Passport, Prisons, Ministry of Women and Child Development (KhoyaPaya), and state police records. 

Furthermore, this system of facial recognition will be integrated with the yet-to-be-deployed National Automated Fingerprint Identification System (NAFIS) as well as other biometric databases to create what is effectively a multi-faceted system of biometric surveillance.

It is rather unfortunate, then, that the government has called for bids on the AFRS tender without any form of utilitarian calculus that might justify its existence. The tender simply states that this system would be “a great investigation enhancer.” 

This confidence is misplaced at best. There is substantial evidence that a facial recognition system such as the one proposed is not only ineffective as a crime-fighting tool, but also a serious threat to the privacy rights and dignity of citizens. Notwithstanding the question of whether such a system would ultimately pass the test of constitutionality – on the grounds that it affects various freedoms and rights guaranteed within the constitution – there are a number of faults in the issued tender.

Let us first consider the mechanics of a facial recognition system itself. Facial recognition systems chain together a number of algorithms to pick out specific, distinctive details about a person’s face – such as the distance between the eyes or the shape of the chin – along with other distinguishable ‘facial landmarks’. These details are then converted into a mathematical representation known as a face template, which is compared with similar data on other faces collected in a face recognition database. There are, however, several problems with facial recognition technology that employs such methods.
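To make the comparison step concrete, here is a minimal sketch in Python of how a probe face template might be matched against a database of stored templates. The 128-dimensional vectors, the distance threshold and the record names are illustrative assumptions for this sketch, not details drawn from the tender or from any particular system.

```python
import numpy as np

# A minimal, illustrative matcher. Each face is assumed to have already been
# converted into a fixed-length "face template" (an embedding vector) by an
# upstream model; the 128-dimensional size and the data below are assumptions.

def match_face(probe_template, database, threshold=0.6):
    """Return (record_id, distance) pairs whose templates lie within
    `threshold` (Euclidean distance) of the probe, closest first."""
    candidates = []
    for record_id, template in database.items():
        distance = float(np.linalg.norm(probe_template - template))
        if distance <= threshold:
            candidates.append((record_id, distance))
    return sorted(candidates, key=lambda pair: pair[1])

# Example usage with random vectors standing in for real face templates.
rng = np.random.default_rng(seed=0)
database = {f"record_{i}": rng.normal(size=128) for i in range(5)}
probe = database["record_3"] + rng.normal(scale=0.01, size=128)  # a near-duplicate
print(match_face(probe, database, threshold=1.0))  # [('record_3', ~0.11)]
```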

Facial recognition technology depends on machine learning – the tender itself mentions that the AFRS is expected to work on neural networks “or similar technology” – which is far from perfect. At a relatively trivial level, there are several ways to fool facial recognition systems, including wearing certain eyewear or specific types of makeup. The training data for the algorithms themselves can also be deliberately poisoned so that objects are recognised incorrectly, as demonstrated by students at MIT.

More consequentially, these systems often throw up false positives, as when the face recognition system incorrectly matches a person’s face (say, from CCTV footage) to an image in a database (say, a mugshot), which might result in innocent citizens being identified as criminals. In a real-time experiment at a train station in Mainz, Germany, facial recognition accuracy ranged from 17-29% – and that only for faces seen from the front – and stood at about 60% during the day but just 10-20% at night, indicating that environmental conditions play a significant role in how this technology performs.

Facial recognition software used by the UK’s Metropolitan Police has returned false positives in more than 98% of match alerts generated.
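Part of the explanation for such lopsided figures is the base-rate problem: when the people a system is searching for make up a tiny fraction of everyone it scans, even a reasonably accurate matcher will generate alerts that are overwhelmingly false. The arithmetic below is a sketch with assumed figures; none of the numbers come from the Metropolitan Police deployment.

```python
# Illustrative base-rate arithmetic with assumed figures (not from any real trial).
scanned = 100_000           # faces scanned by the cameras
on_watchlist = 50           # of whom are genuinely on the watchlist
true_positive_rate = 0.90   # assumed chance of correctly flagging a watchlisted face
false_positive_rate = 0.01  # assumed chance of wrongly flagging anyone else

true_alerts = on_watchlist * true_positive_rate                # 45 correct alerts
false_alerts = (scanned - on_watchlist) * false_positive_rate  # ~1,000 false alerts
share_false = false_alerts / (true_alerts + false_alerts)
print(f"{share_false:.0%} of alerts are false matches")        # roughly 96%
```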

When the American Civil Liberties Union (ACLU) used Amazon’s face recognition system, Rekognition, to compare images of members of the US Congress with a database of mugshots, the results included 28 incorrect matches.

There is another uncomfortable reason for these inaccuracies – facial recognition systems often reflect the biases of the society they are deployed in, leading to problematic face-matching results. Technological objectivity is largely a myth, and facial recognition offers a stark example of this. 

An MIT study shows that existing facial recognition technology routinely misidentifies people of darker skin tone, women and young people at high rates: it performs better on male faces than on female faces (a difference of 8.1% to 20.6% in error rates), better on lighter faces than on darker faces (a difference of 11.8% to 19.2% in error rates), and worst of all on darker female faces (error rates of 20.8% to 34.7%). In the aforementioned ACLU study, the false matches were disproportionately people of colour, particularly African-Americans. This bias rears its head when the parameters of machine-learning algorithms, derived from labelled data during a “supervised learning” phase, adhere to socially-prejudiced ideas of who might commit crimes.

The implications for facial recognition are chilling. In an era of pervasive cameras and big data, such prejudice can be applied at unprecedented scale through facial recognition systems. By replacing biased human judgment with a machine-learning technique that embeds the same bias, only more consistently, we defeat any claim of technological neutrality. Worse, because humans will assume that the machine’s “judgment” is not only consistently fair on average but also independent of their personal biases, they will read any agreement between its conclusions and their own intuition as independent corroboration.

In the Indian context, consider that Muslims, Dalits, Adivasis and other SC/STs are disproportionately targeted by law enforcement. The NCRB, in its 2015 report on prison statistics in India, recorded that over 55% of undertrial prisoners in India are either Dalits, Adivasis or Muslims – a share grossly disproportionate to the combined population of these communities, which amounts to just 39% of the total population according to the 2011 Census.

If the AFRS is trained on these records, it would clearly reinforce socially-held prejudices against these communities, however inaccurately those records may represent who actually commits crimes. The tender gives no indication that the developed system would need to eliminate or even minimise these biases, nor whether the results of the system would be human-verifiable.

This could lead to a runaway effect if subsequent versions of the machine-learning algorithm are trained on criminal convictions in which the algorithm itself played a causal role. Taking such a feedback loop to its logical conclusion, law enforcement may use machine learning to allocate police resources to likely crime spots – which would often be in low-income or otherwise vulnerable communities.
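A toy simulation with made-up numbers illustrates the dynamic: if patrols are sent to whichever area already has the most recorded crime, and crime is only recorded where patrols are present, the records keep skewing towards the area that started out over-represented, even when the underlying crime rates are identical. This is a sketch of the feedback loop only, not a model of any real policing system.

```python
# Toy feedback-loop sketch with assumed numbers; not a model of any real system.
# Patrols go to whichever area has the most recorded crime ("hot spot" allocation);
# crime is only observed and recorded where patrols are present.
true_crime_rate = {"area_1": 0.5, "area_2": 0.5}  # identical underlying rates
recorded = {"area_1": 60, "area_2": 40}           # area_1 starts over-represented

for day in range(30):
    hot_spot = max(recorded, key=recorded.get)    # the predicted "likely crime spot"
    observed = true_crime_rate[hot_spot] * 10     # assume 10 observable incidents a day
    recorded[hot_spot] += observed                # only the patrolled area gains records

share = recorded["area_1"] / sum(recorded.values())
print(f"area_1 share of records after 30 days: {share:.0%}")  # 84%, up from 60%
```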

Adam Greenfield, writing in Radical Technologies, describes the idea of ‘over transparency’, which combines the “bias” of a system’s designers as well as of its training sets – based as these systems are on machine learning – with the “legibility” of the data from which patterns may be extracted. The “meaningful question”, then, isn’t limited to whether facial recognition technology works in identification – “[i]t’s whether someone believes that they do, and acts on that belief.”

The question thus arises as to why the MHA/NCRB believes this is an effective tool for law enforcement. We are led, then, to another, larger concern with the AFRS – that it deploys a system of surveillance that oversteps its mandate of law enforcement. The AFRS ostensibly circumvents the fundamental right to privacy, as affirmed by the Supreme Court in 2017, by sourcing its facial images from CCTV cameras installed in public locations, where the citizen may expect to be observed.

The extent of this surveillance becomes even clearer when one notes that the range of databases mentioned in the tender for matching suspects’ faces extends to “any other image database available with police/other entity”, besides the previously mentioned CCTNS, ICJS et al. The choice of these databases makes overreach extremely viable.

This is compounded when we note that the tender expects the system to “[m]atch suspected criminal face[sic] from pre-recorded video feeds obtained from CCTVs deployed in various critical identified locations, or with the video feeds received from private or other public organization’s video feeds.” There further arises a concern with regard to the process of identifying such “critical […] locations”, and whether there would be any mechanisms in place to prevent this from being turned into an unrestrained system of surveillance, particularly given the stated access to private organisations’ feeds.

The Perpetual Lineup report by Georgetown Law’s Center on Privacy & Technology identifies real-time (and historic) video surveillance as posing a very high risk to privacy, civil liberties and civil rights, especially because such systems run real-time dragnet searches that are more or less invisible to the subjects of surveillance.

It is also designated a “Novel Use” system of criminal identification, i.e., one with little to no precedent, as compared to fingerprint or DNA analysis – the latter of which was responsible for numerous wrongful convictions, since overturned, during its nascent application in forensic identification.

In the Handbook of Face Recognition, Andrew W. Senior and Sharathchandra Pankanti identify a more serious threat that may be borne out of automated facial recognition, assessing that “these systems also have the potential […] to make judgments about [subjects’] actions and behaviours, as well as aggregating this data across days, or even lifetimes,” making video surveillance “an efficient, automated system that observes everything in front of any of its cameras, and allows all that data to be reviewed instantly, and mined in new ways” that allow constant tracking of subjects.

Such “blanket, omnivident surveillance networks” are a serious possibility with the proposed AFRS. Ye et al., in their paper on “Anonymous biometric access control”, show how location and facial image data automatically captured by cameras designed for tracking can be used to learn graphs of social networks within groups of people.
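A rough sense of how such association graphs emerge can be given with a toy sketch: given sightings of people at particular cameras in particular time windows, repeatedly co-located individuals end up linked. The data, the co-location rule and the names here are assumptions made for illustration; this is not the method described by Ye et al.

```python
from collections import defaultdict
from itertools import combinations

# Toy sightings of (person_id, camera_id, time_window); entirely made up.
sightings = [
    ("A", "cam_1", 10), ("B", "cam_1", 10),
    ("A", "cam_2", 11), ("B", "cam_2", 11), ("C", "cam_2", 11),
    ("C", "cam_3", 12), ("D", "cam_3", 12),
]

# Group the people seen at the same camera in the same time window.
co_located = defaultdict(set)
for person, camera, window in sightings:
    co_located[(camera, window)].add(person)

# Count pairwise co-occurrences; repeated co-location suggests an association.
edge_weights = defaultdict(int)
for people in co_located.values():
    for pair in combinations(sorted(people), 2):
        edge_weights[pair] += 1

print(dict(edge_weights))
# {('A', 'B'): 2, ('A', 'C'): 1, ('B', 'C'): 1, ('C', 'D'): 1}
```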

Consider those charged with sedition or similar crimes, given that the CCTNS records details from FIRs filed across the country. By correlating facial image data obtained from CCTVs across the country – the tender itself indicates that the system must be able to match faces obtained from two (or more) CCTVs – this system could easily be used to track the movements of dissidents as they move between locations.

Constantly watched

Further, something which has not been touched upon in the tender – and which may ultimately allow for a broader set of images for carrying out facial recognition – is the definition of what exactly constitutes a ‘criminal’. Is it when an FIR is registered against an individual, or when s/he is arrested and a chargesheet is filed? Or is it only when an individual is convicted by a court that they are considered a criminal?

Additionally, does a person cease to be recognised by the tag of a criminal once s/he has served their prison sentence and paid their dues to society? Or are they instead marked as higher-risk individuals who may potentially commit crimes again? It could be argued that such a definition is not warranted in a tender document; however, these are legitimate questions which should be answered before commissioning and building a criminal facial recognition system.

Senior and Pankanti note the generalised metaphysical consequences of pervasive video surveillance in the Handbook of Face Recognition: 

“the feeling of disquiet remains [even if one hasn’t committed a major crime], perhaps because everyone has done something “wrong”, whether in the personal or legal sense (speeding, parking, jaywalking…) and few people wish to live in a society where all its laws are enforced absolutely rigidly, never mind arbitrarily, and there is always the possibility that a government to which we give such powers may begin to move towards authoritarianism and apply them towards ends that we do not endorse.”

Such a seemingly apocalyptic scenario isn’t far-fetched. In the section on ‘Mandatory Features of the AFRS’, the tender goes a step further: the system is expected to integrate “with other biometric solution[sic] deployed at police department system like Automatic Fingerprint identification system (AFIS)[sic]” and “Iris.” This linking of biometric databases opens up the possibility of a dangerous degree of profiling.

While the Aadhaar Act, 2016, disallows Aadhaar data from being handed over to law enforcement agencies, the AFRS and its linking with biometric systems (such as the NAFIS) effectively bypass the minimal protection from biometric surveillance that the unavailability of Aadhaar databases might have afforded. The fact that India does not yet have a data protection law – and that the draft Bill makes no reference to protection against surveillance either – deepens concerns about the use of these integrated databases.

The Perpetual Lineup report warns that the government could use biometric technology “to identify multiple people in a continuous, ongoing manner […] from afar, in public spaces,” allowing identification “to be done in secret”. Senior and Pankanti warn of “function creep,” where the public grows uneasy as “silos of information, collected for an authorized process […] start being used for purposes not originally intended, especially when several such databases are linked together to enable searches across multiple domains.”

This, as Adam Greenfield points out, could very well erode “the effectiveness of something that has historically furnished an effective brake on power: the permanent possibility that an enraged populace might take to the streets in pursuit of justice.”

What the NCRB’s AFRS amounts to, then, is a system of public surveillance that offers little demonstrable advantage in crime-fighting, especially when weighed against its costs to the fundamental rights of privacy and the freedoms of assembly and association – and that is without even delving into its implications for procedural law. To press on with this system would be indicative of the government’s lackadaisical attitude towards protecting citizens’ freedoms.


The views expressed by the authors in this article are personal.