
Automated Facial Recognition Systems and the Mosaic Theory of Privacy: The Way Forward

Posted by Arindrajit Basu, Siddharth Sonkar at Jan 02, 2020 02:12 PM

Arindrajit Basu and Siddharth Sonkar have co-written this blog as the third of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems?

The Mosaic Theory of Privacy

Whether the data collected by an AFRS should be treated similarly to facial photographs taken for the purposes of Aadhaar is not clear in the absence of judicial opinion. An AFRS would ordinarily collect significantly more data than facial photographs taken during authentication. This can be explained with the help of the mosaic theory of privacy.

The mosaic theory of privacy suggests that data collected about an individual over long durations can be qualitatively different from single instances of observation. It argues that aggregating data from different instances can create a picture of an individual that affects her reasonable expectation of privacy. This is because a mere slice of information reveals far less than the same information does when contextualised within a broader pattern: a mosaic.

The mosaic theory of privacy does not find explicit reference in Puttaswamy II. The petitioners had argued that seeding Aadhaar data into existing databases would bridge information across silos so as to make real-time surveillance possible, because information integrated from different silos becomes more than the sum of its parts.

The Court, however, dismissed this argument, accepting UIDAI’s submission that the data collected remains in different silos and merging is not permitted within the Aadhaar framework. Therefore, the Court did not examine whether it is constitutionally permissible to integrate data from different silos; it simply rejected the possibility of surveillance as a result of Aadhaar authentication.

Jurisprudence in other jurisdictions is more advanced. In United States v. Jones, the United States Supreme Court observed that the installation of a Global Positioning System (GPS) device on Antoine Jones's Jeep, in the absence of a warrant and without his consent, invaded his privacy, entitling him to Fourth Amendment protection. In that case, the movement of Jones's vehicle was monitored for a period of twenty-eight days. The concurring opinions of five justices in Jones acknowledged that aggregated and extensive surveillance is capable of violating the reasonable expectation of privacy irrespective of whether the surveillance takes place in public.

The Court distinguished between prolonged and short-term surveillance. Surveillance in the short run does not reveal what a person repeatedly does, whereas sustained surveillance can reveal significantly more about a person. The Court took the example of how a sequence of trips to a bar, a bookie, a gym or a church can tell a lot more about a person than any single visit viewed in isolation.

Most recently, in Carpenter v. United States, the Supreme Court of the United States held that the government's collection of historical cell-site data exposes the physical movements of an individual to potential surveillance, and that an individual holds a reasonable expectation of privacy against such collection. The Court acknowledged that historical cell-site information allows the government to go back in time and retrace the exact whereabouts of a person.

Judicial decisions have not specifically addressed whether facial recognition by law enforcement constitutes a search under the Fourth Amendment or a "mere visual observation".

The common thread linking CCTV footage and cellular data is the unique ability to track the movement of an individual from one place to another, enabling extreme forms of surveillance. It is perhaps this crucial link that would make AFRS-enabled CCTVs prejudicial to individual privacy.

The mosaic theory as understood in Carpenter helps one appreciate the extent to which an AFRS can augment the capacities of law enforcement in India, which in turn bears on whether it is constitutionally permissible to install such systems across the country.

AFRS-enabled CCTV footage from different cameras, if viewed in conjunction, could reveal a sequence of movements of an individual, enabling long-term surveillance of a nature that is qualitatively distinct from isolated observations across unrelated CCTV footage.

Subsequent to Carpenter, federal district courts in the United States have declined to apply Carpenter to video surveillance cases since the judgement did not “call into question conventional surveillance techniques and tools, such as security cameras.”

The extent of processing that an AFRS-enabled CCTV exposes an individual to would be significantly greater. This is because every time an individual is within the range of an AFRS-enabled CCTV, their facial image is compared against a common database. Snippets from different CCTVs capturing the individual's physical presence in two different locations may not be meaningful per se; when observed together, however, the AFRS makes it possible to identify the individual's movement from one place to another.

For instance, the AFRS will be able to identify the person when they are on Street A at a particular time and again when they are on Street B in the immediately subsequent hour, as recorded by the respective CCTV cameras, indicating the person's physical movement from A to B. While a CCTV camera merely records the movement of an individual in video format, an AFRS translates that digital information into individualised data by comparing facial features against a pre-existing database.
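To make this aggregation step concrete, here is a minimal, hypothetical sketch in Python of how individually unremarkable face-match events from separate cameras could be combined into a single timeline of a person's movements. Every name in it (Sighting, match_id, build_trajectories) is an illustrative assumption, not a description of any actual AFRS deployment.

    # A minimal, hypothetical sketch of the aggregation step described above.
    # The names here -- Sighting, match_id, build_trajectories -- are
    # illustrative assumptions, not any real AFRS implementation.
    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Sighting:
        camera_id: str       # which CCTV camera produced the frame
        location: str        # e.g. "Street A"
        timestamp: datetime  # when the face was captured
        match_id: str        # identity returned by matching against the database

    def build_trajectories(sightings):
        """Group individually unremarkable sightings by matched identity and
        order them in time, yielding the 'mosaic' of a person's movements."""
        tracks = defaultdict(list)
        for s in sightings:
            tracks[s.match_id].append(s)
        for track in tracks.values():
            track.sort(key=lambda s: s.timestamp)
        return tracks

    sightings = [
        Sighting("cam-17", "Street A", datetime(2020, 1, 2, 9, 0), "person-42"),
        Sighting("cam-31", "Street B", datetime(2020, 1, 2, 10, 0), "person-42"),
    ]

    for match_id, track in build_trajectories(sightings).items():
        route = " -> ".join(f"{s.location} ({s.timestamp:%H:%M})" for s in track)
        print(f"{match_id}: {route}")  # person-42: Street A (09:00) -> Street B (10:00)

Neither sighting says much on its own; it is only the sorted track that discloses a pattern of movement, which is precisely the qualitative shift the mosaic theory describes.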

Data aggregation appears to be the aim of the Indian government's tender, which links three databases, and it is apparent that such aggregation places the right to privacy in danger. Yet, at present, there exists no case law or legislation that could render such efforts illegal.

Conclusions and the Way Forward

Despite the lack of judicial recognition of the potential unconstitutionality of deploying AFRS, the introduction of these systems poses a clear and present danger to civil rights and human dignity. Algorithmic surveillance alters a human being's life in ways that even the subject of the surveillance cannot fully comprehend. As an individual's data is manipulated and aggregated to derive a pattern about that individual's world, the individual and her data no longer exist for themselves but are massaged into various categories.

Louise Amoore terms this a 'data-derivative': an abstract conglomeration of data that continuously shapes our futures without us having a say in its framing. Branding an individual as a criminal and then aggregating their data causes emotional distress, as individuals move about in fear of the state gaze and of their association with activities branded as potentially dangerous, thereby suppressing the right to dissent, as exemplified by the reported use of such systems during the recent protests in Hong Kong.

Case law both in India and abroad has clearly suggested that a right to privacy is contextual and is not surrendered merely because an individual is in a public place. However, the jurisprudence protecting public photography or videography under the umbrella of privacy remains less clear globally and non-existent in India.

The mosaic theory of privacy is useful in this regard, as it guards against mass 'data-veillance' of individual behaviour and accurately identifies the unique power that the volume, velocity and variety of Big Data confer on the state. It is therefore imperative that the judiciary recognise safeguards against data aggregation as an essential component of a reasonable expectation of privacy. At the same time, legislation could also provide the required safeguards.

In the US, Senators Coons and Lee recently introduced a draft Bill titled 'The Facial Recognition Technology Warrant Act of 2019'. The Bill aims to impose reasonable restrictions on the use of facial recognition technology by law enforcement and creates safeguards against the sustained tracking of an individual's physical movements in public spaces. It terms such tracking 'ongoing surveillance' when it occurs for a period exceeding 72 hours, whether in real time or through the application of technology to historical records, and requires that ongoing surveillance be conducted only for law enforcement purposes and pursuant to a court order (unless obtaining one is impracticable).
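As a rough illustration of how the Bill's 72-hour trigger might operate, consider the following sketch. The function name and the simplified rule are assumptions made for exposition, not the statutory text, which contains further conditions and exceptions.

    # An illustrative reading of the Bill's 72-hour trigger as described above;
    # the names and the simplified rule are assumptions, not the draft text.
    from datetime import datetime, timedelta

    ONGOING_SURVEILLANCE_THRESHOLD = timedelta(hours=72)

    def requires_court_order(track_start: datetime, track_end: datetime) -> bool:
        """True if tracking a person's movements over this span would amount to
        'ongoing surveillance' under the Bill's 72-hour rule."""
        return track_end - track_start > ONGOING_SURVEILLANCE_THRESHOLD

    # Tracking from Monday 09:00 to Thursday 10:00 spans 73 hours, so a court
    # order would ordinarily be required.
    print(requires_court_order(datetime(2020, 1, 6, 9, 0), datetime(2020, 1, 9, 10, 0)))  # True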

While the Bill has its textual problems, it is worth considering as a model for going forward and for ensuring that AFR systems are deployed in line with a rights-respecting reading of the reasonable expectation of privacy. Parsheera suggests that such legislation should provide for narrow tailoring of the objects and purposes for which AFRS may be deployed, restrictions on the persons whose images may be matched against the databases, judicial approval for its use on a case-by-case basis, and effective mechanisms of oversight, analysis and verification.

Appropriate legal intervention is crucial; failing to implement it effectively jeopardizes the expression of our true selves and the core tenets of our democracy.