Internet Governance Blog
Governing ID: Kenya’s Huduma Namba Programme
In our fourth case-study, we use our Evaluation Framework for Digital ID to examine the use of Digital ID in Kenya.
Read the case-study or download as PDF.
Governing ID: Use of Digital ID in the Healthcare Sector
In our third case-study, we use our Evaluation Framework for Digital ID to examine the use of Digital ID in the healthcare sector.
Read the case-study or download as PDF.
Governing ID: India’s Unique Identity Programme
In our second case-study, we use our Evaluation Framework for Digital ID to assess India’s Unique Identity Programme.
Read the case-study or download as PDF.
Governing ID: Use of Digital ID for Verification
This is the first in a series of case studies using our recently published Evaluation Framework for Digital ID. It looks at the use of digital identity programmes for the purpose of verification, often through the process of deduplication, in which a new applicant’s details are checked against existing records to ensure each person is enrolled only once (a minimal illustrative sketch follows).
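To make deduplication concrete, below is a minimal, purely illustrative sketch of how an enrolment registry might flag a possible duplicate record. It is not drawn from any actual ID system: the record fields, the plain numeric feature vector, and the distance threshold are all assumptions made for illustration; real deployments rely on specialised biometric matching and manual adjudication.

```python
# Toy deduplication check (illustrative only, not any real ID system's method).
# Assumes each enrolment record carries a numeric "feature vector"; a small
# Euclidean distance between two vectors is treated as a possible duplicate.
from math import dist

SIMILARITY_THRESHOLD = 0.5  # assumed cut-off; real matchers are far more complex

def is_possible_duplicate(new_record, registry):
    """Return True if the new record is close to any existing record."""
    return any(
        dist(new_record["features"], existing["features"]) < SIMILARITY_THRESHOLD
        for existing in registry
    )

registry = [{"id": "A-001", "features": [0.12, 0.87, 0.45]}]
applicant = {"id": "A-002", "features": [0.13, 0.86, 0.44]}

# The applicant's vector is very close to an existing record, so the
# enrolment would be flagged for review rather than accepted automatically.
print(is_possible_duplicate(applicant, registry))  # True
```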
Governing ID: A Framework for Evaluation of Digital Identity
As governments across the globe implement new and foundational digital identification systems (Digital ID), or modernise existing ID programmes, there is an urgent need for more research and discussion about appropriate uses of Digital ID systems. This significant momentum behind Digital ID has been accompanied by concerns about the privacy, surveillance and exclusion harms of state-issued Digital IDs in several parts of the world, resulting in campaigns and litigation in countries such as the UK, India, Kenya, and Jamaica. Given the sweeping range of considerations required to evaluate Digital ID projects, it is necessary to formulate evaluation frameworks that can be used for this purpose.
This work began with the question of what the appropriate uses of Digital ID can be, but through the research process it became clear that the question of use cannot be divorced from the fundamental attributes of Digital ID systems and their governance structures. The framework provides tests that can be used to evaluate the governance of Digital ID across jurisdictions, as well as to determine whether a particular use of Digital ID is legitimate. Through three kinds of checks — Rule of Law tests, Rights based tests, and Risk based tests — it offers a ready guide for the evaluation of Digital ID.
Governing ID: Introducing our Evaluation Framework
With the rise of national digital identity systems (Digital ID) across the world, there is a growing need to examine their impact on human rights. In several instances, national Digital ID programmes started with a specific scope of use, but have since been deployed for different applications, and in different sectors. This raises the question of how to determine appropriate and inappropriate uses of Digital ID. In April 2019, our research began with this question, but it quickly became clear that a determination of the legitimacy of uses hinged on the fundamental attributes and governing structure of the Digital ID system itself. Our evaluation framework is intended as a series of questions against which Digital ID may be tested. We hope that these questions will inform the trade-offs that must be made while building and assessing identity programmes, to ensure that human rights are adequately protected.
Rule of Law Tests
Foundational Digital ID must only be implemented alongside a legitimate regulatory framework that governs all aspects of Digital ID, including its aims and purposes, the actors who have access to it, and so on. In the absence of such a framework, nothing precludes Digital IDs from being leveraged by public and private actors for purposes outside the intended scope of the programme. Our rule of law principles mandate that the governing law should be enacted by the legislature, be devoid of excessive delegation, be clear and accessible to the public, and be precise and limiting in its scope for discretion. These principles draw support from the criticism that Kenya’s Digital ID, the Huduma Namba, faced when it was given legal backing through a Miscellaneous Amendment Act, an instrument meant only for small or negligible amendments and typically passed without any deliberation. This set of tests responds to the haste with which Digital ID has been implemented, often in the absence of an enabling law that adequately addresses its potential harms.
Rights based Tests
Digital ID, because of its collection of personal data and its determination of users’ eligibility and rights, intrinsically involves restrictions on certain fundamental rights. The use of Digital ID for essential functions of the State, including the delivery of benefits and welfare and the maintenance of civil and sectoral records, enhances the impact of these restrictions. Accordingly, the entire identity framework, including its architecture, uses, actors, and regulators, must be evaluated at every stage against the rights it potentially violates. Only then can we determine whether such a violation is necessary and proportionate to the benefits it offers. In Jamaica, the National Identification and Registration Act, which mandated citizens’ biometric enrolment at the risk of criminal sanctions, was held to be a disproportionate violation of privacy, and therefore unconstitutional.
Risk based Tests
Even with a valid rule of law framework that seeks to protect rights, the design and use of Digital ID must be based on an analysis of the risks that the system introduces. This could take the form of choosing between a centralised and a federated data-storage framework, based on the effects of potential failure or breach, or of restricting the uses of the Digital ID to limit the actors that stand to benefit from breaching it. Aside from the design of the system, the regulatory framework that governs it should also be tailored to the potential risks of its use. The primary rationale behind a risk assessment for an identity framework is that it should be tested not merely against universal metrics of legality and proportionality, but also against an examination of the risks and harms it poses. Implicit in a risk based assessment is also the requirement to implement a strategy that responds to and mitigates the risks identified, both while creating and while governing the identity programme.
Digital ID programmes create an inherent power imbalance between the State and its residents because of the personal data they collect and the consequent determination of significant rights, potentially creating risks of surveillance, exclusion, and discrimination. The accountability and efficiency gains they promise must not lead to hasty or inadequate implementation.
Divergence between the General Data Protection Regulation and the Personal Data Protection Bill, 2019
Our note on the divergence between the General Data Protection Regulation and the Personal Data Protection Bill can be downloaded as a PDF here.
The European Union’s General Data Protection Regulation (GDPR), which replaced the 1995 EU Data Protection Directive, came into effect in May 2018. It harmonises data protection regulation across the European Union. In India, the Ministry of Electronics and Information Technology constituted a Committee of Experts (chaired by Justice Srikrishna) to frame recommendations for a data protection framework in India. The Committee submitted its report and a draft Personal Data Protection Bill in July 2018 (the 2018 Bill), and public comments on the Bill were sought until October 2018. The Central Government revised the Bill and introduced the Personal Data Protection Bill (PDP Bill) in the Lok Sabha on December 11, 2019.
The PDP Bill incorporates certain aspects of the GDPR, such as the requirements for notice to be given to the data principal, consent for the processing of data, and the establishment of a data protection authority. However, there are some differences, and in this note we highlight the areas of divergence between the two. The note covers only provisions that are common to the GDPR and the PDP Bill; it does not cover the provisions on (i) the Appellate Tribunal, (ii) Finance, Accounts and Audit, and (iii) Non-Personal Data.
Content takedown and users' rights
After Shreya Singhal v Union of India, commentators have continued to question the constitutionality of the content takedown regime under Section 69A of the IT Act (and the Blocking Rules issued under it). There has also been considerable debate around how the judgement has changed this regime: specifically about (i) whether originators of content are entitled to a hearing, (ii) whether Rule 16 of the Blocking Rules, which mandates confidentiality of content takedown requests received by intermediaries from the Government, continues to be operative, and (iii) the effect of Rule 16 on the rights of the originator and the public to challenge executive action. In this opinion piece, we attempt to answer some of these questions.
Comments to the Personal Data Protection Bill 2019
The Personal Data Protection Bill, 2019 was introduced in the Lok Sabha on December 11, 2019.
Automated Facial Recognition Systems and the Mosaic Theory of Privacy: The Way Forward
Arindrajit Basu and Siddharth Sonkar have co-written this blog as the third of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems?
Automated Facial Recognition Systems (AFRS): Responding to Related Privacy Concerns
Arindrajit Basu and Siddharth Sonkar have co-written this blog as the second of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems?
Decrypting Automated Facial Recognition Systems (AFRS) and Delineating Related Privacy Concerns
Arindrajit Basu and Siddharth Sonkar have co-written this blog as the first of their three-part blog series on AI Policy Exchange under the parent title: Is there a Reasonable Expectation of Privacy from Data Aggregation by Automated Facial Recognition Systems?
Extra-Territorial Surveillance and the Incapacitation of Human Rights
This paper was published in Volume 12 (2) of the NUJS Law Review.
ICANN takes one step forward in its human rights and accountability commitments
Akriti Bopanna and Ephraim Percy Kenyanito take a look at ICANN's Implementation Assessment Report for the Workstream 2 recommendations and break down the key human rights considerations in it. Akriti chairs the Cross Community Working Party on Human Rights at ICANN and Ephraim works on Human Rights and Business for Article 19, leading their ICANN engagement.
Call for Comments: Model Security Standards for the Indian Fintech Industry
The Centre for Internet and Society is pleased to make available the draft document of Model Security Standards for the Indian Fintech Industry, for feedback and comments from all stakeholders. The objective of this document, which was first published in November 2019, is to ensure that users’ data is handled in a safe and secure manner by the fintech industry, and that smaller fintech businesses have a specific standard to refer to in order to limit their liability for any future breaches.
We invite all parties interested in the field of technology policy, including but not limited to lawyers, policy researchers, and engineers, to send in their feedback and comments on the draft document by 16 January 2020. We intend to publish the final draft by the end of January 2020. We look forward to receiving your contributions to make this document more comprehensive and effective. Please find a copy of the draft document here.
In Twitter India’s Arbitrary Suspensions, a Question of What Constitutes a Public Space
A discussion is underway about whether, and how, social media platforms may have to operate within the tenets of constitutional protections of free speech.
A Deep Dive into Content Takedown Timeframes
Since the 1990s, internet usage has grown massively, facilitated in part by the growing importance of intermediaries that act as gateways to the internet. Intermediaries such as Internet Service Providers (ISPs), web-hosting providers, social-media platforms and search engines provide key services which propel social, economic and political development. However, these developments are also offset by instances of users engaging with the platforms in an unlawful manner. The scale and openness of the internet make regulating such behaviour challenging and, in turn, pose several interrelated policy questions.
Project on Gender, Health Communications and Online Activism with City University
CIS is a partner on the project 'Gender, Health Communications and Online Activism in the Digital Age'. The project is led by Dr. Carolina Matos, Senior Lecturer in Sociology and Media in the Department of Sociology at City University.
Blockchain: A primer for India
This report is presently being updated.
India’s Role in Global Cyber Policy Formulation
The past year has seen vigorous activity on the domestic cyber policy front in India. On key issues—including intermediary liability, data localization and e-commerce—the government has rolled out a patchwork of regulatory policies, resulting in battle lines being drawn by governments, industry and civil society actors both in India and across the globe.