Blog
DIDP Request #15: What is going on between Verisign and ICANN?
At CIS, we believe that any exchange of dialogue, and any outcome from ICANN acting on these complaints, needs to be in the public domain. Thus, our 15th DIDP request to ICANN was for documents pertinent to Verisign’s contractual compliance and actions taken by ICANN stemming from any discrepancies in Verisign’s compliance with its ICANN contract.
The DIDP request filed by Padmini Baruah can be found here.
What ICANN said
After sorting through a response designed to obfuscate information, it was clear that ICANN was not going to provide any of the details we requested. As mentioned in their previous responses, individual audit reports and the names of the registries associated with discrepancies are confidential under the DIDP Defined Conditions of Nondisclosure. Nevertheless, some details from the response are worth mentioning.
According to the response, “As identified in Appendix B of the 2012 Contractual Compliance Year One Audit Program Report, the following TLDs were selected for auditing: DotAsia Organisation Limited (.ASIA), Telnic Limited (.TEL), Public Interest Registry (.ORG), Verisign (.NET), Afilias (.INFO), and Employ Media LLC (.JOBS).” The response goes on to state that of these 6 registries selected, only 5 chose to participate in the audit, the identities of which are once again confidential.
However, on further examination, it can be seen that Verisign (.NET) was also chosen to participate in the following year’s audit. It is therefore clear that Verisign was audited in 2013. Unfortunately, that was about all in ICANN’s response that was relevant to our request.
Once again, ICANN relied on the DIDP Defined Conditions of Nondisclosure, invoking the following conditions in particular to avoid answering the public:
- Information exchanged, prepared for, or derived from the deliberative and decision-making process between ICANN, its constituents, and/or other entities with which ICANN cooperates that, if disclosed, would or would be likely to compromise the integrity of the deliberative and decision-making process between and among ICANN, its constituents, and/or other entities with which ICANN cooperates by inhibiting the candid exchange of ideas and communications.
- Information provided to ICANN by a party that, if disclosed, would or would be likely to materially prejudice the commercial interests, financial interests, and/or competitive position of such party or was provided to ICANN pursuant to a nondisclosure agreement or nondisclosure provision within an agreement.
- Confidential business information and/or internal policies and procedures.[1]
ICANN’s response to our request can be found here.
[1] See DIDP https://www.icann.org/resources/pages/didp-2012-02-25-en
DIDP Request #16 - ICANN has no Documentation on Registrars’ “Abuse Contacts”
We wrote to ICANN requesting information on the abuse complaints received by registrars over the last year. We specifically wanted the reports of illegal activity on the internet submitted to registrars’ abuse contacts, as well as details of actions taken by registrars in response to these complaints.
The request filed by Padmini Baruah can be found here.
What ICANN said
Our request to ICANN dealt very specifically with reported illegal activities. However, in their response, ICANN first broadened it to abuse complaints in general and then failed to provide even a narrowed-down list of those complaints.
In their response, ICANN indicated that they do not store records of complaints made to abuse contacts. These records are stored by the registrars and are available to ICANN only upon request. However, since ICANN is only obliged to publish documents already in its possession, we did not receive an answer to our first question.
As for the second item, ICANN gave a familiarly vague answer, linking us to the Contractual Compliance Complaints page with a list of all the breach notices that have been issued by ICANN to registrars. A breach notice is relevant to our request only if it is in response to an abuse complaint, and the abuse complaint specifically deals with illegal activity. Even discounting that, this is not a comprehensive list when you take into account that a breach notice is published only “if a formal contractual compliance enforcement process has been initiated relating to an abuse complaint and resulted in a breach.”[1] What about the rest of the complaints received by the registrar?
In addition, ICANN refused to publish any communication or documentation of ICANN requesting reports of illegal activity under the DIDP non-disclosure conditions.
ICANN's response to our DIDP request may be found here.
[1] See ICANN response here (Pg 4): https://www.icann.org/en/system/files/files/didp-response-20150901-4-cis-abuse-complaints-01oct15-en.pdf
DIDP Request #17 - How ICANN Chooses their Contractual Compliance Auditors
It is clear to us at CIS that the people in charge of these compliance audits perform an important function at ICANN. To that end, we requested information on the 24 individuals mentioned by Mr Chehadé as well as the third-party auditors who perform this powerful watchdog function. More specifically, we requested documents calling for appointments of the auditors and copies of their contracts with ICANN.
The request filed by Padmini Baruah can be found here.
What ICANN said
In their response to the first part of our question, ICANN linked us to a webpage containing the names and titles of all employees working on contractual compliance. This page contains 26 names including the Contractual Compliance Risk and Audit Manager: https://www.icann.org/resources/pages/about-2014-10-10-en
ICANN also described in detail the process of selecting KPMG as their third-party auditor. A pre-selection process shortlisted 5 companies that fit the following criteria: knowledge of ICANN, global presence, size, expertise and reputation. ICANN then issued a targeted Request For Proposal (RFP) to these companies asking for their audit proposals. After a question and answer session, an analysis of the proposals and a rating of the scorecards, a “cross-functional steering committee” decided to go with KPMG. While the process has been described transparently, our questions remain unanswered.
The RFP would qualify as the document requested in the second part of our question, i.e., a “document that calls for appointments to the post of the contractual compliance auditor.” Unfortunately, ICANN has not published the RFP, citing the DIDP Conditions for Non-disclosure. However, the timeline for the RFP and other details were posted here after our DIDP request. In addition, the contract between KPMG and ICANN has also not been published.
ICANN's response to our DIDP request may be found here.
DIDP Request #18 - ICANN’s Internal Website will Stay Internal
The request filed by Padmini Baruah can be found here. To no one’s surprise, not only did ICANN not have this document in “ICANN's possession, custody, or control,” but even if it did, the document would be subject to the DIDP conditions for non-disclosure.
ICANN's response to our DIDP request may be found here.
DIDP Request #19 - ICANN’s role in the Postponement of the IANA Transition
In August 2015, NTIA announced that it would not be technically possible to meet the transition deadline and extended it by a year. NTIA stated,
“Accordingly, in May we asked the groups developing the transition documents how long it would take to finish and implement their proposals. After factoring in time for public comment, U.S. Government evaluation and implementation of the proposals, the community estimated it could take until at least September 2016 to complete this process.”
In our DIDP request, we asked ICANN for all documents that it had submitted to NTIA that were relevant to the IANA transition and its postponement, from the date of the initial announcement (March 14, 2014) to the date of the announcement of the extension (August 17, 2015). We specifically requested the documents requested by NTIA in May 2015, as referenced in this blogpost.
The request filed by Padmini Baruah can be found here.
What ICANN said
ICANN’s response termed our request “broadly worded” and assumed that it related only to documents about the extension of the deadline. It did not.
After NTIA’s announcement in 2014, ICANN launched a multi-stakeholder process and discussion at ICANN 49 in Singapore to facilitate the transition. The organizational structure of this process has been mapped out according to the different IANA functions that are being transitioned. Accordingly, we have the following groups:
- IANA Stewardship Transition Coordination Group (ICG)
- Cross Community Working Group (CWG-Stewardship)
- Consolidated RIR IANA Stewardship Proposal Team (CRISP TEAM)
- IANAPLAN Working Group (IANAPLAN WG)
- Cross Community Working Group (CCWG-Accountability)
In addressing our request, ICANN referred to this multi-stakeholder community overseeing the transition. According to the response document, the ICG, CWG-Stewardship, CRISP Team, IANAPLAN WG and CCWG-Accountability submitted their responses directly to NTIA, leaving ICANN with no documents responsive to our request.
ICANN's response to our DIDP request may be found here.
DIDP Request #20 - Is Presumptive Renewal of Verisign’s Contracts a Good Thing?
See the base registry agreement here.
In light of this, we filed a request asking ICANN for documents that discuss the rationale behind including the presumptive renewal clause. We also asked them for documents specific to the renewal of Verisign (.com and .net domains) and PIR (.org) contracts. The request filed by Padmini Baruah can be found here.
What ICANN said
ICANN provided a surprisingly comprehensive response to our request, supplying relevant documents and stating the rationale for including a presumptive renewal clause. According to the response,
“Absent countervailing reasons, there is little public benefit, and some significant potential for disruption, in regular changes of a registry operator. In addition, a significant chance of losing the right to operate the registry after a short period creates adverse incentives to favor short term gain over long term investment.”
ICANN explains that the contracts have been drawn such that they balance the concerns above with the ability to replace a registry that doesn’t serve the community as it is obliged to do. The response also offers links to various documents substantiating this rationale.
We were provided an effective answer to our second question as well. ICANN’s response links us to various documents for the 2001, 2006 and 2012 renewals of Verisign’s contract for the .com domain. This includes a summary of the 2012 renewal, public comments for all three renewals and the proposed agreements.
For the .net domain, a presumptive renewal clause was not included in the 2001 Verisign contract, which opened up the process of selecting an operator in 2005. ICANN chose to continue its relationship with Verisign and included the clause this time. The documents relevant to the 2011 renewal of the contract have been provided.
After Verisign relinquished its rights over the .org domain in 2001, ICANN chose the Public Interest Registry (PIR) to operate the domain. While there was no presumptive renewal clause in 2002, documents relevant to the 2006 and 2013 renewals have been provided.
ICANN's response to our DIDP request may be found here.
DIDP Request #21 - ICANN’s Relationship with the RIRs
The request filed by Padmini Baruah can be found here.
What ICANN said
ICANN’s response linked us to the Memorandum of Understanding signed by ICANN and the Number Resource Organization (NRO) which represents the 5 RIRs. The MoU replaces the ones signed by ICANN and the individual RIRs. The response also links us to a series of letters written by the NRO to ICANN reaffirming their commitment to the MoU. Interestingly, the MoU does not mention anything about payments or monetary contributions.
In response to the second part of our request focusing on their financial relationship, ICANN gave us the same information as they did earlier. However, as pointed out in this post, that information is either incomplete or inaccurate. Further, they reject the idea that providing anything more than the audited financial reports is necessary for public benefit. According to them, “the burden of compiling the requested documentary information from 2000 to the present would require ICANN to expend a tremendous amount of time and resources.” Therefore, they classified our request as falling under this condition for non-disclosure:
“Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual.”
We fail to see how an organization like ICANN does not already have its receipts and documentation in order. If they do, it would not be burdensome to publish them; and if they don’t, well, that’s worrying for a lot of different reasons.
ICANN's response to our DIDP request may be found here.
DIDP Request #22 - Reconsideration Requests from Parties affected by ICANN Action
See the ICANN Bylaws here.
The Board Governance Committee must submit an annual report to the Board containing the following information (paraphrased):
- Number and nature of Reconsideration Requests received including an identification of whether they were dismissed, acted upon or are pending.
- If pending, the length of time and explanation if they have been pending for more than 90 days.
- Explanation of other mechanisms ICANN has made available to ensure its accountability to those directly affected by its actions.
CIS requested copies of documents containing all this information. The request filed by Padmini Baruah can be found here.
What ICANN said
ICANN stated that all the information we sought can be found in their annual reports, and linked us to those: https://www.icann.org/resources/pages/annual-reports-2012-02-25-en
ICANN's response to our DIDP request may be found here.
DIDP Request #23 - ICANN does not Know how Diverse its Comment Section Is
See ICG report here.
We requested ICANN for similar reports on the ICANN public comment section. The request filed by Padmini Baruah can be found here.
What ICANN said
ICANN stated that they do not conduct diversity analysis on their comment sections. This is a shame, given that the one from the ICG was so informative, clear and concise. Instead, they provided us with links to reports and analyses of the different topics that were up for comment, and an annual report on public comments.
ICANN’s public comments section is one of the important ways in which different stakeholders and community members get involved with the organization. A diversity analysis of this section for different topics could help inform the public about which parts of the world actually get involved in ICANN through this mechanism. We suggest that ICANN make it a regular part of their reports.
ICANN's response to our DIDP request may be found here.
See the ICG report: https://www.ianacg.org/icg-files/documents/Public-Comment-Summary-final.pdf
DIDP Request #25 - Curbing Sexual Harassment at ICANN
In light of that statement, CIS requested ICANN to publish the following information:
- Information about the individual or organization conducting ICANN’s sexual harassment training
- Materials used during this training
- ICANN’s internal sexual harassment policy
The request filed by Padmini Baruah can be found here.
What ICANN said
ICANN’s response answered our questions adequately. The organization conducting their sexual harassment training is NAVEX Global. The training is an interactive online course and, as such, all materials are contained within that platform. Moreover, ICANN could not publish these materials, as doing so would infringe NAVEX Global’s intellectual property rights. ICANN also attached its internal sexual harassment policy to the response.
ICANN's response to our DIDP request (and the attached policy document) may be found here.
DIDP Request #27 - On ICANN’s support to new gTLD Applicants
Click for Applicant Support Directory
We requested ICANN for information about this program. Specifically, we asked them for information on:
- The number of applicants to the program and the amount received by them;
- The basis on which these applicants were selected;
- The amount that has been utilized thus far for this program;
- Contributions by donors;
- What “in kind” support means and includes.
The request filed by Padmini Baruah can be found here.
What ICANN said
ICANN answered all our questions satisfactorily. There were three applicants to the program. Two of these, Nameshop and Ummah Digital Ltd, did not meet the eligibility criteria listed in the handbook; therefore only the remaining applicant, DotKids, received financial support. Of the USD 2,000,000 set aside, USD 135,000 was awarded to them.
The eligibility criteria are listed in the New gTLD Financial Assistance Handbook, and candidates are evaluated by the Support Applicant Review Panel (SARP), “which was comprised of five volunteer members from the community with experience in the domain name industry, in managing small businesses, awarding grants, and assisting others on financial matters in developing countries.”
The USD 2,000,000 allotted to this program was set aside by ICANN’s board, and as it has not been exhausted, no external contributions were sought by ICANN (in cash or in kind). However, ICANN failed to explain what “in kind” contributions would mean and include.
DIDP Request #28 - ICANN renews Verisign’s RZM Contract?
The request filed by Padmini Baruah can be found here.
What ICANN said
ICANN clarified that it has never been party to the RZM agreement, which was made between NTIA and Verisign. According to an ICANN-Verisign joint document, the Root Zone Management system involves “ICANN as the IANA Functions Operator (IFO), Verisign, as the Root Zone Maintainer (RZM), and the National Telecommunications and Information Administration (NTIA) at the U.S. Department of Commerce (DoC), as the Root Zone Administrator (RZA).” The only agreement related to this is the cooperative agreement between Verisign and NTIA.
Accordingly, as the role of NTIA is transitioned to the multi-stakeholder community, Verisign and ICANN are working out the terms and conditions of their own agreement to facilitate this transition together. In response to NTIA’s request for a proposal for this transition, Verisign and ICANN submitted this document. Besides these, ICANN states that it does not have any documents responsive to our request.
ICANN's response to our DIDP request may be found here.
Analysis of the Report of the Group of Experts on Developments in the Field of Information and Telecommunications in the Context of International Security and Implications for India
The United Nations Group of Experts on ICT issued its report on Developments in the Field of Information and Telecommunications in the Context of International Security in June 2015. This paper analyses the report of the Group of Experts and India’s compliance with its recommendations based on existing laws and policies. CIS believes that the report of the Group of Experts provides important minimum standards that countries could adhere to in light of the challenges to international security posed by ICT developments. Given the global nature of these challenges and the need for nations to holistically address them from a human rights and security perspective, CIS believes that the Group of Experts and similar international forums are useful and important forums for India to actively engage with.
Download: PDF (627 kb)
1. Introduction
2. Analysis of the Recommendations
3. Conclusion
1. Introduction
Cyberspace[1] touches every aspect of our lives; it has enormous benefits, but is also accompanied by a number of risks. The international community at large has realized that cyberspace can be made stable and secure only through international cooperation. Though there are a number of bilateral agreements and other forms of cooperation, the foundation of this cooperation has traditionally been international law and the principles of the Charter of the United Nations.
To this end, on December 27, 2013 the United Nations General Assembly adopted Resolution No. 68/243 requesting the “Secretary-General, with the assistance of a group of governmental experts, ... to continue to study, with a view to promoting common understandings, existing and potential threats in the sphere of information security and possible cooperative measures to address them, including norms, rules or principles of responsible behaviour of States and confidence-building measures, the issues of the use of information and communications technologies in conflicts and how international law applies to the use of information and communications technologies by States ... and to submit to the General Assembly at its seventieth session a report on the results of the study.” In pursuance of this resolution the Secretary-General established a Group of Experts on Developments in the Field of Information and Telecommunications in the Context of International Security; the report was agreed upon by the Group of Experts in June 2015. On 23 December 2015, the UN General Assembly unanimously adopted resolution 70/237,[2] which welcomed the outcome of the Group of Experts and requested the Secretary-General to establish a new GGE that would report to the General Assembly in 2017.
The report, developed by governmental experts from 20 States, addresses existing and emerging threats from the use of ICTs by States and non-State actors alike. These threats have the potential to jeopardize international peace and security. The experts’ recommendations build on the consensus reports issued in 2010 and 2013, and offer ideas on norm-setting, confidence-building, capacity-building and the application of international law to the use of ICTs by States. Among other things, the Report lays down recommendations for States for voluntary, non-binding norms, rules or principles of responsible behaviour to promote an open, secure, stable, accessible and peaceful ICT environment.
As larger international dialogues around cross-border sharing of information and cooperation for cyber security purposes take place between the US and EU, it is critical that India begin to participate in these discussions.[3] It is also necessary for India to implement internal practices and policies that are recognized at the international level and that set strong standards.
This paper marks the beginning of a series of questions we will be asking and processes we will be analysing with the aim of understanding the role of international cooperation in cyber security and the interplay between privacy and security. The paper analyses the existing norms in India against the backdrop of the recommendations in the Report of Experts, to discover how interoperable Indian law and policy are vis-à-vis those recommendations, and makes recommendations on ways India can enhance national policies, practices, and approaches to enable greater collaboration at the international level on issues concerning ICTs and security.
2. Analysis of the Recommendations
The Group of Experts took into account existing and emerging threats, risks and vulnerabilities, in the field of ICT and offered the following recommendations for consideration by States for voluntary, non-binding norms, rules or principles of responsible behaviour.
2a. Consistent with the purposes of the United Nations, including to maintain international peace and security, States should cooperate in developing and applying measures to increase stability and security in the use of ICTs and to prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security
1. India has been working with a number of countries, such as Belarus, Canada, China, Egypt, and France, on a number of ICT-related issues, thereby increasing international cooperation in the ICT sector, such as:
(i) setting up the India-Belarus Digital Learning Centre (DLC-ICT) to promote the development of ICT in Belarus;
(ii) sending an official business delegation to Canada to attend the 2nd Joint Working Group meeting in ICTE;
(iii) holding Joint Working Groups on ICT with China.[4]
As can be seen from this, most of the cooperation with other countries is currently government-to-government (or government institution to government institution) cooperation. However, it must be noted that the entire digital revolution, including ICT, necessarily involves ICT companies; the role of the private sector in participating in these negotiations, as well as the responsibilities of private-sector ICT companies in cross-border cooperation, must therefore be considered. Furthermore, the above examples are only a few of the many agreements, Memoranda of Understanding (MoUs), and negotiations that India has with other countries on cross-border cooperation. It is important that, to the extent possible, these negotiations are transparent and easily available to the public.
2. The primary legislation governing ICT in India is the Information Technology Act, 2000 ("IT Act"), which was passed to provide legal recognition for transactions carried out by means of electronic data interchange and other means of electronic communication. The IT Act contains a number of provisions that declare illegal activities that threaten ICT infrastructure, data, and individuals, and provide penalties for the same. These activities are:
Section 43 - Penalty and Compensation for damage to computer, computer system, etc.: If any person, without permission: (i) accesses a computer, computer system or network; (ii) downloads, copies or extracts any data from such computer, computer system or network; (iii) introduces any computer contaminant or computer virus into, destroys, deletes or alters any information on, damages or disrupts any computer, computer system or network; (iv) denies or causes the denial of access to any computer, computer system or network by any means; (v) helps any person to access a computer, computer system or network in contravention of the Act; (vi) charges the services availed of by a person to the account of another person through manipulation; or (vii) steals, conceals, destroys or alters, or causes any person to steal, conceal, destroy or alter, any computer source code used for a computer resource with an intention to cause damage, he shall be liable to pay damages by way of compensation to the person so affected.
Section 66 - Computer Related Offences: If any person, dishonestly or fraudulently, does any act referred to in section 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to Rs. 5,00,000/- or with both.
Section 66B - Punishment for dishonestly receiving stolen computer resource or communication device: Whoever dishonestly receives or retains any stolen computer resource or communication device knowing or having reason to believe the same to be stolen computer resource or communication device, shall be punished with imprisonment of either description for a term which may extend to three years or with fine which may extend to Rs. 1,00,000/- or with both.
Section 66C - Punishment for identity theft: Whoever, fraudulently or dishonestly, makes use of the electronic signature, password or any other unique identification feature of any other person, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to rupees one lakh.
Section 66D - Punishment for cheating by personation by using computer resource: Whoever, by means of any communication device or computer resource cheats by personation, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to Rs. 1,00,000/-.
Section 66E - Punishment for violation of privacy: Whoever, intentionally or knowingly captures, publishes or transmits the image of a private area of any person without his or her consent, under circumstances violating the privacy of that person, shall be punished with imprisonment which may extend to three years or with fine not exceeding Rs. 2,00,000 or with both.
Section 66F - Punishment for cyber terrorism: (1) Whoever,- (A) with intent to threaten the unity, integrity, security or sovereignty of India or to strike terror in the people or any section of the people by -
- Denying or cause the denial of access to computer resource; or
- Attempting to penetrate a computer resource; or
- Introducing or causing to introduce any computer contaminant and by means of such conduct causes or is likely to cause death or injuries to persons or damage to or destruction of property or disrupts or knowing that it is likely to cause damage or disruption of supplies or services essential to the life of the community or adversely affect the critical information infrastructure, or
(B) knowingly or intentionally penetrates a computer resource and by doing so obtains access to information that is restricted for reasons of the security of the State or foreign relations; or any restricted information with reasons to believe that such information may be used to cause or likely to cause injury to the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation or incitement to an offence, or to the advantage of any foreign nation, group of individuals or otherwise, commits the offence of cyber terrorism.
(2) Whoever commits or conspires to commit cyber terrorism shall be punishable with imprisonment which may extend to imprisonment for life.
Section 67 - Publishing of information which is obscene in electronic form: Whoever publishes or transmits in the electronic form any material which is lascivious or appeals to the prurient interest, or if its effect is such as to tend to deprave and corrupt persons, shall be punished on first conviction with a maximum imprisonment upto 3 years and a maximum fine upto Rs. 5,00,000, and for a second or subsequent conviction with a maximum imprisonment upto 5 years and a maximum fine upto Rs. 10,00,000.
Section 67A - Punishment for publishing or transmitting of material containing sexually explicit act, etc. in electronic form: Whoever publishes or transmits in the electronic form any material which contains sexually explicit act or conduct shall be punished on first conviction with a maximum imprisonment of 5 years and a maximum fine upto Rs. 10,00,000, and for a second or subsequent conviction with a maximum imprisonment of 7 years and a maximum fine upto Rs. 10,00,000.
Section 67B - Punishment for publishing or transmitting of material depicting children in sexually explicit act, etc. in electronic form: Whoever,- (a) publishes or transmits material in any electronic form which depicts children engaged in sexually explicit act or conduct; or (b) creates text or digital images, collects, seeks, browses, downloads, advertises, promotes, exchanges or distributes material in any electronic form depicting children in obscene or indecent or sexually explicit manner; or (c) cultivates, entices or induces children to online relationship with one or more children for and on sexually explicit act or in a manner that may offend a reasonable adult on the computer resource; or (d) facilitates abusing children online; or (e) records in any electronic form own abuse or that of others pertaining to sexually explicit act with children, shall be punished on first conviction with a maximum imprisonment upto 5 years and a maximum fine upto Rs. 10,00,000, and in the event of a second or subsequent conviction with a maximum imprisonment upto 7 years and a maximum fine upto Rs. 10,00,000.[5]
Section 72 - Breach of confidentiality and privacy: Any person who, in pursuance of any of the powers conferred under this Act, has secured access to any electronic record, book, register, correspondence, information, document or other material without the consent of the person concerned discloses the same to any other person shall be punished with imprisonment for a term which may extend to two years, or with fine which may extend to Rs. 1,00,000 or with both.
Section 72-A - Punishment for Disclosure of information in breach of lawful contract: Any person including an intermediary who, while providing services under the terms of lawful contract, has secured access to any material containing personal information about another person, with the intent to cause or knowing that he is likely to cause wrongful loss or wrongful gain discloses such material to any other person shall be punished with imprisonment for a term which may extend to three years, or with a fine which may extend to Rs. 5,00,000 or with both.
3. The broad language and wide terminology used in the IT Act seem to cover most of the cyber crimes faced in India as of now, though the technical capacity to prevent such crimes still leaves a lot to be desired. The prevention of cyber crime is not the domain of the IT Act but rather the responsibility of the law enforcement authorities (note: there is no specific authority created under the IT Act; the Act is enforced by the police and other law enforcement authorities). That said, it may be a useful exercise to briefly compare these provisions with the crimes mentioned in the Convention on Cybercrime, 2001 (Budapest Convention), an international treaty that seeks to address threats in cyberspace by promoting the harmonization of national laws and cooperation across jurisdictions, to examine if there are any that are not covered by the IT Act. A comparison of the principles in the Budapest Convention and the IT Act is below:
| S. No. | Article of the Budapest Convention | Provisions of the IT Act which cover the same |
| --- | --- | --- |
| 1 | Article 2 - Illegal Access | Section 43(a) read with Section 66 |
| 2 | Article 3 - Illegal Interception | Section 69 of the IT Act read with Section 45, as well as Section 24 of the Telegraph Act, 1885 |
| 3 | Article 4 - Data interference | Sections 43(d) and 43(f) read with Section 66 |
| 4 | Article 5 - System interference | Sections 43(d), (e) and (f) read with Section 66 |
| 5 | Article 6 - Misuse of devices | Not specifically covered |
| 6 | Article 7 - Computer related forgery | Not specifically covered, but it is possible that when such a case comes to light, the provisions of Section 43 read with Section 66, as well as provisions of the Indian Penal Code, 1860, would be pressed into service to cover such crimes |
| 7 | Article 8 - Computer related fraud | While not specifically covered by the IT Act, it is possible that when such a case comes to light, the provisions of Section 43 read with Section 66, as well as provisions of the Indian Penal Code, 1860, would be pressed into service to cover such crimes |
| 8 | Article 9 - Offences relating to child pornography | Section 67B |
As can be seen from the above discussion, most of the criminal acts elucidated in the Budapest Convention are covered under the IT Act, except for the provision on misuse of devices, which covers the production of, dealing in, and trading of devices whose sole objective is to violate the provisions of the law. It is possible, though, that the provisions of the Indian Penal Code, 1860 dealing with conspiracy and with aiding and abetment may be pressed into service to cover such incidents.
4. Further, there are a number of laws which deal with critical infrastructure in India; however, since these are mostly sectoral laws dealing with specific infrastructure sectors, the one most relevant to ICT is the Telegraph Act, 1885, which makes it illegal to interfere with or damage critical telegraph infrastructure. The specific penal provisions are listed below:
Section 23 - Intrusion into signal-room, trespass in telegraph office or obstruction: If any person - (a) without permission of competent authority, enters the signal room of a telegraph office of the Government, or of a person licensed under this Act, or (b) enters a fenced enclosure round such a telegraph office in contravention of any rule or notice not to do so, or (c) refuses to quit such room or enclosure on being requested to do so by any officer or servant employed therein, or (d) wilfully obstructs or impedes any such officer or servant in the performance of his duty, he shall be punished with fine which may extend to Rs. 500.
Section 24 - Unlawfully attempting to learn the contents of messages: If any person does any of the acts mentioned in section 23 with the intention of unlawfully learning the contents of any message, or of committing any offence punishable under this Act, he may (in addition to the fine with which he is punishable under section 23) be punished with imprisonment for a term which may extend to one year.
Section 25 - Intentionally damaging or tampering with telegraphs: If any person, intending - (a) to prevent or obstruct the transmission or delivery of any message, or (b) to intercept or to acquaint himself with the contents of any message, or (c) to commit mischief, damages, removes, tampers with or touches any battery, machinery, telegraph line, post or other thing whatever, being part of or used in or about any telegraph or in the working thereof, he shall be punished with imprisonment for a term which may extend to three years, or with fine or with both.
Section 25A - Injury to or interference with a telegraph line or post: If, in any case not provided for by section 25, any person deals with any property and thereby wilfully or negligently damages any telegraph line or post duly placed on such property in accordance with the provisions of this Act, he shall be liable to pay the telegraph authority such expenses (if any) as may be incurred in making good such damage, and shall also, if the telegraphic communication is by reason of the damage so caused interrupted, be punishable with a fine which may extend to Rs. 1000.
5. Telecom service providers in India have to sign a license agreement with the Department of Telecommunications for the right to provide telecom services in various parts of India. The telecom regulatory regime in India has gone through a lot of turmoil and evolution; currently, any service provider wanting to provide telecom services is issued a Unified License (UL) and has to abide by its terms. While most of the prohibited activities under the UL refer to specific terms of the UL itself, such as non-payment of fees and non-fulfilment of obligations under the UL, section 38 provides for certain specific obligations and prohibited activities which may be relevant to the ICT sector. These include:
(i) Carrying objectionable, obscene, unauthorized or any other content, messages or communications infringing copyright and intellectual property right etc., which may be prohibited by the laws of India;
(ii) Providing tracing facilities to trace nuisance, obnoxious or malicious calls, messages or communications transported through its equipment and network to the authorised government agencies;
(iii) Ensuring that the Telecommunication infrastructure or installation thereof, carried out by it, should not become a safety or health hazard and is not in contravention of any statute, rule, regulation or public policy;
(iv) Not permitting any telecom service provider whose license has been revoked to use its services. Where such services are already provided, i.e. connectivity already exists, the licensee is required to sever the connectivity immediately.
2b. In case of ICT incidents, States should consider all relevant information, including the larger context of the event, the challenges of attribution in the ICT environment and the nature and extent of the consequences
The Department of Electronics and Information Technology (DEITY) has released the XIIth Five Year Plan for the information technology sector, and the report of the Sub-Group on Cyber Security in the plan recognizes that cyber security threats emanate from a wide variety of sources and manifest themselves in disruptive activities that target individuals, businesses, national infrastructure and Governments alike.[6] The primary objectives of the plan for securing the country's cyberspace are preventing cyber attacks, reducing national vulnerability to cyber attacks, and minimizing damage and recovery time from cyber attacks. The plan takes into account a number of focus areas to achieve its stated objectives, which are described briefly below:
- Enabling Legal Framework - Setting up think tanks in Public-Private mode to identify gaps in the existing policy and frameworks and take action to address them including addressing the privacy concerns of online users.
- Security Policy, Compliance and Assurance - Enhancement of IT product security assurance mechanism (Common Criteria security test/evaluation, ISO 15408 & Crypto Module Validation Program), establishing a mechanism for national cyber security index leading to national risk management framework.
- Security Research & Development (R&D) - Creation of Centres of Excellence in identified areas of advanced Cyber Security R&D and a Centre for Technology Transfer to facilitate the transition of R&D prototypes to production, and supporting R&D projects in thrust areas.
- Security Incident - Early Warning and Response - Comprehensive threat assessment and attack mitigation by means of net traffic analysis and deployment of honey pots, development of vulnerability database.
- Security awareness, skill development and training - Launching formal security education, skill building and awareness programs.
- Collaboration - Establishing a collaborative platform/ think-tank for cyber security policy inputs, discussion and deliberations, operationalisation of security cooperation arrangements with overseas CERTs and industry, and seeking legal cooperation of international agencies on cyber crimes and cyber security.
2c. States should not knowingly allow their territory to be used for internationally wrongful acts using ICTs
As mentioned in response to (a) above, the primary legislation in India that deals with information technology, and hence with ICT, is the Information Technology Act, 2000. The IT Act contains a number of penal provisions which make it illegal to indulge in practices such as hacking, online fraud, etc., which have been recognised internationally as wrongful acts using ICT (please refer to the answer under section (a) above for details of the penal provisions). Further, section 1(2) of the IT Act provides that it also applies to any offence or contravention thereunder committed outside India by any person. This means that the IT Act also covers internationally wrongful acts using ICTs.
2d. States should consider how best to cooperate to exchange information, assist each other, prosecute terrorist and criminal use of ICTs and implement other cooperative measures to address such threats. States may need to consider whether new measures need to be developed in this respect
There are a number of ways in which states can share information by using widely accepted formal processes precisely for this purpose. Some of the most common methods of international exchange used by India are given below.
MLATs
Although the exact process by which intelligence agencies in India share information with other agencies internationally is unclear, India is a member of Interpol, and the Central Bureau of Investigation, a federal investigating agency functioning under the Central Government's Department of Personnel & Training, is designated as the National Central Bureau of India. A very useful tool in the effort to establish cross-border cooperation is the Mutual Legal Assistance Treaty (MLAT). MLATs are extremely important for law enforcement agencies, governments and the private sector, since they act as formal mechanisms for access to data which falls under different jurisdictions. India currently has MLATs with 39 countries.[7]
Although MLATs are considered to be a useful mechanism to ensure international cooperation, there are certain criticisms of the MLAT mechanism, such as:
- Lack of Clear Timetables: Although MLATs do provide for broad time frames, they do not provide for more specific timetables and usually have no provision for an expedited process. For example, it is believed that requests to the U.S. can take from six weeks (for requests with minimal issues complying with U.S. legal standards) to 10 months to process.[8] Such a long time frame is clearly a burden on the investigation process, and MLATs have been criticised as ineffectual because they may not provide information fast enough;
- Variation in Legal Standards: The legal standards for requesting information, e.g. the circumstances under which information can be requested or what information can be requested, differ from jurisdiction to jurisdiction. These differences are often not understood by requesting nations, thus causing problems in accessing information;[9]
- Inefficient Legal Process: The legal process to carry out requests through the MLAT process is often considered too cumbersome and inefficient.
- Non-incorporation of Technological Challenges: MLATs have not been updated to meet the challenges brought about by technology, especially the advent of networked infrastructure and ICT, which raise issues of attribution and cross-jurisdictional access to information.[10]
Extradition
Extradition generally refers to the surrender of an alleged or convicted criminal by one State to another. More precisely, it may be defined as the process by which one State upon the request of another surrenders to the latter a person found within its jurisdiction for trial and punishment or, if he has been already convicted, only for punishment, on account of a crime punishable by the laws of the requesting State and committed outside the territory of the requested State. Extradition plays an important role in the international battle against crime and owes its existence to the so-called principle of territoriality of criminal law, according to which a State will not apply its penal statutes to acts committed outside its own boundaries except where the protection of special national interests is at stake. India currently has extradition treaties with 37 countries and extradition arrangements with an additional 8 countries.[11]
Letters Rogatory
A Letter Rogatory is a formal communication in writing sent by the Court in which an action is pending to a foreign court or Judge requesting that the testimony of a witness residing within the jurisdiction of that foreign court be formally taken under its direction and transmitted to the issuing court making the request for use in a pending legal contest or action. This request entirely depends upon the comity of courts towards each other and usages of the court of another nation.
Apart from the above methods, India also regularly signs bilateral MoUs with various countries on law enforcement and information sharing, especially in cases related to terrorism. India also regularly provides help to and receives help from Interpol, the International Criminal Police Organisation, for purposes of investigation, arrests and sharing of information.[12]
Other than these formal methods, states sometimes share information on an informal basis, where the parties help each other purely on the basis of goodwill, or sometimes even coercion. A recent example of informal cooperation between the security agencies of India and Nepal, although not in the realm of cyberspace, was the arrest of Yasin Bhatkal, leader of the banned organisation Indian Mujahideen (IM), where the Indian security agencies allegedly sought informal help from their Nepalese counterparts to arrest a person who had long been wanted by the Indian security agencies.[13]
In the current environment of growing ICT and increased cross-border information sharing between individuals, the role of the private companies that carry this information has become much more pronounced. This changed dynamic raises new problems, especially because many of these companies do not have a physical presence in all the countries where they offer services over the internet. This leads to problems for states in terms of law enforcement, especially if they want information from companies that have no incentive or desire to provide it. These circumstances lead to a number of prickly situations where states, often frustrated in using legal and formal means, resort to informal pressure to get the companies to agree to data localization requests, encryption/decryption standards and keys, back doors, and other demands. The most famous of these in the Indian context was the heated exchange between the Indian government and Canada-based BlackBerry Limited (formerly Research in Motion) over data requests on its BlackBerry enterprise platform.
2e. States, in ensuring the secure use of ICTs, should respect Human Rights Council resolutions 20/8 and 26/13 on the promotion, protection and enjoyment of human rights on the Internet, as well as General Assembly resolutions 68/167 and 69/166 on the right to privacy in the digital age, to guarantee full respect for human rights, including the right to freedom of expression
Right to Privacy
The right to privacy has been recognised as a constitutionally protected fundamental right in India through judicial interpretation of the right to life, which is specifically guaranteed under the Constitution of India. Since the right to privacy was read into the Constitution by judicial pronouncements, it could be said that the right to privacy in India is a creature of the courts. For this reason it may be useful to list some of the major cases which deal with the right to privacy in India:
i. Kharak Singh v. State of U.P.,[14] (1962)
a. For the first time, the courts recognized the right to privacy as a fundamental right, although in a minority opinion.
b. Located the right to privacy under both the right to personal liberty and the freedom of movement.
ii. Govind v. State of M.P.,[15] (1975)
a. Adopted the minority opinion of Kharak Singh as the opinion of the Supreme Court and held that the right to privacy is a fundamental right.
b. Derived the right to privacy from both the right to life and personal liberty as well as the freedoms of speech and movement.
c. The right to privacy was said to encompass and protect the personal intimacies of the home, the family, marriage, motherhood, procreation and child rearing.
d. Established that the right to privacy can be violated in the following circumstances: (i) an important countervailing interest which is superior; (ii) a compelling state interest; and (iii) a compelling public interest.
iii. R. Rajagopal v. Union of India,[16] (1994)
a. Recognised that the right to privacy is a part of the right to personal liberty guaranteed under the Constitution.
b. Recognised that the right to privacy can be both a tort (actionable claim) and a fundamental right.
c. Established that a citizen has a right to safeguard the privacy of his own, his family, marriage, procreation, motherhood, child-bearing and education, among other matters, and nobody can publish anything regarding the same unless (i) he consents or voluntarily thrusts himself into controversy, (ii) the publication is made using material which is in public records (except in cases of rape, kidnapping and abduction), or (iii) he is a public servant and the matter relates to the discharge of his official duties.
iv. People's Union for Civil Liberties v. Union of India,[17] (1996)
a. Extended the right to privacy to include communications privacy.
b. Laid down guidelines which form the backbone for checks and balances in interception provisions.
v. District Registrar and Collector, Hyderabad and another v. Canara Bank and another,[18] (2004)
a. Refers to personal liberty, freedom of expression and freedom of movement as the fundamental rights which give rise to the right to privacy.
b. Held that the right to privacy deals with persons and not places.
c. Intrusion into privacy may be by - (1) legislative provisions, (2) administrative/executive orders and (3) judicial orders.
vi. Selvi and others v. State of Karnataka and others,[19] (2010)
a. The Court acknowledged the distinction between bodily/physical privacy and mental privacy.
b. Subjecting a person to techniques such as narcoanalysis, polygraph examination and the Brain Electrical Activation Profile (BEAP) test without consent violates the subject's mental privacy.
Although the judgments in the above cases (except People's Union for Civil Liberties v. Union of India) were not delivered in a telecommunications context, the ease with which these principles were applied in People's Union for Civil Liberties v. Union of India suggests that they would, where applicable, be applied in the context of ICT as well, and are not limited to the non-digital world.
It must however be noted that, due to some incongruities in the interpretation of the earlier judgments, the Supreme Court has recently referred the matter of the existence and scope of the right to privacy in India to a larger bench, so as to bring clarity regarding the exact scope of the right in Indian law. The very concept that the Constitution of India guarantees a right to privacy was challenged due to an "unresolved contradiction" in judicial pronouncements. This "unresolved contradiction" arose because in the cases of M.P. Sharma & Others v. Satish Chandra & Others[20] and Kharak Singh v. State of U.P. & Others[21] (decided by eight and six judges respectively), the majority judgments of the Supreme Court had categorically denied the existence of a right to privacy under the Indian Constitution.
However, the later case of Govind v. State of M.P. and another[22] (decided by a two-judge bench of the Supreme Court) relied upon the opinion of the minority of two judges in Kharak Singh to hold that a right to privacy does exist and is guaranteed as a fundamental right under the Constitution of India, without addressing the fact that this was a minority opinion and that the majority opinion had denied the existence of the right to privacy. Thereafter, a large number of cases have held the right to privacy to be a fundamental right, the most important of which are R. Rajagopal & Another v. State of Tamil Nadu & Others[23] (popularly known as Auto Shankar's case) and People's Union for Civil Liberties (PUCL) v. Union of India & Another.[24] However, as the Supreme Court noted in its August 11, 2015 order, all these judgments were decided by benches of only two or three judges, which could not have overturned the judgments given by larger benches.[25] It was to resolve this judicial incongruity that the Supreme Court referred the issue of the existence and scope of the right to privacy in India to a larger bench.
Freedom of Expression
Freedom of expression is one of the most important fundamental rights guaranteed under the Constitution and has been vigorously protected by the judiciary on a number of occasions when it has been threatened. With the advent of social media, the entire dynamics of the freedom of speech and expression have changed, in that it is now possible for every individual with an internet connection and a Facebook/Twitter/Whatsapp account to reach millions of people without spending any extra money. This ability to reach a much larger and wider audience also led to greater friction between people holding different opinions. As the internet removed the otherwise filtering effects of geography and made it easier for people to communicate with each other, the advent of social media made it easier for them to communicate with a larger number of people at the same time. This ability to communicate within a group also gave rise to "debates" which often turned ugly, highlighting how easy it is to harass people on social media.
This concern over harassment led a number of people to call for greater censorship of social media, and it was perhaps this concern which gave rise to the biggest challenge to the freedom of speech and expression in the online world: section 66A of the Information Technology Act, 2000, which made it an offence to send information which was "grossly offensive" (s.66A(a)) or which caused "annoyance" or "inconvenience" while being known to be false (s.66A(c)). Online activists, including the Centre for Internet and Society, widely considered this section a tool for the government to silence those who criticised it. In fact, statistics compiled by the National Crime Records Bureau for 2014 revealed that 2,402 people, including 29 women, were arrested in 4,192 cases under section 66A, which accounted for nearly 60% of all arrests under the IT Act and 40% of arrests for cyber crimes in 2014.[26]
The section was finally struck down by the Supreme Court in 2015 in Shreya Singhal v. Union of India[27] on the ground of being too vague. This decision was seen as a huge victory for the campaign for freedom of speech and expression in the virtual world, since the section was frequently used by the state (or rather the government in power) to muzzle speech critical of the incumbent government or political leaders. Section 66A made it an offence to send, by means of a computer resource or a communication device, any information that was "grossly offensive or has menacing character" or "which he knows to be false, but for the purpose of causing annoyance, inconvenience, danger, obstruction, insult, injury, criminal intimidation, enmity, hatred, or ill will". The Court held these terms to be too vague and wide, falling foul of the limited restrictions constitutionally permitted on the freedom of expression, and therefore struck down section 66A.
2f. A State should not conduct or knowingly support ICT activity contrary to its obligations under international law that intentionally damages critical infrastructure or otherwise impairs the use and operation of critical infrastructure to provide services to the public
The researchers of this report could not locate any norms in India which address this issue. To the best of their knowledge, India does not support any ICT activity that intentionally damages critical infrastructure or impairs the use and operation of critical infrastructure.
2g. States should take appropriate measures to protect their critical infrastructure from ICT threats, taking into account General Assembly resolution 58/199 on the creation of a global culture of cybersecurity and the protection of critical information infrastructures, and other relevant resolutions
1. Section 70 of the IT Act gives the government the authority to declare any computer system which directly affects any critical information infrastructure to be a protected system. The term "critical information infrastructure" (CII) is defined in the IT Act as "the computer resource, the incapacitation or destruction of which, shall have debilitating impact on national security, economy, public health or safety." Once the government declares any computer resource a protected system, it gets the authority to prescribe information security practices for such a system as well as to identify the persons who are authorised to access it. Any person who accesses a protected system in contravention of Section 70 of the IT Act is liable to imprisonment for a maximum period of 10 years as well as a fine. Further, section 70A of the IT Act gives the government the power to name a national nodal agency in respect of CII and to prescribe the manner in which such agency is to perform its duties. In pursuance of its powers under section 70A, the government has designated the National Critical Information Infrastructure Protection Centre (NCIIPC), situated in the JNU campus, as the nodal agency.[28] This agency is a part of, and under the administrative control of, the National Technical Research Organisation (NTRO).[29]
2. The functions of the NCIIPC, and the manner of performing them, have been prescribed in the Information Technology (National Critical Information Infrastructure Protection Centre and Manner of Performing Functions and Duties) Rules, 2013.[30] According to these Rules the functions of the NCIIPC include, inter alia, (i) protecting CII and giving advice to reduce its vulnerabilities against cyber terrorism, cyber warfare and other threats; (ii) identification of all critical infrastructure elements so that they can be notified by the government; (iii) providing strategic leadership and coherence across the government to respond to cyber security threats against CII; (iv) coordinating, sharing, monitoring, analysing and forecasting national-level threats to CII for policy guidance, expertise sharing and situational awareness for early warning alerts; (v) assisting in the development of appropriate plans, adoption of standards, sharing of best practices and refining of procurement processes for CII; (vi) undertaking and funding research and development to innovate future technologies and collaborating with PSUs, academia and international partners for protection of CII; (vii) organising training and awareness programmes and development of audit and certification agencies for protection of CII; (viii) developing and executing national and international cooperation strategies for protection of CII; (ix) issuing guidelines, advisories and vulnerability notes relating to CII and practices, procedures, prevention and responses in consultation with CERT-In and other organisations; (x) exchanging information with CERT-In, especially in relation to cyber incidents; and (xi) calling for information and giving directions to critical sectors or persons having a critical impact on CII, in the event of any threat to CII.[31]
3. In 2013 the NCIIPC released (though not publicly) Guidelines for the Protection of National Critical Information Infrastructure[32] (CII Guidelines), which presented forty controls and respective guiding principles for the protection of CII. It is expected that these controls and guiding principles will help critical sectors to draw up a CII protection roadmap to achieve safe, secure and resilient CII for India. The 'Guidelines for Forty Critical Controls' is considered by the NCIIPC to be a significant milestone in its efforts for the protection of the nation's critical information assets. These forty controls can be found in Section 6 (Best Practices, Controls and Guidelines) of the CII Guidelines. It must be noted that the CII Guidelines were drafted after taking inputs from a number of stakeholders such as the National Stock Exchange, the Airports Authority of India, the National Thermal Power Corporation, the Reserve Bank of India, the Indian Railways, the Telecom Regulatory Authority of India, Bharat Sanchar Nigam Limited, etc. This exercise of taking inputs from different stakeholders, as well as developing a standard covering as many as forty aspects of security, suggests that the NCIIPC is taking steps in the right direction.
4. The Recommendations on Telecommunication Infrastructure Policy issued by the Telecom Regulatory Authority of India in April 2011 are silent on the issue of security of critical information infrastructure. However, the National Policy on Information Technology, 2012 (NPIT) does address the issue of security of cyber space by saying that the government should make efforts to do the following:
"9.1 To undertake policy, promotion and enabling actions for compliance to international security best practices and conformity assessment (product, process, technology & people) and incentives for compliance.
9.2 To promote indigenous development of suitable security techniques & technology through frontier technology research, solution oriented research, proof of concept, pilot development etc. and deployment of secure IT products/processes
9.3 To create a culture of cyber security for responsible user behavior & actions including building capacities and awareness campaigns.
9.4 To create, establish and operate an 'Information Security Assurance Framework'."
5. The Department of Information Technology has formed the Computer Emergency Response Team of India (CERT-In) to enhance the security of India's communications and information infrastructure through proactive action and effective collaboration. The Information Security Policy on Protection of Critical Infrastructure released by CERT-In considers information recorded, processed or stored in electronic media a valuable asset and is geared towards the protection of such assets. The policy recognises the importance of critical information infrastructure networks and notes that any disruption of the operation of such networks is likely to have devastating effects. It prescribes that personnel with programme delivery responsibilities should also recognise the importance of the security of information resources and their management. Due to this recognition of the growing networked nature of government and critical organisations, and of the need for proper vulnerability analysis and effective management of information security risks, the Department prescribes the following information security policy:
"In order to reduce the risk of cyber attacks and improve upon the security posture of critical information infrastructure, Government and critical sector organizations are required to do the following on priority:
- Identify a member of senior management, as Chief Information Security Officer (CISO), knowledgeable in the nature of information security & related issues and designate him/her as a 'Point of contact', responsible for coordinating security policy compliance efforts and to regularly interact with the Indian Computer Emergency Response Team (CERT-In), Department of Information Technology (DIT), which is the nodal agency for coordinating all actions pertaining to cyber security;
- Prepare information security plan and implement the security control measures as per ISI/ISO/IEC 27001: 2005 and other guidelines/standards, as appropriate;
- Carry out periodic IT security risk assessments and determine acceptable level of risks, consistent with criticality of business/functional requirements, likely impact on business/ functions and achievement of organisational goals/objectives;
- Periodically test and evaluate the adequacy and effectiveness of technical security control measures implemented for IT systems and networks. Especially, test and evaluation may become necessary after each significant change to the IT applications/systems/networks and can include, as appropriate, the following:
➢ Penetration Testing (both announced as well as unannounced)
➢ Vulnerability Assessment
➢ Application Security Testing
➢ Web Security Testing
- Carry out Audit of Information infrastructure on an annual basis and when there is major upgradation/change in the Information Technology Infrastructure, by an independent IT Security Auditing organization; …
- Report to CERT-In the cyber security incidents, as and when they occur and the status of cyber security, periodically."
6. The Department of Electronics and Information Technology (DEITY) released the National Policy on Electronics in 2012, which contained the government's take on the electronics industry in India. Section 5 of the policy deals with cyber security and states that to create a completely secure cyber ecosystem in the country, careful and due attention is required to the creation of well-defined technology and systems, the use of appropriate technology and, more importantly, the development of appropriate products and solutions. The priorities for action should be suitable design and development of indigenous products through frontier technology/product-oriented research, and testing and validation of the security of products meeting the protection profile requirements needed to secure the ICT infrastructure and cyber space of the country.
7. In addition, CERT-In has issued an Information Security Management Implementation Guide for Government Organisations.[33] CERT-In has also prescribed progressive steps for the implementation of an Information Security Management System in Government & Critical Sectors as per ISO 27001. The steps prescribed are as follows:
- Identification of a Point-of-Contact (POC) / Chief Information Security Officer (CISO) for coordinating information security policy implementation efforts and communication with CERT-In
- Information Security Awareness Programme
- Determination of the general risk environment of the organization (low / medium / high) depending on the nature of the web and networking environment, criticality of business functions and impact of information security incidents on the organization, business activities, assets / resources and individuals
- Status appraisal and gap analysis against ISO 27001 based best information security practices
- Risk assessment covering evaluation of threat perception and technical and operational vulnerabilities
- Comprehensive risk mitigation plan including selection of appropriate information security controls as per ISO 27001 based best information security practices
- Documentation of agreed information security control measures in the form of information security policy manual, procedure manual and work instructions
- Implementation of information security control measures (managerial, technical and operational)
- Testing & evaluation of technical information security control measures for their adequacy & effectiveness and audit of IT applications/systems/networks by an independent information security auditing organization (penetration testing, vulnerability assessment, application security testing, web security testing, LAN audits, etc.)
- Information Security Management assessment and certification against ISO 27001 standard, preferably by an independent & accredited organization
8. The Unified License for providing various telecommunication services also contains certain terms on how to deal with telecommunication infrastructure in light of national security, which include the following requirements:
- Providing necessary facilities to the Government to counteract espionage, subversive act, sabotage or any other unlawful activity;
- Giving full access to its network and equipment to the authorised persons for technical scrutiny and inspection;
- Obtaining security clearance for all foreign nationals deployed for installation, operation and maintenance of the network;
- Being completely responsible for the security of its network and having organizational policy on security and security management of its network including Network forensics, Network Hardening, Network penetration test, Risk assessment;
- Auditing its network or getting the network audited from security point of view once in a financial year from a network audit and certification agency;
- Inducting into its telecommunications network only those network elements which have been tested as per relevant contemporary Indian or international security standards;
- Including all contemporary security related features (including communication security) as prescribed under relevant security standards while procuring the equipment and implementing all such contemporary features into the network;
- Keeping requisite records of operations in the network;
- Monitoring all intrusions, attacks and frauds on its technical facilities and providing reports on the same to the Licensor.
Further statutory restrictions on tampering with critical infrastructure are already contained in the Telegraph Act and have been discussed above, though the penalties provided may need to be increased if they are to act as a deterrent in this age where the stakes are much higher.
2h. States should respond to appropriate requests for assistance by another State whose critical infrastructure is subject to malicious ICT acts. States should also respond to appropriate requests to mitigate malicious ICT activity aimed at the critical infrastructure of another State emanating from their territory, taking into account due regard for sovereignty
There is yet to be a publicly acknowledged request from a foreign government asking the Indian government to take steps to prevent malicious ICT acts originating from its territory.
2i. States should take reasonable steps to ensure the integrity of the supply chain so that end users can have confidence in the security of ICT products. States should seek to prevent the proliferation of malicious ICT tools and techniques and the use of harmful hidden functions;
Section 4 of the National Electronics Policy, 2012 talks about "Developing and Mandating Standards" and says that in order to curb the inflow of sub-standard and unsafe electronic products the government should mandate technical and safety standards which conform to international standards and do the following:
- Develop Indian standards to meet specific Indian conditions including climatic, power supply, and handling and other conditions etc., by suitably reviewing existing standards.
- Mandate technical standards in the interest of public health and safety.
- Set up an institutional mechanism within Department of Information Technology for mandating compliance to standards for electronics products.
- Develop a National Policy Framework for enforcement and use of Standards and Quality Management Processes.
- Strengthen the lab infrastructure for testing of electronic products and encouraging development of conformity assessment infrastructure by private participation.
- Create awareness amongst consumers against sub-standard and spurious electronic products.
- Build capacity within the Government and public sector for developing and mandating standards.
- Actively participate in the international development of standards in the Electronic System Design and Manufacturing sector.
2j. States should encourage responsible reporting of ICT vulnerabilities and share associated information on available remedies to such vulnerabilities to limit and possibly eliminate potential threats to ICTs and ICT-dependent infrastructure
Under section 70B of the IT Act, India has established the Computer Emergency Response Team (CERT-In) to serve as the national agency for incident response. The functions mandated to be performed by CERT-In as per the IT Act are:
- Collection, analysis and dissemination of information on cyber incidents;
- Forecasting and alerts of cyber security incidents;
- Emergency measures for handling cyber security incidents;
- Coordination of cyber incidents response activities;
- Issuing guidelines, advisories, vulnerability notes and white papers relating to information security practices, procedures, prevention, response and reporting of cyber incidents;
- Such other functions relating to cyber security as may be prescribed.
CERT-In also publishes information regarding various cyber threats on its website so as to keep internet users aware of the latest threats in the online world. Such information can be accessed either on the main page of the CERT-In website or under the Advisories section of the website.[34]
2k. States should not conduct or knowingly support activity to harm the information systems of the authorized emergency response teams (sometimes known as computer emergency response teams or cyber security incident response teams) of another State. A State should not use authorized emergency response teams to engage in malicious international activity.
There are no official or public reports of India using its CERT-In to harm the information systems of another state, although it is highly unlikely that any state would publicly acknowledge such activities even if it was indulging in them.
3. Conclusion
As can be seen from the discussion above, the statutory, regulatory and policy regime in India addresses most of the cyber security norms in some manner or the other, but these efforts often fall short of fully meeting them. While the Information Technology Act, along with the Rules thereunder, is the umbrella legislation for digital transactions in India and does address some of the issues mentioned above, it does not address some of the problems that arise out of a greater reliance on the internet, such as spamming, trolling and online harassment. Although some of these acts may be addressed by applying ordinary legislation to the online world, this does not always take into account the unique features and complexities of committing these acts in the online world.
In the area of exchange of information between states, India has entered into a number of MLATs and extradition treaties, and frequently issues Letters Rogatory. Yet these mechanisms may not be adequate to address the needs of crime prevention in the age of ICT, as crime prevention often requires exchange of information on a real-time basis, which is not possible with the bureaucratic procedures involved in the MLAT process. There also need to be stronger standards applicable to ICT equipment, including imported equipment, especially in light of the fact that security concerns related to Chinese ICT equipment have been raised quite frequently in the past. There also needs to be a better system of reporting ICT vulnerabilities to CERT-In or other authorised agencies so that mitigation measures can be implemented in time.
It should be noted that the work of the Group of Experts is not complete, since the General Assembly has asked the Secretary General to form a new Group of Experts which would report back to the Secretary General in 2017. It is imperative that the Government of India realise the importance of the work being done by the Group of Experts and take measures to ensure that a representative from India is included in it, or at least that India's comments and concerns are taken up and addressed by the Group. Meanwhile, India can begin by strengthening domestic privacy safeguards, improving the transparency and efficiency of relevant policies and processes, and looking towards solutions that respect rights and strengthen security. Brute-force solutions such as demands for back doors, unfair and unreasonable encryption regulation, and data localisation requirements will not help propel India forward in international discussions, dialogues, or agreements on cross-border sharing of information. Though the recommendations of the Group of Experts are welcome, beyond a preliminary mention of privacy and freedom of expression, the rights of individuals, the ways in which these can be protected, and the various components that go into supporting those rights, including redress, transparency, and due process measures, were inadequately addressed.
[1] The term "cyberspace" has been defined in the Oxford English Dictionary as the notional environment in which communication over computer networks occurs. Although the scope of this paper is not to discuss the meaning of this term, it was felt that a simple definition would be useful to better define the parameters of the discussion.
[2] https://s3.amazonaws.com/unoda-web/wp-content/uploads/2016/01/A-RES-70-237-Information-Security.pdf
[3] https://www.justsecurity.org/29203/british-searches-america-tremendous-opportunity/
[5] Provided that the provisions of section 67, section 67A and this section does not extend to any book, pamphlet, paper, writing, drawing, painting, representation or figure in electronic form-
(i) The publication of which is proved to be justified as being for the public good on the ground that such book, pamphlet, paper writing, drawing, painting, representation or figure is in the interest of science, literature, art or learning or other objects of general concern; or
(ii) which is kept or used for bona fide heritage or religious purposes
Explanation: For the purposes of this section, "children" means a person who has not completed the age of 18 years.
[7] List of the countries is available at http://cbi.nic.in/interpol/mlats.php
[9] Peter Swire & Justin D. Hemmings, "Re-Engineering the Mutual Legal Assistance Treaty Process", http://www.heinz.cmu.edu/~acquisti/SHB2015/Swire.docx, cf. https://www.lawfareblog.com/mlat-reform-some-thoughts-civil-society .
[10] MLATS and International Cooperation for Law Enforcement Purposes, available at http://cis-india.org/internet-governance/blog/presentation-on-mlats.pdf
[11] The full list of the countries with which India has agreed an MLAT is available at http://cbi.nic.in/interpol/extradition.php
[13] http://www.firstpost.com/india/how-the-police-tracked-and-arrested-im-founder-yasin-bhatkal-1071755.html
[20] AIR 1954 SC 300. In para 18 of the Judgment it was held: "A power of search and seizure is in any system of jurisprudence an overriding power of the State for the protection of social security and that power is necessarily regulated by law. When the Constitution makers have thought fit not to subject such regulation to constitutional limitations by recognition of a fundamental right to privacy, analogous to the American Fourth Amendment, we have no justification to import it, into a totally different fundamental right, by some process of strained construction."
[21] AIR 1963 SC 1295. In para 20 of the judgment it was held: "… Nor do we consider that Art. 21 has any relevance in the context as was sought to be suggested by learned counsel for the petitioner. As already pointed out, the right of privacy is not a guaranteed right under our Constitution and therefore the attempt to ascertain the movement of an individual which is merely a manner in which privacy is invaded is not an infringement of a fundamental right guaranteed by Part III."
[22] (1975) 2 SCC 148.
[23] (1994) 6 SCC 632.
[24] (1997) 1 SCC 301.
[26] http://cis-india.org/internet-governance/news/hindustan-times-august-20-2015-aloke-tikku-stats-from-2014-reveal-horror-of-scrapped-section-66-a-of-it-act
[31] Rule 4 of the Information Technology (National Critical Information Infrastructure Protection Centre and Manner of Performing Functions and Duties) Rules, 2013.
[32] Since these Guidelines were not publicly released they are not available on any government website. In this paper we have relied on a version available on a private website at http://perry4law.org/cecsrdi/wp-content/uploads/2013/12/Guidelines-For-Protection-Of-National-Critical-Information-Infrastructure.pdf
[33] Available at http://www.cert-in.org.in/
List of Acronyms
- ICTs – Information and Communication Technologies
- GGE – Group of Governmental Experts
- EU – European Union
- DLC-ICT – India-Belarus Digital Learning Center
- IT Act – Information Technology Act, 2000
- UL – Unified License
- DEITY – Department of Electronics and Information Technology
- IT – Information Technology
- ISO – International Organization for Standardization
- CERT – Computer Emergency Response Team
- CERT-In – Computer Emergency Response Team, India
- MLAT – Mutual Legal Assistance Treaty
- CII – Critical Information Infrastructure
- NCIIPC – National Critical Information Infrastructure Protection Centre
- NTRO – National Technical Research Organisation
- NPIT – National Policy on Information Technology
- CISO – Chief Information Security Officer
Book Review: Apocalypse Now Redux
The review was published in the Indian Express on August 6, 2016.
Book: Things That Can and Cannot Be Said
Authors: Arundhati Roy & John Cusack
Publication: Juggernaut
Pages: 132
Price: Rs 250
The title of the book — Things That Can and Cannot be Said — demands an imperative. It is as if Arundhati Roy and John Cusack, aware of their internal turmoil in dealing with a world that is rapidly becoming unintelligible, though not incomprehensible, are demanding an order where none exists. Hence, they are advocating for certainty and assurance, only to undermine it, ironically, through their own freely associative writing that mimics linear time and causative narrative. This deep-seated irony of needing to say something, but knowing that saying it is not going to shine a divining light on the sordid realities of the world that is being managed through the production of grand structures like valorous nation states, virtuous civil societies, the obsequious NGO-isation of radical action, and the persistent neutering of justice through the benign vocabulary of human rights, defines the oeuvre, the politics and the poetics of the book. Written like a scrap book, filled with excerpts from long conversations scattered over time and space, annotated by reminiscences of books read long ago that have seared their imprints on the mind, and events that are simultaneously platitudinous for their status as global landmarks and fiercely personal for the scars that they have left on the minds of the authors, the book remains an engaging, if a somewhat freewheeling, ride into a political critique that makes itself all the more palatable and disconcerting for the levity, irreverence and the dark sense of humour that accompanies it.
Composed in alternating chapters, the first half of the book is about Cusack and Roy laying themselves bare. They spare no words, square no edges, and put their personal, political and collective wounds on display with humble pride and proud humility. Cusack’s experience as a screenplay writer comes in handy — he rescues what could have been a long tirade, into a series of conversations. The familiar narratives are rehistoricised and de-territorialised, put into new contexts while eschewing the older ones, thus providing a large landscape that refers to state-sponsored genocide, structural reorganisation of nation states, the dying edge of political action, the overwhelming but invisible presence of capital, and the dithering state of social justice that treats human beings like things. Cusack, identifying the poetic genius of Roy, gives her centre stage, making her the voice in command.
Roy, for her part, seems to have enjoyed this moment in the soapbox — something that she has been doing quite effectively and provocatively to a national and global audience — and gives it her all. There are moments when the text feels indulgent, when the voice feels a little relentless, when the almost schizophrenic global and historical references become a litany of mixed-up events that might have required further nuance and deeper interpretation. However, the whimsical style of Roy’s narrative, with her sense of what is right, and her demeanour that remains friendly, curious and disarming, saves the text from being heavy handed, even when it does dissolve into cloying poignancy and makes you pause, just so that you can breathe.
Surprisingly, it is the second part of the book, where the two encounter Edward Snowden along with Daniel Ellsberg, the “Snowden of the 1960s” who had leaked the Pentagon papers, that falters. Snowden had jocularly mentioned that Roy was there to “radicalise him”. She does that, but in a way that doesn’t give us anything more than what we already know. While Cusack and Roy were committed to getting to know Snowden beyond his systems-man image, there wasn’t much that they could uncover, either in dialogue or in discourse, that could have told us more, endeared us further to possibly the most over-exposed person in recent times. However, one realises that the genius of the narrative is actually in reminding us how transparent Edward Snowden has become to us. We know all kinds of things about this young man — from his girlfriends past to his actions future, from his values and convictions to his opinion on the NSA watching people’s naked pictures — and yet, what has been missing in the Snowden files, has been the larger arc of global politics, social reordering, and perhaps, a glimpse of the post-nation future that Snowden might have seen in his act of whistleblowing that is going to remain the landmark moment that defines the rest of this century.
Once you have gotten over the fact that this is not a book about Snowden, the expectations are better tailored for what is to come, and suddenly, the long prelude to the meeting falls into place. Snowden matches Roy and Cusack in whimsy, irony, political conviction, and the sacred faith in human values that make you want to give them all a fierce hug of hesitant reassurance. What Snowden says, what Roy and Cusack make of it, and how they leave us, almost abruptly at the end, breathless, unnerved, and severely conflicted about some of the 20th century structures like society, activism, nation states, governance, communication, technologies, sharing and caring is what the book has to be read for. The tight screen-writing skills of Cusack meet the perfect timing of Roy’s prose, and all of it becomes surreal, futuristic and indelibly real when it gets anchored on the physical presence of Snowden, who, in exile, talks achingly of the home that has thrown him out and the home that he can never really call his own.
And while there are lapses — fragments, translations and evocations which might have needed more explanations to have their pedagogic intent shine through — there is no denying that, in all its flaws, much like the narrators, the book manages to first immerse you in the cold shock of a sobering reality, clearly positioning the apocalypse as the now, and then drags you out and wraps you up in a warm blanket, opening up forms of critique, formats of intervention, and functions of political commitment towards saying things that have and have not been said. The book should have, perhaps, been titled what could, would, should have been said, but can’t, won’t, shan’t be said — not because of anything else, but because it seems futile.
New Approaches to Information Privacy – Revisiting the Purpose Limitation Principle
This was published in Digital Policy Portal on July 13, 2016.
Introduction
Last year, Mukul Rohatgi, the Attorney General of India, called into question the existing jurisprudence of the last 50 years on the constitutional validity of the right to privacy.1 Rohatgi was rebutting the arguments on privacy made against Aadhaar, the unique identity project initiated and implemented in the country without any legislative mandate.2 The question of the right to privacy becomes all the more relevant in the context of events over the last few years—among them, the significant rise in data collection by the state through various e-governance schemes,3 systematic access to personal data by various wings of the state through a host of surveillance and law enforcement initiatives launched in the last decade,4 the multifold increase in the number of Indians online, and the ubiquitous collection of personal data by private parties.5
These developments have led to calls for a comprehensive privacy legislation in India and the adoption of the National Privacy Principles laid down by the Expert Committee led by Justice AP Shah.6 There is privacy-protecting legislation currently in place, such as the Information Technology Act, 2000 (IT Act), which was enacted to govern digital content and communication and to provide legal recognition to electronic transactions. This legislation has provisions that can safeguard—and dilute—online privacy. At the heart of the data protection provisions in the IT Act lie section 43A and the rules framed under it, i.e., the Reasonable security practices and procedures and sensitive personal data or information Rules.7 Section 43A mandates that body corporates which receive, possess, store, deal with, or handle any personal data implement and maintain 'reasonable security practices', failing which they are liable to compensate those affected. Rules drafted under this provision also impose a number of data protection obligations on corporations, such as the need to seek consent before collection, to specify the purposes of data collection, and to restrict the use of data to such purposes only. Questions have been raised about the validity of the Section 43A Rules, as they seek to do much more than the parent provision, Section 43A, mandates, namely that entities maintain reasonable security practices.
Privacy as control?
Even setting aside the issue of legal validity, the kind of data protection framework envisioned by the Section 43A Rules is proving to be outdated in the context of how data is now being collected and processed. The focus of the Section 43A Rules—as well as that of draft privacy legislations in India8—is based on the idea of individual control. Most apt is Alan Westin's definition of privacy: "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others."9 Westin and his followers rely on the normative idea of "informational self-determination": the notion of a pure, disembodied, and atomistic self, capable of making rational and isolated choices in order to assert complete control over personal information. More and more, this has proved to be a fiction, especially in a networked society.
Much before the need for governance of information technologies had reached a critical mass in India, Western countries were already dealing with the implications of the use of these technologies on personal data. In 1973, the US Department of Health, Education and Welfare appointed a committee to address this issue, leading to a report called 'Records, Computers and the Rights of Citizens.'10 The Committee's mandate was to "explore the impact of computers on record keeping about individuals and, in addition, to inquire into, and make recommendations regarding, the use of the Social Security number." The Report articulated five principles which were to be the basis of fair information practices: transparency; use limitation; access and correction; data quality; and security. Building upon these principles, the Committee of Ministers of the Organisation for Economic Co-operation and Development (OECD) arrived at the Guidelines on the Protection of Privacy and Transborder Flows of Personal Data in 1980.11 These principles—Collection Limitation, Data Quality, Purpose Specification, Use Limitation, Security Safeguards, Openness, Individual Participation and Accountability—are what inform most data protection regulations today, including the APEC Framework, the EU Data Protection Directive, and the Section 43A Rules and Justice AP Shah Principles in India.
Fred Cate describes the import of these privacy regimes as follows:
“All of these data protection instruments reflect the same approach: tell individuals what data you wish to collect or use, give them a choice, grant them access, secure those data with appropriate technologies and procedures, and be subject to third-party enforcement if you fail to comply with these requirements or individuals’ expressed preferences”12
This is in line with Alan Westin's idea of privacy exercised through individual control. The focus of these principles is therefore on empowering individuals to exercise choice, not on protecting them from harmful or unnecessary practices of data collection and processing. The author of this article has earlier written13 about the sheer inefficacy of this framework, which places the responsibility on individuals. Other scholars like Daniel Solove,14 Jonathan Obar15 and Fred Cate16 have also written about the failure of the traditional data protection practices of notice and consent. While those essays dealt with the privacy principles of choice and informed consent, this paper will focus on the principle of purpose limitation.
Purpose Limitation and Impact of Big Data
The principle of purpose limitation, or purpose specification, seeks to ensure the following four objectives:
- Personal information collected and processed should be adequate and relevant to the purposes for which they are processed.
- Entities should collect, process, disclose, make available, or otherwise use personal information only for the stated purposes.
- In case of a change in purpose, the data subject needs to be informed and their consent has to be obtained.
- After personal information has been used in accordance with the identified purpose, it has to be destroyed as per the identified procedures.
The purpose limitation principle, along with the data minimisation principle—which requires that no more data be processed than is necessary for the stated purpose—aims to limit the use of data to what is agreed to by the data subject. These principles are in direct conflict with new technologies which rely on ubiquitous collection and indiscriminate uses of data. The main import of Big Data technologies lies in the inherent value in data, which can be harvested not through the primary purposes of data collection but through various secondary purposes that involve processing the data repeatedly.17 Further, instead of destroying the data when its purpose has been achieved, the intent is to retain as much data as possible for secondary uses. Importantly, as these secondary uses are of an inherently unanticipated nature, it becomes impossible to account for them at the stage of collection and to provide the choice to the data subject.
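Purely as an illustration of how strict this principle is in practice, the sketch below (with hypothetical names and purposes, not drawn from any law or framework discussed here) shows a data controller mechanically enforcing purpose limitation; the unanticipated secondary uses on which Big Data depends are precisely the ones such a check refuses:

```python
# A minimal sketch of purpose limitation as a runtime check.
# All names, purposes and identifiers here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    subject_id: str
    # Purposes stated to, and agreed by, the data subject at collection time.
    consented_purposes: set = field(default_factory=set)


class PurposeLimitationError(Exception):
    """Raised when data is requested for a purpose the subject never agreed to."""


def process(record: ConsentRecord, purpose: str) -> str:
    # Purpose limitation: use is permitted only for the stated purposes.
    if purpose not in record.consented_purposes:
        raise PurposeLimitationError(
            f"{purpose!r} was not among the stated purposes for {record.subject_id}"
        )
    return f"processing data of {record.subject_id} for {purpose!r}"


record = ConsentRecord("subject-42", {"billing", "service-delivery"})
print(process(record, "billing"))             # permitted: stated at collection
try:
    process(record, "behavioural-profiling")  # a secondary, unanticipated use
except PurposeLimitationError as err:
    print("blocked:", err)
```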
Followers of the discourse on Big Data would be well aware of its potential impacts on privacy. De-identification techniques that protect the identities of individuals in a dataset face a threat from the increase in the amount of data available, publicly or otherwise, to a party seeking to reverse-engineer an anonymised dataset and re-identify individuals.18 Further, Big Data analytics promises to find patterns and connections that can add to the knowledge available to the public to make decisions. What is also likely is that it will reveal insights about people that they would have preferred to keep private.19 In turn, as people become more aware of being constantly profiled by their actions, they will self-regulate and 'discipline' their behaviour. This can lead to a chilling effect.20 Meanwhile, Big Data is also fuelling an industry that incentivises businesses to collect more data, as it has a high and growing monetary value. However, Big Data also promises a completely new kind of knowledge that can prove to be revolutionary in fields as diverse as medicine, disaster management, governance, agriculture, transport, service delivery, and decision-making.21 As long as there is a sufficiently large and diverse amount of data, there could be invaluable insights locked in it, access to which can provide solutions to a number of problems. In light of this, it is important to consider what kind of regulatory framework would be most suitable to facilitate some of the promised benefits of Big Data while mitigating its potential harms. This, coupled with the fact that the existing data protection principles have, by most accounts, run their course, makes the examination of alternative frameworks even more important. This article examines some alternative proposals to the existing framework of purpose limitation below.
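The re-identification threat mentioned above usually takes the form of a simple linkage attack. The sketch below, using entirely made-up toy data rather than the datasets of the studies cited, shows how joining an 'anonymised' release with a public auxiliary dataset on shared quasi-identifiers can put names back next to sensitive attributes:

```python
# A minimal sketch of a linkage (re-identification) attack.
# The data is invented for illustration; real attacks exploit the same join.

import pandas as pd

# "Anonymised" release: direct identifiers removed, quasi-identifiers kept.
anonymised = pd.DataFrame({
    "zip":        ["560001", "560002"],
    "birth_year": [1980, 1975],
    "sex":        ["F", "M"],
    "diagnosis":  ["diabetes", "hypertension"],
})

# Auxiliary data (e.g. a public voter roll) carrying names alongside the
# very same quasi-identifiers.
voter_roll = pd.DataFrame({
    "name":       ["A. Sharma", "R. Iyer"],
    "zip":        ["560001", "560002"],
    "birth_year": [1980, 1975],
    "sex":        ["F", "M"],
})

# Joining on the quasi-identifiers links names to sensitive attributes.
reidentified = anonymised.merge(voter_roll, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The more auxiliary data that becomes available, the more combinations of quasi-identifiers become unique, which is why larger data ecosystems make anonymisation progressively weaker.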
Harms-based approach
Some scholars like Fred Cate22 and Daniel Solove23 have argued that the primary focus of data protection law needs to move from control at the stage of data collection to actual use cases. In his article on the failure of the Fair Information Practice Principles,24 Cate puts forth a proposal for 'Consumer Privacy Protection Principles.' Cate envisions a more interventionist role for data protection authorities, regulating information flows when required in order to protect individuals from risky or harmful uses of information. Cate's attempt is to extend to data protection the consumer protection law principles of prevention and remedy of harms.
In a re-examination of the OECD Privacy Principles, Cate and Viktor Mayer-Schönberger attempt to discard the restriction of the use of personal data to only the purposes specified. They felt that restricting the use of personal data to specified purposes could significantly threaten various research and beneficial uses of Big Data. Instead of articulating a positive obligation of what collected personal data could be used for, they attempt to arrive at a negative obligation of use cases prevented by law. Their working definition of the use specification principle broadens the scope of use cases by only preventing use of data "if the use is fraudulent, unlawful, deceptive or discriminatory; society has deemed the use inappropriate through a standard of unfairness; the use is likely to cause unjustified harm to the individual; or the use is over the well-founded objection of the individual, unless necessary to serve an over-riding public interest, or unless required by law."25
While most standards in the above definition have an established understanding in jurisprudence, the concept of unjustifiable harm is what we are interested in. Any theory of a harms-based approach goes back to John Stuart Mill's dictum that the only justifiable purpose for exerting power over the will of an individual is to prevent harm to others. Therefore, any regulation that seeks to control or curtail the autonomy of individuals (in this case, the ability of individuals to allow data collectors to use their personal data, and the ability of data collectors to do so, without any limitation) must clearly demonstrate the harm to the individuals in question.
Fred Cate articulates the following steps to identify tangible harm and respond to its presence:26
- Focus on Use — Actual use of the data should be considered, not mere possession. The assumption is that the collection, possession, or transfer of information does not significantly harm people; rather, it is the use of information following such collection, possession, or transfer that does.
- Proportionality — Any regulatory measure must be proportional to the likelihood and severity of the harm identified.
- Per se Harmful Uses — Uses which are always harmful must be prohibited by law.
- Per se not Harmful Uses — If uses can be considered inherently not harmful, they should not be regulated.
- Sensitive Uses — In cases where uses are neither per se harmful nor per se not harmful, individual consent must be sought for using the data for those purposes.
Cate's proposal argues for what is called a 'use-based system', which is extremely popular with American scholars. Under this system, data collection itself is not subject to restrictions; rather, only the use of data is regulated. This argument has great appeal both for businesses, which can reduce their overheads significantly if consent obligations are done away with so long as they use the data in ways which are not harmful, and for critics of the current data protection framework's reliance on informed consent. Lokke Moerel explains the philosophy of the 'harms-based approach' or 'use-based system' in the United States by juxtaposing it against the 'rights-based approach' in Europe.27 In Europe, the right of individuals with regard to the processing of their personal data is a fundamental human right, and therefore a precautionary principle is followed, with much greater top-down control over data collection. In the United States, by contrast, there is far greater reliance on market mechanisms and self-regulating organisations to check inappropriate processing activities, and government intervention is limited to cases where a clear harm is demonstrable.28
Continuing research by the Centre for Information Policy Leadership under its Privacy Risk Framework Project looks at a system for articulating the harms and risks arising from the use of collected data. It has arrived at a matrix of threats and harms. Threats are categorised as: a) inappropriate use of personal information; and b) personal information in the wrong hands. More importantly for our purposes, harms are divided into: a) tangible harms, which are physical or economic in nature (bodily harm, loss of liberty, damage to earning power and economic interests); b) intangible harms which can be demonstrated (chilling effects, reputational harm, detriment from surveillance, discrimination and intrusion into private life); and c) societal harms (damage to democratic institutions and loss of social trust).29 For any harms-based system, a matrix like the above needs to emerge clearly so that regulation can focus on mitigating the practices leading to the harms.
Legitimate interests
Lokke Moerel and Corien Prins, in their article "Privacy for Homo Digitalis – Proposal for a new regulatory framework for data protection in the light of Big Data and Internet of Things",30 use the ideal of responsive regulation, which considers empirically observable practices and institutions while determining the regulation and enforcement required. They state that current data protection frameworks—which rely on mandating certain principles of how data has to be processed—are enforced through merely procedural notification and consent requirements. Further, Moerel and Prins feel that data protection law cannot involve a consideration of individual interest alone but also needs to take into account collective interest. Therefore, the test must be a broader assessment of whether a legitimate interest is served, rather than merely a purpose limitation articulating the interests of the parties directly involved.
Legitimate interest has been put forth as an alternative to purpose limitation. It is not a new concept: it has been a part of the EU Data Protection Directive and also finds a place in the new General Data Protection Regulation. Article 7(f) of the EU Directive31 provided for legitimate interest, balanced against the interests or fundamental rights and freedoms of the data subject, as the last justifiable ground for use of data. Due to confusion in its interpretation, the Article 29 Working Party, in 2014,32 looked into the role of legitimate interest and arrived at the following factors to determine its presence: a) the status of the individual (employee, consumer, patient) and the controller (employer, company in a dominant position, healthcare service); b) the circumstances surrounding the data processing (contractual relationship of data subject and processor); c) the legitimate expectations of the individual.
Federico Ferretti has criticised the legitimate interest principle as vague and ambiguous. The balancing of the legitimate interest in using the data against the fundamental rights and freedoms of the data subject gives data controllers some degree of flexibility in determining whether data may be processed; however, this also reduces the legal certainty that data subjects have of their data not being used for purposes they have not agreed to.33 However, it is this paper's contention that it is not the intent of the legitimate interest criterion but the lack of consensus on its application which creates the ambiguity. Moerel and Prins articulate a test for using legitimate interest which is cognizant of the need to use data for the purpose of Big Data processing, as well as of ensuring that the rights of data subjects are not harmed.
As demonstrated earlier, the processing of data and its underlying purposes have become exceedingly complex, and the conventional tools to describe these processes, 'privacy notices', are too lengthy, too complex and too profuse in number to have any meaningful impact.34 The idea of informational self-determination, as contemplated by Westin in American jurisprudence, is not achieved under the current framework. Moerel and Prins recommend five factors35 as relevant in determining legitimate interest. Of the five, the following three are relevant to the present discussion:
- Collective Interest — A cost-benefit analysis should be conducted which examines the implications for privacy for the data subjects as well as for society as a whole.
- The nature of the data — Rather than having specific categories of data, the nature of the data needs to be assessed contextually to determine legitimate interest.
- Contractual relationship and consent not independent grounds — This test has two parts. First, in case of a contractual relationship between data subject and data controller: the more specific the contractual relationship, the more restrictions apply to the use of the data. Second, consent does not function as a separate principle which, once satisfied, need not be revisited. The nature of the consent (opportunities made available to the data subject, opt-in/opt-out, and others) will continue to play a role in determining legitimate interest.
Conclusion
Replacing the purpose limitation principle with a use-based system as articulated above poses the danger of allowing governments and the private sector to carry out indiscriminate data collection under the blanket guise that any and all data may be of some use in the future. The harms-based approach has many merits, and there is a stark need for greater use of risk assessment techniques and privacy impact assessments in data governance. However, it is important that this approach merely add to the existing controls imposed at data collection, and not replace them in their entirety. On the other hand, the legitimate interests principle, especially as put forth by Moerel and Prins, is more cognizant of the different factors at play — the inefficacy of the existing purpose limitation principles, the need for businesses to use data for purposes unidentified at the stage of collection, and the need to ensure that this is not misused for indiscriminate collection and purposes. However, it also places a much heavier burden on data controllers to take into account various factors before determining legitimate interest. If legitimate interest is to emerge as a realistic alternative to purpose limitation, there needs to be greater clarity on how data controllers must apply this principle.
Endnotes
- Prachi Shrivastava, “Privacy not a fundamental right, argues Mukul Rohatgi for Govt as Govt affidavit says otherwise,” Legally India, July 23, 2015, http://www.legallyindia.com/Constitutional-law/privacy-not-a-fundamental-right-argues-mukul-rohatgi-for-govt-as-govt-affidavit-says-otherwise.
- Rebecca Bowe, “Growing Mistrust of India’s Biometric ID Scheme,” Electronic Frontier Foundation, May 4, 2012, https://www.eff.org/deeplinks/2012/05/growing-mistrust-india-biometric-id-scheme.
- Lisa Hayes, “Digital India’s Impact on Privacy: Aadhaar numbers, biometrics, and more,” Centre for Democracy and Technology, January 20, 2015, https://cdt.org/blog/digital-indias-impact-on-privacy-aadhaar-numbers-biometrics-and-more/.
- “India’s Surveillance State,” Software Freedom Law Centre, http://sflc.in/indias-surveillance-state-our-report-on-communications-surveillance-in-india/.
- “Internet Privacy in India,” Centre for Internet and Society, http://cis-india.org/telecom/knowledge-repository-on-internet-access/internet-privacy-in-india.
- Vivek Pai, “Indian Government says it is still drafting privacy law, but doesn’t give timelines,” Medianama, May 4, 2016, http://www.medianama.com/2016/05/223-government-privacy-draft-policy/.
- Information Technology (Intermediaries Guidelines) Rules, 2011, http://deity.gov.in/sites/upload_files/dit/files/GSR314E_10511%281%29.pdf.
- Discussion Points for the Meeting to be taken by Home Secretary at 2:30 pm on 7-10-11 to discuss the draft Privacy Bill, http://cis-india.org/internet-governance/draft-bill-on-right-to-privacy.
- Alan Westin, Privacy and Freedom (New York: Atheneum, 2015).
- US Secretary’s Advisory Committee on Automated Personal Data Systems, Records, Computers and the Rights of Citizens, http://www.justice.gov/opcl/docs/rec-com-rights.pdf.
- OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, http://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm
- Fred Cate, “The Failure of Information Practice Principles,” in Consumer Protection in the Age of the Information Economy, ed. Jane K. Winn (Burlington: Aldershot, Hants, England, 2006) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1156972.
- Amber Sinha and Scott Mason, “A Critique of Consent in Informational Privacy,” Centre for Internet and Society, January 11, 2016, http://cis-india.org/internet-governance/blog/a-critique-of-consent-in-information-privacy.
- Daniel Solove, “Privacy self-management and consent dilemma,” Harvard Law Review 126, (2013): 1880.
- Jonathan Obar, “Big Data and the Phantom Public: Walter Lippmann and the fallacy of data privacy self-management,” Big Data and Society 2(2), (2015), doi: 10.1177/2053951715608876.
- Supra Note 12.
- Supra Note 14.
- Paul Ohm, “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization” available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1450006; Arvind Narayanan and Vitaly Shmatikov, “Robust De-anonymization of Large Sparse Datasets” available at https://www.cs.utexas.edu/~shmat/shmat_oak08netflix.pdf.
- D. Hirsch, “That’s Unfair! Or is it? Big Data, Discrimination and the FTC’s Unfairness Authority,” Kentucky Law Journal, Vol. 103, available at: http://www.kentuckylawjournal.org/wp-content/uploads/2015/02/103KyLJ345.pdf
- A Marthews and C Tucker, “Government Surveillance and Internet Search Behavior”, available at http://ssrn.com/abstract=2412564; Danah Boyd and Kate Crawford, “Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon”, Information, Communication & Society, Vol. 15, Issue 5, (2012).
- Scott Mason, “Benefits and Harms of Big Data”, Centre for Internet and Society, available at http://cis-india.org/internet-governance/blog/benefits-and-harms-of-big-data#_ftn37.
- Cate, “The Failure of Fair Information Practice Principles.”
- Solove, “Privacy Self-Management and the Consent Dilemma,” 1882.
- Cate, “The Failure of Fair Information Practice Principles.”
- Fred Cate and Viktor Mayer-Schönberger, “Notice and Consent in a World of Big Data,” International Data Privacy Law 3(2), (2013): 69.
- Solove, “Privacy Self-Management and the Consent Dilemma,” 1883.
- Lokke Moerel, “Netherlands: Big Data Protection: How To Make The Draft EU Regulation On Data Protection Future Proof”, Mondaq, March 11, 2014, http://www.mondaq.com/x/298416/data+protection/Big+Data+Protection+How+To+Make+The+Dra%20ft+EU+Regulation+On+Data+Protection+Future+Proof%20al%20Lecture.
- Moerel, “Netherlands: Big Data Protection.”
- Centre for Information Policy Leadership, “A Risk-based Approach to Privacy: Improving Effectiveness in Practice,” Hunton and Williams LLP, June 19, 2014, https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/white_paper_1-a_risk_based_approach_to_privacy_improving_effectiveness_in_practice.pdf.
- Lokke Moerel and Corien Prins, “Privacy for Homo Digitalis: Proposal for a new regulatory framework for data protection in the light of Big Data and Internet of Things”, Social Science Research Network, May 25, 2016, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2784123.
- EU Directive 95/46/EC – The Data Protection Directive, https://www.dataprotection.ie/docs/EU-Directive-95-46-EC-Chapter-2/93.htm.
- Article 29 Data Protection Working Party, “Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC,” http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp217_en.pdf.
- Federico Ferretti, “Data protection and the legitimate interest of data controllers: Much ado about nothing or the winter of rights?,” Common Market Law Review 51 (2014): 1-26, http://bura.brunel.ac.uk/bitstream/2438/9724/1/Fulltext.pdf.
- Sinha and Mason, “A Critique of Consent in Informational Privacy.”
- Moerel and Prins, “Privacy for Homo Digitalis.”
Policy Brief on the Report of the UN Group of Governmental Experts on ICT
The most recent report, Developments in the Field of Information and Telecommunications in the Context of International Security, was published in June 2015. The 2015 Report touches upon a number of issues, including international cooperation, norms and principles for responsible state behavior, confidence building measures, cross-border exchange of information, and capacity building measures.
Annual reports will continue to be accepted by the General Assembly, and the 2016/2017 Group of Governmental Experts will have its first meeting in August 2016. India was a member of the Group of Governmental Experts in 2013.
The Centre for Internet and Society (CIS) has written an article analyzing India’s alignment with the recommendations of the report of the Group of Governmental Experts. This policy brief attempts to articulate the major policy actions that may be considered by India to further incorporate and implement the principles enunciated in the Report.
CIS believes that the report of the Group of Governmental Experts provides important minimum standards that countries could adhere to in light of challenges to international security posed by ICT developments. Given the global nature of these challenges and the need for nations to holistically address them from a human rights and security perspective, CIS believes that the Group of Governmental Experts and similar international forums are useful and important venues for India to continue to engage with actively.
Below are our specific recommendations:
(a) Consistent with the purposes of the United Nations, including to maintain international peace and security, States should cooperate in developing and applying measures to increase stability and security in the use of ICTs and to prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security;
India has entered into treaties on ICT issues with countries such as Belarus, Canada, China, Egypt, and France. Additionally, India’s IT Act addresses a number of the cyber crimes listed in the Budapest Convention. However, India is not yet a signatory to the Convention. This leaves scope for India to consider further forums and means of international cooperation to better realise this principle.
India has been invited to accede to the Budapest Convention in the past but, for various tactical and political reasons, has not yet agreed to do so. Although the decision whether to accede to an international convention is usually a well-discussed and thought-out policy decision of a country's diplomatic corps, the Convention's mutual assistance framework, however flawed it may be, would offer India a better opportunity for international cooperation to increase the stability and security of ICTs and prevent harmful ICT practices, as envisaged in the Report of the Group of Governmental Experts.
(b) In case of ICT incidents, States should consider all relevant information, including the larger context of the event, the challenges of attribution [of cybercrime] in the ICT environment and the nature and extent of the consequences;
While the Department of Electronics and Information Technology (DEITY) as well as the Computer Emergency Response Team, India (CERT-In) have a number of policies which address maintaining security and means of countering threats in the ICT environment, most ICT incidents, crimes or illegal activities using ICT, unless they involve large or government institutions, are handled by the regular police establishment of the country. The lack of capacity, both in terms of infrastructure and skill, of the regular police to adequately address most cyber crimes is an area that needs to be addressed. The need for cyber security capacity building in India was highlighted in 2015 by the Standing Committee on Information Technology. It would be useful for dedicated cyber crime departments to be established in all districts. This would be a step in the right direction, providing the requisite capacity and resources to deal with the various technical issues, such as attribution and jurisdiction, arising out of ICT incidents.
(d) States should consider how best to cooperate to exchange information, assist each other, prosecute terrorist and criminal use of ICTs and implement other cooperative measures to address such threats. States may need to consider whether new measures need to be developed in this respect;
Owing to the growing irrelevance of physical and political borders in the age of globally networked devices, one of the most important issues arising out of ICTs and cyber crimes is the need for greater and more efficient exchange of information between nations. It has been widely accepted that sharing of information on a regular and sustained basis between nation states would be a very important tool. Limitations in the traditional mechanisms (MLATs, Letters Rogatory, etc.), such as delays in accessing information and denial of access due to differences in legal standards, present hurdles to the efficacy of law enforcement agencies and only emphasize the urgency of developing a new mechanism of international information sharing that can deal with ICT incidents while protecting the freedoms and privacy rights of the citizens of the world. Exploring and participating in the dialogues and solutions evolving at the international level around cross-border sharing of information is key.
(i) States should take reasonable steps to ensure the integrity of the supply chain [of ICT equipment] so that end users can have confidence in the security of ICT products. States should seek to prevent the proliferation of malicious ICT tools and techniques and the use of harmful hidden functions;
While the National Electronics Policy of 2012 states that the government should mandate technical and safety standards in order to curb the inflow of sub-standard and unsafe electronic products, the government is yet to mandate any broad standards for ICT equipment in the Indian market. Considering the enormous security implications of compromised ICT equipment, this is an area the government should prioritize and act on immediately. Mandating standards may require the establishment of a monitoring or enforcement mechanism to ensure that the standards are being implemented. This should be done with the aim of ensuring security while not hindering innovation or the flow of business. To achieve such a balance, research and discussion is needed within the government to formulate a mechanism that would ensure the safety and quality of ICT tools while ensuring that industry is not hindered.
Conclusion
The suggestions given above are some of the major lessons from the analysis of the UN Report on ICT which CIS believes the government of India could adopt and pursue to strengthen its alignment with the recommendations of the Report. It is also imperative that the Government of India continues to recognise the importance of the work being done by the Group of Governmental Experts and takes measures to ensure that a representative from India is included in future Groups. Meanwhile, India can take positive steps by strengthening domestic privacy safeguards, improving the transparency and efficiency of relevant policies and processes, and looking towards solutions that respect rights and strengthen security.
Report on Understanding Aadhaar and its New Challenges
Report: Download (PDF)
This Report is a compilation of the observations made by participants at the workshop on the myriad issues under the UID Project and the various strategies that could be pursued to address these issues. In this Report we have classified the observations and discussions into the following themes:
1. Brief Background of the UID Project
2. Legal Status of the UIDAI Project
3. National Identity Projects in Other Jurisdictions
4. Technologies of Identification and Authentication
- Use of Biometric Information for Identification and Authentication
- Architectures of Identification
- Security Infrastructure of CIDR
5. Aadhaar for Welfare?
6. Surveillance and UIDAI
7. Strategies for Future Action
Annexure A – Workshop Agenda
Annexure B – Workshop Participants
1. Brief Background of the UID Project
In 2009, the UIDAI was established and the UID project was conceived by the Planning Commission under the UPA government to provide unique identification for each resident of India, to be used for the delivery of government welfare services in an efficient and transparent manner, as well as a tool to monitor government schemes. The objective of the scheme has been to issue a unique identification number through the Unique Identification Authority of India, which can be authenticated and verified online. It was conceptualized and implemented as a platform to facilitate identification, avoid fake identities, and deliver government benefits based on the demographic and biometric data available with the Authority.
The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (the “Act”) was passed as a money bill on March 16, 2016 and was notified in the gazette on March 25, 2016 upon receiving the assent of the President. However, no date of enforcement has been notified, so the Act has not yet come into force.
The Act provides that the Aadhaar number can be used to validate a person’s identity, but it cannot be used as a proof of citizenship. Also, the government can make it mandatory for a person to authenticate her/his identity using the Aadhaar number before receiving any government subsidy, benefit, or service. At the time of enrolment, the enrolling agency is required to provide notice to individuals regarding how their information will be used, the types of entities the information will be shared with, and their right to access that information. An individual's consent would be obtained for using his/her identity information during enrolment as well as authentication, and the individual would be informed of the nature of information that may be shared. The Act clearly lays down that the identity information of a resident shall not be used for any purpose other than that specified at the time of authentication, and that disclosure of information can be made only pursuant to an order of a court not inferior to that of a District Judge and/or in the interest of national security.
2. Legal Status of the UIDAI Project
In this section, we have summarised the discussions on the procedural issues with the passage of the Act. The participants had criticised the passage of the Act as a money bill in the Parliament. The participants also assessed the litigation pending in the Supreme Court of India that would be affected by this law. These discussions took place in the session titled, ‘Current Status of Aadhaar’ and have been summarised below.
Procedural Issues with Passage of the Act
The participants contested the introduction of the Act in the form of a money bill. The rationale behind this was explained at the session and is briefly set out here. Article 110 (1) of the Constitution of India defines a money bill as one containing provisions only regarding the matters enumerated or any matters incidental to the following: a) imposition, regulation and abolition of any tax, b) borrowing or other financial obligations of the Government of India, c) custody, withdrawal from or payment into the Consolidated Fund of India (CFI) or Contingent Fund of India, d) appropriation of money out of the CFI, e) expenditure charged on the CFI or f) receipt or custody or audit of money into the CFI or public account of India. The Act makes references to benefits, subsidies and services which are funded by the Consolidated Fund of India (CFI); however, the main objective of the Act is to create a right to obtain a unique identification number and provide for a statutory mechanism to regulate this process. The Act only establishes an identification mechanism which facilitates the distribution of benefits and subsidies funded by the CFI, and this identification mechanism (the Aadhaar number) does not give it the character of a money bill. Further, money bills can be introduced only in the Lok Sabha, and the Rajya Sabha cannot make amendments to such bills passed by the Lok Sabha. The Rajya Sabha can suggest amendments, but it is the Lok Sabha’s choice to accept or reject them. This leaves the Rajya Sabha with no effective role to play in the passage of the bill.
The participants also briefly examined the writ petition filed by former Union minister Jairam Ramesh challenging the constitutionality and legality of the treatment of this Act as a money bill, which has raised the question of the judiciary’s power to review the decisions of the speaker. Article 122 of the Constitution of India provides that this power of judicial review can be exercised to look into procedural irregularities. The question remains whether the Supreme Court will rule that it can determine the constitutionality of the decision made by the speaker relating to the manner in which the Act was introduced in the Lok Sabha. A few participants mentioned that similar circumstances had arisen in the case of Mohd. Saeed Siddiqui v. State of U.P. [1], where the Supreme Court refused to interfere with the decision of the Uttar Pradesh legislative assembly speaker certifying an amendment bill to increase the tenure of the Lokayukta as a money bill, despite the fact that the bill amended the Uttar Pradesh Lokayukta and Up-Lokayuktas Act, 1975, which had been passed as an ordinary bill by both houses. The Court in this case held that the decision of the speaker was final and that the proceedings of the legislature, being an important legislative privilege, could not be inquired into by courts. The Court added, “the question whether a bill is a money bill or not can be raised only in the state legislative assembly by a member thereof when the bill is pending in the state legislature and before it becomes an Act.”
However, it is necessary to draw a distinction between the Rajya Sabha and a State Legislature. Unlike a state legislative council, the constitution of the Rajya Sabha is not optional; the significance of the two bodies in the parliamentary process therefore cannot be considered the same. Participants also made another significant observation about a similar bill on the UID project, the National Identification Authority of India (NIDAI) Bill, which had been introduced earlier by the UPA government in 2010 and was deemed unacceptable by the standing committee on finance, headed by Yashwant Sinha. This bill was subsequently withdrawn.
Status of Related Litigation
A panellist in this session briefly summarised all the litigation that was related to or would be affected by the Act. The panellist also highlighted several Supreme Court orders in the case of K.S. Puttaswamy v. Union of India [2] which limited the use of Aadhaar. We have reproduced the presentation below.
- K.S. Puttaswamy v. Union of India - This petition was filed in 2012, primarily concerned with the provision of Aadhaar numbers to illegal immigrants in India. It was contended that this could not be done without a law establishing the UIDAI and an amendment to the citizenship laws. The petitioner also raised concerns about privacy and the fallibility of biometrics.
- Sudhir Vombatkere & Bezwada Wilson [3] - This petition was filed in 2013 on grounds of infringement of right to privacy guaranteed under Article 21 of the Constitution of India and the security threat on account of data convergence.
- Aruna Roy & Nikhil Dey [4] - This petition was filed in 2013 on the grounds of large-scale exclusion of people from access to basic welfare services caused by UID. After their petition, a number of intervention applications were filed, including the following:
- Col. Mathew Thomas [5] - This petition was filed on the grounds of threat to national security posed by the UID project particularly in relation to arrangements for data sharing with foreign companies (with links to foreign intelligence agencies).
- Nagrik Chetna Manch [6] - This petition was filed in 2013 and led by Dr. Anupam Saraph on the grounds that the UID project was detrimental to financial service regulation and financial inclusion.
- S. Raju [7] - This petition was filed on the grounds that the UID project had implications on the federal structure of the State and was detrimental to financial inclusion.
- Beghar Foundation - This petition was filed in 2013 in the Delhi High Court on the grounds of invasion of privacy and exclusion, specifically in relation to the homeless. It subsequently joined the petition filed by Aruna Roy and Nikhil Dey as an intervener.
- Vickram Crishna – This petition was originally filed in the Bombay High Court in 2013 on the grounds of surveillance and invasion of privacy. It was later transferred to the Supreme Court.
- Somasekhar – This petition was filed on the grounds of procedural unreasonableness of the UID project and also exclusion & privacy. The petitioner later intervened in the petition filed by Aruna Roy and Nikhil Dey in 2013.
- Rajeev Chandrashekhar – This petition was filed on the ground of lack of legal sanction for the UID project. He later intervened in the petition filed by Aruna Roy and Nikhil Dey in 2013. His position has changed now.
- Further, a petition was filed by Mr. Jairam Ramesh initially challenging the passage of the Act as a money bill; it was subsequently amended to include issues of violation of the right to privacy and exclusion of the poor, and advocates for the five amendments to the Aadhaar Bill suggested by the Rajya Sabha.
Relevant Orders of the Supreme Court
There are six orders of the Supreme Court which are noteworthy.
- Order of Sept. 23, 2013 - The Supreme Court directed that: 1) no person shall suffer for not having an aadhaar number even if a circular by an authority makes it mandatory; 2) it should be checked whether a person applying for an aadhaar number voluntarily is entitled to it under the law; and 3) precaution should be taken that it is not issued to illegal immigrants.
- Order of 26th November, 2013 – Applications were filed by UIDAI, the Ministry of Petroleum & Natural Gas, Govt of India, Indian Oil Corporation, BPCL and HPCL for modifying the September 23rd order, seeking permission from the Supreme Court to make the aadhaar number mandatory. The Supreme Court held that the order of September 23rd would continue to be effective.
- Order of 24th March, 2014 – This order was passed by the Supreme Court in a special leave petition filed in the case of UIDAI v CBI [8], wherein UIDAI had been directed to share biometric information of all residents of a particular place in Goa to facilitate a criminal investigation involving charges of rape and sexual assault. The Supreme Court restrained UIDAI from transferring any biometric information of an individual to any other agency without his consent in writing. The Supreme Court also directed all the authorities to modify their forms/circulars/the like so as not to make the aadhaar number mandatory.
- Order of 16th March, 2015 - The SC took notice of widespread violations of the order passed on September 23rd, 2013 and directed the Centre and the states to adhere to these orders to not make aadhaar compulsory.
- Orders of August 11, 2015 – In the first order, the Central Government was directed to publicise the fact that aadhaar was voluntary. The Supreme Court further held that the provision of benefits due to a citizen of India would not be made conditional upon obtaining an aadhaar number, and restricted the use of aadhaar to the PDS Scheme, in particular the distribution of foodgrains and cooking fuel such as kerosene, and the LPG Distribution Scheme. The Supreme Court also held that information of an individual collected in order to issue an aadhaar number would not be used for any other purpose except when directed by a court for criminal investigations. Separately, the status of the fundamental right to privacy was contested, and accordingly the Supreme Court directed that the issue be taken up before the Chief Justice of India.
- Orders of October 16, 2015 – The Union of India, the states of Gujarat, Maharashtra, Himachal Pradesh and Rajasthan, and authorities including SEBI, TRAI, CBDT, IRDA and RBI applied for a hearing before the Constitution Bench for modification of the order passed by the Supreme Court on August 11, to allow use of the aadhaar number in schemes like the Mahatma Gandhi National Rural Employment Guarantee Scheme (MGNREGS), the National Social Assistance Programme (Old Age Pensions, Widow Pensions, Disability Pensions), the Prime Minister's Jan Dhan Yojana (PMJDY) and the Employees' Provident Fund Organisation (EPFO). The Bench allowed the use of the aadhaar number for these schemes but stressed the need to keep the aadhaar scheme voluntary until the matter was finally decided.
Status of these orders
The participants discussed the possible impact of the law on the operation of these orders. A participant pointed out that the matters in the Supreme Court had not become infructuous, because the fundamental issues being heard there had not been resolved by the passage of the Act. Several participants believed that the aforementioned orders remained effective because the law had not come into force; therefore, the aadhaar number could only be used for purposes specified by the Supreme Court and could not be made mandatory. Participants also highlighted that when the Act was implemented, it would not nullify the orders of the Supreme Court unless the Union of India specifically asked the Supreme Court to vacate them and the Court did so.
3. National Identity Projects in Other Jurisdictions
In the session titled ‘Aadhaar - International Dimensions’, a panellist provided a brief overview of similar identification programs launched in recent years in other jurisdictions, including Pakistan, the United Kingdom, France, Estonia and Argentina. The presentation mainly sought to assess the incentives that drove the governments in these jurisdictions to formulate these projects, the mandatory nature of their adoption, and their popularity. The Report has reproduced the presentation here.
Pakistan
The Second Amendment to the Constitution of Pakistan in 2000 established the National Database and Registration Authority (NADRA) in the country, which regulates government databases and statistically manages the sensitive registration database of the citizens of Pakistan. It is also responsible for issuing national identity cards to the citizens of Pakistan. Although the card is not legally compulsory for a Pakistani citizen, it is mandatory for:
- Voting
- Obtaining a passport
- Purchasing vehicles and land
- Obtaining a driver’s licence
- Purchasing a plane or train ticket
- Obtaining a mobile phone SIM card
- Obtaining electricity, gas, and water
- Securing admission to college and other post-graduate institutes
- Conducting major financial transactions
It is therefore effectively necessary for basic civic life in the country. In 2012, NADRA introduced the Smart National Identity Card, an electronic identity card which implements 36 security features. The following information can be found on the card and, subsequently, in the central database: Legal Name, Gender (male, female, or transgender), Father's name (Husband's name for married females), Identification Mark, Date of Birth, National Identity Card Number, Family Tree ID Number, Current Address, Permanent Address, Date of Issue, Date of Expiry, Signature, Photo, and Fingerprint (Thumbprint). NADRA also records the applicant's religion, but this is not noted on the card itself. (This system has not been removed and is still operational in Pakistan.)
United Kingdom
The Identity Cards Act was introduced in the wake of the terrorist attacks of 11th September, 2001, amidst rising concerns about identity theft and the misuse of public services. The card was to be used to obtain social security services, but the ability to properly match a person to their true identity was central to the proposal, with wider implications for the prevention of crime and terrorism. The cards were linked to a central database (the National Identity Register), which would store information about all of the holders of the cards. The concerns raised by human rights lawyers, activists, security professionals and IT experts, as well as politicians, were not to do with the cards as much as with the NIR. The Act specified 50 categories of information that the NIR could hold, including up to 10 fingerprints, digitised facial and iris scans, and current and past UK and overseas places of residence of all residents of the UK throughout their lives. The central database was seen as a prime target for cyber attacks, and was also said to violate the right to privacy of UK citizens. The Act was passed by the Labour Government in 2006, and repealed by the Conservative-Liberal Democrat Coalition Government as part of their measures to “reverse the substantial erosion of civil liberties under the Labour Government and roll back state intrusion.”
Estonia
The Estonian i-card is a smart card issued to Estonian citizens by the Police and Border Guard Board. All Estonian citizens and permanent residents are legally obliged to possess this card from the age of 15. The card stores data such as the user's full name, gender, national identification number, and cryptographic keys and public key certificates. The cryptographic signature made with the card has been legally equivalent to a manual signature since 15 December 2000. The following are a few examples of what the card is used for:
- As a national ID card for legal travel within the EU for Estonian citizens
- As the national health insurance card
- As proof of identification when logging into bank accounts from a home computer
- For digital signatures
- For i-voting
- For accessing government databases to check one’s medical records, file taxes, etc.
- For picking up e-Prescriptions
(This system is also operational in the country and has not been removed.)
France
The biometric ID card was to include a compulsory chip containing personal information, such as fingerprints, a photograph, home address, height, and eye colour. A second, optional chip was to be implemented for online authentication and electronic signatures, to be used for e-government services and e-commerce. The law was passed with the purpose of combating “identity fraud”. It was referred to the Constitutional Council by more than 200 members of the French Parliament, who challenged the compatibility of the bill with the citizens’ fundamental rights, including the right to privacy and the presumption of innocence. The Council struck down the law, citing the issue of proportionality. “Regarding the nature of the recorded data, the range of the treatment, the technical characteristics and conditions of the consultation, the provisions of article 5 touch the right to privacy in a way that cannot be considered as proportional to the meant purpose”.
Argentina
Documento Nacional de Identidad or DNI (which means National Identity Document) is the main identity document for Argentine citizens, as well as temporary or permanent resident aliens. It is issued at a person's birth and updated at 8 and 14 years of age, in a single format: a card (DNI tarjeta). It is valid wherever identification is required, and is required for voting. The front side of the card states the name, sex, nationality, specimen issue, date of birth, date of issue, date of expiry, and transaction number along with the DNI number and the portrait and signature of the card's bearer. The back side of the card shows the address of the card's bearer along with their right thumb fingerprint. The front side of the DNI also shows a barcode, while the back shows machine-readable information. The DNI is a valid travel document for entering Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Paraguay, Peru, Uruguay, and Venezuela. (This system is still operational in the country.)
4. Technologies of Identification and Authentication
The panel in the session titled ‘Aadhaar: Science, Technology, and Security’ explained the technical aspects of use of biometrics and privacy concerns, technology architecture for identification and inadequacy of infrastructure for information security. In this section, we have summarised the presentation and the ensuing discussions on these issues.
Use of Biometric Information for Identification and Authentication
The panellists explained with examples that identification and authentication were different things. Identity provides an answer to the question “who are you?”, while authentication is a challenge-response process that provides proof of the claim of identity. Common examples of identity are User IDs (Login IDs), cryptographic public keys, and ATM or smart cards, while common authenticators are passwords (including OTPs), PINs, and cryptographic private keys. Identity is public information, but an authenticator must be private and known only to the user. Authentication must necessarily be a conscious process, and active participation by the user is a must. It should also always be possible to revoke an authenticator. After providing this understanding of the two processes, the panellist then considered whether biometric information could be used for identification or authentication under the UID Project. Biometric information is clearly public information, and it is questionable whether it can be revoked; it should therefore never be used for authentication, but only for identity verification. Under the UID Project there is a possibility of authentication by fingerprints without the conscious participation of the user: one could lift the fingerprints of an individual from any surface the individual has been in contact with. Authentication must therefore be done by other means. The panellist pointed out that there were five kinds of authentication under the UID Project, of which two-factor authentication and one-time passwords were considered suitable, but the use of biometric and demographic information was extremely threatening and must be withdrawn.
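To make this distinction concrete, the sketch below (in Python; illustrative only, not from the workshop or UIDAI) shows a challenge-response exchange in which the identity is public and the authenticator is a private, revocable secret. All names and values are hypothetical.

```python
import hashlib
import hmac
import os
import secrets

# Public identity: who the user claims to be (analogous to a login ID).
USER_ID = "resident@example"

# Private authenticator: known only to the user (and, in this shared-secret
# variant, to the verifier). It can be revoked and replaced if compromised,
# which a fingerprint cannot.
SECRET = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """Verifier sends a fresh random nonce so responses cannot be replayed."""
    return os.urandom(16)

def respond(secret: bytes, challenge: bytes) -> bytes:
    """User proves knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected response and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()           # a conscious, active step
response = respond(SECRET, challenge)   # requires the private authenticator
assert verify(SECRET, challenge, response)
```

A fingerprint cannot play the role of SECRET here: it is left on every surface the user touches and cannot be rotated, which is precisely the panellist's objection to biometric authentication.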
Architectures of Identification
The panellist explained the architecture of the UID Project that has been designed for identification purposes, highlighted its limitations, and suggested alternatives. His explanations are reproduced below.
Under the UID Project, there is a centralised means of identification, i.e., the aadhaar number and biometric information stored in one place, the Central Identities Data Repository (CIDR). It is better to have multiple means of identification than one (as contemplated under the UID Project) for the preservation of our civil liberties. The question is what the available alternatives are. A web of trust is one way of operationalizing distributed identification, but the challenge is how to bring people from all social levels to participate in it. There is a need for registrars who will sign keys, and for public databases for this purpose.
The aadhaar number functions as a common index and facilitates correlation of data across Government databases. While this is tremendously attractive it raises several privacy concerns as more and more information relating to an individual is available to others and is likely to be abused.
The aadhaar number is available in human readable form. This raises the risk of identification without consent and unauthorised profiling. It cannot be revoked. Potential for damage in case of identity theft increases manifold.
Under the UID Project, for the purpose of information security, Authentication User Agencies (“AUA”) are required to use local identifiers instead of aadhaar numbers but they are also required to map these local identifiers to the aadhaar numbers. Aadhaar numbers are not cryptographically secured; in fact they are publicly available. Hence this exercise for securing information is useless. An alternative would be to issue different identifiers for different domains and cryptographically embed a “master identifier” (in this case, equivalent of aadhaar number) into each local identifier.
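One plausible reading of this alternative (not the panellist's actual design, which the Report only summarises) is to derive each local identifier from the master identifier with a keyed hash, so that identifiers from two domains cannot be correlated without the key. The sketch below uses hypothetical key and function names.

```python
import hashlib
import hmac

def domain_identifier(master_id: str, domain: str, domain_key: bytes) -> str:
    """Derive a domain-specific identifier that embeds the master identifier.

    Without domain_key, an agency holding identifiers from two different
    domains cannot link them back to the same master identifier."""
    message = f"{domain}:{master_id}".encode()
    return hmac.new(domain_key, message, hashlib.sha256).hexdigest()

# Hypothetical example: the same master ID yields unlinkable identifiers
# for a banking domain and a telecom domain.
key_bank = b"key-issued-to-the-banking-domain"
key_tel = b"key-issued-to-the-telecom-domain"
print(domain_identifier("9999-8888-7777", "bank", key_bank))
print(domain_identifier("9999-8888-7777", "telecom", key_tel))
```

The design choice this illustrates is that correlation across databases then requires a key held by the issuing authority, rather than being possible for anyone who observes two local identifiers.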
All field devices (for example POS machines) should be registered and must communicate directly with UIDAI. In fact, UIDAI must verify the authenticity (tamper proof) of the field device during run time and a UIDAI approved authenticity certificate must be issued for field devices. This certificate must be made available to users on demand. Further, the security and privacy frameworks within which AUAs work must be appropriately defined by legal and technical means.
Security Infrastructure of CIDR
The panellists also enumerated the security features of the UID Project and highlighted the flaws in these features. These have been summarised below; a short illustrative sketch of two of the features follows the list.
The security and privacy infrastructure of UIDAI has the following main features:
- 2048-bit PKI encryption of biometric data in transit
- End-to-end encryption from enrolment/POS to CIDR
- HMAC-based tamper detection of PID blocks
- Registration and authentication of AUAs
- Within the CIDR, only a SHA-1 hash of the Aadhaar number is stored
- Audit trails are stored as SHA-1 hashes (the panellists questioned whether this enables tamper detection)
- Only hashes of passwords and PINs are stored (biometric data is stored in original form, though!)
- Authentication requests have unique session keys and HMACs
- Resident data is stored using 100-way sharding (vertical partitioning), with the first two digits of the Aadhaar number as shard keys
- All enrolment and update requests link to partitioned databases using Ref IDs (coded indices)
- All accesses go through a hardware security module
- All analytics are carried out on anonymised data
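As an illustration of two of the features above, the SHA-1 hashing of the Aadhaar number and the 100-way sharding keyed on its first two digits, here is a minimal Python sketch. It is illustrative only and assumes nothing about UIDAI's actual code; the brute-force caveat in the comment is a general property of unsalted hashes over short inputs, not a finding from the workshop.

```python
import hashlib

def shard_key(number: str) -> int:
    """100-way sharding: the first two digits of the number select the shard."""
    digits = number.replace(" ", "").replace("-", "")
    return int(digits[:2])  # a value in 0..99

def stored_form(number: str) -> str:
    """Store a SHA-1 hash rather than the number itself, as the slide claims.

    General caveat: a 12-digit number space is small enough that an
    unsalted hash of it can be reversed by brute force."""
    digits = number.replace(" ", "").replace("-", "")
    return hashlib.sha1(digits.encode()).hexdigest()

n = "9999 8888 7777"   # hypothetical 12-digit number
print(shard_key(n))    # -> 99
print(stored_form(n))  # 40-hex-character digest
```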
The panellists pointed out concerns about information security on account of design flaws, lack of procedural safeguards, openness of the system, and too much trust placed in multiple players. All symmetric and private keys and hashes are stored somewhere within UIDAI; this indicates that trust is implicitly assumed, which is a glaring design flaw. There is no well-defined approval procedure for data inspection, whether for the purpose of investigation or for data analytics. There is a likelihood of system hacks, insider leaks, and tampering with authentication records and audit trails. The ensuing discussions highlighted that the UIDAI had admitted to these security risks. The enrolment agencies and the enrolment devices cannot be trusted. AUAs cannot be trusted with biometric and demographic data; neither can they be trusted with sensitive user data of a private nature. There is a need for an independent third-party auditor responsible for distributed key management, auditing and approving UIDAI programs (including those for data inspection and analytics), whitebox cryptographic compilation of critical parts of the UIDAI programs, issue of cryptographic keys to UIDAI programs for functional encryption, challenge-response for run-time authentication, and certification of UIDAI programs. The panellist recommended putting a suitable legal framework in place to execute this.
The participants also discussed that the information infrastructure must not be built on proprietary software (given the possibility of backdoors for the US) and that there must be a third-party audit with a non-negotiable clause for public audit.
5. Aadhaar for Welfare?
The Report has summarised the discussions that took place in the sessions on ‘Direct Benefits Transfers’ and ‘Aadhaar: Broad Issues - II’, where the panellists critically analysed the government's claims about the benefits and inclusiveness of Aadhaar in light of the ground realities in states where Aadhaar has been adopted for social welfare schemes.
Social Welfare: Modes of Access and Exclusion
Under the Act, a person may be required to authenticate or give proof of the aadhaar number in order to receive a subsidy from the government (Section 7). A person is required to punch in their fingerprints on POS machines in order to receive their entitlements under social welfare schemes such as LPG and PDS. It was pointed out in the discussions that various states, including Rajasthan and Delhi, had witnessed fingerprint errors while doling out benefits at ration shops under the PDS scheme. People have failed to receive their entitled benefits because of these fingerprint errors, resulting in the exclusion of beneficiaries [9]. A panellist pointed out that in Rajasthan, dysfunctional biometrics had led to further corruption in ration shops. Ration shop owners often lied to beneficiaries about the functioning of the biometric machines (POS machines) and kept the ration for sale in the market, making a lot of money at the expense of uninformed beneficiaries and depriving them of their entitlements.
Another participant organisation pointed out similar circumstances in the ration shops in the Patparganj and New Delhi constituencies. Here, the dealers had maintained records of beneficiaries categorized as follows: beneficiaries whose biometrics did not match, beneficiaries whose biometrics matched and entitlements were provided, and beneficiaries who never visited the ration shop. It had been observed that there were no entries in the category of beneficiaries whose biometrics did not match; the beneficiaries, however, had a different story to tell. They complained that their biometrics did not match despite trying several times, and there was no mechanism for a manual override. Consequently, they had not been able to receive any entitlements for months. The discussions also pointed out that the food authorities had placed complete reliance on the authenticity of the POS machines and claimed that this system would weed out families who were not entitled to the benefits. The MIS was also suffering technical glitches; as a result, there was a problem registering information about these transactions, and hence no records about these problems had been created with the State authority. A participant also discussed the plight of 30,000 widows in Delhi who were entitled to pension and used to collect their entitlement from post offices, but faced exclusion due to transition problems under the Jan Dhan Yojana (after the Jan Dhan was launched, the money was transferred to their bank accounts in order to resolve the problem of misappropriation of money at the hands of post office officials). These widows were asked to open bank accounts to receive their entitlements, and those who did not open these accounts and did not inform the post office were considered bogus.
In the discussions, the participants also noted that the unreliability of fingerprints as a means of authenticating an individual’s identity was highlighted at the meeting of the Empowered Group of Ministers in 2011 by J Dsouza, a biometrics scientist. He used his wife’s fingerprints to demonstrate that fingerprints may change over time, and that in such an event one would no longer be able to use the POS machine, as the machine would continue to match against the impressions collected initially.
The participants who had been working in the field had contributed to the discussions by busting the myth that the UID Project helped to identify who was poor and resolve the problem of exclusion due to leakages in the social welfare programs. These discussions have been summarised below.
- It is important to understand that the UID Project is merely an identification and authentication system. It only helps in verifying whether an individual is entitled to benefits under a social security scheme. It does not ensure the plugging of leakages and reduction of corruption in social security schemes, as has been claimed by the Government. The reduction in leakage in the PDS, for instance, should be attributed to digitization and not to UID. The Government claims that it has saved INR 15,000 crore in the provision of LPG through the identification of 3.34 crore inactive accounts on account of the UID Project. This is untrue, because the accounts were weeded out using mechanisms completely unrelated to the UID Project. Consequently, the savings on account of UID are only INR 120 crore, not INR 15,000 crore.
- The UID Project has resulted in the exclusion of people either because they do not have an aadhaar number, or they have a wrong identification, or there are errors of classification or wilful misclassification. About 99.7% of the people who were given aadhaar numbers already had an identification document. In fact, during enrolment a person is required to produce one of 14 identification documents listed under the law in order to get an aadhaar number, which makes it very difficult for a person with no identity documents to become entitled to a social welfare scheme.
A participant condemned the Government’s claim that the UID Project had helped in removing fake, bogus and duplicate cards and said that these terms could not be used synonymously and the authorities had no clarity about the difference between the meanings of these terms. The UID Project had only helped in removal of duplicate cards but had not helped in combating the use of fake and bogus cards.
Financial Inclusion and Direct Benefits Transfer
The participants also engaged in the discussions about the impact of the UID project on financial inclusion in India in the sessions titled ‘Aadhaar: Broad Issues - I & II’. We have summarised these discussions below.
The UID Project seeks to directly transfer money to a bank account in order to combat corruption. The discussions highlighted that this was nothing but the introduction of a neo-liberal thrust in social policy, and that it was not feasible for various reasons. First, 95% of rural India did not have functioning banks nearby, and banks were quite far away. Second, to combat this dearth of banks, the idea of business correspondents, who handled banking transactions and helped in opening bank accounts, had been introduced, which created various problems. The Reserve Bank of India reported that there was a dearth of business correspondents, as there was very little incentive to become one; their salary was merely INR 4,000. Third, there were concerns about how an aadhaar number was considered a valid document for Know Your Customer (KYC) checks: there was a requirement for scrutiny and auditing of the documents submitted at the time of enrolment which, in the present scheme of things, could not be verified. Fourth, there were no restrictions on the number of bank accounts that could be opened with a single aadhaar number, which gave rise to the possibility of opening multiple and shell accounts on a single aadhaar number. Records only showed transactions when money was transferred from one aadhaar number to another, as opposed to account-to-account transfers. The discussion relied on NPCI data, which shows which bank an aadhaar number is associated with, but does not show whether a transaction by an aadhaar number is overwritten by another bank account belonging to the same aadhaar number.
6. Surveillance and UIDAI
The participants discussed the possibility of an alternative purpose for Aadhaar enrolment in the session titled ‘Privacy, Surveillance, and Ethical Dimensions of Aadhaar’. The discussion traced the history of the project to gain insight on this issue. We have summarised the key takeaways from this discussion below.
There are claims that the main objective of launching the UID Project is not to facilitate implementation of social security schemes but to collect personal (financial and non-financial) information of the citizens and residents of the country to build a data monopoly. For this purpose, PDS was chosen as a suitable social security scheme as it has the largest coverage. Several participants suggested that numerous reports authored by FICCI, KPMG and ASSOCHAM contained proposals for establishing a national identity authority which threw some light on the commercial intentions behind information collection under the UID Project.
It was also pointed out that there was documented proof that information collected under the UID Project might have been shared with foreign companies. There are suggestions of links between proponents of the UID Project and companies backed by the CIA or the French Government, which run security projects and deal in data sharing in several jurisdictions.
7. Strategies for Future Action
The participants laid down a list of measures that should be taken to carry the discussions forward. We have enumerated these recommendations below.
- Prepare and compile an anthology of articles as an output of this workshop.
- Prepare position papers on specific issues related to the UID Project
- Prepare pamphlets/brochures on issues with the UID Project for public consumption
- Prepare counter-advertisements for Aadhaar
- Publish existing empirical evidence on the flaws in Aadhaar.
- Set up an online portal dedicated to providing updates on the UID Project and allowing discussions on specific issues related to Aadhaar.
- Use Social Media to reach out to the public. Regularly track and comment on social media pages of relevant departments of the government.
- Create groups dedicated to research and advocacy of specific aspects of the UID Project.
- Create a Coordination Committee, preferably based in Delhi, which would be responsible for regularly holding meetings and for preparing a coordinated plan of action. Employ permanent staff to run the Committee.
- Organise an advocacy campaign against use of Aadhaar in collaboration with other organisations and build public domain acceptance.
- The campaign must specifically focus on the unfettered scope and expanse of UID, misrepresentation of the success of Aadhaar (by highlighting real savings), technological flaws, the status of pilot programs, and increasing corruption on account of the UID Project.
- Prepare a statement of public concern regarding the UID Project and collect signatures from eminent persons including academics, technical experts, civil society groups and members of parliament.
- Organise events and discussions on issues relating to Aadhaar and invite members of government departments to speak and discuss the issues.
- Write to Members of Parliament and Members of Legislative Assemblies raising questions on their or their parties’ support for Aadhaar and silence on the problems created by the UID Project.
- Organise public hearings in states like Rajasthan to observe and document ground realities of the UID Project and share these outcomes with the state government and media.
- Plan a national social audit and public hearing on the working of UID Project in the country.
- File Contempt Petitions in the Supreme Court and High Courts against mandatory use of Aadhaar number for services not allowed by the Supreme Court.
- Reach out to and engage with various foreign citizens and organisations that have been fighting on similar issues. The organisations and individuals who could be approached include EPIC, the Electronic Frontier Foundation, David Moss (UK), Roger Clarke (Australia), Prof. Ian Angell, Snowden, Assange, and Chomsky.
- Work towards increasing awareness about the UID Project and gaining support from the student and research community, student organisations, trade unions, and other associations and networks in the unorganised sector.
Annexure A – Workshop Agenda
May 26, 2016
9:00-9:30 | Registration
9:30-10:00 | Prof. Dinesh Abrol - Welcome
10:00-11:00 | Session 1: Current Status of Aadhaar
11:00-11:30 | Tea Break
11:30-13:30 | Session 2: Direct Benefits Transfers
13:30-14:30 | Lunch
14:30-16:00 | Session 3: Aadhaar: Science, Technology, and Security
16:00-16:30 | Tea Break
16:30-17:30 | Session 4: Aadhaar - International Dimensions
17:30-18:00 | High Tea
May 27, 2016
9:30-11:00 | Session 5: Privacy, Surveillance and Ethical Dimensions of Aadhaar
11:00-11:30 | Tea Break
11:30-13:00 | Session 6: Aadhaar - Broad Issues I
13:00-14:00 | Lunch
14:00-15:30 | Session 7: Aadhaar - Broad Issues II
15:30-16:00 | Session 8: Conclusion
16:00-18:00 | Informal Meetings
Annexure B – Workshop Participants
Anjali Bhardwaj, Satark Nagrik Sangathan
Dr. Anupam Saraph
Arjun Jayakumar, Software Freedom Law Centre
Ashok Rao, Delhi Science Forum
Prof. Chinmayi Arun, National Law University, Delhi
Prof. Dinesh Abrol, Jawaharlal Nehru University
Prof. G Nagarjuna, Homi Bhabha Center for Science Education, Tata Institute of Fundamental Research, Mumbai
Dr. Gopal Krishna, Citizens Forum for Civil Liberties
Prof. Himanshu, Jawaharlal Nehru University
Japreet Grewal, the Centre for Internet and Society
Joshita Pai, National Law University, Delhi
Malini Chakravarty, Centre for Budget and Governance Accountability
Col. Mathew Thomas
Prof. MS Sriram, Indian Institute of Management, Bangalore
Nikhil Dey, Mazdoor Kisan Shakti Sangathan
Prabir Purkayastha, Knowledge Commons and Free Software Movement of India
Pukhraj Singh, Bhujang
Rajiv Mishra, Jawaharlal Nehru University
Prof. R Ramakumar, Tata Institute of Social Sciences, Mumbai
Dr. Reetika Khera, Indian Institute of Technology, Delhi
Dr. Ritajyoti Bandyopadhyay, Indian Institute of Science Education and Research, Mohali
S. Prasanna, Advocate
Sanjay Kumar, Science Journalist
Sharath, Software Freedom Law Centre
Shivangi Narayan, Jawaharlal Nehru University
Prof. Subhashis Banerjee, Indian Institute of Technology, Delhi
Sumandro Chattapadhyay, the Centre for Internet and Society
Dr. Usha Ramanathan, Legal Researcher
Note: This list is only indicative, and not exhaustive.
[1] Civil Appeal No. 4853 of 2014
[2] WP(C) 494/2012
[3] WP(C) 829/2013
[4] WP(C) 833/2013
[5] WP (C) 37/2015; (Earlier intervened in the Aruna Roy petition in 2013)
[6] WP (C) 932/2015
[7] Transferred from Madras HC 2013.
[8] SLP (Crl) 2524/2014 filed against the order of the Goa Bench of the Bombay HC in CRLWP 10/2014 wherein the High Court had directed UIDAI to share biometric information held by them of all residents of a particular place in Goa to help with a criminal investigation in a case involving charges of rape and sexual assault.
[9] See: http://scroll.in/article/806243/rajasthan-presses-on-with-aadhaar-after-fingerprint-readers-fail-well-buy-iris-scanners
We Truly are the Product being Sold
The change to WhatsApp’s terms of service to begin sharing user data with Facebook was effected in order to enable users to “communicate with businesses that matter” to them. (Reuters)
The article was published in the Hindustan Times on August 31, 2016.
WhatsApp clarifies in its blog post, “... by coordinating more with Facebook, we’ll be able to do things like track basic metrics about how often people use our services and better fight spam on WhatsApp. And by connecting your phone number with Facebook’s systems, Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them.”
WhatsApp further clarifies that it will not post your number on Facebook or share this data with advertisers. This means little, because it will share your number with Facebook for advertising. It is simply doing indirectly what it has said it won't do directly. This new development also leads to the collapsing of a user's different personae, even making public parts of their private life that they have so far chosen not to share online. Last week, Facebook published a list of 98 data points it collects on users. These data points, combined with your WhatsApp phone number, profile picture, status message, last-seen status, frequency of conversation with other users, and the names of these users (and their data), could lead to a severely uncomfortable invasion of privacy.
Consider a situation where you have spoken to a divorce lawyer in confidence over WhatsApp’s encrypted channel, and are then flooded with advertisements for marriage counselling and divorce attorneys when you next log in to Facebook at home. Or, you are desperately seeking loans and get in touch with several loan officers; and when you log in to Facebook at work, colleagues notice your News Feed flooded with ads for loans, articles on financial management, and support groups for people in debt.
It is no secret that Facebook makes money off interactions on its platform, and the more information that is shared and consumed, the more Facebook benefits. However, the company's complete disregard for user consent in its efforts to grow is worrying, particularly because Facebook is a monopoly. For one to talk to friends and family and keep in touch, Facebook is the obvious, if not the only, choice. It is also increasingly becoming the most accessible way to engage with government agencies. For example, Indian embassies around the world have recently set up Facebook portals, the Bangalore Traffic Police is most easily contacted through Facebook, and heads of state are also turning to the platform to engage with people. It is crucial that such private and collective interactions of citizens with their respective government agencies are protected from becoming data points to which market researchers have access.
Given Facebook’s proclivity for unilaterally compromising user privacy, the Federal Trade Commission (FTC) in 2011 charged the company with deceiving consumers by misleading them about the privacy of their information. Following these charges, Facebook reached an agreement to give consumers clear notice and obtain their express consent before sharing their information beyond the privacy settings they had established. The latest modification to WhatsApp’s terms of service seems to amount to a clear violation of this agreement, and brings out the grave need to treat user consent more seriously.
WhatsApp does outline on its blog a way to opt out of sharing data for Facebook ad targeting, but it is the best example of invasion of privacy by design. WhatsApp plans to ask users to untick a small green arrow and then click on a large green button that says "Agree" (which is the only button) in order to indicate that they are opting out. The interface of the notice seems to be consciously designed to confuse users by exploiting the power of the default option. For most users, agreeing to terms and conditions is a hasty click on a box at the end of an installation process. Predictably, most users go with the default options, and this design renders the opt-out meaningless.
In 2005, Facebook’s default profile settings were such that anyone on Facebook could see your name, profile picture, gender and network. Your photos, wall posts and friends list were viewable by people in your network. Your contact information, birthday and other data could be seen by friends and only you could view the posts that you liked. Fast forward to 2010, and the entire internet, not just all Facebook users, can see your name, profile picture, gender, network, wall posts, photos, likes, friends list and other profile data. There hasn’t been a comprehensive study since 2010, but one can safely assume that Facebook’s privacy settings will only get progressively worse for users, and exponentially better for Facebook’s revenues. The service is free and we truly are the product being sold.
Indians Ask: Is Visiting a Torrent Site Really A Crime?
The blog post was first published in Global Voices on September 5, 2016.
Netizens who regularly use these and similar services have become anxious about what the rule may mean for them. Last week, a new legal notice concerning copyright violations sparked widespread rumors that users could be penalized for simply viewing torrent sites.
The notice now appears when one visits any of the banned websites. It reads:
This URL has been blocked under the instructions of the Competent Government Authority or in compliance with the orders of a Court of competent jurisdiction. Viewing, downloading, exhibiting or duplicating an illicit copy of the contents under this URL is punishable as an offence under the laws of India, including but not limited to under Sections 63, 63-A, 65 and 65-A of the Copyright Act, 1957 which prescribe imprisonment for 3 years and also fine of upto Rs. 3,00,000/-. Any person aggrieved by any such blocking of this URL may contact at [email protected] who will, within 48 hours, provide you the details of relevant proceedings under which you can approach the relevant High Court or Authority for redressal of your grievance.
Soon after news of the notice began to circulate, the Chennai High Court (one of the oldest courts in India) issued a John Doe order to block as many as 830 websites, including several torrent websites such as thepiratebay.se, torrenthound.com, and kickasstorrents.come.in.
Indian tech news portal Medianama published a blog post arguing that it is the downloading of pirated content from certain banned websites, and not merely accessing those websites, that should lead to legal consequences. The problem, it seems, lies in the poor wording of the notice, which Medianama described as "bizarre by any rational standard" and which, taken literally, does not comply with the Indian Copyright Act. Digital piracy legislation in India, notably Sections 63, 63A and 65 of the Copyright Act, 1957, has been amended considerably in recent times, especially over the last five years, but it has not been enforced with such force in the past.
What is a torrent? A torrent is part of a system that enables peer-to-peer ("P2P") file sharing, used to distribute data and electronic files over the Internet. Known as BitTorrent, this file distribution system is one of the most common technical protocols for transferring large files, such as digital video files containing TV shows or video clips, or digital audio files containing songs. Within this system, files with the .torrent extension contain metadata about the files to be shared: their names, sizes, folder structure, and cryptographic hash values for integrity verification. They do not contain the content to be distributed, but without them, the system does not work. (via Wikipedia)
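To make the box above concrete, here is a minimal sketch in Python of reading the metadata inside a .torrent file. Torrent files use a simple encoding called bencoding; this is an illustrative decoder assuming a well-formed, single-file torrent (the path 'example.torrent' is a placeholder), not a complete or robust implementation.

```python
# A minimal bencode decoder, for illustration only: .torrent files are
# bencoded dictionaries of metadata (file names, sizes, tracker URL),
# not the shared content itself.

def bdecode(data, i=0):
    """Decode one bencoded value in `data` starting at index i.
    Returns (value, index just past the value)."""
    c = data[i:i + 1]
    if c == b'i':                            # integer: i<digits>e
        end = data.index(b'e', i)
        return int(data[i + 1:end]), end + 1
    if c == b'l':                            # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b'e':
            value, i = bdecode(data, i)
            items.append(value)
        return items, i + 1
    if c == b'd':                            # dictionary: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b'e':
            key, i = bdecode(data, i)
            d[key], i = bdecode(data, i)
        return d, i + 1
    colon = data.index(b':', i)              # byte string: <length>:<bytes>
    length = int(data[i:colon])
    return data[colon + 1:colon + 1 + length], colon + 1 + length

# 'example.torrent' is a placeholder path for any single-file torrent.
with open('example.torrent', 'rb') as f:
    meta, _ = bdecode(f.read())

info = meta[b'info']
print('name:   ', info[b'name'].decode())
print('tracker:', meta[b'announce'].decode())
print('size:   ', info.get(b'length'), 'bytes')  # absent for multi-file torrents
```

As the box notes, nothing in this metadata is the copyrighted work itself, which is part of why a blanket "viewing is an offence" notice sits so uneasily with the law.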
This is not the first time India has put a blanket ban on such sites. In December 2014, 32 websites, including the code repository GitHub, the video streaming sites Vimeo and Dailymotion, the Internet Archive, and the free software hosting site SourceForge, were banned in India. They were later unblocked after agreeing to remove some ISIS-related content.
As they have in the past, tech-savvy netizens began suggesting hacks to mask or fake one's IP address. Sumiteshwar Choudhary, a lawyer practising criminal and matrimonial law, described on Quora how the law had existed for quite some time but the government had never fully enforced it:
[..] The only reason that India has not been able to successfully ban these services is because the servers rest outside India and we don’t have any law to extend our jurisdiction to that extent today. As an end user if you download a pirated version of things you are not entitled to, you can be booked criminally under this Act and can face prison for up to 2 years…
Twitter user Prisma Mama Thakur criticized the ban, arguing that it should be a low priority at a moment when India has many other important problems to solve.
Glaring Errors in UIDAI's Rebuttal
The article was published in Economic & Political Weekly Vol. 51, Issue No. 36, September 3, 2016.
While I am not a statistician, I have followed the technical debate between Hans Verghese Mathews and the UIDAI closely, and see a number of glaring errors in the latter’s so-called rebuttal in EPW (March 12, 2016).
The UIDAI alleges that Mathews ignored the evidence that the Receiver Operating Characteristic (ROC) curve "flattens" with more factors. However, Mathews cannot be accused of ignoring it if the flattening of the ROC is not relevant to his argument. To explain in simple terms: the ROC curve is used to choose the appropriate "threshold distance", which determines false positives and false negatives, and this belongs to a stage that precedes the estimation of the false positive identification rate (FPIR).
However, Mathews used the FPIR estimates provided by the UIDAI (based on evidence from the enrolment of 84 million persons) and calculated how the FPIR changes when extrapolated to a population of 1.2 billion persons. In other words, he did not need to look at the ROC curve: it is not relevant to his argument, since he used UIDAI data (which has presumably been estimated on the basis of all 12 factors: 10 fingerprints and 2 irises).
Further, the UIDAI asks why Mathews has assumed a linear curve for his extrapolation. Mathews has done no such thing. In fact, in their paper "Role of Biometric Technology in Aadhaar Enrollment," the UIDAI states: "FPIR rate grows linearly with the database size" (nd, 19). The linearity assumption was thus first made by the UIDAI itself (without providing any rationale for the curve being linear as opposed to anything else). Mathews mathematically derives bounds for the FPIR in his paper, that is, the range within which the FPIR must lie. A linear curve arises only if one uses the upper bound. So while Mathews does, as he explains, report the calculation based on the upper bound for the sake of simplicity, he nowhere asserts or assumes a linear curve.
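For readers who want the arithmetic spelt out, the sketch below shows the kind of extrapolation at issue, with purely illustrative numbers (these are not the UIDAI's actual figures, nor Mathews' exact derivation): if each one-to-one biometric comparison has a small false match probability p, then checking a new enrolee against a gallery of N records gives FPIR(N) = 1 - (1 - p)^N, which is bounded above by the linear expression N*p.

```python
# Illustrative extrapolation of the false positive identification rate
# (FPIR) with gallery size. The rate p below is a hypothetical
# per-comparison false match probability, not a UIDAI figure.

def fpir(p, n):
    """FPIR for one enrolment checked against n records, assuming
    independent comparisons: the chance of at least one false match."""
    return 1 - (1 - p) ** n

p = 1e-11                       # hypothetical per-comparison false match rate
for n in (84e6, 1.2e9):         # pilot-scale gallery vs. full enrolment
    print(f"N = {n:12.0f}: exact FPIR = {fpir(p, n):.6f}, "
          f"linear upper bound N*p = {p * n:.6f}")
```

Even in this toy calculation the point is visible: the linear figure is an upper bound that stays close to the exact value while N*p is small, which is why reporting it "for simplicity" is not the same as assuming linearity.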
If, as the UIDAI claims, one cannot perform such an extrapolation and must depend on "empirical evidence" instead, the question arises as to how the UIDAI decided to scale the programme up to 1.3 billion people given the error rates. One could also ask whether the machines being used to capture biometrics are good enough for operation at that scale. Surely the UIDAI would have performed some extrapolations to decide this.
In their paper they note that "although it [FPIR] is expected to grow as the database size increases, it is not expected to exceed manageable values even at full enrolment of 120 crores" (UIDAI nd, 13). They do not illustrate the extent to which the FPIR is expected to grow, neither in their initial paper nor in their rebuttal to Mathews, whereas Mathews provides a method of estimating the increase in FPIR. Even if the UIDAI is correct in its appraisal that the FPIR will not exceed "manageable values", it needs either to show its calculations or to release the latest data. It has done neither, and that is quite unfortunate.
References
- Flaws in the UIDAI Process http://www.epw.in/journal/2016/9/special-articles/flaws-uidai-process.html
- Erring on Aadhaar http://www.epw.in/journal/2016/11/discussion/erring-aadhaar.html
- Request for Specifics http://www.epw.in/journal/2016/36/documents/request-specifics-rebuttal-u...
- Glaring Errors in UIDAI's Rebuttal http://www.epw.in/journal/2016/36/documents/glaring-errors-uidais-rebutt...
- Overlooking the UIDAI Process http://www.epw.in/journal/2016/36/documents/response-hans-verghese-mathe...
Internet Rights and Wrongs
The article was published in India Today on September 1, 2016. The original piece can be read here.
Over the last few weeks, there have been a number of cases of egregious censorship of websites in India. Many people started seeing notices that (incorrectly) gave the impression that they might end up in jail if they visited certain websites. These notices are neither an isolated phenomenon nor a new one. Worryingly, the higher judiciary has been drawn into these questionable moves to block websites as well.
Since 2011, numerous torrent search engines and communities have been blocked by Indian internet service providers (ISPs). Torrent search engines provide the same functionality for torrents that Google provides for websites. Are copyright-infringing materials indexed and made searchable by Google? Yes. Do we shut down Google for this reason? No. However, that is precisely what private entertainment companies have done over the past five years in India. Companies hired by the producers of the Tamil movies Singham and 3 managed to get video-sharing websites like Vimeo and Dailymotion and numerous torrent search engines blocked even before the movies were released, without showing that even a single case of copyright infringement existed on any of them. During the FIFA World Cup, Sony even managed to get Google Docs blocked. In some cases, these entertainment companies have abused 'John Doe' orders (generic orders that allow copyright enforcement against unnamed persons) and asked ISPs to block websites. The ISPs, instead of ignoring such requests as instances of private censorship, have complied. In other cases (like Sony's FIFA World Cup case), courts have ordered ISPs to block hundreds of websites without any copyright infringement being proven against them. High court judges haven't even developed a coherent theory on whether or how Indian law allows them to block websites for alleged copyright infringement. Still, they have gone ahead and blocked.
In 2012, hackers got into Reliance Communications' servers and released a list of websites blocked by it. The list contained multiple links that sought to connect Satish Seth (a group MD in the Reliance ADA Group) to the 2G scam: a clear case of secretive private censorship by RCom. Further, visiting some of the YouTube links pertaining to Satish Seth showed that they had been removed by YouTube due to dubious copyright infringement complaints filed by Reliance BIG Entertainment. Did the department of telecom, whose licences forbid ISPs from engaging in private censorship, take any action against RCom? No. Earlier this year, Tata Sky filed a complaint against YouTube in the Delhi High Court, noting that there were videos on it that taught people how to tweak their set-top boxes to get around the technological locks that Tata Sky had placed. The Delhi HC ordered YouTube "not to host content that violates any law for the time being in force", presuming that the videos in question did in fact violate Indian law. Two sections were cited: Section 65A of the Copyright Act and Section 66 of the Information Technology Act. The first explicitly allows a user to break technological locks of the kind Tata Sky has placed, for dozens of enumerated reasons (and allows a person to teach others how to do so), whereas the second requires a finding of "dishonesty" or "fraud" along with "damage to a computer system, etc.", and an intention to violate the law, none of which were found. The court effectively blocked videos on YouTube without any finding of illegality, once again siding with censorial corporations.
In 2013, Indore-based lawyer Kamlesh Vaswani filed a PIL in the Supreme Court calling for the government to undertake proactive blocking of all online pornography. Normally, a PIL is only maintainable under Article 32 of the Constitution on the basis of a violation of a fundamental right (these are listed in Part III of our Constitution). Vaswani's petition, which I have had the misfortune of reading carefully, does not at any point complain that the state is violating a fundamental right by not blocking pornography. Yet the petition seeks to curb the fundamental right to freedom of expression, since the government is in no position to determine what constitutes illegal pornography and what doesn't.
The larger problem extends to the now-discredited censor board (headed by the notorious Pahlaj Nihalani), as also the self-censorship practised on TV by the private Indian Broadcasters Federation (which even bleeps out words and phrases like 'Jesus', 'period', 'breast cancer' and 'beef'). 'Swachh Bharat' should not mean sanitising all media to be unobjectionable to the person with the lowest outrage threshold. So who will file a PIL against excessive censorship?
Services like TwitterSeva aren’t the silver bullets they are made out to be
Sunil Abraham, executive director of the Centre for Internet and Society, wrote this in response to the FactorDaily story on TwitterSeva, a special feature developed by Twitter’s India team to help citizens connect better with government services. Sunil's article in FactorDaily can be read here.
Let’s take a look at why the TwitterSeva approach is not adequate:
1. Vendor and Technology Neutrality: Providing a level playing field for competing technologies in e-governance has been a globally accepted best practice for about 15 years now. This is usually done through open standards policies and interoperability frameworks.
India does have a national open standards policy, but the National Informatics Centre (NIC) has published only one chapter of the Interoperability Framework for e-Governance.
The thing is, while Twitter might be the preferred choice of urban elites and the middle class, it might not be the choice of the millions of Indians coming online. By implicitly signalling to citizens that Twitter complaints will be taken more seriously than e-mail or SMS complaints, the government is becoming a salesperson for Twitter. Ideally, all interactions that the state has with citizens should let citizens choose which vendor and technology they would like to use. The government should have its own workflow so that it can harvest complaints, feedback and other communications from all social media platforms, be it Twitter or Identica, Facebook or Diaspora, and publish responses back onto them.
Apart from undermining the power of choice for citizens, lack of vendor and technology neutrality in government use of technology undermines the efficient functioning of a competitive free market, which is the bedrock of future innovation.
When it comes to micro-blogging, Twitter has established a near monopoly in India. There are no clear signs of harm and therefore it would not be wise to advocate that the Competition Commission of India investigate Twitter. However, if the government helps Twitter tighten its grip over the Indian market, it is preventing the next cycle of creative destruction and disruption. Therefore, e-governance applications should ideally only “loosely couple” with the APIs of private firms so that competition and innovation are protected.
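As a rough illustration of what such "loose coupling" could look like, here is a sketch in Python; all the names (Complaint, ComplaintSource, TwitterAdapter, harvest) are hypothetical, and the vendor calls are stubbed out rather than real API calls. The point is architectural: the government's workflow depends only on its own small interface, and each platform sits behind a thin, replaceable adapter.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Complaint:
    platform: str
    citizen_handle: str
    text: str

class ComplaintSource(ABC):
    """Internal interface: the rest of the system never sees a vendor API."""

    @abstractmethod
    def fetch_new_complaints(self) -> list[Complaint]:
        ...

    @abstractmethod
    def publish_response(self, complaint: Complaint, reply: str) -> None:
        ...

class TwitterAdapter(ComplaintSource):
    """Thin wrapper; the vendor's API is confined to this one class."""

    def fetch_new_complaints(self) -> list[Complaint]:
        # A real adapter would call the vendor's API here; stubbed for the sketch.
        return []

    def publish_response(self, complaint: Complaint, reply: str) -> None:
        pass  # would post the reply back through the vendor's API

def harvest(sources: list[ComplaintSource]) -> list[Complaint]:
    """Aggregate complaints from every registered platform, vendor-neutrally."""
    return [c for source in sources for c in source.fetch_new_complaints()]

# Swapping or adding platforms changes only the adapter list, not the workflow.
print(harvest([TwitterAdapter()]))
```

Under such a design, adding or dropping a platform means writing or deleting one adapter; the core e-governance workflow never locks itself to a single vendor.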
2. Holistic Approach and Accountability: Ideally, as the Electronic Service Delivery Bill 2011 had envisaged, every agency within the government was supposed to do several things within 180 days of the Bill's enactment: publish a list of services that will be delivered electronically, with a deadline for each service; commit to service-level agreements for each service and provide details of the manner of delivery; and provide an agency-level grievance redressal mechanism for citizens unhappy with the delivery of these electronic services.
Notwithstanding the 180-day commitment, the Bill required that “all public services shall be delivered in electronic mode within five years” after the enactment of the Bill with a potential three-year extension if the original deadline was not met. The Bill also envisaged the constitution of a Central Electronic Service Delivery Commission with a team of commissioners who “monitor the implementation of this Bill on a regular basis” and publish an annual report which would include “the number of electronic service requests in response to which service was provided in accordance with the applicable service levels and an analysis of the remaining cases.”
Citizens suffering from non-compliance with the provisions of the Bill and unsatisfied with the response from the agency-level grievance redressal mechanism could appeal to the Commission. The state or central commissioners, after giving the government officials an opportunity to be heard, were empowered to impose a fine of Rs 5,000.
Unlike the piecemeal approach of TwitterSeva, the Bill had a much more comprehensive and accountable plan for e-governance adoption in the country.
3. Right To Transparency: Some of the government's interactions with citizens and firms may have to be disclosed, either to the public or to a requesting party, under obligations arising from the Right to Information Act. It is therefore important that the government take its own steps for the retention of all data and records, independent of the goodwill and lifecycles of private firms.
Twitter is only 10 years old. It took just 10 years for Orkut to shut down. Maybe Twitter will shut down in the next 10 years. How then will the government comply with RTI requests? Even if the government is not keen on pushing for data portability as a right for consumers (just like mobile number portability in telecom, which lets consumers seamlessly shift between competing service providers), it absolutely should insist on data portability for all government use.
This will allow it to a) support multiple services, b) shift to competing or emerging services, and c) incrementally build its own infrastructure, while also complying with the requirements of the Right to Information Act.
4. Privacy: Unfortunately, thanks to the techno-utopians behind the Aadhaar project, the current government is infected with “data ideology.” There is an obsession with collecting as much data as possible from citizens, storing it in centralized databases and providing “dashboards” to bureaucrats and politicians. This is diametrically opposed to the view of the security community.
For example, Bruce Schneier posted on his blog in March this year (in a piece titled 'Data is a Toxic Asset'): "What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous." This idea has long been part of data protection law, starting with the 1995 EU Data Protection Directive, expressed as the principle of "Data Minimization" or "Collection Limitation". More recently, technologists and policymakers have also used the phrase "Privacy by Design". Introducing an unnecessary intermediary or gatekeeper into what are essentially transactions between citizens and the state is an egregious violation of a key privacy principle.
5. Middle Class and Elite Capture: The use of Twitter amplifies the voices of English-speaking, elite and middle-class citizens at the expense of the voices of the poor. While elites exhibit no fear when tagging police IDs and making public complaints from the comfort of their gated communities, with private security guards shielding them from the violence of the state, this can be a very intimidating option for the poor and disempowered.
While the system may not be discriminatory in its design, it will have disparate impact on different sections of our society. In other words, the introduction of TwitterSeva will exacerbate power asymmetries in our society rather than ameliorating them.
The canonical scholarly reference for this is Kate Crawford's analysis of the City of Boston's StreetBump smartphone app, which resulted in over-reporting of potholes in elite neighbourhoods and under-reporting from poor and elderly residents. This meant that efficiency in the allocation of the city's resources was only a cover for increased discrimination against the powerless.
6. Security: The most important conclusion to draw from the Snowden disclosures is that the tin-foil conspiracy theorists we used to dismiss as lunatics were correct. What has been established beyond doubt is that the United States of America is the world leader in conducting mass surveillance on netizens across the globe. It is still completely unclear how much access the NSA has to the databases of the American social media giants. When the entire police force of a state starts to use Twitter for the delivery of services to the public, it may become possible for foreign intelligence agencies to use this information to undermine our sovereignty and national security.
Internet Democratisation: IANA Transition Leaves Much to be Desired
The article was published in the Hindustan Times on October 6, 2016.
Many suspect that Washington's 2014 announcement of handing over control of the IANA contract was fuelled by the outcry following Edward Snowden's revelations of the extent of US government surveillance. Source: AFP
September 30, 2016, marked the expiration of a contract between the US government and the Internet Corporation for Assigned Names and Numbers (ICANN) to carry out the Internet Assigned Numbers Authority (IANA) functions.
In simpler, acronym-free terms, Washington’s formal oversight over the Internet’s address book has come to an end with the expiration of this contract, with control now being passed on to the “global multistakeholder community”.
ICANN was incorporated in California in 1998 to manage the backbone of the Internet, which included the domain name system (DNS), the allocation of IP addresses, and the root servers. After an agreement with the US National Telecommunications and Information Administration (NTIA), ICANN was tasked with operating the IANA functions, which include maintenance of the root zone file of the DNS. Over the years, Washington rejected calls to hand over control of the IANA functions, but in March 2014 it announced its intention to do so and laid down conditions for the handover. Many suspect the driving force behind this announcement to be the outcry following Edward Snowden's revelations of the extent of US government surveillance.
The conditions laid down by the NTIA were met, and the US government accepted the transition proposal, amidst much political pressure and opposition, most notably from Senator Ted Cruz.
This transition is a step in the right direction, but in reality it changes very little, as it fails to address two critical issues: jurisdiction and accountability.
Jurisdiction is important when considering the resolution of contractual disputes, the application of labour and competition laws, disputes regarding ICANN's decisions, consumer protection, financial transparency, and so on. Many of these questions, although not all, will depend on where ICANN is located. ICANN's new bylaws mention that it will continue to be incorporated in California and subject to California law, just as it was pre-transition. Having the DNS subject to the laws of a single country can only add to its fragility. ICANN's US jurisdiction also means that it is not free from political pressure from the US Senate and, in turn, from the toxic effects of American party politics that were on display in the events leading up to September 30.
Another critical issue that the transition does not address is that of ICANN accountability. Post-transition, ICANN’s board will continue to be the ultimate decision-making authority, thus controlling the organisation’s functioning, and ICANN staff will be accountable to the board alone.
To put things in perspective, look at the board's track record in the recent past. In August, an Independent Review Panel (IRP) found that ICANN's board had violated ICANN's own bylaws and failed to discharge its transparency obligations when it declined to look into staff misbehaviour. Following this, in September, ICANN decided to respond to these allegations of mismanagement, opacity and lack of accountability by launching a review. The review, however, would not look into the issues, failures and false claims of the board, but would instead focus on the process by which ICANN staff were able to engage in such misbehaviour. Ironically, this will take the form of an internal review that passes through ICANN staff, the subjects of the investigation, before being taken up by the board.
At best, the transition is symbolic of Washington’s oversight over ICANN coming to an end. It is also symbolic of the empowerment of the global multistakeholder community. In reality, it fails to do either meaningfully.
IANA Transition: A Case of the Emperor’s New Clothes?
The post was published by Digital Asia Hub on October 6, 2016.
In March 2014, the US Government committed to ending its contract with ICANN to run the Internet Assigned Numbers Authority (IANA), and also announced that it would hand over control to the “global multistakeholder community”. The conditions for this handover were that the changes must be developed by stakeholders across the globe, with broad community consensus.
Further, it was indicated that any proposal must support and enhance the multistakeholder model; maintain the security, stability and resiliency of the Internet Domain Name System (DNS); meet the needs and expectations of the global customers and partners of the IANA services; and maintain the openness of the Internet. Finally, it must not replace the NTIA role with a solution led by a government or an inter-governmental organisation.
These conditions were met, ICANN's Supporting Organisations (SOs) and Advisory Committees (ACs) accepted transition proposals, and these proposals were then accepted by the ICANN Board as well, putting the transition in motion. But not quite: the "global multistakeholder community" still had to wait for approval from the NTIA and the US government, both of which eventually approved the proposal. The latter's approval came after considerable uncertainty caused by Senator Ted Cruz's efforts to stop the transition, rooted in his belief that it amounted to the US government handing over control of the internet to foreign governments. Notwithstanding this, on 29 September the US Senate passed a short-term bill to keep the US government funded till the end of the year, without a rider on the IANA transition. The next hurdle was a lawsuit filed in a federal court in Texas by the attorneys general of four states to stop the handover of the IANA contract. On 30 September, the court denied the plaintiffs' application, allowing the transition to proceed.
What does this transition mean? What does it change? The transition, while a welcome step, leaves much to be desired in terms of tangible change, primarily because it fails to address the most important question: that of ICANN's jurisdiction. It is important to have the Internet's core Domain Name System (DNS) functioning free from the pressures and control of a single country or even a few countries; the transition does not ensure this, as the Post-Transition IANA entity (PTI) will be under Californian jurisdiction, just as ICANN was pre-transition. The entire ICANN community has been witness to a single American political figure almost derailing its meticulous efforts simply because he could, and in many ways these events cemented the importance of diversity in the legal jurisdiction of ICANN, the PTI and the root zone maintainer.
My colleague Pranesh Prakash has identified 11 reasons why the question of jurisdiction is important to consider during the IANA transition. Some of these issues depend on where ICANN, the PTI and the root zone maintainer are situated, some depend on the location of the office in question, and still others depend on the contracts that ICANN enters into. ICANN's new bylaws state that it will be situated in California, and the post-transition IANA entity's bylaws also make Californian jurisdiction integral to its functioning. As an alternative, the Centre for Internet & Society has called for the "jurisdictional resilience" of ICANN, encompassing three crucial points: legal immunity for core technical operators of Internet functions (as opposed to policymaking venues) from legal sanctions or orders of the state in which they are legally situated; division of core Internet operators among multiple jurisdictions; and jurisdictional division of policymaking functions from technical implementation functions.
Transparency is also key to engaging meaningfully with ICANN. CIS has filed the largest number of Documentary Information Disclosure Policy (DIDP) requests with ICANN, covering a range of subjects including its relationships with contracted parties, financial disclosure, revenue statements, and harassment policies. Asvatha Babu, an intern at CIS, analysed all the responses to our requests and found that only 14% were answered fully; 40% had no relevant information disclosed at all (these were mostly to do with complaints and contractual compliance). To illustrate the importance of engaging with ICANN on transparency: CIS has focused on understanding ICANN's sources of income since 2014, because we believe that conflicts of interest can only be properly understood by following the money in a granular fashion. This information was not publicly available, and in fact it seemed that ICANN did not know where it got its money from either. It is only through the DIDP process that we were able to get ICANN to disclose its sources of income, and the figures along with those sources, for a single financial year.
ICANN prides itself on being transparent and accountable, but in reality it is not. The exception most often used to avoid answering DIDP requests has been "Confidential business information and/or internal policies and procedures", which is in itself a testament to ICANN's opacity. Another condition for non-disclosure allows ICANN to reject "Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual." These exemptions are not only vague but also extremely subjective: again, demonstrative of the need for enhanced accountability and transparency within ICANN. Key issues have not been addressed even as the transition formally gets underway. The grounds for denying DIDP requests remain vague and wide, effectively giving ICANN the discretion to decline to answer difficult questions, which is unacceptable from an entity at the centre of the multi-billion-dollar domain name industry.
ICANN’s jurisdictional resilience and enhanced accountability are particularly vital for countries in Asia. Its policies, processes and functioning have historically been skewed towards western and industry interests, and ICANN can neither be truly global nor multistakeholder till such countries can engage meaningfully with it in a transparent fashion. The IANA transition is, of course, largely political, and may symbolise a transition to the global multistakeholder community, but in reality, it changes very little, if anything.
MLATs and the proposed Amendments to the US Electronic Communications Privacy Act
Published under Creative Commons License CC BY-SA. Anyone can distribute, remix, tweak, and build upon this document, even for commercial purposes, as long as they credit the creator of this document and license their new creations under the terms identical to the license governing this document.
In the previous article on MLATs we discussed, in some detail, what MLATs are and why they are needed. One area briefly covered in that article was the limitations and criticisms of the MLAT mechanism, chief among them the problems caused by differing legal standards across jurisdictions and the time taken to process a request for information sent from one country to another. Looking specifically at the United States, where most internet companies are headquartered and hold large amounts of data, it typically takes months to process requests under MLATs, and foreign governments often struggle to comprehend and comply with US legal standards for obtaining data for use in their investigations.[1] The requirement that a foreign government seek permission from, and comply with the legal requirements of, another country simply because the data needed happens to be controlled by a service provider based there strikes many foreign law enforcement officials as damaging to security and law enforcement efforts, especially when they are requesting data pertaining to a crime between two of their own citizens that primarily took place on their own soil.[2]
These inefficiencies of the MLAT process lead to further problems, with foreign governments attempting to apply their search and surveillance laws extraterritorially. For example, in 2014 the UK passed the Data Retention and Investigatory Powers Act, 2014, which gives the government the power to directly access data from foreign service providers if it is sought for specific purposes and the request is approved by the Secretary of State or another specified executive branch official.[3] Another possible response is that courts in foreign states, frustrated by the inefficiencies of the existing systems, start assuming extraterritorial jurisdiction, as happened when a District Court in Vishakhapatnam restrained Google from complying with a subpoena issued by the Superior Court of California ordering Google to share the password of the Gmail account belonging to an Indian citizen residing in Vishakhapatnam.[4]
Solution proposed in the United States
In order to overcome these inefficiencies, at least in the American context, the Department of Justice has proposed legislation that seeks to streamline the process by which foreign governments obtain information from US-based entities, by amending the provisions of the Electronic Communications Privacy Act (ECPA) of the United States (the "Amendment"). These amendments have been proposed primarily to effectuate a proposed bilateral agreement between the US and the UK, whereby the UK government would be able to approach US companies directly with requests for information without going through the MLAT process or getting an order from a US court.
The Amendment seeks to ensure that foreign governments' requests for information from US entities are answered smoothly, by bringing those requests within the process for seeking information under the ECPA itself. This move would, no doubt, make it easier for foreign governments to access data in the US, but it can be criticised on the ground that it would then allow all states, irrespective of their legal standards on privacy, to get access to such information. The Amendment addresses this problem by adding a new section to Title 18 which would allow the Attorney General, with the concurrence of the Secretary of State, to certify to Congress that the legal standards of the contracting state being given access to the ECPA mechanism satisfy certain requirements specified in the chapter (discussed below). Only after Congress has received such a certification would a contracting state be able to receive the benefits granted under the Amendment.
It is important to note that the US administration is looking to use the US-UK Agreement as a template for similar potential agreements with a number of other countries, under which agencies in those countries could request information from US-based entities through court orders within a properly specified legal framework. Though to our knowledge India has not been formally approached by the US government to enter into such an agreement, it is important to ask, if India were approached:
- Does India's present legal system meet the standards laid down in the amendment to the ECPA?
- And if they do, should India also seek to enter into such an Agreement with the United States?
- And if India does, what could the implications be for citizens and for countries in a position similar to India's?
We hope to be able to answer the above three questions, or at least throw some light on them, in the conclusion of this paper by relying upon the discussions contained herein.
Criticisms of the Amendment
While such a mechanism may be very effective in addressing the needs of security agencies in the investigation and prevention of criminal activities, one cannot accept such an overarching change in cross-border enforcement without analysing the consequences that such a proposal would have on the right to privacy. Some of these consequences have been highlighted by experts responding to the Amendment:
Lack of Judicial Authorisation: The Amendment requires only that foreign governments have a process whereby a person can seek post-disclosure review by an independent entity, instead of a warrant issued by a court.[5] A court order is not the norm for interception even under Indian law; under American law, however, such protection currently extends to data held by American companies even when the data belongs to Indian citizens, and this protection will no longer be available if the Amendment is passed.
Vague Standard for Requests: Under the domestic law of any state there is usually a large body of jurisprudence on when search orders can be issued, such as the "probable cause" standard followed in the United States or similar standards in other jurisdictions. This ensures that even when the wording of the law is not precise, which it cannot be for such a subjective issue, there is still some clarity about when and under what circumstances such warrants may be issued. In contrast, the Amendment requires that orders be based on "requirements for a reasonable justification based on articulable and credible facts, particularity, legality, and severity regarding the conduct under investigation." The language may seem reasonable, but in the absence of any jurisprudence backing it, it is vague and susceptible to misuse.
Disclosure without a Warrant: Under the current MLAT process as followed in the United States, a judge in the US must issue a warrant based on probable cause before a US company turns over content to a foreign government. This requirement protects individuals abroad by requiring their governments to meet certain standards when seeking information held by US companies. The Amendment seeks to remove this essential safeguard of a judicial warrant: it does not require requests from foreign governments to be based on prior judicial authorisation, since a large number of countries (including India) do not always require judicial orders for such requests.[6]
Allows Real Time Surveillance by Foreign Governments: American privacy rights activists have raised the concern that the Amendment would allow foreign governments to conduct ongoing surveillance by asking American companies to turn over data in real time. The requirements that foreign governments would have to fulfil to execute such an order are less stringent than those imposed on American security agencies engaging in similar activities. When the US government wants to conduct real-time surveillance, it must comply with the Wiretap Act, which imposes heightened privacy protections.[7] The court orders for this purpose also require minimisation of irrelevant information, are strictly time-limited, are available only for certain serious crimes, and so on.[8] Under Indian law, any such request, apart from being time-limited and available only for certain specified purposes, must also satisfy the requirement that interception is the only reasonable means of acquiring the information.
Process to determine which countries can make demands is not credible: Under the Amendment, the Attorney General and the Secretary of State would decide whether the laws and practices of a foreign government adequately meet the standards set forth in the legislation for entering into a bilateral agreement. Their decisions would not be subject to review by a court or through any administrative procedure. They could make their determinations based on information not available to the public, and the criteria for making the decision are vague and flexible. Further, these criteria are described as "factors" and not "requirements",[9] so that even if some of them are not satisfied, the certification process can still be completed.
Companies do not have the resources to determine if a request complies with the terms of the agreement: The Amendment does not provide any oversight to ensure that technology companies turn over only the information permitted by a specific bilateral agreement. For example, a bilateral agreement may permit disclosure of information only in response to orders that do not discriminate on the basis of religion; however, it may not be possible for the companies receiving a request to determine whether it complies with that condition. The Amendment does not require individual companies to put in place processes to weed out requests that are non-compliant with the provisions of the agreement; nor are there periodic audits to ensure that companies are properly responding to foreign government information requests.[10]
Non-compliance with Human Rights Standards: Under international human rights law, governments may conduct surveillance only where it is based on individualised and sufficient suspicion; authorised by an independent and impartial decision-maker; and necessary and proportionate to achieve a legitimate aim, including by being the least intrusive means possible.[11] The mechanism proposed by the Amendment falls woefully short of these standards.[12]
One must not lose sight of the fact that most of the criticisms discussed above have been made in the context of, and based on, the standards of privacy protection available to American citizens. From an Indian perspective, most of those protections are not available to Indian citizens in any case, since independent judicial oversight is not a sine qua non for access to information by the security agencies in India. Although the Amendment leaves the question of how a foreign government would make a request to the individual agreements, it may be safe to assume that, were India to enter into such an agreement with the United States, the orders for access would have to comply with the standards laid down under Indian law before the relevant authorities send the request to the US-based data controllers. At the least, this would ensure that the rights of Indian citizens currently guaranteed under Indian law, howsoever flawed they might be, would in all likelihood be safeguarded.
Certification from the Attorney General to the US Congress
Against this background, if India were to enter into such an agreement with the US government, then apart from actually negotiating and signing that Agreement, the Indian government would also have to ensure (if the Amendment is passed) that the Attorney General of the United States, with the concurrence of the Secretary of State, certifies to Congress that Indian law satisfies the requirements set forth in the proposed section XXXX of Title 18.
It must be kept in mind that if the negotiations between India and the United States in this regard reach such a mature stage that the Attorney General's certification is required, there would clearly be enough political will on both sides to ensure that the arrangement actually comes to fruition. In this context it would not be unfair to assume that the Attorney General may have a slight bias towards opining that Indian laws do conform to the requirements of the Amendment, as the Attorney General would want to support the decision taken by the administration; our analysis adopts a similar bias in order to be more contextual.
The certification would, inter alia, contain the determination of the Attorney General:
- That the domestic law of India affords robust substantive and procedural protections for privacy and civil liberties in light of the data collection and activities of the Indian government that will be subject to the agreement. It should be noted that the Amendment specifies various factors to be taken into account in reaching such a determination, including whether the Indian government:
(i) has adequate substantive and procedural laws on cybercrime and electronic evidence, as demonstrated through accession to the Budapest Convention on Cybercrime, or through domestic laws that are consistent with the definitions and requirements set forth in Chapters I and II of that Convention;
Although India is not a signatory to the Budapest Convention, the Information Technology Act, 2000 (the main legislation dealing with cybercrime) contains penal provisions that borrow heavily from the provisions of the Budapest Convention.
- demonstrates respect for the rule of law and principles of nondiscrimination;
The provisions of Articles 14 and 21 of the Constitution of India demonstrate that the legal regime in India is committed to the rule of law and the principles of non-discrimination.
- adheres to applicable international human rights obligations and commitments or demonstrates respect for international universal human rights (including but not limited to protection from arbitrary and unlawful interference with privacy; fair trial rights; freedoms of expression, association and peaceful assembly; prohibitions on arbitrary arrest and detention; and prohibitions against torture and cruel, inhuman, or degrading treatment or punishment);
India is a signatory to a number of international human rights conventions and treaties: it has acceded to the International Covenant on Civil and Political Rights (ICCPR), 1966 and the International Covenant on Economic, Social and Cultural Rights (ICESCR), 1966; ratified the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), 1965, with certain reservations; signed the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), 1979, with certain reservations, and the Convention on the Rights of the Child (CRC), 1989; and signed the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT), 1984. Further, the right to life guaranteed under Article 21 of the Constitution takes within its fold a number of human rights, such as the right to privacy. Freedom of expression, the right to a fair trial, freedom of assembly, and the right against arbitrary arrest and detention are all fundamental rights guaranteed under the Constitution of India.
- has clear legal mandates and procedures governing those entities of the foreign government that are authorized to seek data under the executive agreement, including procedures through which those authorities collect, retain, use, and share data, and effective oversight of these activities;
India has a number of laws governing interception and requests for information, such as the Information Technology Act, 2000, the Indian Telegraph Act, 1885, and the Code of Criminal Procedure, 1973, which put in place mechanisms governing the authorities and entities that can ask for information.
- has sufficient mechanisms to provide accountability and appropriate transparency regarding the government’s collection and use of electronic data; and
The Right to Information Act, 2005 provides citizens the right to access any public document unless access is prohibited under the specific exemptions provided in the Act. It may be noted that the provisions of the Right to Information Act are often frustrated by the bureaucracy through exceptions such as "national security"; but for the purposes of this write-up we are already assuming a bias towards fulfilment of these factors/conditions, and therefore, as long as there is even some evidence of compliance, the conditions will be considered fulfilled by the Attorney General for the purposes of his certificate.
- demonstrates a commitment to promote and protect the global free flow of information and the open, distributed, and interconnected nature of the Internet.
The Telecom Regulatory Authority of India, which regulates telecom services in India, has issued the Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016, which prohibit service providers from charging discriminatory tariffs for data services on the basis of content.
Apart from Indian law, the Attorney General's certificate will also have to address certain issues that must be covered in the bilateral agreement itself, viz.:
- That the Indian government has adopted appropriate procedures to minimize the acquisition, retention, and dissemination of information concerning United States persons subject to the agreement.
- That the agreement requires the following with respect to orders subject to the agreement:
(i) The Indian government may not intentionally target a United States person or a person located in the United States, and must adopt targeting procedures designed to meet this requirement;
(ii) The Indian government may not target a non–United States person located outside the United States if the purpose is to obtain information concerning a United States person or a person located in the United States;
(iii) The Indian government may not issue an order at the request of or to obtain information to provide to the United States government or a third-party government, nor shall the Indian government be required to share any information produced with the United States government or a third-party government;
(iv) Orders issued by the Indian government must be for the purpose of obtaining information relating to the prevention, detection, investigation, or prosecution of serious crime, including terrorism;
(v) Orders issued by the Indian government must identify a specific person, account, address, or personal device, or any other specific identifier as the object of the Order;
(vi) Orders issued by the Indian government must be in compliance with the domestic laws of India, and any obligation for a provider of an electronic communications service or a remote computing service to produce data shall derive solely from Indian law;
(vii) Orders issued by the Indian government must be based on requirements for a reasonable justification based on articulable and credible facts, particularity, legality, and severity regarding the conduct under investigation;
(viii) Orders issued by the Indian government must be subject to review or oversight by a court, judge, magistrate, or other independent authority;
(ix) Orders issued by the Indian government for the interception of wire or electronic communications, and any extensions thereof, must be for a fixed, limited duration; interception may last no longer than is reasonably necessary to accomplish the approved purposes of the order; and orders may only be issued where that same information could not reasonably be obtained by another less intrusive method;
(x) Orders issued by the Indian government may not be used to infringe freedom of speech;
(xi) The Indian government must promptly review all material collected pursuant to the agreement and store any unreviewed communications on a secure system accessible only to those trained in applicable procedures;
(xii) The Indian government must segregate, seal, or delete, and not disseminate material found not to be information that is, or is necessary to understand or assess the importance of information that is, relevant to the prevention, detection, investigation, or prosecution of serious crime, including terrorism, or necessary to protect against a threat of death or serious bodily harm to any person;
(xiii) The Indian government may not disseminate the content of a communication of a U.S. person to U.S. authorities unless the communication (a) may be disseminated pursuant to Section 4(a)(3)(xii) and (b) relates to significant harm, or the threat thereof, to the United States or U.S. persons, including but not limited to crimes involving national security such as terrorism, significant violent crime, child exploitation, transnational organized crime, or significant financial fraud;
(xiv) The Indian government must afford reciprocal rights of data access to the United States government;
(xv) The Indian government must agree to periodic review of its compliance with the terms of the agreement by the United States government; and
(xvi) The United States government must reserve the right to render the agreement inapplicable as to any order for which it concludes the agreement may not properly be invoked.
Conclusion
It is clear from the discussion above that the proposed Amendment is a controversial piece of legislation which will affect the way law enforcement is carried out on the internet. While there is no doubt that an alternative to the existing, inefficient MLAT structure is the need of the hour, whether the mechanism in the proposed Amendment, with all its negative implications for privacy, is the right way forward is far from certain.
As for the three questions that we set out to answer at the beginning of this paper, we would not go so far as to say that Indian law definitely conforms to all the requirements listed in the Amendment, but it can safely be said that, if the governments of India and the United States so wish, it would not be difficult for the Attorney General of the United States to give Congress the certification required under the proposed Amendment.
The other two questions, whether India should opt for such an arrangement if given the chance and what the consequences would be for its people, are related, in the sense that it is only by examining the consequences for its citizens that we can decide whether India should opt for such an arrangement. The level of protection offered to Indian citizens under Indian law, in terms of shielding their private data from government surveillance, is lower than that offered to American citizens under American law. The growing influence of the internet is changing the citizen-state dynamic, giving rise to increasing situations where the government has to approach private actors in order to carry out its governmental function of providing security. This is because more and more private data of individual citizens is being uploaded to the internet and controlled by private actors such as telecom companies, social media sites, etc., and governments have to approach these private actors when they want access to this information. The fact that the government has to approach private actors for access to data gives private citizens some leverage to ask for better privacy protections in the context of state surveillance.
Although this proposed Amendment may not affect local surveillance laws in India, it would definitely affect the way citizens' data is protected and accessed by the government.
[1] Explanation by the Assistant Attorney General attached to the proposed Amendment.
[2] https://www.justsecurity.org/24145/u-s-u-k-data-sharing-treaty/
[3] https://www.justsecurity.org/24145/u-s-u-k-data-sharing-treaty/
[4] http://spicyip.com/2012/04/clash-of-courts-indian-district-court.html
[5] https://www.justsecurity.org/32529/foreign-governments-tech-companies-data-response-jennifer-daskal-andrew-woods/
[6] https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing
[7] https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing
[8] https://www.justsecurity.org/32529/foreign-governments-tech-companies-data-response-jennifer-daskal-andrew-woods/
[9] https://www.justsecurity.org/32529/foreign-governments-tech-companies-data-response-jennifer-daskal-andrew-woods/
[10] https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing
[11] International Covenant on Civil and Political Rights, art. 17, Dec. 19, 1966, U.N.T.S 999, cf. https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing
[12] https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing
RBI Directions on Account Aggregators
These days, people have access to a wide range of financial services and deal with a large number of financial service providers, each offering one or more services such as banking, credit cards or investments. This multiplicity of providers can make it inconvenient for users to keep track of their finances, since the information is not all available in one place. Account aggregators seek to solve this problem by presenting all of a user's financial data in a single place. Account aggregation is the consolidation of online financial account information (e.g., from banks, credit card companies, etc.) for online retrieval at one site. In a typical arrangement, an intermediary (e.g., a portal) agrees with a third-party service provider to provide the service to consumers; the intermediary then generally private-labels the service and offers consumers access to it at the intermediary's website.[1] There are two major ways in which account aggregation takes place, sketched in the example below: (i) direct access, where the account aggregator gets direct access to the user's data residing in the computer systems of the financial service provider; and (ii) scraping, where the user gives the account aggregator the username and password for its accounts with the different financial service providers, and the aggregator scrapes the information off their websites or portals.
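As a rough illustration of the two modes, here is a minimal Python sketch. Every endpoint, field and credential below is a hypothetical placeholder, not a real interface; and, as discussed later, the Directions effectively rule out the credential-based scraping mode.

```python
# Illustrative sketch of the two aggregation modes described above.
# All URLs, parameters and field names are hypothetical assumptions.
import requests

def fetch_via_direct_access(provider_api_base: str, consent_token: str) -> dict:
    """'Direct access': the provider exposes an API and releases data
    against a consent token, never seeing the user's login credentials."""
    resp = requests.get(
        f"{provider_api_base}/accounts/summary",
        headers={"Authorization": f"Bearer {consent_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def fetch_via_scraping(portal_login_url: str, username: str, password: str) -> str:
    """'Scraping': the aggregator logs in *as the user* and parses the
    portal's HTML. The Directions bar aggregators from holding user
    credentials, which effectively forecloses this mode."""
    session = requests.Session()
    session.post(portal_login_url, data={"user": username, "pass": password}, timeout=10)
    page = session.get(portal_login_url.replace("/login", "/statement"), timeout=10)
    return page.text  # would then be parsed for balances, transactions, etc.
```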
Since account aggregation involves the use and exchange of financial information, it carries a number of potential risks, such as (i) loss of passwords; (ii) fraud; and (iii) security breaches at the account aggregator. It is for this reason that, on the advice of the Financial Stability and Development Council,[2] the Reserve Bank of India (“RBI”) felt the need to regulate this sector, and on September 2, 2016 it issued the Non-Banking Financial Company - Account Aggregator (Reserve Bank) Directions, 2016 to provide a framework for the registration and operation of Account Aggregators in India (the “Directions”). The Directions provide that no company shall be allowed to undertake the business of account aggregation without being registered with the RBI as an NBFC-Account Aggregator. The Directions also specify the conditions that have to be fulfilled for an entity to be considered an Account Aggregator, such as:
- the company should have a net owned fund of not less than rupees two crore, or such higher amount as the Bank may specify;
- the company should have the necessary resources and wherewithal to offer account aggregator services;
- the company should have adequate capital structure to undertake the business of an account aggregator;
- the promoters of the company should be fit and proper individuals;
- the general character of the management or proposed management of the company should not be prejudicial to the public interest;
- the company should have a plan for a robust Information Technology system;
- the company should not have a leverage ratio of more than seven;
- the public interest should be served by the grant of certificate of registration; and
- any other condition that may be specified by the Bank from time to time.[3]
The Directions further set out the responsibilities of Account Aggregators, specifying duties such as: (a) providing services to a customer based on the customer’s explicit consent; (b) ensuring that the provision of services is backed by appropriate agreements/authorisations between the Account Aggregator, the customer and the financial information providers; (c) ensuring proper customer identification; (d) sharing the financial information only with the customer or any other financial information user specifically authorised by the customer; and (e) having a Citizen's Charter explicitly guaranteeing protection of the rights of a customer.[4]
Account Aggregators are also prohibited from undertaking certain activities, such as: (a) supporting transactions by customers; (b) undertaking any business other than that of account aggregation; (c) keeping or “residing” with itself the financial information of the customer accessed by it; (d) using the services of a third party for undertaking its business activities; (e) accessing user authentication credentials of customers; and (f) disclosing or parting with any information that it may come to acquire from or on behalf of a customer without the customer's explicit consent.[5] The prohibition on accessed information actually residing with the Account Aggregator should ensure greater security and protection of that information.
Consent Framework
The Directions specify that the function of obtaining, submitting and managing the customer’s consent should be performed strictly in accordance with the Directions and that no information shall be retrieved, shared or transferred without the explicit consent of the customer.[6] The consent is to be taken in a standardized artefact, which can also be obtained in electronic form,[7] and shall contain details as to (i) the identity of the customer and optional contact information; (ii) the nature of the financial information requested; (iii) purpose of collecting the information; (iv) the identity of the recipients of the information, if any; (v) URL or other address to which notification needs to be sent every time the consent artefact is used to access information; (vi) Consent creation date, expiry date, identity and signature/ digital signature of the Account Aggregator; and (vii) any other attribute as may be prescribed by the RBI.[8] The account aggregator is required to inform the customer of all the necessary attributes to be contained in the consent artefact as well as the customer’s right to file complaints with the relevant authorities.[9] The customers shall also be provided an option to revoke consent to obtain information that is rendered accessible by a consent artefact, including the ability to revoke consent to obtain parts of such information.[10]
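To make the attribute list above concrete, here is a minimal sketch of what a consent artefact carrying those details might look like. The JSON layout and field names are illustrative assumptions on our part, not the schema prescribed by the RBI.

```python
# A minimal, hypothetical consent artefact carrying the attributes listed
# in the Directions. Field names and values are illustrative placeholders.
import json
from datetime import datetime, timedelta

consent_artefact = {
    "customer": {"id": "customer@aa-id", "contact": "optional@example.com"},       # (i)
    "financial_information": ["deposit-account-statement"],                        # (ii)
    "purpose": "Loan underwriting by the requesting financial information user",   # (iii)
    "recipients": ["hypothetical-lender-fiu"],                                     # (iv)
    "notification_url": "https://example.com/consent-usage-alerts",                # (v)
    "created": datetime.utcnow().isoformat(),                                      # (vi)
    "expiry": (datetime.utcnow() + timedelta(days=365)).isoformat(),               # (vi)
    "aggregator_signature": "<digital signature of the Account Aggregator>",       # (vi)
    "revoked": False,  # the customer may revoke consent, wholly or in part (Clause 6.6)
    # (vii) any further attributes the RBI prescribes would be added here
}
print(json.dumps(consent_artefact, indent=2))
```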
Comments: While the Directions have specific provisions on how financial data shall be dealt with, it is pertinent to note that the consent artefact itself also contains personal information, and it is not clear whether Account Aggregators are allowed to disclose that information to third parties or not.
Disclosure and sharing of financial information
Financial information providers such as banks, mutual funds, etc. are allowed to share information with account aggregators only upon being presented with a valid consent artefact, and they also have the responsibility to verify the consent as well as the credentials of the account aggregator.[11] Once the verification is done, the financial information provider shall digitally sign the financial information and transmit it to the Account Aggregator in a secure manner in real time, as per the terms of the consent.[12] In order to ensure a smooth flow of data, the Directions also impose obligations on financial information providers (a minimal signing sketch follows the list below) to:
- implement interfaces that will allow an Account Aggregator to submit consent artefacts, and authenticate each other, and enable secure flow of financial information;
- adopt means to verify the consent including digital signatures;
- implement means to digitally sign the financial information; and
- maintain a log of all information sharing requests and the actions performed pursuant to such requests, and submit the same to the Account Aggregator.[13]
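The verification and signing steps above can be illustrated with a short sketch. This is a minimal example assuming Ed25519 keys and the Python cryptography library; real deployments would use the key formats, transport security and message schemas specified by the regulator, none of which are reproduced here.

```python
# A minimal sketch, assuming Ed25519 keys, of the two cryptographic steps
# the Directions require of a financial information provider: verifying
# the signature on a consent artefact, then signing the financial
# information before transmitting it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# In practice these keys would come from the parties' key-management
# systems; they are generated on the fly here purely for illustration.
aggregator_key = Ed25519PrivateKey.generate()
provider_key = Ed25519PrivateKey.generate()

consent_artefact = b'{"customer": "...", "purpose": "...", "expiry": "..."}'
artefact_signature = aggregator_key.sign(consent_artefact)

def provider_respond(artefact: bytes, signature: bytes,
                     aggregator_pub: Ed25519PublicKey) -> bytes:
    """Verify the consent artefact's signature, then sign the payload."""
    try:
        aggregator_pub.verify(signature, artefact)  # raises if tampered with
    except InvalidSignature:
        raise PermissionError("consent artefact failed verification")
    financial_information = b'{"account": "XX1234", "balance": "..."}'
    # The provider signs the data so the recipient can check its integrity.
    return provider_key.sign(financial_information) + financial_information

signed_payload = provider_respond(
    consent_artefact, artefact_signature, aggregator_key.public_key()
)
```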
Comments: The Directions provide that the Account Aggregator will not support any transactions by customers, which seems to suggest that, in case of any mistakes in the information, the customer would have to approach the financial information provider and not the Account Aggregator.
Use of Information
The Directions provide that in cases where financial information has been provided by a financial information provider to an Account Aggregator for transferring the same to a financial information user with the explicit consent of the customer, the Account Aggregator shall transfer the same in a secure manner in accordance with the terms of the consent artefact only after verifying the identity of the financial information user.[14] Such information, as well as information which may be provided for transferring to the customer, shall not be used or disclosed by the Account Aggregator or the Financial Information user except as specified in the consent artefact.[15]
Data Security
The Directions specify that the business of an Account Aggregator will be entirely Information Technology (IT) driven, and Account Aggregators are required to adopt the IT frameworks and interfaces needed to ensure secure data flows from the financial information providers to their own systems and onwards to the financial information users.[16] This technology should also be scalable to cover any other financial information or financial information providers that the RBI may specify in the future.[17] The IT systems should have adequate safeguards to ensure they are protected against unauthorised access, alteration, destruction, disclosure or dissemination of records and data.[18] An Information System Audit of the internal systems and processes should be in place and be conducted at least once every two years by CISA-certified external auditors, whose report is to be submitted to the RBI.[19] Account Aggregators are prohibited from asking for or storing customer credentials (such as passwords, PINs or private keys) that could be used to authenticate customers to the financial information providers; their access to customers' information is to be based only on consent-based authorisation, ruling out scraping.[20]
Grievance Redressal
The Directions require the Account Aggregator to put in place a policy for handling/ disposal of customer grievances/ complaints, which shall be approved by its Board and also have a dedicated set-up to address customer grievances/ complaints which shall be handled and addressed in the manner prescribed in the policy.[21] The Account Aggregator also has to display the name and details of the Grievance Redressal Officer on its website as well as place of business.[22]
Supervision
The Directions require Account Aggregators to put in place various internal checks and balances to ensure that their business does not violate any laws or regulations, such as the constitution of an Audit Committee, a Nomination Committee to ensure the “fit and proper” status of its Directors, and a Risk Management Committee, along with the establishment of a robust and well documented risk management framework.[23] The Risk Management Committee is required to (a) give due consideration to factors such as reputation, customer confidence, consequential impact and legal implications with regard to investment in controls and security measures for computer systems, networks, data centres, operations and backup facilities; and (b) have oversight of technology risks and ensure that the organisation’s IT function is capable of supporting its business strategies and objectives.[24] Further, the RBI has the power to inspect any Account Aggregator at any time.[25]
Penalties
The Directions themselves do not provide for any penalties for non-compliance. However, since the Directions are issued under Section 45JA of the Reserve Bank of India Act, 1934 (“RBI Act”), any contravention of them is punishable under Section 58B of the RBI Act, which provides for imprisonment of up to three years as well as a fine.
Conclusion
The RBI's Directions impose a number of regulations and checks on Account Aggregators with a view to ensuring the safety of customer financial data. The Directions appear to be quite trendsetting, in the sense that most other jurisdictions, such as the United States and Europe, have no specific regulations governing Account Aggregators; their activities are instead governed under existing privacy or consumer protection legislation.[26]
The entire regulatory regime for Account Aggregators seems to suggest that the RBI wants Account Aggregators to be like funnels to channel information from various platforms right to the customer (or financial information user) and it does not want to take a chance with the information actually residing with the Account Aggregators. Further, by prohibiting Account Aggregators from accessing user authentication credentials, the RBI is trying to eliminate the possibility of this information being leaked or stolen. Although this may make it more onerous for Account Aggregators to provide their services, it is a great step to ensure the safety and security of customer data.
In recent months the RBI has been actively engaging with the various new products being introduced in the financial sector as a result of technological advances, be it the circular informing the public about the risks of virtual currencies including Bitcoin, the consultation paper on P2P lending platforms, or these guidelines on Account Aggregators. These actions suggest that the RBI is well aware of technological developments in the financial sector and is keeping a keen eye on these technologies and products, while taking a cautious and measured approach to dealing with them.
[1] Ann S. Spiotto, Financial Account Aggregation: The Liability Perspective, Fordham Journal of Corporate & Financial Law, 2006, Volume 8, Issue 2, Article 6, available at http://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1181&context=jcfl
[2] https://rbi.org.in/scripts/BS_PressReleaseDisplay.aspx?prid=34345
[3] Clause 4.2.2 of the Directions.
[4] Clause 5 of the Directions.
[5] Clause 5 of the Directions.
[6] Clauses 6.1 and 6.2 of the Directions.
[7] Clause 6.4 of the Directions.
[8] Clause 6.3 of the Directions.
[9] Clause 6.5 of the Directions.
[10] Clause 6.6 of the Directions.
[11] Clauses 7.1 and 7.2 of the Directions.
[12] Clauses 7.3 and 7.4 of the Directions.
[13] Clause 7.5 of the Directions.
[14] Clause 7.6.1 of the Directions.
[15] Clause 7.6.2 of the Directions.
[16] Clause 9(a) of the Directions.
[17] Clause 9(c) of the Directions.
[18] Clause 9(d) of the Directions.
[19] Clause 9(f) of the Directions.
[20] Clause 9(b) of the Directions.
[21] Clauses 10.1 and 10.2 of the Directions.
[22] Clause 10.3 of the Directions.
[23] Clauses 12.2, 12.3 and 12.4 of the Directions.
[24] Clause 12.4 of the Directions.
[25] Clause 15 of the Directions.
[26] http://www.canadiancybersecuritylaw.com/2016/07/german-regulator-finds-banks-data-rules-impede-non-bank-competitors/
How Long Have Banks Known About The Debit Card Fraud?
The article was published by Bloomberg on October 22, 2016.
The breach was detected when customers began to lodge complaints with their banks about unauthorised transactions on their accounts, which, upon investigation, were found to originate from foreign locations such as China. The security breach has directly affected at least 641 customers to the tune of Rs 1.8 crore, with lakhs more affected by the proactive measures (including card revocation) taken by banks to prevent further financial losses.
Surprisingly little is known, however, about the nature of the attack responsible for the breach, the extent and scope of the damage it has caused, and the sufficiency of the countermeasures initiated by the banks. This article discusses these aspects of the attack and also suggests normative measures that could minimize harm and prevent such attacks in the future.
The Modus Operandi
According to reports, the compromise may have happened at the level of Hitachi Payment Services, a payment services provider that operates, among other financial services, ATMs for a variety of banks across the country. One or more ATMs were apparently compromised by malware, which then infected the payment service provider's network, creating a far larger potential target area than just the physical ATMs. The malware could have entered the network by being physically uploaded onto vulnerable ATM machines, which are known to run outdated embedded operating systems with documented loopholes that are rarely patched. It could then have recorded the details of cards used at the infected ATMs (or on the network generally) and, via the same compromised network, transmitted confidential details, including ATM PINs and CVV numbers, to its operators.
The attack could also have originated from some other vulnerable part of the payment network, such as a payment switch within a bank itself, which would make it far more dangerous: the malware may still be active on parts of the network within the bank and would have access to a far wider range and variety of information than a mere ATM. There is no real way to know whether the threat has even been contained, let alone neutralised, as the audits being carried out by PCI DSS authorised agencies have been ongoing for the past month and their reports are not due for at least another 15 days, as intimated by NPCI.
Massive Financial Implications
[Image: Policemen guard the banking hall of a State Bank of India branch in New Delhi. Photographer: Sondeep Shankar/Bloomberg News]
The compromise of these details, regardless of the source of the compromise, has massive financial implications. This is because various international services allow debit/credit cards to be used with only the card number, expiry date, name and CVV number; they do not require an ATM PIN or an OTP (one-time password) sent to a mobile phone for online transactions. In fact, unlike in India, where the RBI mandates OTPs for debit card transactions, this simplified CVV-based online usage is the standard way cards are used digitally in most of the developed world.
This means that merely changing ATM PINs (something SBI says fewer than 7 percent of its customers had done before all 6 lakh cards were blocked) would offer almost no protection if the cards are enabled for international online transactions. The fact that most of the dubious, unauthorised transactions originate from foreign locations suggests that it is these internationally enabled cards that are being targeted in this sort of attack.
Are Banks Concealing Information?
[Image: A customer exits a Yes Bank Ltd. automated teller machine (ATM) in Ahmedabad. Photographer: Dhiraj Singh/Bloomberg]
The absence of data/security breach laws in India is being sharply felt, as there has been an abject lack of clarity and information from the banking sector and the government regarding the attack. Forty-seven states in the USA and most countries in the EU have enacted strict data security breach laws that mandate public intimation and disclosure of key information about an attack, along with detailed containment measures. Such a law in India would have gone a long way towards preventing the breach from being kept under wraps for so long (it occurred at the bank level in September, almost a month ago) and would also have ensured far more vigilant compliance by corporations and banks with international security standards and best practices. For now, the only true countermeasure to prevent future harm to affected cardholders is for all affected cards to be revoked by the banks and new cards issued to affected customers.
Constant vigilance and comprehensive security audits by banks to detect affected cards, together with active protection for customers through financial and identity insurance services such as AllClear ID Plus (used by Sony in the 2011 PlayStation hack), will go a long way towards mitigating the harm of the breach. The banking industry, government and security agencies should all learn from this breach; a combination of new legislation, updated industry practices and consumer awareness is necessary for proactive and reactive action in the future.
Request for Specifics: Rebuttal to UIDAI
The article was published in the Economic & Political Weekly on September 3, 2016, Vol.51, Issue No.36.
The author of a technical paper will be alarmed when he is convicted of “serious mathematical errors” by someone who has not bothered himself with “going too deep into the mathematics” used. The man must possess miraculous powers of divination, one feels: fears rather. The UIDAI seems to have even such formidable diviners in their employ: who have dismissed just so peremptorily, in their rebuttal, the calculations made in my paper titled Flaws in the UIDAI Process. The paper appeared in the issue of this journal dated February 27 of this year. The rebuttal was published in the issue dated March 12. The interested reader can confirm that I have only repeated what was said there. The rebuttal does not specify, in any way, the mathematical mistakes I am supposed to have made. So I shall rehearse the relevant calculations very broadly: and the experts of the UIDAI will then exhibit, I trust, the specific mistakes they impute to me.[*]
[*] My reply to the UIDAI's attempted rebuttal was sent in to the EPW a few days after that rebuttal appeared in print, and was published as a “web exclusive” article in Volume 51, Issue Number 36 of the EPW, on 03/09/2016.
If the DIDP Did Its Job
Over the course of two years, the Centre for Internet and Society sent 28 requests to ICANN under its Documentary Information Disclosure Policy (DIDP). A part of ICANN’s accountability initiatives, DIDP is “intended to ensure that information contained in documents concerning ICANN's operational activities, and within ICANN's possession, custody, or control, is made available to the public unless there is a compelling reason for confidentiality.”
Through the DIDP, any member of the public can request information contained in documents from ICANN. We’ve written about the process here, here and here. As a civil society group that does research on internet governance-related topics, CIS had a variety of questions for ICANN. The 28 DIDP requests we sent cover a range of subjects: from revenue and financial information to ICANN’s relationships with its contracted parties, its contractual compliance audits, harassment policies and the diversity of participants in its public forum. We have blogged about each DIDP request, summarizing ICANN’s responses.
Here are the DIDP requests we sent in:
[Table: the 28 DIDP requests CIS filed, grouped by filing period (Dec 2014, Jan/Feb 2015, Aug/Sept 2015, Nov 2015, Apr/May 2016) and by subject, including compliance audits and DIDP statistics; the linked requests in the original table could not be recovered.]
ICANN’s responses were analyzed and rated from 0 to 4 based on the amount of information disclosed, as per the rubric below. The reasons given for the lack of full disclosure were also studied.
DIDP response rating:

| Rating | Description |
| --- | --- |
| 0 | No relevant information disclosed |
| 1 | Very little information disclosed; DIDP preconditions and/or other reasons for nondisclosure used |
| 2 | Partial information disclosed; DIDP preconditions and/or other reasons for nondisclosure used |
| 3 | Adequate information disclosed; DIDP preconditions and/or other reasons for nondisclosure used |
| 4 | All information disclosed |
ICANN has defined a set of preconditions under which it is not obligated to answer a request. These preconditions are generously used by ICANN to justify the lack of a comprehensive answer. The wording of the policy also allows ICANN to avoid answering a request if it does not already have the relevant documents in its possession. The responses were also classified by the number of times each DIDP condition for non-disclosure was invoked. We will see below why these weaken ICANN’s accountability initiatives.
Of the 28 DIDP requests, only 14% were answered fully, without the use of the DIDP conditions of non-disclosure. Seven out of 28, or 25%, received a 0-rated answer, which reflects extremely poorly on the DIDP mechanism itself. Of the 7 responses that received a 0 rating, 4 related to complaints and contractual compliance: we had asked for details of the complaints received by the ombudsman, contractual violations by Verisign, and the abuse contacts maintained by registrars for filing complaints, and we received no relevant information.
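For readers who want to check the arithmetic, a quick tally reproduces the shares quoted above; the count of fully answered requests is inferred from the 14% figure, while the other count is stated in the text.

```python
# Shares of the 28 DIDP responses, using the counts quoted in the text.
total_requests = 28
fully_answered = 4   # inferred from the "14%" figure: 4/28 is about 14%
zero_rated = 7       # stated directly in the text

print(f"fully answered: {fully_answered / total_requests:.0%}")  # -> 14%
print(f"zero-rated:     {zero_rated / total_requests:.0%}")      # -> 25%
```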
We have written earlier about the extensive and broad nature of the 12 conditions of non-disclosure that ICANN uses. These conditions were used in 24 of the 28 responses: ICANN thus avoided fully answering roughly 86% of the DIDP requests it received from CIS. This is alarming, especially for an organization that claims to be fully transparent and accountable. The conditions for non-disclosure are listed in this document and can be referred to while reading the following graph.
On reading the conditions for non-disclosure, it seems that ICANN can refuse to answer any DIDP request it wishes. The exclusions are numerous, vaguely worded and cover a broad range of information that should legitimately be in the public domain: correspondence, internal information, information related to ICANN’s relationships with governments, information derived from deliberations among ICANN constituents, information provided to ICANN by private parties and, the kicker, information that would be too burdensome for ICANN to collect and disseminate.
As we can see from the graph, the condition most used by ICANN to refuse to answer a DIDP request is condition F. Predictably, it is the most vaguely worded DIDP condition of the lot: “Confidential business information and/or internal policies and procedures.” It is up to ICANN to decide what information is confidential, with no justification needed or provided. ICANN used this condition 11 times in responding to our 28 requests.
It is also necessary to pay attention to condition L, which allows ICANN to reject “Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual.” This is perhaps the weakest point in the entire list due to its subjective nature. First, by whose standards must an information request be reasonable? If the point of a transparency mechanism is to ensure that information sought by the public is disseminated, should ICANN be allowed to withhold information because it is too burdensome to collect? Even if this is fair given the time constraints of the DIDP mechanism, it must not be used as liberally as it has been. The last sub-point is perhaps the most subjective: if a staff member dislikes a particular requestor, it would justify refusing to answer a request regardless of its validity. This hardly seems fair or transparent. This condition was used 9 times across our 28 requests.
Besides the non-disclosure conditions, ICANN also has an excuse built into the definition of the DIDP. Since it is not obliged to create or summarize documents under the DIDP process, it can simply claim not to have the specific document requested and thus evade its responsibility to respond. This is what ICANN did with one of our requests for raw financial data. For our research we required raw data from ICANN, specifically on its expenditure on staff and board members' travel and attendance at meetings. For an organization answerable to multiple stakeholders, including governments and the public, it is reasonable to expect financial records of such items to be kept systematically. However, we were surprised to learn that ICANN does not store these in a form it can send as attachments or publish; instead it directed us to the audited financial reports, which did little for our research. Yet in response to our later request for granular data on revenue from domain names, ICANN explained that while it did not have such a document in its possession, it would create one. The distinction between the two requests seems arbitrary to us, since we consider both to be important to the public.
Nevertheless, there were some interesting outcomes from our experience filing DIDPs. We learnt that no substantive work has been done to inculcate the NETmundial principles at ICANN, that ICANN has no idea which regional internet registry contributes the most to its budget, and that it does not store (or is not willing to reveal) any raw financial data. These outcomes do not inspire confidence in the organization.
ICANN has an opportunity to reform this particular transparency mechanism at its Workstream 2 discussions. ICANN must make use of this opportunity to listen and work with people who have used the DIDP process in order to make it useful, effective and efficient. To that effect, we have some recommendations from our experience with the DIDP process.
That ICANN does not currently possess a particular document is no excuse if it has the ability to create one. In its response to our questions on the IANA transition, ICANN indicated that it did not have the necessary documents, as the multistakeholder body it set up is the one conducting the transition. This is somewhat justified. However, in response to our request for financial details, ICANN should not be able to plead that it does not have a document in its possession: it and it alone has the ability to create the document, and in response to a request from the public, it should.
ICANN must also revamp its conditions for non-disclosure and make them tighter. It must reduce the number of exclusions to its disclosure policy and ensure that exclusions are not applied arbitrarily. Specifically with respect to condition F, ICANN must clarify how information is classified as confidential and why that category is distinct from everything else on the list of conditions.
Further, ICANN should not be able to use condition L to reject a DIDP request outright. Instead, there must be a way for the requester and ICANN to come to terms about the request. This could happen through an extension of the one-month deadline, financial compensation by the requester for any expenditure on ICANN’s part in answering the request, or a compromise between the requester and ICANN on the terms of the request. The sub-point about requests made “by a vexatious or querulous individual” must be removed from condition L, or at least separated from the condition, so that it is clear why a request for disclosure was denied.
ICANN should also set up a redressal mechanism specific to the DIDP. While ICANN has the Reconsideration Requests process to rectify any wrongdoing on the part of staff or board members, this is not adequate to identify whether a DIDP was rejected on justifiable grounds. A separate mechanism that deals only with DIDP requests and wrongful use of the non-disclosure conditions would be helpful. According to the ICANN bylaws, in addition to Requests for Reconsideration, ICANN has also established an independent third-party review of allegations against the board and/or staff members. A similar mechanism solely for reviewing whether ICANN’s refusal to answer a DIDP request is justified would be extremely useful.
A strong transparency mechanism must ensure that its objective is to provide answers, not to find ways to justify the lack of them. With this in mind, we hope that the revamp of transparency mechanisms after the Workstream 2 discussions leads to a better DIDP process than the one we are used to.
Internet's Core Resources are a Global Public Good - They Cannot Remain Subject to One Country's Jurisdiction
Recently, the US gave up its role of signing entries into the Internet's root zone file, which represents the addressing system for the global Internet. This concerns the Internet addresses that end with .com, .net, and so on, and the numbers associated with each of them that help us navigate the Internet. We thank and congratulate the US government for taking this important step in the right direction. However, the organisation that manages this system, ICANN,[1] a US non-profit, continues to be under US jurisdiction, and hence subject to its courts, legislature and executive agencies. Keeping such an important piece of global public infrastructure under US jurisdiction can be expected to become a very problematic means of extending US laws and policies across the world.
We the undersigned therefore appeal that urgent steps be taken to transition ICANN out of its current US jurisdiction. Only then can ICANN become a truly global organisation.[2] We would like to make it clear that our objection is not directed against the US in particular; we are simply against an important piece of global public infrastructure being subject to a single country's jurisdiction.
Domain name system as a key lever of global control
A few new top-level domains like .xxx and .africa are already under litigation in the US, and there is every chance that US law could interfere with ICANN's (global) policy decisions. Businesses in different parts of the world seeking top-level domain names like .Amazon or, hypothetically, .Ghanaiancompany will have to be mindful of this de facto extension of US jurisdiction over them. US agencies can nullify the allocation of such top-level domain names, causing damage to a business similar to losing a trade name, along with all the 'connections', including email-based ones, linked to that domain name. Consider, for instance, the risks to which an Indian generic drugs company with, say, the top-level domain .genericdrugs would remain exposed.
Sector-specific top-level domain names like .insurance, .health, .transport, and so on are emerging, with clear rules for inclusion and exclusion. These can become de facto global regulatory rules for the sector concerned. .Pharmacy has been allocated to a US pharmaceutical group, which decides who gets domain names under it. Public advocacy groups have protested[3] that these rules will be employed to impose US drug-related intellectual property standards globally. Similar problematic possibilities can be imagined in other sectors; ICANN could set “safety standards”, as per US law, for obtaining .car.
Country domain names like .br and .ph also remain subject to US jurisdiction. Some US private parties recently sought the seizure of Iran's .ir because of alleged Iranian support for terrorism. Although the plea was turned down, another court in another case may decide otherwise. With the 'Internet of Things', almost everything in every country, including critical infrastructure, will be on the network. Other countries cannot feel comfortable having, at the core of the Internet’s addressing system, an organisation that can be dictated to by one government.
ICANN must become a truly global body
Eleven years ago, in 2005, the Civil Society Internet Governance Caucus at the World Summit on the Information Society demanded that ICANN should “negotiate an appropriate host country agreement to replace its California Incorporation”.
A process is currently under way within ICANN to consider the jurisdiction issue. It is important that this process produce recommendations that will enable ICANN to become a truly global body, for the appropriate governance of these very important global public goods.
Below are some options, and there could be others, available for ICANN to transition out of US jurisdiction.
- ICANN can get incorporated under international law. Any such agreement should make ICANN an international (not intergovernmental) body, fully preserving current ICANN functions and processes. This does not mean instituting intergovernmental oversight over ICANN.
- ICANN can distribute the core internet operators among multiple jurisdictions, i.e. ICANN (the policy body for Internet identifiers), PTI[4] (the operational body) and the Root Zone Maintainer could be spread across different jurisdictions. With three different jurisdictions over these complementary functions, the possibility of any single one being able to fruitfully interfere in ICANN's global governance role will be minimized.
- ICANN can institute a fundamental bylaw that its global governance processes will brook no interference from US jurisdiction. If any such interference is encountered, the parameters of which can be clearly pre-defined, a process of shifting ICANN to another jurisdiction will automatically be set in motion. A full set-up, with registered HQ, root file maintenance system, etc., will be kept ready as a redundancy in another jurisdiction for this purpose.[5] The chances are overwhelming that, given the existence of this bylaw and a fully workable exit option kept ready at hand, no US state agency, including the courts, would consider it meaningful to try to enforce its writ. This arrangement could therefore act in perpetuity as a guarantee against jurisdictional interference, without ICANN actually having to move out of the US.
- The US government can give ICANN jurisdictional immunity under the United States International Organizations Immunities Act. There is precedent for the US granting such immunity to non-profit organisations like ICANN.[6] Such immunity must be designed in a way that still ensures ICANN's accountability to the global community, preserving the community's enforcement powers and mechanisms. It would extend only to the application of US public law to ICANN's decisions, and not to the private law chosen by contracting parties: US registries/registrars, with the assent of ICANN, could choose the jurisdiction of any US state for adjudicating their contracts with ICANN, and registries/registrars from other countries should similarly be able to choose their respective jurisdictions for such contracts.
We acknowledge that, over the years, there has been appreciable progress in internationalising participation in ICANN's processes, including participation by governments in the Governmental Advisory Committee. Positive as this is, it does not address the problem of a single country having overall jurisdiction over ICANN's decisions.
Issued by the following India-based organisations:
- Centre for Internet and Society, Bangalore
- IT for Change, Bangalore
- Free Software Movement of India, Hyderabad
- Society for Knowledge Commons, New Delhi
- Digital Empowerment Foundation, New Delhi
- Delhi Science Forum, New Delhi
- Software Freedom Law Centre - India, New Delhi
- Third World Network - India, New Delhi
Supported by the following global networks:
- Association For Progressive Communications
- Just Net Coalition
For any clarification or inquiries you may write to or call:
- Parminder Jeet Singh: [email protected] +91 98459 49445, or
- Vidushi Marda: [email protected] +91 99860 92252
[1] Internet Corporation for Assigned Names and Numbers
[2] The “NetMundial Multistakeholder Statement” , endorsed by a large number of governments and other stakeholders, including ICANN and US government, called for ICANN to become a “truly international and global organization”.
[3] See, https://www.techdirt.com/articles/20130515/00145123090/big-pharma-firms-seeking-pharmacy-domain-to-crowd-out-legitimate-foreign-pharmacies.shtml
[4] Public Technical Identifier, a newly incorporated body to carry out the operational aspects of managing Internet's identifiers.
[5] This can be at one of the existing non-US global offices of ICANN, or at the location of one of the 3 non-US root servers. Section 24.1 of the ICANN Bylaws says, “The principal office for the transaction of the business of ICANN shall be in the County of Los Angeles, State of California, United States of America. ICANN may also have an additional office or offices within or outside the United States of America as it may from time to time establish”.
[6] E.g., International Fertilizer and Development Center was designated as a public, nonprofit, international organisation by US Presidential Decree, granting it immunities under United States International Organisations Immunities Act . See https://archive.icann.org/en/psc/corell-24aug06.html
How Workstream 2 Plans to Improve ICANN's Transparency
The transparency subgroup of ICANN’s Workstream 2 dialogue is examining how the transparency and accountability of the organization could be effectively improved. The main document under scrutiny at the moment is the draft Transparency Report, published a few days before the 57th ICANN meeting in Hyderabad.
The report begins by acknowledging the value of taking cues from the right-to-information policies of other institutions and governments. My colleague Padmini Baruah had earlier written a blog post comparing the exclusion policies of ICANN’s DIDP and the Indian Government’s RTI, in which she found that “the net cast by the DIDP exclusions policy is more vast than even that of a democratic state’s transparency law.”[1] The WS2 report discusses not only the DIDP process but also ICANN’s proactive disclosures (with regard to lobbying, etc.) and whistleblower policies. This article focuses solely on the first.
As our earlier blog posts have mentioned, CIS sent in 28 DIDP requests over the last two years. Our experience with DIDP has been less than satisfactory and we are pleased that DIDP reform was an important part of the discussions of this subgroup. The report proposes some concrete structural changes to the DIDP process but skirts around some of the more controversial ones.
The recommendation to make the process of submitting requests clearer is a good one: there are currently no instructions on the follow-up process or on what ICANN requires of requestors. The report also recommends capping any extension of the original 30-day limit at an additional 30 days. While this is good, we further recommend that ICANN stay in touch with the requestor and help them to the best of its ability; the correspondence should not be limited to a notification that ICANN requires an extension, and any clarifications sought by the requestor should be resolved by ICANN. We commend the report for pointing out that the status quo, in which there is no outside limit for extensions beyond the mandated 30 days, is problematic, as it allows ICANN staff to give lower priority to responding to DIDP requests. We strongly suggest that extensions be restricted to a maximum of 7 days beyond the 30-day period, after which liability should be strictly imposed on ICANN in the form of an individual fine, analogous to India’s RTI regime.[2]
One of the major areas of focus for this report and for our earlier analysis was the problematic nature of the exclusions to the DIDP. I had written that the conditions were "numerous, vaguely worded and contain among them a broad range of information that should legitimately be in the public domain.”[3] This is echoed by the report which calls for a deletion of two clauses that we found most used in denying our requests for information.
The report also calls into question the subjective nature of the last condition, which states that ICANN can deny information if it finds requests “not reasonable, excessive or overly burdensome, not feasible, abusive or vexatious or made by a vexatious or querulous individual.” As our blog posts show, we are of the firm belief that such a subjective condition has no place in a robust information disclosure policy. Requiring the Ombudsman’s consent to invoke it is a good first step. In addition, we strongly encourage that objective guidelines specifying when a requestor is considered “vexatious” be drawn up and made public.
The most disappointing aspect of the report is that it does not delve into the details of having an independent party dedicated to reviewing the DIDP process and addressing grievances. We believe this must not be left to the Ombudsman, who cannot devote all of their time to the process. In our opinion, an independent party would also be able to oversee the tracking and periodic review of the DIDP mechanism more effectively.
In conclusion, we believe that this report is a good start but does not comprehensively answer all of our issues with the DIDP process as it is. We look forward to more engagement with the Transparency subgroup to close all loopholes within the DIDP process.
[1] Padmini Baruah, Peering behind the veil of ICANN’s DIDP, (September 21, 2015), available at http://cis-india.org/internet-governance/blog/peering-behind-the-veil-of-icann2019s-didp (Last visited on November 9, 2016).
[2] Section 20(1), Right to Information Act, 2005.
[3] Asvatha Babu, If the DIDP Did Its Job, (November 3, 2016), available at http://cis-india.org/internet-governance/blog/if-the-didp-did-its-job (Last Visited on November 9, 2016).
Privacy after Big Data: Compilation of Early Research
Download the Compilation (PDF)
Privacy after Big Data
Evolving data science technologies, techniques, and practices, including big data, are enabling shifts in how the public and private sectors carry out their functions and responsibilities, deliver services, and allow innovative production and service models to emerge. For example, in the public sector, the Indian government has considered replacing the traditional poverty line with targeted subsidies based on individual household income and assets. The my.gov.in platform aims to enable the participation of connected citizens and to pull in online public opinion in a structured manner on key governance topics in the country. The 100 Smart Cities Mission seeks to leverage big data analytics and techniques to deliver services and govern citizens within city sub-systems. In the private sector, emerging financial technology companies are developing credit scoring models using big, small, social, and fragmented data so that people with no formal credit history can be offered loans. These models promote efficiency and cost reduction through personalization and are powered by a wide variety of data sources, including mobile data, social media data, web usage data, and data collected passively from IoT or connected devices.
These data technologies and solutions are enabling business models based on the ideals of ‘less’: cash-less, presence-less, and paper-less. This push towards an economy premised upon a foundational digital ID, in the prevailing absence of legal frameworks, leads to a substantive loss of anonymity and privacy for individual citizens and consumers vis-a-vis both the state and the private sector. Indeed, the present use of these techniques runs contrary to the notion of the ‘sunlight effect’: the individual is made fully transparent (often without their knowledge) to the state and the private sector, while the algorithms and the means of reaching a decision remain opaque and inaccessible to the individual.
These techniques, characterized by the volume of data processed, the variety of sources the data comes from, and the ability both to contextualize (learning new insights from disconnected data points) and to de-contextualize (finding correlation rather than causation), have also increased the value of all forms of data. In some ways, big data has put all data on a level playing field as far as monetisation and joining up are concerned: metadata can be just as valuable to an entity as content data. As data science techniques evolve to find new ways of collecting, processing, and analyzing data, the benefits are clear and tangible, while the harms are less clear but significantly present.
Is it possible for an algorithm to discriminate? Will incorrect decisions be made based on the data collected? Will populations be excluded from necessary services if they do not engage with certain models, or do emerging models overlook certain populations? Can such tools be used to surveil individuals at a level of granularity that was formerly not possible, and before a crime occurs? Can such tools be used to violate rights, for example by targeting certain types of speech or groups online? And importantly, when these practices are opaque to the individual, how can one seek an appropriate and effective remedy?
Traditionally, data protection standards have defined and established protections for certain categories of data. Yet data science techniques have evolved beyond data protection principles. It is now far harder to obtain informed consent from an individual when the data collected can be used for multiple purposes by multiple bodies. Providing notice for every use is also more difficult, as is fulfilling requirements of data minimization. Some say privacy is dead in the era of big data. Others say privacy needs to be re-conceptualized, while still others say that protecting privacy now, more than ever, requires a ‘regulatory sandbox’ that brings together technical design, markets, legislative reform, self-regulation, and innovative regulatory frameworks. It also demands an expansion of the narrative around privacy, which has largely focused on harms such as misuse of data or unauthorized collection, to include discrimination, marginalization, and competition harms.
In this compilation we have put together a series of articles that we have developed as we explore the impacts, positive and negative, of big data. This includes looking at India’s data protection regime in the context of big data, reviewing the literature on the benefits and harms of big data, studying emerging predictive policing techniques that rely on big data, and closely analyzing the impact of big data on specific privacy principles such as consent. This is a growing body of research that we are exploring, and it is relevant to multiple areas of our work, including privacy and surveillance. Feedback and comments on the compilation are welcome and appreciated.
Elonnai Hickok
Director - Internet Governance
Conference on the Digitalization of the Indian Legal System
The co-founder of DAKSH Society of India, Kishore Mandyam, opened the event with a thought-provoking presentation on the efficiency levels of the current legal system and the kinds of progress that technological reforms could bring about. Members of LegalDesk.com then presented their ideas and introduced their newest white paper on legal digitalization, providing a brief overview of the study and summarizing its most relevant sections. The panel discussion followed, moderated by Sanjay Khan Nagra, a policy expert at iSPIRT Foundation, who facilitated an insightful and constructive discussion on the advantages, disadvantages, risks and incentives of digitalizing the Indian legal system. On the panel were Kishore Mandyam from DAKSH Society and Prabhuling K Navadgi, the Additional Solicitor General of India.
The objectives of the conference, as per its website, were to: (1) examine the current legal framework and the possibility of amendments to laws to facilitate digitalization of the system, (2) assess the potential of India Stack in digitalizing the legal system, (3) identify statutes which require amendment, (4) identify the hurdles and roadblocks on the path towards digital reform of the legal ecosystem, and (5) suggest amendments to the act and potential areas of improvement. With those objectives in mind, this blog post provides a brief overview of the main narratives shared at the conference and identifies some of the loopholes and unanswered questions that I was left with by the end.
Improved efficiency is the dominant narrative used to advocate for the digitalization of the Indian legal system. According to LegalDesk.com, the current Indian legal system relies mostly on paperwork, resulting in thousands of courts and over a million advocates accumulating lakhs of ongoing cases and an enormous pile of pending cases, mostly due to insufficient information. It is argued that the traditional methods of legal documentation, paperwork and court work must change through awareness, technology and pursuance by the government, and that this change needs to be implemented throughout the country. The key idea here is that digital transactions are faster and simplify the storage of information. The ultimate desired outcome, then, is increased efficiency and transparency.
One must question, however, whether this narrative is overly generous in the credit it gives to technology. IT systems, like many other man-made structures, are bound to glitch and crash. It is worth asking whether the legal system is a department that can afford the complications that inevitably accompany a digital transformation. If portals or servers fail at critical times (i.e., when a person needs to confirm their trial date, submit a document before a deadline, or complete any other pressing procedure), the consequences may in fact outweigh the convenience brought about by overall digitalization. This is not to imply that the legal system cannot or should not undergo a digital transformation. Rather, it is to ask whether the government will dedicate sufficient funds and expertise to developing a resilient and reliable IT system for the courts. The conference was strongly centered on the idea that technology is always the way forward. This is a positive idea, but special attention must be paid to the complications that may arise with the digitalization of a system that must function in a particularly time-sensitive manner, and to ensuring that these complications can be managed efficiently and effectively should they arise. This requires more than a mere push for digitalization: introducing new technological platforms is a positive step, but there is a need for a detailed, government-authorized plan on how the judicial system will undergo this digital transformation in a sustainable and resilient manner.
A presenter from LegalDesk.com mentioned Estonia’s model of complete digital governance as an example of successful digitalization: “If a small country like Estonia can do it, why can’t we?” While it is useful to draw examples and lessons from other countries, it is also crucial to recognize the contextual differences between them. The presenter’s point was that Estonia is small in both size and population and gained independence as recently as 1991, and has nonetheless been able to undergo technological reform and completely digitalize its governance systems. India’s case is extremely different, as one can logically argue that digital inclusion is harder to accomplish for large, spatially dispersed populations. Furthermore, the socioeconomic disparities in India, particularly in income and literacy, contribute to an immense digital divide that Estonia, digitalizing governance for its roughly 1.3 million people, did not face to any comparable extent. This is not to suggest that India cannot become a world leader in digital governance, or become comparable to Estonia. Rather, it is to highlight the importance of recognizing historical, political and sociocultural differences between countries when comparing governance models and digitalization processes. There is a need to indigenize digital reform strategies and platforms in India to cater to its unique context and vast diversity. This can be done by focusing on issues such as the language of digital governance, ensuring sufficient distribution of access to public digital platforms, and prioritizing the inclusion of all socioeconomic classes. I would argue that digitalization could come at a greater cost than benefit if it perpetuates the exclusion of the underprivileged members of society, especially from a system as critical as the judiciary. These topics were alarmingly overlooked in the conference.
The topic of privacy was also largely overlooked at the conference. As a step towards digital transformation, LegalDesk.com presented its new eNotary technology, which would be implemented using a combination of Aadhaar-based authentication, eSign, DigiLocker systems such as India Stack, and video/audio recorded interviews. With the eNotary system, attestation, authentication and verification of legal instruments can be done remotely. This is expected to make paperwork easier, faster and more secure, as individuals would log into digital platforms using their Aadhaar numbers to perform their judicial procedures. A member of the audience asked about the privacy concerns associated with digitalizing individuals’ legal records or property ownership information. Kishore Mandyam, from DAKSH, answered confidently that privacy is not a pressing issue here, asserting that privacy concerns are a western construct that has been adopted in urban parts of India but is not a concern for the majority of locals. It is clear, however, from examples such as the United States’ predictive policing practices, that accumulating data regarding the legal affiliations of individuals can result in discriminatory practices if this data does not remain strictly confidential. This is not to mention the other forms of discrimination that can arise from the accumulation of such data, such as the targeting of certain demographics by corporate marketing and credit scoring practices that rely on trends in big data. To keep citizens’ legal records and affairs out of these databases, a digital legal system must be securely encrypted and protected by rigid privacy policies. India may have a different context that leads to different privacy concerns with regard to a digital legal system. In any case, special attention must be given to the privacy and security rights of individuals as their Aadhaar numbers become attached to all their online personal data, including their legal records and judicial affairs.
CERT-In's Proactive Mandate - A Report on the Indian Computer Emergency Response Team’s Proactive Mandate in the Indian Cyber Security Ecosystem
Regarding the proactive mandate, the IT Act and CERT-In Rules include the following areas where CERT-In is required to carry out proactive measures in the interests of cyber security:
- Forecast and alert cyber security incidents (IT Act, 2000) & Predict and prevent cyber security incidents (CERT-In Rules, 2013)
- Issue guidelines, advisories and vulnerability notes etc. relating to information security practices, procedures, prevention, response and reporting (IT Act, 2000)
- Information Security Assurance (CERT-In Rules, 2013)
This article will track and analyse CERT-In’s operations in each of these areas over the past twelve years, drawing on the information available on CERT-In’s website as well as other media in the public domain.
The analysis will be carried out using a mixed methodology. The basic quantitative analysis of the information available on CERT-In’s website will take the form of simple comparatives of updates, bulletins and other forms of publicly available interaction and critical information dispersal on the website. The qualitative sections, on the other hand, will contain a comparative analysis of the content of CERT-In’s technical documents against the equivalent documentation (where present) of similar bodies in the USA and EU. Each section will then offer normative suggestions as to how CERT-In’s performance of that respective obligation can be improved to better serve its cyber security mandate.
The image is published under Creative Commons License CC BY-SA. Anyone can distribute, remix, tweak, and build upon this document, even for commercial purposes, as long as they credit the creator of this document and license their new creations under the terms identical to the license governing this document.
Demonetisation Survey Limits the Range of Feedback that can be Provided by the User
The article was published by Firstpost on November 24, 2016.
At the time of writing, 90 percent of respondents expressed the feeling that the government's move was 'brilliant/nice'. However, one must look into the merits and limitations of the survey to understand the true value and nature of its results.
The first step required in order to take the survey is downloading the application itself, which forces the user to automatically grant access to the Contacts, Phone and Storage functions of their phone. While there are ostensible reasons for these permissions (sharing data from within the application, storing downloaded information, etc.), unless the user is running Android 6.0 or above, the user has no choice in granting them. This leaves the application with the potential to collect the entire phone book of the user as well as access any files stored on the user’s device. This is independent of the survey and provides a large scope for massive data collection from any user who simply chooses to install the application in the first place. It is easily possible to create a version of the application that carries out the vast majority of its current functions without these permissions, and the government (along with the application developer) should endeavour to do so at the earliest. In the alternative, they should have a clear and distinct privacy policy that informs users of the data collection and its possible use.
The second major step required to take the survey is the long and tedious registration process, which requires all sorts of details with massive privacy implications. This includes the name, email ID, phone number, residency details, profession and interests, all of which are compulsory fields. Why all of these details are necessary to take a supposedly simple survey, and what possible use this information can be put to by the government, is both unclear and problematic. It is also possible to register using Google, Facebook, Twitter and other social networking sites, from which a varying standard of equally private and unnecessary information is collected by the application. There are no privacy notices or consent forms that govern this information collection, nor is there any indication of how this information will be put to use beyond the scope of the survey. The generic, standard-form privacy policy (less than 10 lines long) on the Narendra Modi website is hidden at the bottom of the application download page (not in the application itself) and leaves a lot to be desired in safeguarding user interests.
Once the registration is complete, the user is presented with the survey, which has a total of 10 questions across 3 broad categories: 6 questions have multiple choice answers, 3 have a sliding rating meter, and 1 is a general comments/suggestions question. The article will now look at these categories and analyze the design of the questions, the extent of the choice they give to users, and finally whether the survey has a coercive or limiting effect on the feedback that can be given by the user via the application regarding the demonetisation move.
The first category, the multiple choice questions (MCQs), offers varying degrees of choice that the user can select from. However, regardless of the number of choices, their exact nature is severely limiting and makes it almost impossible to express a truly negative opinion via the survey. This is done in two ways: first, the explicit restriction of choices, and second, the more subtle negative colouring of responses by cleverly phrasing questions. An example of the explicit restriction of choices can be seen in Question No. 7, “Demonetisation will bring real estate, higher education, healthcare in common man’s reach”, which has three options: “Completely Agree”, “Partially Agree” and “Can’t Say”. There is no option to disagree with the paradigm set by the question, and neither is there an option for the user to further explain or elucidate upon the answer if he/she chooses “Can’t Say”. This also means that no responses will record “No” as an answer to a fairly open-ended question that could have a myriad of responses. The same can be said for Question No. 6, regarding the demonetisation move’s effectiveness in curbing illegal activities, to which, once again, “No” is not an available answer, with “Don’t Know” being the best a disagreeing user can do.
The second, more subtle aspect of the MCQs is questions that serve as bait for a positive answer, which can later be used to bolster the survey's results in a positive light. For example, Question No. 1 reads “Do you think Black Money exists in India?” and Question No. 2 reads “Do you think the evil of Corruption & Black Money needs to be fought and eliminated?”, both of which have a simple “Yes” and “No” as the only two possible responses. These rhetorical questions, which demand a positive answer, provide almost no scope for the user to subtly or explicitly disagree with the motivating factor behind the demonetisation move. The placement of these questions and the lack of choice in the responses that can be given to them leaves huge potential to tilt the survey results in favour of the government’s move. For example, a respondent cannot simultaneously agree that black money is a problem and convey that the demonetisation move is a bad idea, simply because no single question within the survey allows that combined view to be expressed.
The other two categories of questions do not suffer from the overt positive bias of the MCQs, but leave a fair bit to be desired in their outlook towards individuals who disagree with the move. The sliding rating meter questions carry strong visual cues hinting that disagreeing with the demonetisation move is a negative, undesirable idea: a large, danger-red frown is used as the icon for Question No. 5, which asks for the survey taker's opinion on the ban on old 500 and 1000 rupee notes. The same goes for Question No. 3, which deals with the government's general moves to tackle black money. This portrays any opinion or answer that disagrees with the validity of the move in a negative light. Similarly, the general comments/suggestions section in Question No. 10 is the only place for anyone to express a negative or non-concurring opinion, which cannot be measured statistically in the overall survey results and will most likely not be counted in the final survey results.
All of the above points clearly show that the design of both the Narendra Modi mobile application and its survey has huge potential for coercing a biased viewpoint upon any survey taker, and ensures that it is almost impossible to express a stark, negative opinion of the demonetisation move via the survey. This can and should be remedied by the government to allow a more open, conducive and critical discourse to take place regarding the move among the public. It is only when such opinion is allowed to exist in the first place that the government can understand, engage with and respond to the various valid critiques of the move. The chilling effect produced by the current form of the survey would be counterproductive to the original intent behind its creation, which was to create a direct, constructive feedback loop between the public and the government.
Navigating the 'Reconsideration' Quagmire (A Personal Journey of Acute Confusion)
Backdrop: What is the Reconsideration Request Process?
The Reconsideration Request process has been laid down in Article IV, Section 2 of the ICANN Bylaws. Some of the key aspects of this provision are outlined below[1]:
- ICANN is obligated to institute a process by which a person materially affected by ICANN action/inaction can request review or reconsideration.
- To file this request, one must have been adversely affected by actions of the staff or the board that contradict ICANN’s policies, or actions of the Board taken up without the Board considering material information, or actions of the Board taken up by relying on false information.
- A separate Board Governance Committee was created with the specific mandate of reviewing Reconsideration requests, and conducting all the tasks related to the same.
- The Reconsideration Request must be made within 15 days of:
- FOR CHALLENGES TO BOARD ACTION: the date on which information about the challenged Board action is first published in a resolution, unless the posting of the resolution is not accompanied by a rationale, in which case the request must be submitted within 15 days from the initial posting of the rationale;
- FOR CHALLENGES TO STAFF ACTION: the date on which the party submitting the request became aware of, or reasonably should have become aware of, the challenged staff action, and
- FOR CHALLENGES TO BOARD OR STAFF INACTION: the date on which the affected person reasonably concluded, or reasonably should have concluded, that action would not be taken in a timely manner
- The Board Governance Committee is given the power to summarily dismiss a reconsideration request if:
- the requestor fails to meet the requirements for bringing a Reconsideration Request;
- it is frivolous, querulous or vexatious; or
- the requestor had notice and opportunity to, but did not, participate in the public comment period relating to the contested action, if applicable
- If not summarily dismissed, the Board Governance Committee proceeds to review and reconsider.
- A requester may ask for an opportunity to be heard, and the decision of the Board Governance Committee in this regard is final.
- The basis of the Board Governance Committee’s action is the public written record: information submitted by the requester, by third parties, and so on.
- The Board Governance Committee is to take a decision on the matter and make a final determination or recommendation to the Board within 30 days of receipt of the Reconsideration Request, unless it is impractical to do so, in which case it must explain to the Board the circumstances that caused the delay.
- The determination is to be made public and posted on the ICANN website.
ICANN has provided a neat infographic to explain this process in a simple fashion, and I am reproducing it here:
(Image taken from https://www.icann.org/resources/pages/accountability/reconsideration-en)
Our Tryst with the Reconsideration Process
The Grievance
Our engagement with the Reconsideration process began with the rejection of two of our requests (made on September 1, 2015) under ICANN’s Documentary Information Disclosure Policy. The requests sought information about the registry and registrar compliance audit process that ICANN maintains, and asked for various documents pertaining to the same[2]:
- Copies of the registry/registrar contractual compliance audit reports for all the audits carried out, as well as external audit reports, from the last year (2014-2015).
- A generic template of the notice served by ICANN before conducting such an audit.
- A list of the registrars/registries to whom such notices were served in the last year.
- An account of the expenditure incurred by ICANN in carrying out the audit process.
- A list of the registrars/registries that did not respond to the notice within a reasonable period of time.
- Reports of the site visits conducted by ICANN to ascertain compliance.
- Documents which identify the registries/registrars who had committed material discrepancies in the terms of the contract.
- Documents pertaining to the actions taken in the event that there was found to be some form of contractual noncompliance.
- A copy of the registrar self-assessment form which is to be submitted to ICANN.
ICANN consolidated both requests and addressed them via one response on 1 October 2015 (which can be found here). In their response, ICANN inundated us with already available links on their website explaining the compliance audit process, the processes ancillary to it, and the broad goals of the programme, none of which was sought by us in our request. ICANN then went on to provide us with information on their Three-Year Audit programme, and gave us access to some of the documents that we had sought, such as the pre-audit notification template, the list of registries/registrars that received an audit notification, the expenditure incurred (to some extent), and so on.
Individual contracted party reports were denied to us on the basis of ICANN’s grounds for non-disclosure. Further, and more disturbingly, ICANN refused to provide us with the names of the contracted parties who had been found under the audit process to have committed discrepancies. Therefore, a large part of our understanding of the way in which the compliance audit process works remains incomplete.
What we did
Dissatisfied with this response, I went on to file a Reconsideration Request (number 15-22) as per their standard format on November 2, 2015. (The request filed can be accessed here.) As grounds for reconsideration, I stated that “As a part of my research I was tracking the ICANN compliance audit process, and therefore required access to audit reports in cases where discrepancies were formally found in their actions. This is in the public interest and therefore requires to be disclosed...While providing us with an array of detailed links explaining the compliance audit process, the ICANN staff has not been able to satisfy our actual requests with respect to gaining an understanding of how the compliance audits help in regulating actions of the registrars, and how they are effective in preventing breaches and discrepancies.” I therefore requested them to make the records in question publicly available: “We request ICANN to make the records in question, namely the audit reports for individual contracted parties that reflect discrepancies in contractual compliance, which have been formally recognised as a part of your enforcement process. We further request access to all documents that relate to the expenditure incurred by ICANN in the process, as we believe financial transparency is absolutely integral to the values that ICANN stands by.”
The Board Governance Committee’s response[3]
The determination of the Board Governance Committee was that our claims did not merit reconsideration, as I was unable to identify any “misapplication of policy or procedure by the ICANN Staff”; my only issue was with the substance of the DIDP Response itself, and substantial disagreements with a DIDP response are not proper bases for reconsideration (emphasis supplied).
The response of the Board Governance Committee was educative of the way in which it determines Reconsideration Requests. Analysing the DIDP process, it held that ICANN was well within its powers to deny information under its defined Conditions for Non-Disclosure, and that denial of substantive information did not amount to a procedural violation. Therefore, since the staff adhered to established procedure under the DIDP, there was no basis for our grievance, and our request was dismissed.
Furthermore, as a postscript, it is interesting to note that the Board Governance Committee delayed its response by over a month, by its own admission: “In terms of the timing of the BGC’s recommendation, it notes that Section 2.16 of Article IV of the Bylaws provides that the BGC shall make a final determination or recommendation with respect to a reconsideration request within thirty days, unless impractical. To satisfy the thirty-day deadline, the BGC would have to have acted by 2 December 2015. However, due to the timing of the BGC’s meetings in November and December, the first practical opportunity for the BGC to consider Request 15-22 was 13 January 2016.”[4]
Whither do I wander now?
To me, this entire process reflected the absurdity of the Reconsideration Request structure as an appeal mechanism under the Documentary Information Disclosure Policy. As our experience indicated, there does not seem to be any way out if there is an issue with the substance of ICANN’s response. ICANN, commendably, is particular about following procedure with respect to the DIDP. However, what is the way forward for a party aggrieved by the flaws in the existing policy? As I had analysed earlier, the grounds for ICANN to not disclose information are vast, and are used to deny a large chunk of the information requests that they receive. How is the hapless requester to file a meaningful appeal against the outcome of a bad policy, if the only ground for appeal is non-compliance with the procedure of said bad policy? This is a serious challenge to transparency, as there is no other way for a requester to acquire information that ICANN may choose to withhold under one of its myriad clauses. It cannot be denied that a good information disclosure law ought to balance the free disclosure of information with the holding back of information that truly needs to be kept private.[5][6] However, it is this writer’s firm opinion that even in instances where information is withheld, there has to be a stronger explanation for the same; moreover, an appeals process that does not take into account substantive issues which might adversely affect the appellant falls short of the desirable levels of transparency. Global standards dictate that grounds for appeal need to be broad, so that all failures to apply the information disclosure law/policy may be remedied. Various laws across the world relating to information disclosure often have the following as grounds for appeal: an inability to lodge a request, failure to respond to a request within the set time frame, a refusal to disclose information in whole or in part, excessive fees, and not providing information in the form sought, as well as a catch-all clause for other failures.[7]
Furthermore, independent oversight is at the heart of a proper appeal mechanism in such situations[8]; the power to decide the appeal must not rest with those who also have the discretion to disclose the information, as is clearly the case with ICANN, where the Board Governance Committee is constituted and appointed by the ICANN Board itself (one of the bodies against whom a grievance may be raised).
Suggestions
We believe ICANN, in keeping with its global, multi-stakeholder, accountable spirit, should adopt these standards as well, especially now that the transition looms around the corner. Only then will the standards of open, transparent and accountable governance of the Internet, upheld by ICANN itself as the ideal, be truly and meaningfully realised. Accordingly, the following standards ought to be met:
- Establishment of an independent appeals authority for information disclosure cases
- Broader grounds for appeal of DIDP requests
- Inclusion of disagreement with the substantive content of a DIDP response as a ground for appeal.
- Provision of proper reasoning to justify any withholding of information that is necessary in the public interest.
[1] Article IV, Section 2, ICANN Bylaws, 2014, available at https://www.icann.org/resources/pages/governance/bylaws-en/#IV
[2] Copies of the request can be found here and here.
[3] Available at https://www.icann.org/en/system/files/files/reconsideration1522cisfinaldetermination13jan16en.pdf
[4] Id.
[5] Katherine Chekouras, Balancing National Security with a Community's Right-to-Know: Maintaining Public Access to Environmental Information Through EPCRA's Non-Preemption Clause, 34 B.C. Envtl. Aff. L. Rev. 107 (2007).
[6] Toby Mendel, Freedom of Information: A Comparative Legal Study 151 (2nd edn, 2008).
[7] Id. at 152.
[8] Mendel, supra note [6].
Comments to the BIS on Smart Cities Indicators
Name of the Commentator/ Organisation: The Centre for Internet and Society, India[1]
PRELIMINARY
- This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the Smart Cities - Indicators (dated 30 September 2016), released by the Bureau of Indian Standards (“BIS”).
- CIS is thankful for the opportunity to put forth its views.
- This submission is divided into three main parts. The first part, ‘Preliminary’, introduces the document; the second part, ‘About CIS’, is an overview of the organization; and, the third part contains the ‘Comments’.
ABOUT CIS
- CIS is a non-profit organisation[2] that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, freedom of speech and expression, intermediary liability, digital privacy, and cybersecurity.
- CIS values the fundamental principles of justice, equality, freedom and economic development. This submission is consistent with CIS' commitment to these values, the safeguarding of general public interest and the protection of India's national interest at the international level. Accordingly, the comments in this submission aim to further these principles.
COMMENTS
The comments below follow the BIS template: the clause/para/table/figure number commented on, the comment or modified wording proposed, and the justification for the proposed change.

Clause: General Comment
Comment: The indicators could generally utilize more smart data, from both analog and digital sources, to better reflect the performance of the various indicators.
Justification: Using technology to gather information, rather than limiting its scope to existing, mostly non-digital sources of data. There is a lot of potential information, already collected, that simply goes unused or underutilized. Principled use of such information to make informed decisions on key aspects of urban development will lead to ‘truly’ smart cities. Further, the indicators should include actionable aspects and avenues to leverage research to better their performance. Moreover, indicators that allow for audits for rights and transparency should be focused on as core indicators.

Clause: General Comment
Comment: Indicators are limited in scope to basic sustainability.
Justification: The indicators in their current form restrict themselves to sustainability, focused on basic sustenance, which seems to limit the scope of the Smart Cities project. There should be a core set of indicators that is more relevant to India, along with an optional, more ambitious set of indicators for cities to become truly advanced and for the standard to be more dynamic. Cities should be encouraged to leverage technology in a sustainable, human welfare and development-oriented approach, which the indicators can inculcate. Further, policy pivots driven by these indicators could make decision-making in smart cities more transparent and accountable.

Clause: Economy
Comment: Granularity of information pertaining to macro-level economic indicators.
Justification: All the indicators in the Economy section pertain to macro-level standards/indicators. Their limitation is that they provide very little information about the diversity of a city's economy and the factors responsible for positive or negative effects, and offer no real way to encourage microeconomic changes that can lead to the improvement of the economic condition of a city, aided by modern technology. Example indicators could be: average GDP of districts within a city, and total number of operating businesses and merchants in sub-localities of the city. All of this data can also be used to drive micro policies to enable localized development.

Clause: Education
Comment: Include data at city level and indicators for higher education.
Justification: The indicators in the Education section only look at city-level information about schools, ignoring district and even school-level information already recorded and present in the system. Teacher and student attendance rates, level of basic infrastructure present in schools, presence of toilets for both genders, provisions for meals, etc. are some of the parameters that can be included in the indicator list. Further, the list completely excludes college education (both degree and diploma level) as a relevant indicator, nor does it include indicators for the average education level of the city's population, both of which can be easily measured using census data. Further, data that allows for a holistic decision-making process (poverty levels, distance to schools, transportation levels, access to higher learning, etc.) can also be used as supporting indicators. These could come from studies already done that call out these factors.

Clause: 5. Education 5.1, 5.2, 5.3, 5.5
Comment: Include gender-specific indicators for students completing primary education, secondary education and higher education, and enrolled in educational institutions. Change the term “survival rate” to “retention rate”.
Justification: Indicators for the “survival rate” (better represented as retention rate) of students who identify as female or transgender in schools and universities, and the enrolment of school-aged and college-aged girls, women and transgender students, would help work towards an inclusive smart city.

Clause: Energy
Comment: Better utilisation of data from digital electricity meters.
Justification: The advent of digital meters allows for home/business-level capturing of energy usage. This information can be leveraged to better target energy leaks, theft, repair work, pricing and even renewable energy incentives.

Clause: Finance
Comment: Indicators for digital and cashless payment and transaction systems.
Justification: The strong push by the government towards digital payments could also be reflected in the list of indicators, with, for instance, the “number of establishments accepting (and not accepting) digital payment systems” as a supporting indicator. Similar standards can be extended to include microfinance (number of avenues available for lending, successful payback of loans, et cetera).

Clause: Governance
Comment: Recommended inclusion of indicators pertaining to the Right to Information Act, 2005.
Justification: The number of requests made under the Right to Information Act, 2005, and the time taken by the responding government offices in the city to reply to them (in number of days), is a relevant factor to gauge the transparency and accountability of governance structures. The same can also be extended to map the parliamentary performance of the elected officials from the city at the state and national level, especially on the interests of the city. Parliamentary performance here would mean attendance records, number of questions raised, resources spent on constituency development, et cetera.

Clause: 10. Governance 10.2, 10.3, 10.6
Comment: Indicators for the number of women and transgender persons elected to public office in the city, or employed in reserved positions in the government workforce in the city. Indicators for women and transgender voters registered as a percentage of the voting-age population.
Justification: In the interest of inclusive smart cities, this indicator would help fathom whether positions reserved for women and transgender persons are filled, and the possible reasons, if any, for some of them going vacant. The number of women and transgender voters would help track their participation in democracy. Further, inclusion of indicators that check voter fraud, political participation levels and technologies that enable secure voter participation and involvement would also be beneficial.

Clause: Health
Comment: “Cost of basic health services” and number of healthcare facilities as supporting indicators.
Justification: The cost, quality and accessibility of public primary healthcare services, which can be easily measured using digital systems, should also be included in the overall scheme as a supporting indicator.

Clause: Recreation
Comment: “Utilisation of public spaces” as a supporting indicator.
Justification: Information about the utilisation of public spaces, such as parks and grounds, can be included as a supporting indicator. Relevant information could include footfalls per month or year, number of public events held at these locations, et cetera. Most of this information is already present via figures for ticket sales, while the rest could be collected using digital attendance systems. Other supporting indicators could include green space per resident, play area/park space per child, and quality of the public space (lack of garbage, sewage, etc.).

Clause: Safety
Comment: “Overall crime reporting statistics” as a core indicator.
Justification: The overall incidence rates of various crimes reported, crimes solved, and data regarding investigations (such as the mapping of crimes, number of FIRs filed and not filed, outcomes of investigations, etc.) should all be included as core indicators to better gauge the safety record of the city.

Clause: Safety 13.3
Comment: Include “crimes carried out using technology or the Internet, as per the Criminal Procedure Code and Information Technology Act, 2008 (Amended)”.
Justification: This indicator will expand the scope of crimes against women to include acts of crime carried out using the Internet as well.

Clause: Safety 13.4
Comment: Include “response time of the police department from the initial call in instances of crimes against women”.
Justification: This would include crimes against women as defined in 13.3. This indicator gives more granular information about safety in general and women’s safety in particular, and about the perception of certain kinds of crimes not being serious enough for the police to respond to.

Clause: Shelter
Comment: Expansion of indicators to include per capita living space and basic amenities within houses.
Justification: The scope of shelter should be expanded to include per capita living space in housing units as well as the availability of basic home amenities, to provide a more wholesome view of the living situation in a city. Some basic amenities that could be included are electricity uptime, water distribution (in litres per household), number of residents in the household, kind of house roofing, etc.

Clause: Telecommunication and Innovation
Comment: Inclusion of indicators on mobile phone usage, mobile network connectivity and computer literacy.
Justification: There are no indicators for mobile phone usage and computer literacy, both of which are essential for the healthy functioning of any city. Indicators to gauge this could include the number of mobile phone users, number of (active) mobile connections, number of computer-literate people, etc. Similar indicators should also be included for cellphone network coverage, public Wi-Fi connectivity and digital public service provisions. Indicators for these could be the number of neighbourhoods/localities/suburbs covered by 2G/3G/4G/5G out of the total number in the city, total number of public Wi-Fi spots per unit area, etc.

Clause: Transportation
Comment: Inclusion of indicators for efficiency, sustainability and planning of city-level transportation.
Justification: The current set of indicators does not measure the efficiency, fuel consumption, sustainability and reach of public transport, especially in the outskirts or suburban areas. These can be included as supporting indicators: the ratio of GPS-connected public transport vehicles to the total number, number of vehicles equipped with panic buttons, quantum of vehicles in the city using renewable energy sources as fuel, automation of toll booths, and automation of points where traffic offences (e.g. illegal honking or overspeeding) can be logged.

Clause: Urban Planning
Comment: Digital information, such as geospatial data, remote sensing and digital mapping, can be used to provide better and more sustainable core indicators.
Justification: Geospatial information (from surveys and satellites) can be utilised to provide macro-level data that can then be used to factor in city expansions, illegal structures, suburban development, etc. Digital mapping and remote sensing capabilities can be leveraged to provide this information, and the utilisation of such information in city development can be made a supporting indicator.

Clause: Sewerage and Sanitation
Comment: Indicators governing community hygiene and sanitation.
Justification: Information about covered toilets per capita, sewage treatment plants, etc. is either absent or too vaguely detailed in the current set of indicators, despite the push from the government towards the Swachh Bharat programme. These should be included as core indicators to encourage sanitation at a citizen level.

Clause: Water Supply
Comment: Indicators for digital measurement of water consumption per capita and at the city level.
Justification: Digital water meters are becoming pervasive and can provide detailed information about water consumption at a household level that was previously unavailable in city planning. At a minimum, a supporting indicator can be included to further bolster information-aware governance in this field.
[1] This submission is authored, in alphabetical order, by Elonnai Hickok ([email protected]), Rohini Lakshané ([email protected]) and Udbhav Tiwari ([email protected]) on behalf of the Centre for Internet and Society, India.
[2] See The Centre for Internet and Society, available at http://cis-india.org, for details of the organization and our work.
The Technology behind Big Data
Download the Paper (PDF, 277 kb)
Introduction
While defining big data is a disputed area in the field of computer science[1], there is some consensus on a basic structure to its definition[2]. Big data is data collected in the form of datasets that meet three main criteria: size, variety & velocity, all of which operate at an immense scale[3]. It is ‘big’ in size, often running into petabytes of information, has vast variety within its components, and is created, captured and analysed at an incredibly rapid velocity. All of this also makes big data difficult to handle using traditional technological tools and techniques.
This paper will attempt to perform a high-level literature review of the most commonly used technological tools and processes in the big data life cycle. The big data life cycle is a conceptual construct that can be used to study the various stages that typically occur in collecting, storing and analysing big data, along with the principles that can govern these processes. The big data life cycle consists of four components, which will also be the key structural points of the paper, namely: Data Acquisition, Data Awareness, Data Analytics & Data Governance. The paper will focus on the aspects that the author believes are relevant for analysing the technological impact of big data on both technology itself and society at large.
Scope: The scope of the paper is to study the technology used in big data, using the "Life Cycle of Big Data" as a model structure to categorise & study the vast range of technologies involved. However, the paper will be limited to the study of technology related directly to the big data life cycle. It shall specifically exclude the use/utilisation of big data from its scope, since big data is most often fed into other, unrelated technologies for consumption, leading to rather limitless possibilities.
Goal: The goal of the paper is twofold: a.) to use the available literature on the technological aspects of big data to perform a brief overview of the technology in the field, and b.) to frame the relevant research questions for studying the technology of big data and its possible impact on society.
Data Acquisition
Acquiring big data has two main subcomponents: the first is sensing the existence of the data itself, and the second is collecting and storing this data. Both of these subcomponents are incredibly diverse fields, with rapid change occurring in the technology used to carry out these tasks. This section will provide a brief overview of the subcomponents and then discuss the technology used to fulfil the tasks.
Data Sensing
Data does not exist in a vacuum and is always created as part of a larger process, especially in the context of modern technology. Therefore, the source of the data itself plays a vital role in determining how it can be captured and analysed in the larger scheme of things. Entities constantly emit information into the environment that can be utilised for the purposes of big data, leading to two main kinds of data: data that is “born digital” or “born analogue.”[4]
Born Digital Data
Information that is “born digital” is created, by a user or by a digital system, specifically for use by a computer or data-processing system. This is a vast range of information, and newer fields are being added to this category on a daily basis. It includes, as a short, indicative list: email and text messaging; any form of digital input, including keyboards, mouse interactions and touch screens; GPS location data; data from daily home appliances (Internet of Things); etc. All of this data can be tracked and tagged to users as well as aggregated to form a larger picture, massively increasing the scope of what may constitute the ‘data’ in big data.
Some indicative uses of how such born digital data is catalogued by technological solutions on the user side, prior to being sent for collection/storage are:
a.) Cookies - These are small, often plain-text, files that are left on user devices by websites in order to associate a visit, task or action (for example, logging into an email account) with a subsequent event (for example, revisiting the website).[5]
b.) Website Analytics[6] - Various services, such as Google Analytics, Piwik, etc., can use JavaScript and other web development languages to record a very detailed, intimate track of a user's actions on a website, including how long a user hovers above a link, the time spent on the website/application and, in some cases, even the time spent on specific aspects of the page.
c.) GPS[7] - With the almost pervasive usage of smartphones with basic location capabilities, GPS sensors on these devices are used to provide regular, minute-by-minute updates to applications, operating systems and even third parties about the user's location. Modern variations such as A-GPS can provide basic positioning information even without satellite coverage, vastly expanding the indoor capabilities of location collection.
All of these instances of sensing born digital data are common terms, used in daily parlance by billions of people all over the world, which is symbolic of just how deeply they have pervaded our daily lives. Apart from raising privacy & security concerns, this in turn leads to an exponential increase in the data available for collection by any interested party. The cookie mechanism in particular is simple enough to sketch in a few lines, as shown below.
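To make the cookie mechanism concrete, here is a minimal sketch in Python using the Flask web framework. It is an illustration only: the route, the "visitor_id" cookie name and the identifier scheme are assumptions made for this example, not taken from any service discussed here.

```python
# Minimal cookie-tracking sketch (Flask). The cookie name and route
# are illustrative assumptions, not drawn from the paper.
from flask import Flask, request, make_response
import uuid

app = Flask(__name__)

@app.route("/")
def index():
    visitor_id = request.cookies.get("visitor_id")
    if visitor_id:
        # Returning visit: the browser sent the cookie back, so the site
        # can associate this request with an earlier one.
        return f"Welcome back, visitor {visitor_id}"
    # First visit: issue a random identifier and ask the browser to store it.
    resp = make_response("Hello, first-time visitor")
    resp.set_cookie("visitor_id", str(uuid.uuid4()))
    return resp

if __name__ == "__main__":
    app.run()
```

Every later request from the same browser carries the identifier back to the server, which is precisely what allows visits to be linked together (and, at scale, aggregated for analytics).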
Sensor Data
Information is said to be “analogue” when it contains characteristics of the physical world, such as images, video, heartbeats, etc. Such information becomes electronic when processed by a “sensor,” a device that can record physical phenomena and convert it into digital information. Some examples to better illustrate information that is born analogue but collected via digital means are:
a.) Voice and/or video content on devices - Apart from phone calls and other forms of communication, video and voice based interactions have begun to be regularly captured to provide enhanced services. These include Google Now[8], Cortana[9] and other digital assistants, as well as voice guided navigation systems in cars, etc.
b.) Personal health data such as heartbeats, blood pressure, respiration, velocity, etc. - This personal, potentially very powerful information is collected by dedicated sensors on devices such as Fitbit[10], Mi Band[11], etc. as well as by increasingly sophisticated smartphone applications such as Google Fit that can do so without any special device.
c.) Camera on Home Appliances - Cameras and sensors on devices such as video game consoles (Kinect[12] being a relevant example) can record detailed human interactions, which can be mined for vast amounts of information apart from carrying out the basic interactions with the devices itself.
While not as vast a category as born digital data, the increasingly lower cost of technology and the ubiquitous usage of digital, networked devices are leading to information that was traditionally analogue in nature being captured for use at a rapidly increasing rate.
Data Collection & Storage
Traditional data was normally processed using the Extract, Transform, Load (ETL) methodology, which was used to collect the data from outside sources, modify the data to fit needs, and then upload the data into the data storage system for future use.[13] Technology such as spreadsheets, RDBMS databases, Structured Query Language (SQL), etc. was initially used to carry out these tasks, more often than not manually.[14]
However, for big data, the traditional methodology is both inefficient and insufficient to meet the demands of modern use. Therefore, the Magnetic, Agile, Deep (MAD) process is used to collect and store data[15][16]. The needs and benefits of such a system are: attracting all data sources regardless of their quality (magnetic); logical and physical contents of storage systems adapting to the rapid data evolution in big data (agile); and supporting the complex algorithmic statistical analysis required of big data at very short notice (deep)[17].
The technology used to perform data storage using the MAD process requires vast amounts of processing power, which is very difficult to create in a single physical space/unit for non-state or research entities who cannot afford supercomputers. Therefore, most solutions used in big data rely on two major components to store data: distributed systems and Massively Parallel Processing[18] (MPP) running on non-relational (in-memory) database systems. Database performance and reliability are traditionally gauged using pure performance metrics (FLOPS, etc.) as well as the atomicity, consistency, isolation, durability (ACID) criteria.[19] The most commonly used database systems for big data applications are given below. The specific operational qualities and performance of each of these databases are beyond the scope of this review, but the common criteria that make them well suited for big data storage have been delineated.
Non-relational databases
Databases traditionally used to be structured entities that operated solely on the ability to correlate information stored in them using explicitly defined relationships. Even prior to the advent of big data, this outlook was turning out to be a limiting factor in how large amounts of stored information could be leveraged; this led to the evolution of non-relational database systems. Before going into them in detail, a basic primer on their data transfer protocols will be helpful in understanding their operation.
A protocol is a model that structures instructions in a particular manner so that they can be reproduced from one system to another[20][21]. The protocols which govern technology in the case of big data have gone through many stages of evolution, starting off with simple HTML based systems[22], which evolved into XML driven SOAP systems[23], which in turn led to JavaScript Object Notation, or JSON[24], the form currently used in most big database systems. JSON is an open format used to transfer data objects using human-readable text, and is the basis for most of the commonly used non-relational database management systems. Examples of non-relational databases, also known as NoSQL databases, include MongoDB[25], Couchbase[26], etc. They were developed for both managing and storing unstructured data, and aim for scaling, flexibility, and simplified development. Such databases focus instead on high-performance, scalable data storage, and allow tasks to be written in the application layer instead of database-specific languages, allowing for greater interoperability.[27]
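As a concrete illustration of JSON-style documents in a NoSQL store, the following minimal sketch uses the pymongo client for MongoDB (named above). It assumes a MongoDB server listening on localhost; the database name, collection name and document fields are invented for the example.

```python
# Storing and querying schemaless JSON-style documents in MongoDB.
# Assumes: a MongoDB server running locally; all names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
events = client["bigdata_demo"]["click_events"]

# Documents in one collection need not share a schema.
events.insert_one({"user": "u42", "action": "page_view", "duration_ms": 1300})
events.insert_one({"user": "u43", "action": "search", "query": "smart cities"})

# Query by field; no table definitions or joins are involved.
for doc in events.find({"user": "u42"}):
    print(doc)
```

The absence of a fixed schema is what lets such stores stay "magnetic" towards new data sources of varying quality, in the sense of the MAD process described earlier.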
In-Memory Databases
In order to overcome the performance limitations of traditional database systems, some modern databases are in-memory databases. These systems manage the data in the RAM of the server, thus eliminating storage disk input/output. This allows for almost real-time responses from the database, in comparison to the minutes or hours required on traditional database systems. This improvement in performance is so massive that entirely new applications are being developed around IMDB systems.[28] These IMDB systems are also being used for advanced analytics on big data, especially to increase the access speed to data and the scoring rate of analytic models.[29] Examples of IMDBs include VoltDB[30], NuoDB[31], SolidDB[32] and Apache Spark[33].
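The in-memory idea can be illustrated with Python's built-in sqlite3 module, which can hold an entire database in RAM. This is only a toy demonstration of the concept; the IMDB products named above are far more sophisticated, distributed systems.

```python
# A toy in-memory database: all data lives in RAM, so no disk I/O occurs.
import sqlite3

conn = sqlite3.connect(":memory:")  # ":memory:" means nothing touches disk
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("meter-1", 230.1), ("meter-1", 231.4), ("meter-2", 228.9)],
)

# Queries are served entirely from memory, hence the near real-time response.
for row in conn.execute("SELECT sensor, AVG(value) FROM readings GROUP BY sensor"):
    print(row)
```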
Hybrid Systems
These are the two major systems used to store data prior to its being processed or analysed in a big data application. However, the divide between data storage and data management is a slim one, and most database systems also contain various unique attributes that suit them to specific kinds of analysis (as can be seen from the IMDB example above). One example of a very commonly used hybrid system that deals with storage as well as awareness of the data is Apache Hadoop, which is detailed below.
Apache Hadoop
Hadoop consists of two main components: the HDFS for the big data storage, and MapReduce for big data analytics, each of which will be detailed in their respective section.
- The HDFS[34][35] storage function in Hadoop provides a reliable distributed file system, stored across multiple systems for processing and redundancy reasons. The file system is optimized for large files, as single files are split into blocks and spread across systems known as cluster nodes.[36] Additionally, the data is protected among the nodes by a replication mechanism, which ensures availability even if any node fails. Further, there are two types of nodes: Data Nodes and Name Nodes.[37] Data is stored in the form of file blocks across multiple Data Nodes, while the Name Node acts as an intermediary between the client and the Data Nodes, directing the requesting client to the particular Data Node which contains the requested data.
This operating structure for storing data also has various variations within Hadoop, such as HBase for key/value pair type queries (a NoSQL based system), Hive for relational type queries, etc. Hadoop’s redundancy, speed, ability to run on commodity hardware, industry support and rapid pace of development have led to it being almost synonymous with big data.[38]
Data Awareness
Data Awareness, in the context of big data, is the task of creating a scheme of relationships within a set of data, allowing different users of the data to determine a fluid yet valid context and utilise it for their desired tasks.[39] It is a relatively new field, in which most of the current work is being done on semantic structures that allow data to gain context in an interoperable format, in contrast to the present system where data is given context using unique, model-specific constructs[40] (such as XML Schemas, etc.).
Some of the original work in this field was carried out using the Resource Description Framework (RDF), which was built primarily to allow data to be described in a portable, platform- and system-agnostic manner as part of the Semantic Web effort at the W3C. SPARQL is the language used to query RDF based designs, but both remain largely underutilised in the public domain as well as in big data. Authors such as Kurt Cagle[41] and Bob DuCharme[42] predict their explosion in the next couple of years. Companies have also started realising the value of interoperable context, with Oracle Spatial[43] and IBM’s DB2[44] already including RDF and SPARQL support in the past 3 years.
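A minimal sketch of the RDF-plus-SPARQL combination, written in Python with the rdflib library, is given below. The namespace and all triples are invented purely for illustration.

```python
# Describing data as RDF triples and querying it with SPARQL (rdflib).
# The http://example.org/ namespace and every fact here are illustrative.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Triples take the form: subject - predicate - object.
g.add((EX.alice, EX.worksAt, EX.cis))
g.add((EX.alice, EX.name, Literal("Alice")))
g.add((EX.cis, EX.locatedIn, Literal("Bangalore")))

# SPARQL: names of people working at organisations located in Bangalore.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?name WHERE {
        ?person ex:worksAt ?org .
        ?person ex:name ?name .
        ?org ex:locatedIn "Bangalore" .
    }
""")
for row in results:
    print(row.name)
```

Because the triples carry their context with them, any RDF-aware system can run the same query without knowing the model-specific schema the data originated in, which is the interoperability point made above.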
While underutilised, the rapid developments taking place in the field could make the impact of data awareness on big data as significant as that of Hadoop, and maybe even SQL. Some aspects of it are already beginning to be used in Artificial Intelligence, Natural Language Processing, etc., with tremendous scope for development.[45]
Data Processing & Analytics
Data processing largely has three primary goals: a.) determining whether the data collected is internally consistent; b.) making the data meaningful to other systems or users, using metaphors or analogies they can understand; and c.) (what many consider most important) providing predictions about future events and behaviours based upon past data and trends.[46]
Being a very vast field with rapidly changing technologies governing its operation, this section will largely concentrate on the most commonly used technologies in data analytics.
Data analytics requires four primary conditions to be met in order to carry out effective processing: fast data loading, fast query processing, efficient utilisation of storage, and adaptivity to dynamic workload patterns. The analytical model most commonly associated with meeting these criteria, and with big data in general, is MapReduce, detailed below. There are other, more niche models and algorithms (such as Project Voldemort[47], used by LinkedIn) employed in big data, but they are beyond the scope of this review; more information about them can be found in the article linked in the previous citation (Reference Architecture and Classification of Technologies, Products and Services for Big Data Systems).
MapReduce
MapReduce is a generic parallel programming concept, derived from the “Map” and “Reduce” functions of functional programming languages, which makes it particularly suited for big data operations. It is at the core of Hadoop[48], and performs the data processing and analytics functions in other big data systems as well.[49] The fundamental premise of MapReduce is scaling out rather than scaling up, i.e., adding more machines rather than increasing the power of a single system.[50]
MapReduce operates by breaking a task down into steps and executing the steps in parallel across many systems. This comes with two advantages: a reduction in the time needed to finish the task, and a decrease in the resources, both computing power and energy, that must be expended to perform it. This model makes it ideally suited to the large data sets and quick response times generally required of big data operations.
The first step of a MapReduce job is to correlate the input values to a set of key/value pairs as output. The “Map” function partitions the processing tasks into smaller tasks and assigns them to the appropriate key/value pairs.[51] This allows unstructured data, such as plain text, to be mapped to a structured key/value pair. As an example, the key could be a punctuation mark in a sentence and the value could be the number of occurrences of that punctuation mark overall. This output of the Map function is then passed on to the “Reduce” function.[52] Reduce then collects and combines the output with identical keys to provide the final result of the task.[53] These steps are carried out using the Job Tracker and Task Tracker in Hadoop, but different systems have different methodologies to carry out similar tasks.
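The flow is easiest to see in a toy, single-machine simulation; the sketch below (in Python) counts word occurrences, the canonical MapReduce example. In a real Hadoop job the map calls would run on different cluster nodes and the framework would handle the shuffle between the two phases.

```python
# Toy single-process simulation of the Map and Reduce phases.
from collections import defaultdict

def map_phase(document):
    # Map: turn unstructured text into (key, value) pairs, here (word, 1).
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # Reduce: group identical keys and combine their values.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

documents = ["big data needs big storage", "data drives decisions"]
# On a cluster, each document (or file block) would be mapped on its own node.
all_pairs = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(all_pairs))  # e.g. {'big': 2, 'data': 2, ...}
```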
Data Governance
Data Governance is the act of managing raw big data, as well as the processed information that arises from big data, in order to meet legal, regulatory and business-imposed requirements. While there is no standardized format for data governance, there have been increasing calls within various sectors (especially healthcare) to create such a format to ensure reliable, secure and consistent big data utilisation across the board. The following tactics and techniques have been utilised or suggested for data governance, with varying degrees of success:
- Zero-knowledge systems: This technological proposal maintains secrecy with respect to the low-level data while allowing encrypted data to be examined for certain higher-level abstractions.[54] For the system to be zero-knowledge, the client’s system has to encrypt the data and send it to the storage provider. The provider therefore stores the data in encrypted form and cannot decipher it unless he/she is in possession of the key which will decrypt the data into plaintext. This allows an individual to store his data with a storage provider while also maintaining the secrecy of the details contained in such information. However, these systems are currently only beginning to be used in simple situations. As of now, they do not extend to unstructured and complex cases and have to be developed further before they can be used for research and data mining purposes.
- Homomorphic encryption: Homomorphic encryption is a privacy preserving technique which performs searches and other computations over data that is encrypted, while also protecting the individual’s privacy.[55] This technique has, however, been considered impractical and is deemed an unlikely policy alternative for the near future in the context of preserving privacy in the age of big data.[56]
- Multi-party computation: In this technique, computation is performed on encrypted, distributed data stores.[57] The mechanism is closely related to homomorphic encryption: individual data is kept private using “collusion-robust” encryption algorithms while statistics are computed over it.[58] Each party knows some of the private data, and all parties run a protocol that produces a result based on both the information they hold and the information they do not, without revealing the data they do not already know.[59] Multi-party computation thus helps generate useful data for statistical and research purposes without compromising the privacy of the individuals involved.
- Differential Privacy: Although this technological development is related to encryption, it follows a different technique. Differential privacy aims at maximizing the precision of computations and database queries while reducing the identifiability of the data owners who have records in the database, usually through obfuscation of query results.[60] It is widely applied today in the context of big data in order to preserve privacy while reaping the benefits of large-scale data collection.[61]
- Searchable encryption: Through this mechanism, the data subject can make certain data searchable while minimizing exposure and maximizing privacy.[62] The data owner can make information available through search engines by providing the data in an encrypted format while adding tags derived from certain keywords which the search engine can match. The encrypted data shows up in search results for those keywords, but can only be read by someone in possession of the key required to decrypt the information. This form of encryption offers strong security for the individual’s data and preserves privacy to a large extent.
- K-anonymity: The property of k-anonymity is applied today to preserve privacy and avoid re-identification.[63] A data set is said to possess k-anonymity if each individual’s record is indistinguishable from at least k-1 other records with respect to the identifying attributes, so that individual-specific data can be released and used for various purposes without re-identification. The analysis of the data should be carried out without attributing the data to the individual to whom it belongs, and the release should offer scientific guarantees to that effect.
- Identity Management Systems: These systems enable individuals to establish and safeguard their identities, describe those identities with the help of attributes, follow the activity of their identities, and delete their identities if they wish to.[64] They use cryptographic schemes and protocols to anonymise or pseudonymise the identities and credentials of individuals before the data is analysed.
- Privacy Preserving Data Publishing: This is a method in which analysts are provided with an individual’s personal information in a form that lets them derive particular information from the database while preventing the inference of certain other information that might lead to a breach of privacy.[65] Data essential for the analysis is provided for processing, while sensitive data is not disclosed. This tool primarily focuses on microdata.
- Privacy Preserving Data Mining: This mechanism uses perturbation and randomization methods along with cryptography to permit data mining on a filtered version of the data that does not contain any sensitive information. Unlike PPDP, which focuses on the published data itself, PPDM focuses on protecting the results of data mining.[66]
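As promised above, the sketches below illustrate several of the listed techniques in minimal, self-contained Python. First, the client-side-encryption aspect of zero-knowledge storage: the provider only ever sees ciphertext, because the key never leaves the client. This illustrates only the storage arrangement described above, not a full zero-knowledge proof system.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()  # generated and kept on the client
client = Fernet(key)

# The client encrypts before upload; the provider stores opaque bytes
# it cannot decipher without the key.
ciphertext = client.encrypt(b"blood pressure: 130/85")

# Only the key holder can ever recover the plaintext.
print(client.decrypt(ciphertext).decode())
```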
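Next, homomorphic encryption. Textbook (unpadded) RSA happens to be multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts, so a server can compute on data it cannot read. This toy uses deliberately tiny, insecure parameters purely to exhibit the property (Python 3.8+ for the modular inverse).

```python
p, q = 61, 53                  # toy primes, insecure by design
n, e = p * q, 17               # public key
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 6
# The product is computed on ciphertexts alone, without decrypting.
product_cipher = (enc(a) * enc(b)) % n
assert dec(product_cipher) == (a * b) % n
print(dec(product_cipher))     # 42
```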
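For multi-party computation, a standard building block is additive secret sharing: each party splits its private value into random shares that sum to the value modulo a public prime, so the parties can jointly compute a total that no single share reveals. The salary figures and modulus below are illustrative assumptions.

```python
import secrets

PRIME = 2**61 - 1  # public modulus, assumed agreed upon by all parties

def share(value, n_parties):
    # All but the last share are random; the last makes the sum come out right.
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

salaries = [52_000, 61_000, 48_000]           # each party's private input
all_shares = [share(s, 3) for s in salaries]  # party i receives one share of each

# Each party sums the shares it holds; combining the partial sums
# reveals only the total, never any individual salary.
partials = [sum(col) % PRIME for col in zip(*all_shares)]
print(sum(partials) % PRIME)  # 161000
```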
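For differential privacy, the classic mechanism adds Laplace noise calibrated to the query's sensitivity divided by the privacy budget epsilon, so one person's presence or absence barely shifts the output distribution. The dataset, predicate and epsilon below are illustrative.

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) by inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one
    # person changes the true count by at most 1.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 18, 57]
print(private_count(ages, lambda a: a > 30, epsilon=0.5))  # noisy value near 4
```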
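For searchable encryption, here is a bare-bones flavour of a symmetric searchable index: documents are stored encrypted (elided here), but each is tagged with keyed HMACs of its keywords, so the server can match an HMAC "trapdoor" supplied by the owner without ever learning the keywords. Filenames and keywords are illustrative.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # held by the data owner, never by the server

def tag(word):
    # Deterministic keyed tag for a keyword; meaningless without KEY.
    return hmac.new(KEY, word.lower().encode(), hashlib.sha256).hexdigest()

index = {
    "doc1.enc": {tag("privacy"), tag("wifi")},
    "doc2.enc": {tag("mapreduce"), tag("hadoop")},
}

def search(word):
    trapdoor = tag(word)  # the owner computes this and hands it to the server
    return [doc for doc, tags in index.items() if trapdoor in tags]

print(search("Hadoop"))  # ['doc2.enc']
```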
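For k-anonymity, the property is easy to check mechanically: every combination of quasi-identifier values must occur in at least k records. The toy table below uses generalised age ranges and truncated ZIP codes, a common way of achieving the property.

```python
from collections import Counter

rows = [
    {"age": "20-30", "zip": "560*", "condition": "flu"},
    {"age": "20-30", "zip": "560*", "condition": "cold"},
    {"age": "30-40", "zip": "560*", "condition": "flu"},
    {"age": "30-40", "zip": "560*", "condition": "asthma"},
]
quasi_identifiers = ("age", "zip")

def is_k_anonymous(table, quasi, k):
    # Count how often each quasi-identifier combination appears.
    counts = Counter(tuple(row[q] for q in quasi) for row in table)
    return all(c >= k for c in counts.values())

print(is_k_anonymous(rows, quasi_identifiers, k=2))  # True: each group has 2 rows
```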
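Finally, for privacy-preserving data mining, randomised response is perhaps the simplest perturbation method: each respondent answers honestly only half the time, making any individual answer deniable while keeping the population statistic recoverable. The 30% "true" rate below is a simulated assumption.

```python
import random

def randomized_answer(truth):
    if random.random() < 0.5:
        return truth                 # answer honestly
    return random.random() < 0.5     # otherwise answer uniformly at random

def estimate_true_rate(answers):
    # P(yes) = 0.5*p + 0.25, so p = 2*(observed - 0.25).
    observed = sum(answers) / len(answers)
    return 2 * (observed - 0.25)

population = [random.random() < 0.3 for _ in range(100_000)]  # 30% truly "yes"
answers = [randomized_answer(t) for t in population]
print(round(estimate_true_rate(answers), 3))  # close to 0.3
```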
Conclusion
Studying the technology surrounding big data has led to two major observations: the rapid pace of development in the industry, and the stark lack of industry standards or government regulations directed towards big data technologies. These observations have been the primary motivation for framing further research in the field. Understanding how to deal with big data technologically, rather than merely regulating possible harms after the technological processes have run their course, may be critical for the human rights dialogue as these processes become ever more extensive, opaque and technologically complicated.
[1] EMC: Data Science and Big Data Analytics. In: EMC Education Services, pp. 1–508 (2012)
[2] Bakshi, K.: Considerations for Big Data: Architecture and Approaches. In: Proceedings of the IEEE Aerospace Conference, pp. 1–7 (2012)
[3] Adams, M.N.: Perspectives on Data Mining. International Journal of Market Research 52(1), 11–19 (2010); Elgendy, N.: Big Data Analytics in Support of the Decision Making Process. MSc Thesis, German University in Cairo, p. 164 (2013)
[4] Big Data and Privacy: A Technological Perspective - President’s Council of Advisors on Science and Technology (May 2014)
[5] Chen, Hsinchun, Roger HL Chiang, and Veda C. Storey. "Business Intelligence and Analytics: From Big Data to Big Impact." MIS quarterly 36.4 (2012): 1165-1188.
[6] Chandramouli, Badrish, Jonathan Goldstein, and Songyun Duan. "Temporal analytics on big data for web advertising." 2012 IEEE 28th international conference on data engineering. IEEE, 2012.
[7] Laurila, Juha K., et al. "The mobile data challenge: Big data for mobile computing research." Pervasive Computing. No. EPFL-CONF-192489. 2012.
[8] Lazer, David, et al. "The parable of Google flu: traps in big data analysis." Science 343.6176 (2014): 1203-1205.
[9] ibid
[10] Banaee, Hadi, Mobyen Uddin Ahmed, and Amy Loutfi. "Data mining for wearable sensors in health monitoring systems: a review of recent trends and challenges." Sensors 13.12 (2013): 17472-17500.
[11] ibid
[12] Chung, Eric S., John D. Davis, and Jaewon Lee. "Linqits: Big data on little clients." ACM SIGARCH Computer Architecture News. Vol. 41. No. 3. ACM, 2013.
[13] Kornelson, Kevin Paul, et al. "Method and system for developing extract transform load systems for data warehouses." U.S. Patent No. 7,139,779. 21 Nov. 2006.
[14] Henry, Scott, et al. "Engineering trade study: extract, transform, load tools for data migration." 2005 IEEE Design Symposium, Systems and Information Engineering. IEEE, 2005.
[15] Cohen, Jeffrey, et al. "MAD skills: new analysis practices for big data." Proceedings of the VLDB Endowment 2.2 (2009): 1481-1492.
[17] Elgendy, Nada, and Ahmed Elragal. "Big data analytics: a literature review paper." Industrial Conference on Data Mining. Springer International Publishing, 2014.
[18] Wu, Xindong, et al. "Data mining with big data." IEEE transactions on knowledge and data engineering 26.1 (2014): 97-107.
[19] Supra Note 17
[20] Hu, Han, et al. "Toward scalable systems for big data analytics: A technology tutorial." IEEE Access 2 (2014): 652-687.
[22] Kurt Cagle, Understanding the Big Data Lifecycle - LinkedIn Pulse (2015)
[23] Coyle, Frank P. XML, Web services, and the data revolution. Addison-Wesley Longman Publishing Co., Inc., 2002.
[24] Pautasso, Cesare, Olaf Zimmermann, and Frank Leymann. "Restful web services vs. big'web services: making the right architectural decision." Proceedings of the 17th international conference on World Wide Web. ACM, 2008.
[25] Banker, Kyle. MongoDB in action. Manning Publications Co., 2011
[26] McCreary, Dan, and Ann Kelly. "Making sense of NoSQL." Shelter Island: Manning (2014): 19-20.
[27] ibid
[28] Zhang, Hao, et al. "In-memory big data management and processing: A survey." IEEE Transactions on Knowledge and Data Engineering 27.7 (2015): 1920-1948.
[29] ibid
[30] ibid
[31] Supra Note 20
[32] Ballard, Chuck, et al. IBM solidDB: Delivering Data with Extreme Speed. IBM Redbooks, 2011.
[33] Shanahan, James G., and Liang Dai. "Large scale distributed data science using apache spark." Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2015; Shvachko, Konstantin, et al. "The hadoop distributed file system." 2010 IEEE 26th symposium on mass storage systems and technologies (MSST). IEEE, 2010.
[34] Borthakur, Dhruba. "The hadoop distributed file system: Architecture and design." Hadoop Project Website 11.2007 (2007): 21.
[36] ibid
[37] ibid
[38] Zikopoulos, Paul, and Chris Eaton. Understanding big data: Analytics for enterprise class hadoop and streaming data. McGraw-Hill Osborne Media, 2011.
[39] Bizer, Christian, et al. "The meaningful use of big data: four perspectives--four challenges." ACM SIGMOD Record 40.4 (2012): 56-60.
[40] Kaisler, Stephen, et al. "Big data: issues and challenges moving forward." System Sciences (HICSS), 2013 46th Hawaii International Conference on. IEEE, 2013.
[41] Supra Note 21
[42] DuCharme, Bob. "What Do RDF and SPARQL bring to Big Data Projects?." Big Data 1.1 (2013): 38-41.
[43] Zhong, Yunqin, et al. "Towards parallel spatial query processing for big spatial data." Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2012 IEEE 26th International. IEEE, 2012.
[44] Ma, Li, et al. "Effective and efficient semantic web data management over DB2." Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, 2008.
[45] Lohr, Steve. "The age of big data." New York Times 11 (2012).
[46] Pääkkönen, Pekka, and Daniel Pakkala. "Reference architecture and classification of technologies, products and services for big data systems." Big Data Research 2.4 (2015): 166-186.
[47] Sumbaly, Roshan, et al. "Serving large-scale batch computed data with project voldemort." Proceedings of the 10th USENIX conference on File and Storage Technologies. USENIX Association, 2012.
[48] Bar-Sinai, Michael. "Big Data Technology Literature Review." arXiv preprint arXiv:1506.08978 (2015).
[49] ibid
[50] Condie, Tyson, et al. "MapReduce Online." Nsdi. Vol. 10. No. 4. 2010.
[51] Supra Note 47
[52] Dean, Jeffrey, and Sanjay Ghemawat. "MapReduce: a flexible data processing tool." Communications of the ACM 53.1 (2010): 72-77.
[53] ibid
[54] Big Data and Privacy: A Technological Perspective, White House, https://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_big_data_and_privacy__may_2014
[55] Tene, Omer, and Jules Polonetsky. "Big data for all: Privacy and user control in the age of analytics." Nw. J. Tech. & Intell. Prop. 11 (2012): xxvii.
[56] Big Data and Privacy: A Technological Perspective, White House, https://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_big_data_and_privacy__may_2014
[57] Privacy by design in big data, ENISA
[58] Big Data and Privacy: A Technological Perspective, White House, https://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_big_data_and_privacy__may_2014
[59] Id
[60] Id
[61] Tene, Omer, and Jules Polonetsky. "Privacy in the age of big data: a time for big decisions." Stanford Law Review Online 64 (2012): 63.
[62] Lane, Julia, et al., eds. Privacy, big data, and the public good: Frameworks for engagement. Cambridge University Press, 2014.
[63] Crawford, Kate, and Jason Schultz. "Big data and due process: Toward a framework to redress predictive privacy harms." BCL Rev. 55 (2014): 93.
[64] Seda Gurses and George Danezis, A critical review of 10 years of privacy technology, August 12th 2010, http://homes.esat.kuleuven.be/~sguerses/papers/DanezisGuersesSurveillancePets2010.pdf
[65] Seda Gurses and George Danezis, A critical review of 10 years of privacy technology, August 12th 2010, http://homes.esat.kuleuven.be/~sguerses/papers/DanezisGuersesSurveillancePets2010.pdf
[66] Id
Developer team fixed vulnerabilities in Honorable PM's app and API
This blog post has been authored by Bhavyanshu Parasher. The original post can be read here.
What were the issues?
The main issue was how the app was communicating with the API served by narendramodi.in.
- I was able to extract private data, like email addresses, of each registered user just by iterating over user IDs.
- There was no authentication check for API endpoints. For instance, I was able to comment as any arbitrary user just by hand-crafting the requests.
- The API was still being served over HTTP instead of HTTPS.
Fixed
- The most important issue of all. Unauthorized access to personal info, like email addresses, is fixed. I have tested it and can confirm it.
- A check to verify that a valid user is making the request to an API endpoint is now in place. I have tested it and can confirm it.
- Blocked HTTP. Every response is served over HTTPS. People on older versions (which were served over HTTP) will get a message regarding this. I have tested it. It says something like “Please update to the latest version of the Narendra Modi App to use this feature and access the latest news and exciting new features”. It’s good that they have figured out a way to deal with people running older versions of the app. At least now they will update the app.
Detailed Vulnerability Disclosure
I found a major security loophole in how the app accesses the “api.narendramodi.in/api/” API. At the time of disclosure, the API was being served over HTTP as well as HTTPS. People still using the older version of the app were accessing endpoints over HTTP. This was an issue because data (passwords, email addresses) was being transmitted as plain text. In simple terms, your login credentials could easily be intercepted: a MITM attack could easily fetch passwords and email addresses. Also, if your ISP keeps logs of data, which it probably does, then they might already have your email address, password, etc. in plain text. So if you were using this app, I would suggest you change your password immediately. You can’t rule out the possibility of it having been compromised.
Another major problem was that the token needed to access the API was giving developers a false sense of security. The access token could easily be fetched, and anyone could send hand-crafted HTTP requests to the server that would produce a valid JSON response without the requesting user ever being authenticated. That included accessing user data (primarily email addresses, and the Facebook profile pictures of those registered via Facebook) for any user, and posting comments as any registered user of the app. There was no authentication check on the API endpoint. Let me explain with a demo.
The API endpoint to fetch user profile information (email address) was getprofile. Before the vulnerability was fixed, the endpoint was accessible via “http://www.narendramodi.in/api/getprofile?userid=useridvalue&token=sometokenvalue”. As you can see, it only required two parameters: userid, which we could easily iterate over starting from 1, and token, which was a fixed value. There was no authentication check at the API access layer. Hand-crafting such requests resulted in a valid JSON response which exposed critical data like the email address of each and every user. I quickly wrote a very simple script to fetch some data to demonstrate. Here is the sample output for xrange(1,10).
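The original script and its output are not reproduced here. The sketch below is a hypothetical reconstruction of the kind of enumeration the post describes, against an endpoint that has since been fixed; the token value and the JSON field name are placeholders, not the real ones.

```python
import requests  # third-party: pip install requests

BASE = "http://www.narendramodi.in/api/getprofile"
TOKEN = "<fixed-token-value>"  # placeholder for the static token the app leaked

# Iterating over sequential user IDs: with no per-user auth check,
# each request returned that user's profile data.
for user_id in range(1, 10):
    resp = requests.get(BASE, params={"userid": user_id, "token": TOKEN})
    if resp.ok:
        print(user_id, resp.json().get("email"))  # "email" field name assumed
```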
Not just email addresses: using this method, you could spam any article while pretending to be any user of the app. There was no authentication check as to who was making what requests to the API.
They have fixed all these vulnerabilities. I still believe it wouldn’t have taken so long had I been able to get in touch with the team of engineers directly right from the beginning. In future, I hope they figure out an easier way to communicate. Such issues must be addressed as soon as they are found, but the communication gap cost us a lot of time. The team did a great job fixing the issues, and that’s what matters.
Disclosure to officials
The email address provided on the Google Play store returned a response stating “The email account that you tried to reach is over quota”. I had to get in touch with the authorities via Twitter.
Vulnerability disclosed to the authorities on 30th Sep 2015, around 5:30 AM
After about 30 hours of reporting the vulnerability
Proposed Solution
Consulted @pranesh_prakash as well regarding the issue.
After this, I mailed them a solution regarding the issues.
Discussion with developer
Received phone call from a developer. Discussed possible solutions to fix it.
The solution that I proposed could not be implemented, since the vulnerability is caused by a design flaw that should have been considered right from the beginning, when they started developing the app. It just proved how difficult it is to fix such issues in mobile apps. For web apps, it’s a lot easier. Why? Because for mobile apps, you need to consider backward compatibility. If they applied my proposed solution, it would crash the app for people running older versions. The main problem is that people don’t upgrade to the latest versions, leaving themselves vulnerable to security flaws. The approach I proposed is, I think, the better way of doing it, but it would break the app for people on older versions, as stated by the developer. Still, the developers have come up with solutions that I think would fix most of the issues and can be considered an alternative.
On Oct 3rd, I received mail from one of the developers informing me they had fixed it. I could not check it at that time as I was busy, but I checked it around 5 PM. I can now confirm they have fixed all three issues.
Update 12/02/2016
This vulnerability in the NM app is similar to the one I got fixed last year. As I said before, the vulnerability stems from how the API has been designed. They released the same patch as they did back then. Removing email addresses from the JSON output is not really a patch; I wonder why they would reintroduce personal information into the JSON output if they knew it was a privacy problem that I had reported a year back. The researcher who found the new issue showed how he was able to follow any user while acting as any other user; similarly, I was able to comment on any post using the account of any user of the app. When I talked to the developer back then, he mentioned that it would be difficult to migrate users to a newer, secure version of the app, so they were releasing this patch in the meantime. It was more of a backward compatibility issue stemming from how the API was designed. The only real solution to this problem is to rewrite the API from scratch and add standard authentication methods. That should take care of most of the vulnerabilities.
Also read:
- Narendra Modi app hacked by youngster, points out risk to 7 million users’ data (New Indian Express; December 2, 2016)
- Security flaw: 22-year-old hacks Modi app and accesses private data of 7 million people (India Today; December 2, 2016)
- The NaMo App Non-Hack is Small Fry – the Tech Security on Government Apps Is Worse (The Wire; December 3, 2016)
Privacy and Security Implications of Public Wi-Fi - A Case Study
Download (PDF)
Contents
1. Introduction
2. Global Scenario
3. Overview of Public Wi-Fi in India
4. Indian Policy and Legal Conundrum
5. Public Wi-Fi and Privacy Concerns
5.1. Data Theft
5.2. Tracking an Individual
5.3. Makes the Electronic Devices Prone to Hacking and Setting up Fake Networks
5.4. Illegal Use of Data
6. Ranking Digital Rights Project
6.1. D-VoIS, Bangalore
6.2. Tata Docomo, Bangalore
7. Compliance of Privacy Policies with Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011
8. Conclusion and Recommendations
8.1. Commitment
8.2. Freedom of Expression
8.3. Privacy
1. Introduction
Recognizing the internet as a critical tool for day-to-day work, and facilitating increased access to it over the past few years,[1] the Indian Government, like Governments across the world, has rolled out plans for offering public Wi-Fi. However, privacy risks of using public Wi-Fi have also been flagged across jurisdictions, and these will be discussed in this paper. Apart from highlighting key privacy concerns associated with the use of free public Wi-Fi, this case study analyses the privacy policies of two Internet Service Providers offering public Wi-Fi services in Bangalore city, Tata Docomo[2] and D-VoiS[3], against the indicators listed under the Ranking Digital Rights project[4], as well as the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011[5]. Based on this analysis, the paper lists key recommendations to these ISPs for sound privacy policies and practices, with a view to achieving a balanced framework and ecosystem in light of key privacy considerations around public Wi-Fi.
2. Global Scenario
Security and privacy concerns around the use of free and public Wi-Fi have been raised in India[6] as well as across the globe. In various cities like Bangalore, Delhi, Hyderabad, New York, London, Paris, etc., privacy experts have raised concerns over the public Wi-Fi systems at metro stations, malls, payphones and other such public places.[7]
For many years, New York City has been developing a “free” public Wi-Fi project called LinkNYC[8] to bring wireless Internet access to the residents of the city. However, privacy concerns have been raised by users and by privacy advocates like the New York Civil Liberties Union, which issued a letter to the Mayor's office on the matter,[9] since the collection of potentially sensitive personal, locational and behavioral data without adequate safeguards could result in such data being shared without the data subject’s consent or knowledge. For example, one concern has been the retention of users' data by CityBridge, the company behind the LinkNYC kiosks, often indefinitely, building a massive database that carries a risk of security breaches and unwarranted surveillance by the police.[10] Users are also concerned that their internet browsing history may reveal sensitive information about their political views, religious affiliations or medical issues,[11] since registration is required to use LinkNYC: users must submit their email addresses and agree to allow CityBridge to collect information about the websites they visit, how long they linger on certain information on a webpage, and the links they click on. Indeed, CityBridge's privacy policy states that this mass of personally identifiable user information would be cleared only after 12 months of user inactivity, raising an alarm in light of privacy concerns.[12]
In 2015, the Information Commissioner’s Office (ICO) conducted a review of public Wi-Fi services on a UK high street and found that the Wi-Fi networks requested varying levels of personal data, which was also processed for marketing purposes. While some networks did not request any personal data, others asked for varying amounts of information, including name, postal and email addresses, mobile number, gender and date of birth, with all but gender required as mandatory fields. During the sign-up process, some Wi-Fi networks provided users with the choice to opt in or out of receiving electronic newsletters and updates, while others offered no choice at all.[13] As a result of the review, the ICO advised the Wi-Fi network providers it had reviewed of improvements they could make to their services, and issued guidance[14] regarding the dangers of using public Wi-Fi.[15] The ICO also recommended that users take time to read all the information given by Wi-Fi service providers before connecting.
In 2006, the European Data Retention Directive 2006/24/EC[16] was introduced to govern the retention of communications data by providers of public electronic communications services in the interest of national security. The Directive obliged providers of publicly available electronic communications services and public communications networks to retain traffic and location data for the purpose of the investigation, detection and prosecution of serious crime.[17] The Data Retention (EC Directive) Regulations 2009[18] were introduced to implement the Directive in the UK. However, the Directive was challenged on grounds of insufficient safeguards for the privacy rights of individuals, given the substantial interference with those rights which it facilitated.[19]
To ensure the protection of users’ data and information, the Data Protection Act 1998[20] in the UK obliges businesses retaining people’s data to comply with the law, which involves informing people about what data is being collected and ensuring that the data is stored securely.[21] For ISPs providing public Wi-Fi services, this covers the information people provide when they log on, such as their email address. Under the Act, data controllers must comply with the data protection principles and ensure that information is used fairly and lawfully, for limited and stated purposes; is adequate, relevant and not excessive; is kept for no longer than absolutely necessary; is handled according to people’s data protection rights; is kept safe and secure; and is not transferred outside the European Economic Area without adequate protection.[22] This regime is due to be updated and brought in line with the European Union’s General Data Protection Regulation (GDPR).
3. Overview of Public Wi-Fi in India
In India, public Wi-Fi has in some cases been offered free for a limited duration in several cities across the country. For example, in 2014, Bangalore became the first city in the country to establish free public Wi-Fi, Namma Wi-Fi (802.11n), in a bid to make Bangalore a smart and connected city. The service is offered at MG Road, Brigade Road and four other locations in Bangalore, including the Traffic and Transit Management Centres (TTMCs) at Shanthinagar, Yeshwanthpur, Koramangala and CMH Road in Indiranagar.[23] The internet and Wi-Fi service provider for Namma Wi-Fi is D-VoiS Broadband Ltd, a city-based firm.[24] However, it seems the State Government plans to pull the plug on the project, citing funds, lack of awareness and difficulty in access as key constraints.[25] Tata Docomo has inked an agreement with GMR Airports to offer Wi-Fi services at several international airports in the country, including the Bangalore International Airport. It offers free Wi-Fi access for 45 minutes, after which users are required to pay for the service online to continue using it.[26]
Delhi introduced free Wi-Fi at its premier shopping hubs of Connaught Place and Khan Market in 2014, and BSNL launched a free Wi-Fi service at Karnataka’s Malpe beach in 2016, making it the first Wi-Fi beach in the three coastal districts of the state.[27] The State Governments of Mumbai, Kolkata, Patna and Ahmedabad also offer free Wi-Fi services in limited areas.[28] As part of the Indian Government's flagship Digital India programme, the Government announced the rollout of Wi-Fi services by June 2015 at select public places in 25 Indian cities with populations of over 10 lakh, and at tourist destinations by December 2015.[29] The Government also plans to digitise India by rolling out free Wi-Fi in 2,500 towns and cities over a span of 3 years.[30] Google plans to deploy Wi-Fi at 100 railway stations in partnership with RailTel; under this scheme, Mumbai Central became the first station to get free Wi-Fi, in 2016.[31] Google's Project Loon, currently being tested in other countries, also aims to provide internet connectivity in remote and rural areas of India.[32]
4. Indian Policy and Legal Conundrum
In light of national security concerns around the misuse of public Wi-Fi, the Department of Telecommunications (DoT), GoI, published a regulation[33] in February 2009 defining procedures for the establishment and use of public Wi-Fi, both to prevent its misuse and to be able to track the perpetrator in case of abuse. Indeed, the DoT has stated that “Insecure Wi-Fi networks are capable of being misused without any trail of user at later date”.[34]
As per the 2009 Regulations, the DoT has instructed ISPs to enforce centralized authentication using a login ID and password for each user, so that the identity of the user can be traced.[35] Regarding Wi-Fi services provided at public places, the Regulations state that bulk login IDs shall be created for controlled distribution, with authentication done at a centralized server. Subscribers are required to use public Wi-Fi by registering with a temporary user ID and password, through one of the following methods:
- Obtaining a copy of the photo identity of the subscriber, to be maintained by the Licensee for one year; or
- Providing details of the user ID and password via SMS to the subscriber's mobile phone, with the mobile number kept on record for one year as his/her identity.
Additionally, the data protection regime in India is governed by section 43A of the Information Technology Act, 2000 and the Rules[36] notified under it. It obliges corporate bodies which possess, deal with or handle any sensitive personal data to implement and maintain reasonable security practices, failing which they are liable to compensate those affected by any negligence attributable to this failure. The said Rules also define requirements and safeguards that every body corporate is legally required to incorporate into the company's privacy policy. The Rules place restrictions on body corporates collecting sensitive personal information, and state that prior consent must be obtained from the “provider of information” regarding the purpose, means and modes of use of the information, along with limits on disclosure of such information.[37] Most ISPs in India, including D-VoiS and Tata Docomo, are private companies and are obliged to comply with these provisions. In addition, under the model License Agreement for the Unified License[38] issued by the Ministry of Communications & IT, Department of Telecommunications, Government of India, where the Unified Access License Framework allows a single license for multiple services such as telecom, the internet and television, the Licensee (here, the ISPs) must maintain the privacy of communications, and network security practices and audits are mandated, with penalties for contravention in addition to what is prescribed under the Information Technology Act, 2000. The Agreement also requires ensuring that unauthorized interception of messages does not take place. Therefore, ISPs providing public Wi-Fi services in cities across India are governed by the data protection regime and could be held liable under these provisions in case of non-compliance with the stated security measures.
In July 2016, the Telecom Regulatory Authority of India (hereinafter “TRAI”) floated a Consultation Paper on Proliferation of Broadband through Public Wi-Fi Networks[39] with the objective of examining, from a public policy point of view, the need to encourage public Wi-Fi networks in the country, and discussing the issues in their proliferation as well as possible solutions. The paper recognises that India is still in a greenfield deployment phase in terms of adoption of public Wi-Fi services, and that it needs to resolve the challenges and risks being faced in the process so as to lay a strong foundation for the meaningful advancement of initiatives related to the Internet of Things, Smart Cities, etc.[40] This is an important step towards fulfilment of the Indian Government's Digital India scheme to ensure better connectivity. In the paper, TRAI advocates the development of a payment platform that allows easy access to Wi-Fi services across internet service providers (ISPs) and through any payment instrument.[41] Besides that, the paper raises the question of what regulatory, licensing or policy measures are required to encourage ubiquitous city-wide Wi-Fi networks and the expansion of Wi-Fi networks in remote or rural areas, along with the issue of encouraging interoperability between the Wi-Fi networks of different service providers, both within the country and internationally, as well as between cellular and Wi-Fi networks.[42]
5. Public Wi-Fi and Privacy Concerns
Since the proliferation of public Wi-Fi in India is happening at only a moderate pace, the TRAI paper discusses the key issues involved, one of them being the logistics of deploying the service. It briefly acknowledges privacy and security concerns as an important factor that may be hindering the adoption of public Wi-Fi services in the country. Given the numerous cases of security vulnerabilities in public Wi-Fi networks worldwide, network security and cyber crime are key issues for consideration.[43]
The deployment of public wireless access points has made it more convenient for people to access the Internet outside their offices or homes. Despite advantages like ease of accessibility, connectivity and convenience, public Wi-Fi connections pose serious concerns as well. “The proliferation of public Wi-Fi is one of the biggest threats to consumer data”, says David Kennedy, founder of TrustedSec, a specialised information security consulting company based in the United States.[44] The networks also become an easier target because of limited public awareness of such threats, with users exposing valuable personal data over Wi-Fi hotspots. The recently released Norton Cyber Security Report 2016[45] shows how the benefit of constant connectivity is often outweighed by consumer complacency, leaving consumers and their Wi-Fi networks at risk. For the report, Norton surveyed 20,000 people (over 1,000 of them from India); the findings suggest that although users in India may be increasingly aware of the cyber threats they face when using public Wi-Fi, they do not fully understand the accompanying risks, and their online behaviour is often contradictory.[46] It is also important to consider that services which claim to be free actually generate revenue through advertisements: the model works by providing free internet access in exchange for users’ personal and behavioral data, which is subsequently used to target ads at them.[47]
Some of the privacy harms stemming from use of public Wi-Fi are listed below.
5.1. Data Theft
With hackers finding it easy to access the personal information of data subjects, data can be hijacked through unauthorized internet access, by spoofing the MAC and IP addresses of an authenticated user’s device, or through the use of default settings (saved passwords or IPs).[48] The following kinds of data are at risk of being stolen and misused:
- demographic and locational data[49]
- forms of personal information acting as identifiers like financial information, social and personal information[50]
- private information like passwords to social networking sites, email accounts and banking websites[51]
- historical data from the devices[52]
5.2. Tracking an Individual
Like cell phones, Wi-Fi devices have unique identifiers that can be used for tracking purposes, which can cause potential security issues. Tracking via a Wi-Fi hotspot can also lead to third-party harms like stalking.[53] To receive or use a service, websites often require the user to share personal information such as name, age, ZIP code or personal preferences, which is frequently passed on to advertisers and other third parties without the knowledge or consent of the users.[54]
5.3. Makes the Electronic Devices Prone to Hacking and Setting up Fake Networks
A recent experiment conducted by the chief scientist at mobile security firm Appknox at the Bengaluru International Airport found that wireless devices could easily be hacked over the airport’s free Wi-Fi network due to easily exploitable security holes in software made by Apple, Google and Microsoft.[55] A similar experiment, backed by the European law enforcement agency Europol, involved creating a mobile hotspot in central London;[56] the hacker was able to gain access to passwords, apps, and even credit card and banking information with ease.[57] The lack of secure software and the prevalence of open, unprotected Wi-Fi have made it fairly easy for hackers to set up fake twin access points that give them access to data histories and personal information,[58] making it easy to track users' data histories. Even where software uses encryption, decryption tools can sometimes be used to obtain the information.[59]
5.4. Illegal Use of Data
- By authorities: the authorities gain easier access to people’s browsing details and habits, and, with national security invoked as justification, this access could be used to monitor people without their consent.[60]
- By the Wi-Fi provider: the provider can sell users' demographic and location information.[61] A study also revealed that the personal information of users is often transmitted by service providers without encryption; anyone along the path between the user and the service’s data center can then intercept this information, exposing users to grave privacy and security risks.[62]
- By hackers: they can steal information, hack into unsuspecting victims' bank accounts, and misuse corporate financial information and secrets.[63]
6. Ranking Digital Rights Project
The "Ranking Digital Rights" project, an ongoing international non-profit research initiative, aims to promote greater respect for freedom of expression and privacy by focusing on the policies and practices of companies in the information communications technology (ICT) sector[64], rank such companies in this light, and undertake research to develop the ranking methodology.[65]
In November 2015, the Ranking Digital Rights project launched the Corporate Accountability Index. Since actors like Internet and telecommunications companies, software producers, and device and networking equipment manufacturers exert growing influence over the political and civil lives of people all over the world, these organisations share a responsibility to respect human rights. For this purpose, 16 Internet and telecommunications companies were evaluated against 31 indicators focused on corporate disclosure of policies and practices that affect users’ freedom of expression and privacy.[66]
The data produced by the Index can help companies improve their policies and practices, and help identify the challenges companies face in meeting their obligations to respect human rights like freedom of expression and privacy in the digital space.[67] Some of the key corporate practices which affect these rights are:
- How companies handle government requests to hand over user data or restrict content;
- How companies enforce their own terms of service;
- What information companies collect about users and how long they retain it; and
- With whom they share, or to whom they sell, user information.[68]
The 2015 Corporate Accountability Index assesses the transparency of the world’s most powerful Internet and telecommunications companies regarding the commitments, policies and practices that affect users’ freedom of expression and privacy, evaluates what companies disclose about these practices, and offers recommendations for improvement. The methodology relies on publicly available information, so that advocates, researchers, journalists, policy makers, investors and users can understand the extent to which different companies respect freedom of expression and privacy, and can make appropriate policy, investment and advocacy decisions. Public disclosures also enable researchers and journalists to investigate and verify the accuracy of company statements.[69]
For the purpose of this research, we apply this index and its indicators to the internet service providers of public Wi-Fi in Bangalore, D-VoiS Ltd. and Tata Docomo, to understand how comprehensive their privacy policies are when compared to global standards, and to make informed recommendations. Analysing policies against the index can help these companies identify best practices, as well as the obstacles they face in meeting their corporate obligations to respect human rights in the very digital spheres they helped to create.[70] The information has been gathered and analysed on the basis of publicly available material; such analysis can help companies empower users to make informed decisions about how they use technology, which would help build trust between users and companies in the long run.[71]
6.1. D-VoIS[72], Bangalore
For the purpose of this case study, the Privacy Policies of D-VoIS have been analysed on the basis of the Corporate Accountability index, and the answers can be accessed in Annex 1.
Summary
On the basis of the indicators and the information available, it can be ascertained that:
- The Company has a freely available and understandable Privacy Policy and Terms of Use, though only in the English language.
- The company does not commit to notify users in case of changes in the privacy policy of the company.
- The company states circumstances in which it would restrict use of its services, along with reasons for content restriction.
- The Company commits to the principle of data minimization, discloses circumstances when it shares information with third parties, and provides users with options to control the company’s collection and sharing of their information.
- Deploys industry standards for security of products and services.
Analysis
- Commitment: D-VoIS fares low on Commitment, since it has made no overarching public commitments to protect users’ freedom of expression or privacy in a manner that meets the Index’s criteria. The Company lacks adequate top-level policy commitments to users’ freedom of expression and privacy, executive and management oversight over these issues, a process for human rights impact assessment, stakeholder engagement, and a grievance mechanism.
- Freedom of Expression: The Company also fares low on Freedom of Expression, as the terms of service, though easily available, are only in the English language. It does not commit to notify users about changes to the terms of service. While the company discloses what content and activities it prohibits, it provides no information about how it notifies users of these restrictions.
Regarding transparency about content restriction requests: while Indian law prevents the company from disclosing government requests for content removal,[73] it does not prevent the company from publishing more information about private requests for content restriction. D-VoIS does not provide any information in this respect.
- Privacy: D-VoIS is required by law to have a privacy policy available on its website; this policy is available in English, but not in other languages spoken in India. D-VoIS does not disclose what user information is collected, how and why, nor does it offer users meaningful access to their information. D-VoIS does not disclose any information regarding retention of user information, and the company could improve its disclosures about what user information it collects and how long it is retained.
Though the company discloses information about its security practices, it does not disclose any information regarding its efforts to educate users about security threats. It also does not disclose information regarding requests by non-governmental entities for user data.
6.2. Tata Docomo[74], Bangalore
The Privacy Policy and Terms & Conditions of Tata Docomo have been analysed on the basis of the Corporate Accountability index, and the answers can be accessed in Annex 2.
Summary
On the basis of the indicators and the information available, it can be ascertained that:
- The Company has a freely available and understandable Data Privacy Policy and Terms of Use, though only in the English language.
- The Company has established electronic and administrative safeguards designed to secure the information collected to prevent unauthorized access to or disclosure of that information and to ensure it is used appropriately.
- The company states circumstances in which it would restrict use of its services, along with reasons for content restriction. The company’s disclosed policies and practices demonstrate how it works to avoid contributing to actions that may interfere with the right to freedom of expression, except where such actions are lawful, proportionate and for a justifiable purpose.
- The Company clearly states the kind of information collected, ways of collection and the reasons for collection as well as sharing.
- Deploys industry standards for security of products and services.
Analysis
- Commitment: Tata Docomo fares low on Commitment, since it has made no overarching public commitments to protect users’ freedom of expression or privacy in a manner that meets the Index’s criteria. Though the Company has established electronic and administrative safeguards designed to secure the information collected, it lacks adequate top-level policy commitments to users’ freedom of expression and privacy, executive and management oversight over these issues, a process for human rights impact assessment, and stakeholder engagement.
- Freedom of Expression: The Company fares low on Freedom of Expression, as the terms of service, though easily available, are only in the English language. It does not commit to notify users about changes to the terms of service. While the company discloses what content and activities it prohibits, it provides no information about how it notifies users of these restrictions.
Regarding transparency about content restriction requests: while Indian law prevents the company from disclosing government requests for content removal, it does not prevent the company from publishing more information about private requests for content restriction. Tata Docomo does not provide any information in this respect.
- Privacy: Tata Docomo is required by law to have a privacy policy available on its website; this policy is available in English, but not in other languages spoken in India. No information is publicly available regarding users' options to control the company's collection of information. Tata Docomo discloses that user information shall be retained as long as required, without specifying a duration. Though the company discloses information about its security practices, it does not disclose any information regarding its efforts to educate users about security threats. Nor does it disclose information regarding requests by non-governmental entities for user data.
7. Compliance of Privacy Policies with Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011
The Privacy Policy and Terms & Conditions of D-VoIS and Tata Docomo have been analysed on the basis of the security measures and procedures stated under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 to ascertain how sound and compliant the framework is with the existing data protection regime in India. The comparison can be accessed in Annex 3.
Comparing the requirements listed under the Rules with the policies of both companies, it can be said that though both companies' websites provide easily accessible privacy policies, these policies lack crucial information regarding the user's consent to the collection as well as the sharing of information. Also, though the policies state the purpose of sharing data with third parties, they do not state the purpose of collecting the information. The policies are also silent on the requirements to be complied with before transferring personal data to another jurisdiction, and there is no information about the companies having a grievance officer. Additionally, though the terms of service of D-VoIS state that the customer may choose to restrict the collection or use of their personal information, neither company specifically provides an opt-out mechanism to its users.
8. Conclusion and Recommendations
To allay the numerous concerns regarding privacy and security with respect to public Wi-Fi, ISPs must have sound privacy policies in place. Adhering to the indicators listed under the Corporate Accountability Index, along with the requirements for the security of personal information stated under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011, and improving their policies accordingly would contribute greatly to the protection of freedom of expression and the privacy of user information. Ensuring compliance with the country's existing data protection regime becomes all the more important in light of the growing privacy and security concerns arising from the proliferation of free public Wi-Fi services in India. Adequate measures, such as acquiring consent for the collection and sharing of user data, commitment by company executives to protecting the rights of individuals, adoption of security standards, and creating awareness about security concerns, must be considered by such corporates to protect personal information and reduce the likelihood of a data breach. Both D-VoIS and Tata Docomo should consider the following recommendations in order to meet the criteria set by the Ranking Digital Rights project, ensuring commitment to the protection of users' rights to freedom of expression and privacy.
8.1. Commitment
- Set in place an oversight mechanism to monitor how the company’s policies and practices affect freedom of expression and privacy. If the Company already has one in place, information regarding it must be made publicly available for greater transparency.
- Conduct regular, comprehensive and credible due diligence, such as human rights impact assessments, to identify how all aspects of the business impact freedom of expression and privacy.
- Provide a remedy or grievance mechanism. The Telecom Regulatory Authority of India also requires all service providers to have redress mechanisms. If the Company already has one in place, information regarding it must be made publicly available for greater transparency.
8.2. Freedom of Expression
- The Companies should make an effort to make their Terms of Service available in the languages most commonly spoken by their users, besides English.
- The Companies should also ensure that they provide meaningful notice to users regarding changes to the terms of service.
- Besides disclosing what content and activities they prohibit, the Companies should disclose how they enforce these prohibitions, and should provide examples of the circumstances under which they may suspend service to individuals or areas, to help users understand these policies.
- The Companies should also disclose their process for evaluating and responding to requests from third parties to restrict content or service. Additionally, they should disclose how long they retain user information, and publish their process for evaluating and responding to requests from governments and other third parties for stored user data and/or real-time communications.
8.3. Privacy
- Though both Companies disclose that user information is shared with third parties, and Tata Docomo discloses what information is collected and how, there should be no legal impediment to the companies improving their disclosures about what user information they collect, with whom it is shared, and how long it is retained, in order to protect the privacy of users.
- While Tata Docomo allows users to review and correct the personal information collected by the Company, D-VoIS should disclose whether users are able to view, download or otherwise obtain all of the information about them that the company holds. If it does not allow this, the Company should change its policy accordingly.
- The Companies must also publish information to help users defend against cyber threats.
[1] The Financial Express, ‘Free wi-fi: Digital Dilemma’, February 22, 2015, http://www.financialexpress.com/article/economy/free-Wi-Fi-digital-dilemma/45804/
[2] Tata Docomo, http://www.tatadocomo.com/
[3] D-VoIS Communication Pvt. Ltd. http://www.dvois.com/
[4] Ranking Digital Rights, https://rankingdigitalrights.org/
[5] the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. Available at : http://www.wipo.int/edocs/lexdocs/laws/en/in/in098en.pdf
[6] See: http://indianexpress.com/article/technology/technology-others/public-wifi-can-be-used-to-steal-private-information-it-security-expert/, http://www.aljazeera.com/indepth/features/2016/03/india-unlocking-public-wi-fi-hotspots-160308072320835.html, http://www.business-standard.com/article/technology/indians-most-willing-to-share-personal-data-over-public-wifi-116083000673_1.html and http://articles.economictimes.indiatimes.com/2015-05-20/news/62413108_1_corporate-espionage-hotspots-bengaluru-airport
[7] Scroll, ‘Free wifi in Delhi is good news but here is the catch’, November 21, 2014, http://scroll.in/article/690755/free-wifi-in-delhi-is-good-news-but-here-is-the-catch
[8] LinkNYC, https://www.link.nyc/
[9] See: http://www.nyclu.org/files/releases/city%20wifi%20letter.pdf
[10] The Huffingtonpost, ‘Maybe You Shouldn't Use Public Wi-Fi In New York City’, March 16, 2016, http://www.huffingtonpost.in/entry/public-wifi-nyc_us_56e96b1ce4b0b25c9183f74a
[11] NYCLU, ‘City’s Public Wi-Fi Raises Privacy Concerns’, March 16, 2016, http://www.nyclu.org/news/citys-public-wi-fi-raises-privacy-concerns
[12] NYCLU, ‘City’s Public Wi-Fi Raises Privacy Concerns’, March 16, 2016, http://www.nyclu.org/news/citys-public-wi-fi-raises-privacy-concerns
[13] Information Commissioner’s Office Blog, ‘Be wary of public Wi-Fi’, September 25, 2015, https://iconewsblog.wordpress.com/2015/09/25/be-wary-of-public-Wi-Fi/
[14] Information Commissioner’s Office Blog, ‘Be wary of public Wi-Fi’, September 25, 2015, https://iconewsblog.wordpress.com/2015/09/25/be-wary-of-public-Wi-Fi/
[15] Marketing Law, ‘The ICO sounds a warning on public wi-fi and privacy’, November 24, 2015, http://marketinglaw.osborneclarke.com/data-and-privacy/the-ico-sounds-a-warning-on-public-Wi-Fi-and-privacy/
[16] Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32006L0024
[17] Feiler, L., "The Legality of the Data Retention Directive in Light of the Fundamental Rights to Privacy and Data Protection", European Journal of Law and Technology, Vol. 1, Issue 3, 2010, http://ejlt.org/article/view/29/75
[18] The Data Retention (EC Directive) Regulations 2009 http://www.legislation.gov.uk/ukdsi/2009/9780111473894/pdfs/ukdsi_9780111473894_en.pdf
[19] Purple, ‘Update on the legal implications of offering public WiFi in the UK’, September 10, 2014, http://purple.ai/update-legal-implications-offering-public-wifi-uk/
[20] Data Protection Act 1998, http://www.legislation.gov.uk/ukpga/1998/29/contents
[21] Wireless Social, http://www.wireless-social.com/how-it-works/legal-compliance/
[22] Data Protection Act 1998, https://www.gov.uk/data-protection/the-data-protection-act
[23] The Hindu, ‘Free wifi on M.G. Road and Brigade Road from Friday’, January 23, 2014, http://www.thehindu.com/news/cities/bangalore/free-wifi-on-mg-road-and-brigade-road-from-friday/article5606757.ece
[24] The Telegraph, ‘Free Wi-fi on tech city streets - Bangalore offers five public hotspots’, January 25, 2014, http://www.telegraphindia.com/1140125/jsp/nation/story_17863705.jsp#.VwIv_Zx97IU
[25] Economic Times, ‘Karnataka Govt pulls the plug on public Wi-Fi spots in Bengaluru’, March 15, 2016, http://tech.economictimes.indiatimes.com/news/internet/karnataka-govt-pulls-the-plug-on-public-Wi-Fi-spots-in-bengaluru/51404414
[26] Medianama, ‘Why Don’t Indian Airports Offer Free WiFi To Passengers?’, May 22, 2013, http://www.medianama.com/2013/05/223-indian-airports-free-wifi/
[27] Hindustan Times, ‘BSNL launches free public WiFi at Karnataka’s Malpe beach’, January 25, 2016, http://www.hindustantimes.com/tech/bsnl-launches-free-public-wifi-on-karnataka-s-malpe-beach/story-XVM06KQKIcoyqV8CLJoYzJ.html
[28] TechTree, ‘Problems With Free City-Wide Wi-Fi Hotspots In India’, September 28, 2015,
[29] India Today, ‘25 Indian cities to get free public Wi-Fi by June 2015’, December 17, 2014, http://indiatoday.intoday.in/technology/story/25-indian-cities-to-get-free-public-Wi-Fi-by-june-2015/1/407214.html
[30] Business Insider, ‘Modi Government To Roll Out Free Wi-Fi In 2,500 Towns And Cities To Make India Digital’, January 23, 2015, http://www.businessinsider.in/Modi-Government-To-Roll-Out-Free-Wi-Fi-In-2500-Towns-And-Cities-To-Make-India-Digital/articleshow/45989339.cms
[31] RailTel launches free high-speed public Wi-Fi service with Google at Mumbai Central, http://www.railtelindia.com/images/Mumbai.pdf
[32] Economic Times, ‘Google may get government nod to conduct pilot for Project Loon in India’, May 24, 2016,
[33] Department of Telecommunications, Ministry of Communications & IT, Government of India, February 23, 2009, http://www.dot.gov.in/sites/default/files/Wi-%20fi%20Direction%20to%20UASL-CMTS-BASIC%2023%20Feb%2009.pdf
[34] Scroll, ‘Free wifi in Delhi is good news but here is the catch’ November 21, 2014, http://scroll.in/article/690755/free-wifi-in-delhi-is-good-news-but-here-is-the-catch
[35]MojoNetworks, ‘Complying with DoT Regulation on Secure Use of WiFi: Less in Letter, More in Spirit’, http://www.mojonetworks.com/fileadmin/pdf/Implementing_DoT_Regulation_on_WiFi_Security.pdf
[36] Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011
[37]The Centre for Internet & Society, ‘Privacy and the Information Technology Act — Do we have the Safeguards for Electronic Privacy?’, April 7, 2011, http://cis-india.org/internet-governance/blog/privacy/safeguards-for-electronic-privacy
[38]License Agreement for Unified License, http://www.dot.gov.in/sites/default/files/Unified%20Licence.pdf
[39] Telecom Regulatory Authority of India, ‘Consultation Paper on Proliferation of Broadband through Public Wi-Fi Networks’ July 13, 2016, https://www.mygov.in/sites/default/files/mygov_1468492162190667.pdf
[40] Telecom Regulatory Authority of India, ‘Consultation Paper on Proliferation of Broadband through Public Wi-Fi Networks’ July 13, 2016, https://www.mygov.in/sites/default/files/mygov_1468492162190667.pdf
[41] The Economic Times, ‘Trai floats consultation paper to boost broadband through Wi-Fi in public places’, July 14, 2016, http://economictimes.indiatimes.com/articleshow/53195586.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst
[42] Telecom Regulatory Authority of India, ‘Consultation Paper on Proliferation of Broadband through Public Wi-Fi Networks’ July 13, 2016, https://www.mygov.in/sites/default/files/mygov_1468492162190667.pdf
[43]Mint, ‘Trai issues paper on public Wi-Fi networks’ July 14, 2016, http://www.livemint.com/Industry/1jVgso2R2Lz4NR5IYFaCtN/Trai-issues-paper-on-public-WiFi-networks.html
[44]Forbes,’How To Avoid Data Theft When Using Public Wi-Fi’, March 4, 2014, http://www.forbes.com/sites/amadoudiallo/2014/03/04/hackers-love-public-wi-fi-but-you-can-make-it-safe/#373c75e32476
[45]Symantec, ‘Norton Cyber Security Insights Report’, 2016, https://www.symantec.com/content/dam/symantec/docs/reports/2016-norton-cyber-security-insights-report.pdf
[46]The Indian Express, ‘Indian cybercrime victims don’t learn from past experience: Norton Report’, November 18, 2016, http://indianexpress.com/article/technology/tech-news-technology/indian-users-complacent-when-it-comes-to-cyber-security-norton-report/
[47]Mashable, ‘This is the real price you pay for 'free' public Wi-Fi’, January 26, 2016, http://mashable.com/2016/01/25/actual-cost-free-Wi-Fi/?utm_cid=mash-com-Tw-main-link#WmAJGJ_COiq5
[48]MojoNetworks, ‘Complying with DoT Regulation on Secure Use of WiFi: Less in Letter, More in Spirit’, http://www.mojonetworks.com/fileadmin/pdf/Implementing_DoT_Regulation_on_WiFi_Security.pdf
[49]Network Computing, ‘Public WiFi, Location Data & Privacy Anxiety’, July 4, 2015, http://www.networkcomputing.com/wireless/public-wifi-location-data-privacy-anxiety/1496375374
[50]Network Computing, ‘Public WiFi, Location Data & Privacy Anxiety’, July 4, 2015, http://www.networkcomputing.com/wireless/public-wifi-location-data-privacy-anxiety/1496375374
[51]The Indian Express, ‘Public Wifi can be used to steal private information: IT Security Expert’, May 19, 2015, http://indianexpress.com/article/technology/technology-others/public-wifi-can-be-used-to-steal-private-information-it-security-expert/#sthash.xiuWtL6v.dpuf
[52]Medium, ‘Maybe Better If You Don’t Read This Story on Public WiFi’, October 14, 2014, https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6#.3061h6lsv
[53]Network Computing, ‘Public WiFi, Location Data & Privacy Anxiety’, July 4, 2015, http://www.networkcomputing.com/wireless/public-wifi-location-data-privacy-anxiety/1496375374
[54]University of Washington, Computer Science and Engineering, ‘When I am on Wi-Fi, I am Fearless:” Privacy Concerns & Practices in Everyday Wi-Fi Use’, https://djw.cs.washington.edu/papers/wifi-CHI09.pdf
[55]Breitbart, ‘Fre Public Wi-Fi poses security risks’, May 19, 2015, http://www.breitbart.com/big-government/2015/05/19/free-public-wifi-poses-security-risk/
[56]The Guardian, ‘Londoners give up eldest children in public Wi-Fi security horror show’, September 29, 2014, https://www.theguardian.com/technology/2014/sep/29/londoners-Wi-Fi-security-herod-clause
[57] Medium, ‘Maybe Better If You Don’t Read This Story on Public WiFi’, October 14, 2014, https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6#.3061h6lsv
[58]ABC13, ‘Hackers set up fake Wi-Fi hotspots to steal your information, July 10, 2015, http://abc13.com/technology/hackers-set-up-fake-Wi-Fi-hotspots-to-steal-your-information/835223/
[59]Medium, ‘Maybe Better If You Don’t Read This Story on Public WiFi’, October 14, 2014, https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6#.3061h6lsv
[60] Scroll, ‘Free wifi in Delhi is good news but here is the catch’ November 21, 2014, http://scroll.in/article/690755/free-wifi-in-delhi-is-good-news-but-here-is-the-catch
[61] Scroll, ‘Free wifi in Delhi is good news but here is the catch’ November 21, 2014, http://scroll.in/article/690755/free-wifi-in-delhi-is-good-news-but-here-is-the-catch
[62]University of Washington, Computer Science and Engineering, ‘When I am on Wi-Fi, I am Fearless:” Privacy Concerns & Practices in Everyday Wi-Fi Use’, https://djw.cs.washington.edu/papers/wifi-CHI09.pdf
[63] Breitbart, ‘Fre Public Wi-Fi poses security risks’, May 19, 2015, http://www.breitbart.com/big-government/2015/05/19/free-public-wifi-poses-security-risk/
[64] Ranking Digital Rights, https://rankingdigitalrights.org/who/frequently-asked-questions/
[65] Business & Human Rights Resource Centre, ‘Ranking Digital Rights Project’, http://business-humanrights.org/en/documents/ranking-digital-rights-project
[66] Ranking Digital Rights, https://rankingdigitalrights.org/about/
[67] Ranking Digital Rights, https://rankingdigitalrights.org/about/
[68] Ranking Digital Rights, https://rankingdigitalrights.org/who/frequently-asked-questions/
[69] Ranking Digital Rights, https://rankingdigitalrights.org/who/frequently-asked-questions/
[70] Ranking Digital Rights, https://rankingdigitalrights.org/about/
[71] Ranking Digital Rights, https://rankingdigitalrights.org/who/frequently-asked-questions/
[72] D-VoIS Communication Pvt. Ltd. http://www.dvois.com/
[73]Section 16 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 states that all request and complaints must be kept confidential.
[74] Tata Docomo, http://www.tatadocomo.com/
Habeas Data in India
Download the Paper (PDF)
Introduction
The writ of habeas data emerged as a distinct response to recent histories of authoritarian rule and state surveillance, providing individuals with basic rights to access personal information collected by the state (and sometimes by private agencies of a public nature) and to challenge and correct such data, requiring the state to safeguard the privacy and accuracy of people's personal data.[1]
The origins of Habeas Data are traced, unsurprisingly, to the European legal regime, since Europe is considered the fountainhead of modern data protection laws. The inspiration for Habeas Data is often considered to be the Council of Europe's Convention 108 on Data Protection of 1981.[2] The purpose of the Convention was to secure the privacy of individuals with regard to the automated processing of personal data. To this end, individuals were granted several rights, including a right to access their personal data held in an automated database.[3]
Another source of inspiration behind Habeas Data is considered to be the German legal system, where the German Federal Constitutional Court created a constitutional right to informational self-determination by interpreting the existing rights to human dignity and personality. This is a right to know what data about an individual is stored in manual and automated databases, and it implies that there must be transparency in the gathering and processing of such data.[4]
Habeas Data is essentially a right, or mechanism, by which an individual complaint is presented to a constitutional court to protect the image, privacy, honour, informational self-determination and freedom of information of a person.[5]
A Habeas Data complaint can be filed by any citizen against any register to find out what information is held about his or her person. That person can request the rectification, updating or even the destruction of the personal data held; in most cases it does not matter whether the register is private or public.[6]
Habeas Data in different jurisdictions
Habeas Data does not have any one specific definition and has different characteristics in different jurisdictions. It will therefore be useful to describe the scope of Habeas Data as it has been incorporated in certain jurisdictions, in order to better understand what the right entails:[7]
Brazil
The Constitution of Brazil grants its citizens habeas data “a. to assure knowledge of personal information about the petitioner contained in records or data banks of government agencies or entities of a public character; b. to correct data whenever the petitioner prefers not to do so through confidential judicial or administrative proceedings”.[8]
The place or tribunal where the Habeas Data action is to be filed changes depending on whom it is presented against, which creates a complicated system of venues. Both the Brazilian Constitution and the 1997 law stipulate that the competent court will be:
- The Superior Federal Tribunal for actions against the President, both chambers of Congress and itself;
- The Superior Justice Tribunal for actions against Ministers or itself;
- The regional federal judges for actions against federal authorities;
- State tribunals according to each state law;
- State judges for all other cases.[9]
Paraguay
The Constitution of Paraguay grants a similar right of habeas data, stating:
"All persons may access the information and the data that about themselves, or about their assets, [that] is [obren] in official or private registries of a public character, as well as to know the use made of the same and of their end. [All persons] may request before the competent magistrate the updating, the rectification or the destruction of these, if they were wrong or illegitimately affected their rights."[10]
Compared to the right granted in Brazil, the text of the Paraguayan Constitution specifically recognises that the citizen also has the right to know the use to which his/her data is being put.
Argentina
Article 43 of the Constitution of Argentina grants the right of habeas data, though it has been included under the action of “amparo”.[11] The relevant portion of Article 43 states as follows:
"Any person may file an amparo action to find out and to learn the purpose of data about him which is on record in public registries or data banks, or in any private [registers or data banks] whose purpose is to provide information, and in case of falsity or discrimination, to demand the suppression, rectification, confidentiality, or updating of the same. The secrecy of journalistic information sources shall not be affected."[12]
The version of Habeas Data recognised in Argentina includes most of the protections seen in Brazil and Paraguay, such as the right to access data and to rectify, update or destroy it. The Argentinean Constitution nevertheless includes certain other features: it incorporates the Peruvian idea of confidentiality of data, interpreted as a prohibition on broadcasting or transmitting incorrect or false information. Another feature of the Argentinean law is that it specifically excludes the press from the action, which may be considered reasonable or unreasonable depending upon the context and country in which it is applied.[13]
Venezuela
Article 28 of the Constitution of Venezuela establishes the writ of habeas data, expressly permitting access to information stored in official and private registries. It states as follows:
"All individuals have a right to access information and data about themselves and about their property stored in official as well as private registries. Secondly, they are entitled to know the purpose of and the policy behind these registries. Thirdly, they have a right to request, before a competent tribunal, the updating, rectification, or destruction of any database that is inaccurate or that undermines their entitlements. The law shall establish exceptions to these principles. By the same token, any person shall have access to information that is of interest to communities and groups. The secrecy of the sources of newspapers-and of other entities or individuals as defined by law-shall be preserved."[14]
The Venezuelan writ of habeas data expressly provides that individuals "are entitled to know the purpose of and the policy behind these registries." It also expresses a right to the "updating, rectification, or destruction of any database that is inaccurate or that undermines their entitlements." Article 28 further declares that the "secrecy of the sources of newspapers, and of other entities or individuals as defined by law, shall be preserved."[15]
Philippines
The remedy of Habeas Data is not confined to Latin American jurisdictions. In Asia, the writ of Habeas Data has been specifically granted by the Supreme Court of the Philippines vide its resolution dated January 22, 2008, which provides that “The writ of habeas data is a remedy available to any person whose right to privacy in life, liberty or security is violated or threatened by an unlawful act or omission of a public official or employee, or of a private individual or entity engaged in the gathering, collecting or storing of data or information regarding the person, family, home and correspondence of the aggrieved party.” According to the Rule on the Writ of Habeas Data, the petition is to be filed with the Regional Trial Court where the petitioner or respondent resides, or which has jurisdiction over the place where the data or information is gathered, collected or stored, at the option of the petitioner. The petition may also be filed with the Supreme Court, the Court of Appeals or the Sandiganbayan when the action concerns public data files of government offices.[16]
Two major distinctions are immediately visible between the Philippine right and that in the Latin American jurisdictions discussed above. First, in countries such as Brazil, Argentina and Paraguay there does not appear to be any prerequisite to filing such an action, whereas in the Philippines the petition can be filed only if an individual's “right to privacy in life, liberty or security is violated or threatened by an unlawful act or omission”. The Philippine concept of habeas data is thus much more limited in scope, available to citizens only under certain specific conditions. On the other hand, the Philippine right of Habeas Data is much wider in its applicability, in the sense that it is available even against private individuals and entities “engaged in the gathering, collecting or storing of data or information regarding the person, family, home and correspondence”. In the Latin American jurisdictions discussed above, the writ appears to be available only against public institutions or private institutions having some public character.
Main features of Habeas Data
Thus, from the discussion above, the main features of the writ of habeas data, as applied in various jurisdictions, can be culled out as follows:[17]
- It is a right of the individual or citizen to ask for his/her information contained in any data registry;
- It is available only against public (government) entities or employees, or private entities having a public character;[18]
- Usually it also gives the individual the right to correct any wrong information contained in the data registry;
- It is a remedy that is usually available by approaching any single judicial forum.
Since the writ of Habeas Data was established and has evolved primarily in Latin American countries, not much literature on it is freely available in English, which is a serious hurdle to researching this area. For example, this author did not find many articles discussing the scope of the writ, such as whether it is an absolute right and on what grounds it can be denied. The Constitution of Venezuela, for instance, specifies that the law shall establish exceptions to these principles, and in fact mentions the secrecy of newspaper sources as one such exception.[19]
Similarly in Argentina, there exists a public interest exception to the issuance of the writ of Habeas Data.[20]
That said, although little literature on the specific exceptions to habeas data is freely available in English, references can still be found to exceptions such as state security (Brazil), secrecy of newspaper sources (Argentina and Venezuela), or other entities defined by law (Venezuela).[21]
This suggests, as would be expected, that the right to ask for the writ of habeas data is not absolute but is subject to certain exceptions and must be balanced against other needs such as state security and police investigations.
Habeas Data in the context of Privacy
Data protection legislation and mechanisms protect people against the misuse of personal information by data controllers. Habeas Data, a remedy available only in certain countries, gives individuals the right to access, correct, and object to the processing of their information.
In general, privacy is the genus and data protection the species: data protection is a right to personal privacy that people have against the use of their personal data by data controllers in an unauthorized manner or contrary to the law in force. Habeas Data is an action brought before the courts to protect an individual's image, privacy, honour, informational self-determination and freedom of information. In that sense, the right of Habeas Data falls within the broader ambit of data protection. It does not require data processors to ensure the protection of the personal data they process; rather, it is a legal action by which the aggrieved person, on filing a complaint before the courts of justice, may obtain access to and/or the rectification of any personal data which may jeopardize their right to privacy.[22]
Habeas Data in the Indian Context
Although a number of judgments of the Apex Court in India have recognised the existence of a right to privacy by interpreting the fundamental rights to life and free movement in the Constitution of India,[23]
the writ of habeas data has no legal recognition under Indian law. However, as is evident from the discussion above, a writ of habeas data is very useful in protecting the right to privacy of individuals and would be a valuable tool in the hands of citizens. The fact that India has a fairly robust right to information legislation means that at least some facets of the right of habeas data are available under Indian law. We shall now examine the Indian Right to Information Act, 2005 (RTI Act) to see which facets of habeas data are already available under the Act and which are left wanting. As mentioned above, the writ of habeas data has the following main features:
- It is a right of the individual or citizen to ask for his/her information contained in any data registry;
- It is available only against public (government) entities or employees, or private entities having a public character;[24]
- Usually it also gives the individual the right to correct any wrong information contained in the data registry;
- It is a remedy that is usually available by approaching any single judicial forum.
We shall now take each of these features and analyse whether the RTI Act provides any similar rights and how they differ from each other.
Right to seek one's information contained in a data registry
Habeas data enables the individual to seek his or her information contained in any data registry. The RTI Act allows citizens to seek “information” which is under the control of or held by any public authority. The term information has been defined under the RTI Act to mean “any material in any form, including records, documents, memos, e-mails, opinions, advices, press releases, circulars, orders, logbooks, contracts, reports, papers, samples, models, data material held in any electronic form and information relating to any private body which can be accessed by a public authority under any other law for the time being in force”.[25]
Further, the term “record” has been defined to include “(a) any document, manuscript and file; (b) any microfilm, microfiche and facsimile copy of a document; (c) any reproduction of image or images embodied in such microfilm (whether enlarged or not); and (d) any other material produced by a computer or any other device”. It is quite apparent that the meaning given to the term information is quite wide and can include various types of information within its fold. The term “information” as defined in the RTI Act has been further elaborated by the Supreme Court in the case of Central Board of Secondary Education v. Aditya Bandopadhyay,[26]
where the Court has held that a person’s evaluated answer sheet for the board exams held by the CBSE would come under the ambit of “information” and should be accessible to the person under the RTI Act.[27]
An illustrative list of items that have been considered to be “information” under the RTI Act would be helpful in further understanding the concept:
- Asset declarations by Judges;[28]
- Copy of inspection report prepared by the Reserve Bank of India about a Co-operative Bank;[29]
- Information on the status of an enquiry;[30]
- Information regarding cancellation of an appointment letter;[31]
- Information regarding transfer of services;[32]
- Information regarding donations given by the President of India out of public funds.[33]
The above list indicates that any personal information relating to an individual that is available in a government registry would, in all likelihood, be considered “information” under the RTI Act.
However, just because the information asked for comes within the ambit of section 2(f) does not mean that the person will be granted access to it: access will be refused if the information falls under any of the exceptions listed in section 8 of the RTI Act. Section 8 provides that information falling into any of the following categories shall not be released in response to an application under the RTI Act:
"(a) information, disclosure of which would prejudicially affect the sovereignty and integrity of India, the security, strategic, scientific or economic interests of the State, relation with foreign State or lead to incitement of an offence;
(b) information which has been expressly forbidden to be published by any court of law or tribunal or the disclosure of which may constitute contempt of court;
(c) information, the disclosure of which would cause a breach of privilege of Parliament or the State Legislature;
(d) information including commercial confidence, trade secrets or intellectual property, the disclosure of which would harm the competitive position of a third party, unless the competent authority is satisfied that larger public interest warrants the disclosure of such information;
(e) information available to a person in his fiduciary relationship, unless the competent authority is satisfied that the larger public interest warrants the disclosure of such information;
(f) information received in confidence from foreign Government;
(g) information, the disclosure of which would endanger the life or physical safety of any person or identify the source of information or assistance given in confidence for law enforcement or security purposes;
(h) information which would impede the process of investigation or apprehension or prosecution of offenders;
(i) cabinet papers including records of deliberations of the Council of Ministers, Secretaries and other officers:
Provided that the decisions of Council of Ministers, the reasons thereof, and the material on the basis of which the decisions were taken shall be made public after the decision has been taken, and the matter is complete, or over:
Provided further that those matters which come under the exemptions specified in this section shall not be disclosed;
(j) information which relates to personal information the disclosure of which has no relationship to any public activity or interest, or which would cause unwarranted invasion of the privacy of the individual unless the Central Public Information Officer or the State Public Information Officer or the appellate authority, as the case may be, is satisfied that the larger public interest justifies the disclosure of such information:
Provided that the information which cannot be denied to the Parliament or a State Legislature shall not be denied to any person."
The above-mentioned exceptions seem fairly reasonable and are in fact important, since public records may contain information of a private nature which the data subject would not want revealed; that is exactly why personal information is a specific exception under the RTI Act. When comparing this list with the recognised exceptions under habeas data, it must be remembered that several of the exceptions listed above, such as commercial secrets and personal information, would not be relevant in a habeas data petition. The exceptions that could be relevant both under the RTI Act and for a habeas data writ are (a) national security or sovereignty; (b) prohibition on publication by a court; (c) endangering the physical safety of a person; and (d) hindrance of the investigation of a crime. It is difficult to imagine a court (especially in India) granting a habeas data writ in violation of these four exceptions.
Certain other exceptions that may be relevant in a habeas data context but are not mentioned in the common list above are (a) information received in a fiduciary relationship; (b) breach of legislative privilege; (c) cabinet papers; and (d) information received in confidence from a foreign government. These four are not as immediately appealing as the exceptions listed above, because they involve competing interests on which different jurisdictions may take different views.[34]
Available only against public (government) entities or entities having public character
A habeas data writ is maintainable in a court to ask for information relating to the petitioner held by either a public entity or a private entity having a public character. In India, the right to information as defined in the RTI Act means the right to information accessible under the Act which is held by or under the control of any public authority. The term "public authority" has been defined under the Act to mean “any authority or body or institution of self-government established or constituted—
(a) by or under the Constitution;
(b) by any other law made by Parliament;
(c) by any other law made by State Legislature;
(d) by notification issued or order made by the appropriate Government, and includes any— (i) body owned, controlled or substantially financed; (ii) non-Government organisation substantially financed, directly or indirectly by funds provided by the appropriate Government;"[35]
Therefore, most government departments, as well as statutory and government-controlled corporations, would come under the purview of the term "public authority". For the purposes of the RTI Act, either control or substantial financing by the government is enough to bring an entity under the definition of public authority.[36]
The above interpretation is further bolstered by the fact that the preamble of the RTI Act contains the term “governments and their instrumentalities".[37]
Right to correct wrong information
While certain sectoral legislation, such as the Representation of the People Act and the Collection of Statistics Act, may provide for the correction of inaccurate information, the RTI Act contains no such provision. This stands to reason, because the RTI Act is not geared towards providing people with information about themselves; it is a transparency law geared at the dissemination of information, which may or may not relate to an individual.
Available upon approaching a single judicial forum
While the right of habeas data is available only upon approaching a judicial forum, the right to information under the RTI Act is realised entirely through the bureaucratic machinery. This also means that individuals have to approach many different entities to get the information they need, instead of approaching one centralised entity.
Conclusion
There is no doubt that habeas data, by itself, cannot end massive electronic surveillance of the kind being carried out by various governments in this day and age, or the excessive collection of data by private sector companies; but providing the citizenry with the right to ask for such a writ would provide a critical check on such policies and practices of vast surveillance.[38]
An informed citizenry, armed with a right such as habeas data, would be better able to learn about the information being collected and kept on them under the garb of law and governance, to access such information, and to demand its correction or deletion when its retention by the government is not justified.
As we have discussed in this paper, under Indian law the RTI Act gives citizens certain aspects of this right, with a few notable exceptions. Therefore, if a writ such as habeas data is to be effectuated in India, it may be better to approach it by amending the existing structure of the RTI Act: granting individuals the right to correct mistakes in their data, and creating a separate mechanism so that applications demanding access to one's own data can be submitted at one central place rather than to different departments. This approach may be more pragmatic than asking for a constitutional change granting citizens the right to ask for a writ in the nature of habeas data.
There may be calls to also include private data processors within the ambit of the right to habeas data, but such a right could be challenging to enforce. While it is feasible to assume that the government can put in place machinery to find out whether information about a particular individual is held by any of the government's myriad departments and corporations, it would be almost impossible for the government to track every single private database and scan it for information about a specific individual. This also raises the question whether a right such as habeas data, which originated in a specific context of government surveillance, is appropriate for protecting the privacy of individuals against the private sector. Since section 43A of the Information Technology Act, 2000 and the Rules thereunder, which regulate data protection, already provide for consent and notice as major bulwarks against unauthorised data collection, and limit the purposes for which such data can be utilised, privacy concerns in this context can perhaps be better addressed by strengthening those provisions rather than by trying to extend the concept of habeas data to the private sector.
[1]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, Chicago-Kent Law Review, Vol. 90, No. 2 (2015); St. Thomas University School of Law (Florida) Research Paper No. 2015-06, available at SSRN: http://ssrn.com/abstract=2694803
[2]. Article 8 of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, 1981, available at https://www.coe.int/en/web/conventions/full-list/-/conventions/rms/0900001680078b37
[3]. Guadamuz A, ‘Habeas Data: The Latin-American Response to Data Protection’, 2000 (2) The Journal of Information, Law and Technology (JILT).
[4]. Id.
[5]. Speech by Chief Justice Reynato Puno, Supreme Court of the Philippines, delivered at the UNESCO Policy Forum and Organizational Meeting of the Information for All Programme (IFAP), Philippine National Committee, on November 19, 2007, available at http://jlp-law.com/blog/writ-of-habeas-data-by-chief-justice-reynato-puno/
[6]. Guadamuz A, ‘Habeas Data: The Latin-American Response to Data Protection’, 2000 (2) The Journal of Information, Law and Technology (JILT).
[7]. The author does not purport to be an expert on the laws of these jurisdictions and the analysis in this paper has been based on a reading of the actual text or interpretations given in the papers that have been cited as the sources. The views in this paper should be viewed keeping this context in mind.
[8]. Article 5, LXXII of the Constitution of Brazil, available at https://www.constituteproject.org/constitution/Brazil_2014.pdf
[9]. Guadamuz A, 'Habeas Data vs the European Data Protection Directive', Refereed article, 2001 (3) The Journal of Information, Law and Technology (JILT).
[10]. Article 135 of the Constitution of Paraguay, available at https://www.constituteproject.org/constitution/Paraguay_2011.pdf?lang=en
[11]. The petition for a writ of amparo is a remedy available to any person whose right to life, liberty and security is violated or threatened with violation by an unlawful act or omission of a public official or employee, or of a private individual or entity.
[12]. Article 43 of the Constitution of Argentina, available at https://www.constituteproject.org/constitution/Argentina_1994.pdf?lang=en
[13]. Guadamuz A, ‘Habeas Data vs the European Data Protection Directive’, 2001 (3) The Journal of Information, Law and Technology (JILT), available at https://www2.warwick.ac.uk/fac/soc/law/elj/jilt/2001_3/guadamuz/
[14]. Article 28 of the Venezuelan Constitution, available at http://www.venezuelaemb.or.kr/english/ConstitutionoftheBolivarianingles.pdf
[15]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, Chicago-Kent Law Review, Vol. 90, No. 2 (2015); St. Thomas University School of Law (Florida) Research Paper No. 2015-06, available at SSRN: http://ssrn.com/abstract=2694803
[16]. Rule on the Writ of Habeas Data Resolution, available at http://hrlibrary.umn.edu/research/Philippines/Rule%20on%20Habeas%20Data.pdf
[17]. The characteristics of habeas data culled out in this paper are by no means exhaustive and based only on the analysis of the jurisdictions discussed in this paper. This author does not claim to have done an exhaustive analysis of every jurisdiction where Habeas Data is available and the views in this paper should be viewed in that context.
[18]. Except in the case of the Philippines and Venezuela. This paper has not analysed the writ of habeas data in every jurisdiction where it is available, and there may be jurisdictions other than the Philippines which also grant this right against private entities.
[19]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, Chicago-Kent Law Review, Vol. 90, No. 2 (2015); St. Thomas University School of Law (Florida) Research Paper No. 2015-06, available at SSRN: http://ssrn.com/abstract=2694803
[20]. Ganora v. Estado Nacional, Supreme Court of Argentina, September 16, 1999; cf. http://www.worldlii.org/int/journals/EPICPrivHR/2006/PHR2006-Argentin.html
[21]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, Chicago-Kent Law Review, Vol. 90, No. 2 (2015); St. Thomas University School of Law (Florida) Research Paper No. 2015-06, available at SSRN: http://ssrn.com/abstract=2694803
[22]. Organization of American States, Department of International Law, http://www.oas.org/dil/data_protection_privacy_habeas_data.htm
[23]. Even the scope of the right to privacy is currently under review in the Supreme Court of India. See “Right to Privacy in Peril”, http://cis-india.org/internet-governance/blog/right-to-privacy-in-peril
[24]. Except in the case of the Philippines. This paper has not done an analysis of the writ of habeas data in every jurisdiction where it is available and there may be jurisdictions other than the Philippines which also give this right against private entities.
[25]. Section 2(f) of the Right to Information Act, 2005.
[26]. 2011 (106) AIC 187 (SC), also available at http://judis.nic.in/supremecourt/imgst.aspx?filename=38344
[27]. The exact words of the Court were: “The definition of `information' in section 2(f) of the RTI Act refers to any material in any form which includes records, documents, opinions, papers among several other enumerated items. The term `record' is defined in section 2(i) of the said Act as including any document, manuscript or file among others. When a candidate participates in an examination and writes his answers in an answer-book and submits it to the examining body for evaluation and declaration of the result, the answer-book is a document or record. When the answer-book is evaluated by an examiner appointed by the examining body, the evaluated answer-book becomes a record containing the `opinion' of the examiner. Therefore the evaluated answer-book is also an `information' under the RTI Act.”
[28]. Secretary General, Supreme Court of India v. Subhash Chandra Agarwal, AIR 2010 Del 159, available at https://indiankanoon.org/doc/1342199/
[29]. Ravi Ronchodlal Patel v. Reserve Bank of India, Central Information Commission, dated 6-9-2006.
[30]. Anurag Mittal v. National Institute of Health and Family Welfare, Central Information Commission, dated 29-6-2006.
[31]. Sandeep Bansal v. Army Headquarters, Ministry of Defence, Central Information Commission, dated 10-11-2008.
[32]. M.M. Kalra v. DDA, Central Information Commission, dated 20-11-2008.
[33]. Nitesh Kumar Tripathi v. CPIO, Central Information Commission, dated 4-5-2012.
[34]. A similar logic may apply to the exceptions of (i) cabinet papers, and (ii) parliamentary privilege.
[35]. Section 2 (h) of the Right to Information Act, 2005.
[36]. M.P. Verghese v. Mahatma Gandhi University, 2007 (58) AIC 663 (Ker), available at https://indiankanoon.org/doc/1189278/
[37]. Principal, M.D. Sanatan Dharam Girls College, Ambala City v. State Information Commissioner, AIR 2008 P&H 101, available at https://indiankanoon.org/doc/1672120/
[38]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, Chicago-Kent Law Review, Vol. 90, No. 2 (2015); St. Thomas University School of Law (Florida) Research Paper No. 2015-06, available at SSRN: http://ssrn.com/abstract=2694803
Comments on the Draft National Policy on Software Products
I. Preliminary
1. This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the Draft National Policy on Software Products [1] (“draft policy”), released by the Ministry of Electronics & Information Technology (“MeitY”).
2. CIS commends MeitY on its initiative to present a draft policy, and is thankful for the opportunity to put forth its views in this public consultation period.
3. This submission is divided into three main parts. The first part, ‘Preliminary’, introduces the document; the second part, ‘About CIS’, is an overview of the organization; and the third part contains CIS’ comments on the Draft National Policy on Software Products.
II. About CIS
4. CIS is a non-profit organisation [2] that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, freedom of speech and expression, intermediary liability, digital privacy, and cyber security.
5. CIS values the fundamental principles of justice, equality, freedom and economic development. This submission is consistent with CIS' commitment to these values, the safeguarding of general public interest and the protection of India's national interest at the international level. Accordingly, the comments in this submission aim to further these principles.
III. Comments on the Draft National Policy on Software Products
General Comments
6. CIS commends MeitY on its initiative to develop a consolidated National Policy on Software Products. We believe that certain salient points in the draft policy deserve particular appreciation for being in the interest of all stakeholders, especially the public. An indicative list of such points includes:
- A focus on aiding digital inclusion via software, especially in the fields of finance, education and healthcare.
- The recognition of the need for openness and the application of open data principles in the private and public sectors.
- Identifying the need for diversification of the information technology sector into regions beyond the developed cities in India.
- Identifying the need for innovation and original research in emerging fields such as Internet of Things and Big Data.
7. We observe that the draft policy weighs in favour of creating a thriving digital economy, which is indeed a commendable objective per se. However, certain aspects remain to be addressed in the draft policy to ensure that the growth of our domestic software industry truly achieves the vision set out in Digital India for better delivery of government services and maximisation of the public interest.
8. We submit that the proposed policy should include certain additional guiding principles to direct the creation of software and its end-utilisation. These principles would ensure a responsible, inclusive, judicious and secure software product life cycle for all relevant stakeholders, including the industry, the government and especially the public. An indicative list of principles that we believe should be explicitly included in the policy:
- Ensuring that internationally accepted principles of privacy are followed in software development and utilisation, including public awareness.
- Requiring basic yet sufficient standards of information security to ensure protection of user data at all stages of the software product life cycle.
- Enforcing linguistic diversity in software to allow India’s diverse population to operate indigenous software in an inclusive manner.
- Mandating minimum standards on accessibility in software creation, procurement and implementation to ensure sustainable use by the differently-abled.
- Focusing on transparency & accountability in software procurement for all public funded projects.
- Implementing the utilisation of Free and Open Source Software (“FOSS”) in the execution of public funded projects as per the mandate of the Policy on Adoption of Open Source Software for Government of India; thereby incentivising the creation of FOSS for use in both private and public sector.
- For software to be truly inclusive of the goals of Digital India, it is essential to provide support for Indic languages and scripts without yielding an inferior experience or results for end users of non-English interfaces. Software already deployed should be translated and localised (see the sketch after this list).
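To make the localisation point concrete, here is a minimal sketch in Kotlin of what first-class language support can look like. The message keys and translations are invented for illustration; a production system would use standard localisation tooling rather than a hand-rolled map.

```kotlin
// Minimal localisation sketch (keys and strings invented): the same message
// key resolves in the user's language, so a non-English interface is a
// first-class experience rather than a degraded fallback.
val messages: Map<String, Map<String, String>> = mapOf(
    "en" to mapOf("greeting" to "Welcome", "submit" to "Submit"),
    "hi" to mapOf("greeting" to "स्वागत है", "submit" to "जमा करें")
)

fun localised(locale: String, key: String): String =
    messages[locale]?.get(key)
        ?: messages.getValue("en").getValue(key) // explicit English fallback

fun main() {
    println(localised("hi", "greeting")) // स्वागत है
    println(localised("en", "submit"))   // Submit
}
```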
9. The inclusion of these principles in substantive clauses of the policy will go a long way in ensuring the sustainable and transparent growth of domestic software product ecosystem.
Specific Comments
10. Development of a robust Electronic Payment Infrastructure
10.1. CIS observes that clauses 5.4 and 6.7 of the draft policy aim to establish a seamless electronic payment infrastructure. We submit that an electronic payment infrastructure should be designed with strong standards of information security, privacy and inclusivity (covering both accessibility and language).
10.2. We recommend that the policy mandate minimum standards of information security, privacy and inclusivity in all payment systems across the private and public sectors. The policy should therefore ideally specify the respective standards for these categories, for instance ISO 27001 and the National Policy on Universal Electronics Accessibility [3], alongside other industry standards for electronic payment infrastructure.
11. Government Procurement
11.1. CIS observes that clause 6.1 of the draft policy seeks to develop a framework for the inclusion of Indian software in government procurement. It is commendable that the draft policy identifies the need for a better framework. CIS notes that the existing procurement procedure already allows for the use of Indian software; in fact, the Government e-Marketplace (GeM) has already begun to incorporate some of these principles in general procurement.
11.2. Indeed, transparent and accountable government procurement, leveraging technology and the internet, is key to ensuring a sustainable and fair market. CIS recommends that the policy refer to these guiding principles to enable the development of a viable cache of Indian software products by creating more avenues, including government procurement.
12. Incentives for Digital India oriented software
12.1. CIS observes that clause 6.3 of the draft policy incentivises the creation of software addressing the action pillars of the commendable Digital India programme.
12.2. For the development of superior quality software that will ensure the success of the Digital India programme, CIS recommends that incentives be made contingent on the incorporation of certain minimum standards of software development. Such products and services should, inter alia, adhere to the stipulations of the National Policy on Universal Electronics Accessibility, the Guidelines for Indian Government Websites, and the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. In the process, the software should be subjected to review by a neutral entity to gauge compliance with the above-mentioned minimum standards.
13. Increasing adoption of Open APIs and Open Data
13.1. CIS observes that clause 6.6 of the draft policy promotes the use of open APIs and open data in development of e-government services.
13.2. We strongly recommend that open API and open data principles be adopted for software used in all government organizations, as well as for non-commercial software. Open data and open APIs can serve a vital role in ensuring transparent, accountable and efficient governance, which the public and civil society can leverage in a major way.
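As an illustration of the kind of open API this clause envisages, the sketch below uses Kotlin and the JDK's built-in HTTP server to publish a machine-readable public dataset with no registration or API keys. The endpoint path and the dataset are invented for the example.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

fun main() {
    // Hypothetical public dataset, e.g. ward-level grievance statistics.
    val dataset = """
        [
          {"ward": "W-01", "grievancesFiled": 120, "grievancesResolved": 95},
          {"ward": "W-02", "grievancesFiled": 87, "grievancesResolved": 80}
        ]
    """.trimIndent()

    val server = HttpServer.create(InetSocketAddress(8080), 0)
    server.createContext("/api/v1/grievances") { exchange ->
        val body = dataset.toByteArray(Charsets.UTF_8)
        exchange.responseHeaders.add("Content-Type", "application/json")
        // Open to any client, including civil-society reuse from a browser.
        exchange.responseHeaders.add("Access-Control-Allow-Origin", "*")
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()
    println("Open-data endpoint at http://localhost:8080/api/v1/grievances")
}
```

The essential properties are that the data is machine-readable, served under a stable versioned path, and accessible without gatekeeping; any real deployment would add documentation and a licence statement.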
14. Creation of Enabling Environment for Innovation, R&D, and IP Creation and Protection
14.1. CIS observes that clause 8.1 of the draft policy seeks to create an enabling environment for innovation, R&D, and IP creation and protection.
14.2. CIS submits that the existing TRIPS-compliant Indian intellectual property law regime is adequately designed to incentivise creativity and innovation in the area of software development. The Indian Patents Act, 1970, read with the Guidelines for Examination of Computer Related Inventions, 2016, does not permit the patenting of computer programmes per se. Several Indian software developers, notably small and medium-sized development companies, have previously made evidence-based submissions to the government on the negative impact of software patenting on software innovation [4].
14.3. CIS recommends that the proposed policy re-affirm the adequacy of the Indian intellectual property regime to protect software development, in compliance with the TRIPS Agreement.
IV. Conclusion
15. CIS commends MeitY on the development of the draft policy. We strongly urge MeitY to address the issues highlighted above, especially the incorporation of essential principles such as information security, privacy and accessibility. The adoption of such measures will ensure a fair balance between the commercial growth of the domestic software industry and the maximisation of the public interest.
[1]. National Policy on Software Products (2016, Draft internal v1. 15) available at http://meity.gov.in/sites/upload_files/dit/files/National%20Policy%20on%20Software%20Products.pdf
[2]. See The Centre for Internet and Society, available at http://cis-india.org, for details of the organization and our work.
[3]. See http://meity.gov.in/sites/upload_files/dit/files/Accessible-format-National%20Policy%20on%20Universal%20Electronics.pdf
[4]. See http://economictimes.indiatimes.com/articleshow/52159304.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst
Enlarging the Small Print: A Study on Designing Effective Privacy Notices for Mobile Applications
Introduction: Ideas of Privacy and Consent Linked with Notices
The Notice and Choice Model
Most modern laws and data privacy principles seek to focus on individual control. As Alan Westin of Columbia University characterises privacy, "it is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others."[1] Or, simply put, personal information privacy is "the ability of the individual to personally control information about himself."[2]
The preferred mechanism that has emerged for protecting online privacy is that of Notice and Choice.[3] The model, identified as "the most fundamental principle" in online privacy,[4] refers to consumers consenting to privacy policies before availing of an online service.[5]
The following three standards of expectation of privacy in electronic communications have emerged in the United States courts:
- KATZ TEST: Katz v. United States,[6] a wiretap case, established the expectation of privacy as one that society is prepared to recognize as "reasonable".[7] This concept is critical to a court's understanding of a new technology because there is no established precedent to guide its analysis.[8]
- KYLLO/KATZ HYBRID TEST: Society's reasonable expectation of privacy is higher when dealing with a new technology that is not "generally available to the public".[9] This follows the logic that it is reasonable to expect common data collection practices to be used, but not rare ones.[10] In Kyllo v. United States,[11] law enforcement used a thermal imaging device to observe the relative heat levels inside a house. Though the thermal radiation escaping the house was arguably exposed to the public under Katz, the uncommon means used to collect it was not reasonable. This modification of the Katz standard is extremely important in the context of mobile privacy. Mobile communications may be subdivided into smaller parts: audio from a phone call, e-mail, and data related to a user's current location. Under the hybrid Katz/Kyllo test, the reasonable expectation of privacy in each of those communications would be determined separately,[12] by evaluating the general accessibility of the technology required to capture each stream.[13]
- DOUBLECLICK TEST: DoubleClick[14] illustrates the potential problems of transferring consent to a third party, one to whom the user never provided direct consent or of whom the user is not even aware. The court held that for DoubleClick, an online advertising network, to collect information from a user, it needed only to obtain permission from the website the user accessed, not from the user himself. The court reasoned that the information the user disclosed to the website was analogous to information one discloses to another person during a conversation. Just as the other party to the conversation would be free to tell his friends anything that was said, a website should be free to disclose any information it receives from a user's visit once the user has consented to use the website's services.
These interpretations have weakened the standards of online privacy. The Katz test hinges vaguely on societal expectations; the Kyllo test to an extent strengthens privacy rights by disallowing uncommon methods of collection; but, as the DoubleClick test illustrates, once the user has consented to such practices he cannot object to them. There have been suggestions to treat personal information as property where it shares the features of property, as location data does:[15] it is fixed when in storage, it has a monetary value, and it is sold and traded on a regular basis. This would create a standard under which consent is required for third-party access.[16] Consent would then play a more pivotal role in affixing liability.
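To make the property-style standard concrete, the following sketch models a hypothetical consent register in Kotlin (all identifiers invented): a third party the user never directly authorised, as in the DoubleClick scenario, is denied access.

```kotlin
// Hypothetical consent register: under a property-style standard, a party
// may access a user's data only if the user consented to that party
// directly; consent given to a website does not transfer to its partners.
class ConsentRegister {
    private val grants = mutableMapOf<String, MutableSet<String>>()

    fun grant(userId: String, party: String) {
        grants.getOrPut(userId) { mutableSetOf() }.add(party)
    }

    fun mayAccess(userId: String, party: String): Boolean =
        grants[userId]?.contains(party) == true
}

fun main() {
    val register = ConsentRegister()
    register.grant("alice", "news-site.example")

    // The site the user actually visited and consented to: allowed.
    println(register.mayAccess("alice", "news-site.example"))  // true
    // A third-party ad network the user never consented to: denied,
    // unlike the outcome the DoubleClick court reached.
    println(register.mayAccess("alice", "ad-network.example")) // false
}
```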
The notice and choice mechanism is designed to put individuals in charge of the collection and use of their personal information. In theory, the regime preserves user autonomy by putting the individual in charge of decisions about the collection and use of personal information. [17] Notice and choice is asserted as a substitute for regulation because it is thought to be more flexible, inexpensive to implement, and easy to enforce.[18] Additionally, notice and choice can legitimize an information practice, whatever it may be, by obtaining an individual's consent and suit individual privacy preferences. [19]
However, the notice and choice mechanism is often criticized for leaving users uninformed, or at least misinformed, as people rarely see, read, or understand privacy notices.[20] Moreover, few people opt out of the collection, use, or disclosure of their data when presented with the choice to do so.[21]
Amber Sinha of the Centre for Internet and Society argues that consent in these scenarios is rarely meaningful: consumers fail to read or access privacy policies and to understand their consequences, and developers do not give them the choice to opt out of a particular data practice while continuing to use their services.[22]
Of particular concern is the use of software applications (apps) designed to work on mobile devices. Estimates place the current number of apps available for download at more than 1.5 million, and that number is growing daily.[23] A 2011 Google study, "The Mobile Movement", identified that mobile devices are viewed as extensions of ourselves with which we share deeply personal relations, raising fundamental questions of how apps and other mobile communications influence our privacy decision-making.
Recent research indicates that mobile device users have concerns about the privacy implications of using apps.[24] The research finds that almost 60 percent of respondents aged 50 and older have decided not to install an app because of privacy concerns.[25]
Because no standards currently exist for privacy notice disclosure for apps, consumers may find it difficult to understand what data an app is collecting, how those data will be used, and what rights users have to limit the collection and use of their data. Many apps do not provide users with privacy policy statements at all, making it impossible for users to know the privacy implications of using a particular app.[26] Apps can make use of any or all of the device's functions, including contact lists, calendars, phone and messaging logs, locational information, internet searches and usage, video and photo galleries, and other possibly sensitive information. For example, an app that allows the device to function as a scientific calculator may be accessing contact lists, locational data, and phone records even though such access is unnecessary for the app to function properly.[27]
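The gap between an app's stated purpose and the device capabilities it requests can be checked mechanically. Below is a minimal Kotlin sketch, with invented category names and permission strings (not drawn from any real platform), that flags the scientific-calculator example from the paragraph above.

```kotlin
// Illustrative permission audit: flag capabilities an app requests beyond
// what its category plausibly needs. Categories and permission names are
// invented for this sketch.
val expectedByCategory = mapOf(
    "calculator" to setOf<String>(),          // needs no sensitive access
    "navigation" to setOf("LOCATION"),
    "messaging" to setOf("CONTACTS", "SMS")
)

fun unexpectedPermissions(category: String, requested: Set<String>): Set<String> =
    requested - (expectedByCategory[category] ?: emptySet())

fun main() {
    // The calculator example: contacts, location and call records are
    // unnecessary for the app to function.
    val requested = setOf("CONTACTS", "LOCATION", "CALL_LOG")
    println(unexpectedPermissions("calculator", requested))
    // -> [CONTACTS, LOCATION, CALL_LOG]
}
```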
Other apps may have privacy policies that are confusing or misleading. For example, an analysis of health and fitness apps found that more than 30 percent of the apps studied shared data with someone not disclosed in the app's privacy policy.[28]
Types of E-Contracts
Margaret Radin distinguishes two models of direct e-contracts based on consent: "contract-as-consent" and "contract-as-product." [29]
The contract-as-consent model is the traditional picture of how binding commitment is arrived at between two humans. It involves a meeting of the minds, which implies that the terms are understood, that alternatives are available, and, probably, that bargaining is possible.
In the contract-as-product model, the terms are part of the product, not a conceptually separate bargain; the physical product plus the terms are a package deal. For example, the fact that a chip inside an electronic item will wear out after a year is an unseen term of the contract, leaving the buyer only the take-it-or-leave-it choice of not buying the package.
The contract-as-product model defies traditional ideas of consent and raises questions of whether consent is meaningful. Modern e-contracts such as click-wrap, shrink-wrap, viral and machine-made contracts, which form the privacy policies of several apps, take a contract-as-product approach in which consumers are given a take-it-or-leave-it option.
Mobile application privacy notices fall into the contract-as-product model. Consumers often have to click "I agree" to innumerable terms and conditions in order to install the app. For instance, a term stating that a fitness app will collect biometric data is a non-negotiable feature of the product. It is a classic take-it-or-leave-it approach in which consumers compromise on privacy to avail themselves of services.
Contracts that facilitate these transactions are generally long and complicated and often agreed to by consumers without reading them.
Craswell strikes a balance in applying the liability rule: since explaining the meaning of extensive fine print would be very costly, it could be efficient to affix liability not on the written contract but on "reasonable" terms. This means that if a fitness app collects sensitive financial information, which is unreasonable given its core activities, the contract should be open to challenge even if the user consented to that collection in the privacy policy's fine print.
The Concept of Privacy by Design
Privacy needs to be considered from the very beginning of system development. For this reason, Dr. Ann Cavoukian [30] coined the term "Privacy by Design": privacy should be taken into account throughout the entire engineering process, from the earliest design stages to the operation of the production system. This holistic approach is promising, but it does not come with mechanisms for integrating privacy into a system's development processes. The privacy-by-design approach, i.e. that data protection safeguards should be built into products and services from the earliest stage of development, has also been addressed by the European Commission in its proposal for a General Data Protection Regulation. The proposal uses the terms "privacy by design" and "data protection by design" synonymously.
The 7 Foundational Principles[31] of Privacy by Design are:
- Proactive not Reactive; Preventative not Remedial
- Privacy as the Default Setting
- Privacy Embedded into Design
- Full Functionality - Positive-Sum, not Zero-Sum
- End-to-End Security - Full Lifecycle Protection
- Visibility and Transparency - Keep it Open
- Respect for User Privacy - Keep it User-Centric
Several terms have been introduced to describe the types of data that need to be protected. A term used very prominently by industry is "personally identifiable information" (PII), i.e., data that can be related to an individual. Similarly, the European data protection framework centres on "personal data". However, some authors argue that this falls short, since data that is not related to a single individual might still affect the privacy of groups; for example, an entire group might be discriminated against with the help of certain information. For data of this category the term "privacy-relevant data" has been used. [32]
An essential part of Privacy by Design is that data subjects should be adequately informed whenever personal data is processed. Whenever data subjects use a system, they should be informed about which information is processed, for what purpose, by which means, and with whom it is shared. They should also be informed about their data access rights and how to exercise them.[33]
System design very often considers end-users' interests barely or not at all, focusing primarily on the owners and operators of the system. It is therefore essential to account for the privacy and security interests of all parties involved by informing them of the associated advantages (e.g. security gains) and disadvantages (e.g. costs, use of resources, less personalisation). In such a system of "multilateral security", the demands of all parties must be realized.[34]
The Concept of Data Minimization
The most basic privacy design strategy is MINIMISE, which states that the amount of personal data that is processed should be restricted to the minimal amount possible. By ensuring that no, or no unnecessary, data is collected, the possible privacy impact of a system is limited. Applying the MINIMISE strategy means one has to answer whether the processing of personal data is proportional (with respect to the purpose) and whether no other, less invasive, means exist to achieve the same purpose. The decision to collect personal data can be made at design time and at run time, and can take various forms. For example, one can decide not to collect any information about a particular data subject at all. Alternatively, one can decide to collect only a limited set of attributes.[35]
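To make the MINIMISE strategy concrete, here is a minimal Kotlin sketch of a collection layer that admits only attributes on an explicit, purpose-bound whitelist; anything not needed for the declared purpose is never collected. The attribute names and values are hypothetical illustrations, not a real collection API.

```kotlin
// Hypothetical attributes the app could technically read from the device.
val availableAttributes = mapOf(
    "city" to "Bangalore",
    "preciseLocation" to "12.9716,77.5946",
    "contacts" to "253 entries",
    "stepCount" to "8421"
)

// Only the attributes needed for the declared purpose may be collected.
val allowedForPurpose = setOf("city", "stepCount")

// MINIMISE: everything outside the whitelist never enters the system.
fun collectMinimal(): Map<String, String> =
    availableAttributes.filterKeys { it in allowedForPurpose }

fun main() {
    // preciseLocation and contacts are never collected,
    // limiting the possible privacy impact of the system.
    println(collectMinimal()) // {city=Bangalore, stepCount=8421}
}
```

Fixing the whitelist in code, as here, corresponds to the design-time decision described above; a run-time variant would consult the current purpose before each read.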
If a company collects and retains large amounts of data, there is an increased risk that the data will be used in a way that departs from consumers' reasonable expectations.[36]
There are three privacy protection goals[37] that data minimization and privacy by design seek to achieve. These privacy protection goals are:
- Unlinkability - To prevent data being linked to an identifiable entity
- Transparency - The information has to be available before, during and after the processing takes place.
- Intervenability - Those who provide their data must have means of intervention into all ongoing or planned privacy-relevant data processing
Spiekermann and Cranor raise an intriguing point in their paper: they argue that companies that employ privacy-by-design and data-minimization practices in their applications should be allowed to skip privacy policies and forgo notice and choice features. [38]
To summarise: the emerging model and legal dialogue regulating online privacy is that of notice and choice, which has been severely criticised for not creating informed choice-making processes. E-contracts such as agreements to privacy notices follow the contract-as-product model. Where there is extensive fine print, liability must be affixed on the basis of reasonable terms. Privacy notices must incorporate the concepts of Privacy by Design by providing complete information and collecting minimal data.
Features of Privacy Notices in the Current Mobile Ecosystem
A privacy notice informs a system's users or a company's customers of data practices involving personal information. Internal practices with regard to the collection, processing, retention, and sharing of personal information should be made transparent.
Each app a user chooses to install on his smartphone can access different information stored on that device. There is no automatic access to user information. Each application has access only to the data that it pulls into its own 'sandbox'.
The sandbox is a set of fine-grained controls limiting an application's access to files, preferences, network resources, hardware, etc. Applications cannot access each other's sandboxes.[39] The data that makes it into the sandbox is normally defined by user permissions.[40] These are a set of user-defined controls[41] and evidence that a user consents to the application accessing that data. [42]
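As a rough illustration of this arrangement, the Kotlin sketch below models a sandbox in which a read succeeds only for data categories the user has granted. It is a hypothetical model of the concept, not the actual Android or iOS permission API.

```kotlin
// Hypothetical data categories an app might request.
enum class DataCategory { LOCATION, CONTACTS, PHOTOS }

// A toy sandbox: data is pulled in only for categories the user granted.
class Sandbox(private val granted: MutableSet<DataCategory> = mutableSetOf()) {
    fun grant(category: DataCategory) { granted.add(category) }

    // Access succeeds only for categories the user has consented to.
    fun read(category: DataCategory): String =
        if (category in granted) "data for $category"
        else throw SecurityException("Permission for $category not granted")
}

fun main() {
    val sandbox = Sandbox()
    sandbox.grant(DataCategory.LOCATION)
    println(sandbox.read(DataCategory.LOCATION)) // allowed: user granted it
    // sandbox.read(DataCategory.CONTACTS)       // would throw: never granted
}
```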
To gain permission, mobile apps generally display privacy notices that explicitly seek consent. These can leverage different channels, including a privacy policy document posted on a website or linked to from mobile app stores or mobile apps. For example, Google Maps uses a traditional clickwrap structure that requires the user to agree to a list of terms and conditions when the program is initially launched. [43] Foursquare, on the other hand, embeds its terms in a privacy policy posted on its website, not within the app. [44]
This section explains the features of current privacy notices on the 4 parameters of timing (the stage at which the notice is given), content, length and user comprehension. Under each of these parameters the associated problems are identified and alternatives are suggested.
(1) Timing and Frequency of Notice:
This sub-section identifies the various stages at which notices can be given, highlights their advantages and disadvantages, and makes recommendations. It concludes with the findings of a study on the ideal stage at which to provide notice. This is supplemented with 2 critical models that address the common problems of habituation and contextualization.
Studies indicate that the timing of notices, or the stage at which they are given, affects how consumers recall and comprehend them and make choices accordingly. [45] Introducing only a 15-second delay between the presentation of privacy notices and privacy-relevant choices can be enough to render notices ineffective at driving user behaviour.[46]
Google Android and Apple iOS provide notices at different times. At the time of writing, Android users are shown a list of requested permissions while the app is being installed, i.e., after the user has chosen to install the app. In contrast, iOS shows a dialog during app use, the first time a permission is requested by an app. This is also referred to as a "just-in-time" notification. [47]
The following are the stages in which a notice can be given:
1) NOTICE AT SETUP: Notice can be provided when a system is used for the first time[48]. For instance, as part of a software installation process users are shown and have to accept the system's terms of use.
a) Advantages: Users can inspect a system's data practices before using or purchasing it. The system developer benefits for liability and transparency reasons and gains user trust. It provides the opportunity to explain unexpected data practices that may have a benign purpose in the context of the system[49]. It can even affect purchase decisions: Egelman et al. found that participants were more likely to pay a premium at a privacy-protective website when they saw privacy information in search results, as opposed to on the website after selecting a search result[50].
b) Disadvantages: Users have become largely habituated to install-time notices and ignore them[51]. Users may have difficulty making informed decisions because they have not used the system yet and cannot fully assess its utility or weigh privacy trade-offs. They may also be focused on the primary task, namely completing the setup process in order to use the system, and fail to pay attention to notices [52].
c) Recommendations: Privacy notices provided at setup time should be concise and focus on data practices immediately relevant to the primary user rather than presenting extensive terms of service. Integrating privacy information into other materials that explain the functionality of the system may further increase the chance that users do not ignore it.[53]
2) JUST IN TIME NOTICE: A privacy notice can be shown when a data practice is active, for example when information is being collected, used, or shared. Such notices are referred to as "contextualized" or "just-in-time" notices[54].
a) Advantages: They enhance transparency and enable users to make privacy decisions in context. Users have also been shown to more freely share information if they are given relevant explanations at the time of data collection[55].
b) Disadvantages: Habituation can occur if these are shown too frequently. Moreover, in apps such as games, users generally tend to ignore notices displayed during usage.
c) Recommendations: Consumers can be given notice the first time a particular type of information, such as email, is accessed, and then be given the option to opt out of further notifications. A consumer may then opt out of notices about email but choose to view all notices about access to health information, depending on his privacy priorities (see the sketch after this list).
3) CONTEXT-DEPENDENT NOTICES: The user's and system's context can also be considered to show additional notices or controls if deemed necessary [56]. Relevant context may be determined by a change of location, additional users included in or receiving the data, and other situational parameters. Some locations may be particularly sensitive, therefore users may appreciate being reminded that they are sharing their location when they are in a new place, or when they are sharing other information that may be sensitive in a specific context. Facebook introduced a privacy checkup message in 2014 that is displayed under certain conditions before posting publicly. It acts as a "nudge" [57] to make users aware that the post will be public and to help them manage who can see their posts.
a) Advantages: It may help users make privacy decisions that are more aligned with their desired level of privacy in the respective situation and thus foster trust in the system.
b) Disadvantages: Challenges in providing context-dependent notices are detecting relevant situations and context changes. Furthermore, determining whether a context is relevant to an individual's privacy concerns could in itself require access to that person's sensitive data and privacy preferences. [58]
c) Recommendations: Standards must be evolved to determine a contextual model based on user preferences.
4) PERIODIC NOTICES: These are shown the first couple of times a data practice occurs, or every time. The sensitivity of the data practice may determine the appropriate frequency.
a) Advantages: They can further help users maintain awareness of privacy-sensitive information flows, especially when data practices are largely invisible [59], such as in patient monitoring apps. This helps provide better control options.
b) Disadvantages: Repeating notices can lead to notice fatigue and habituation[60].
c) Recommendations: The frequency of these notices needs to be balanced with user needs. [61] Data practices that are reasonably expected as part of the system may require only a single notice, whereas practices falling outside the expected context of use, which the user is potentially unaware of, may warrant repeated notices. Periodic notices should be relevant to users in order not to be perceived as annoying. A combined notice can serve as a reminder about multiple ongoing data practices. Rotating warnings or changing their look can further reduce habituation effects [62].
5) PERSISTENT NOTICES: A persistent indicator is typically non-blocking and may be shown whenever a data practice is active, for instance when information is being collected continuously or when information is being transmitted[63]. When inactive or not shown, a persistent notice also indicates that the respective data practice is currently not active. For instance, Android and iOS display a small icon in the status bar whenever an application accesses the user's location.
a) Advantages: These are easy to understand and not annoying, which increases their usefulness.
b) Disadvantages: These ambient indicators often go unnoticed.[64] Most systems can only accommodate such indicators for a small number of data practices.
c) Recommendations: Persistent indicators should be designed to be noticeable when they are active. A system should only provide a small set of persistent indicators to indicate activity of especially critical data practices which the user can also specify.
6) NOTICE ON DEMAND: Users may also actively seek privacy information and request a privacy notice. A typical example is posting a privacy policy at a persistent location[65] and providing links to it from the app. [66]
a) Advantages: Privacy sensitive users are given the option to better explore policies and make informed decisions.
b) Disadvantages: The current model of a link to a long privacy policy on a website will discourage users from requesting information that they cannot fully understand and do not have time to read.
c) Recommendations: Better options are privacy settings interfaces or privacy dashboards within the system that provide information about data practices; controls to manage consent; summary reports of what information has been collected, used, and shared by the system; and options to manage or delete collected information. Contact information for a privacy office should be provided to enable users to make written requests.
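The first-time/opt-out pattern recommended for just-in-time notices above can be sketched directly. The following Kotlin sketch (hypothetical categories and messages, not a platform API) shows a notice on the first access to each data category and lets the user mute further notices for categories he does not consider sensitive:

```kotlin
// Hypothetical data categories covered by just-in-time notices.
enum class Category { EMAIL, HEALTH, LOCATION }

class NoticeManager {
    private val seen = mutableSetOf<Category>()
    private val muted = mutableSetOf<Category>()

    // The user opts out of further notices for a category.
    fun optOut(category: Category) { muted.add(category) }

    // Returns the notice to display on access, or null if none is due.
    fun onAccess(category: Category): String? {
        if (category in muted) return null
        val firstTime = seen.add(category) // true only on first access
        return if (firstTime) "This app is accessing your $category data for the first time."
        else "Reminder: your $category data is being accessed."
    }
}

fun main() {
    val notices = NoticeManager()
    println(notices.onAccess(Category.EMAIL))  // first-use notice shown
    notices.optOut(Category.EMAIL)             // user mutes email notices
    println(notices.onAccess(Category.EMAIL))  // null: opted out
    println(notices.onAccess(Category.HEALTH)) // still notified for health
}
```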
Which of these Stages is the Most Ideal?
In a series of experiments, Rebecca Balebako and others [67] identified the impact of timing on smartphone privacy notices. The following 5 conditions were imposed on participants, who were later tested on their recall of the notices through questions:
- Not Shown: The participants installed and used the app without being shown a privacy notice
- App Store: Notice was shown at the time of installation at the app store
- App Store Big: A large notice occupying more screen space was shown at the app store
- App Store Popup: A smaller popup was displayed at the app store
- During use: Notice was shown during usage of the app
The results suggest that even if a notice contains information users care about, it is unlikely to be recalled if shown only in the app store, and is more effective when shown during app usage.
Seeing the notice during app usage resulted in better recall. Participants remembered a notice shown after app use as well as those shown at other points, but found that it was not a good point at which to make decisions about the app, because they had already used it; participants preferred the notice being shown during or before app usage.
Hence, depending on the app, there are optimal times to show smartphone privacy notices that maximize attention and recall, with preference given to the beginning of or during app use.
However, several of these stages as outlined above face the disadvantages of habituation and uncertainty about contextualization. The following 2 models have been proposed to address this:
Habituation
When notices are shown too frequently, users may become habituated. Habituation may lead users to disregard warnings, often without reading or comprehending the notice[68]. To reduce habituation from app permission notices, Felt et al. identified and tested a method to determine which permission requests should be emphasized [69].
They categorized actions on the basis of revertibility, severability, initiation, alterability, and the nature of approval, and applied the following permission-granting mechanisms:
- Automatic Grant: It must be requested by the developer, but it is granted without user involvement.
- Trusted UI elements: They appear as part of an application's workflow, but clicking on them imbues the application with a new permission. To ensure that applications cannot trick users, trusted UI elements can be controlled only by the platform. For example, a user who is sending an SMS message from a third-party application will ultimately need to press a button; using trusted UI means the platform provides the button.
- Confirmation Dialog: Runtime consent dialogs interrupt the user's flow by prompting them to allow or deny a permission and often contain descriptions of the risk or an option to remember the decision.
- Install-time warning: These integrate permission granting into the installation flow. Installation screens list the application's requested permissions. In some platforms (e.g., Facebook), the user can reject some install-time permissions. In other platforms (e.g., Android and Windows 8 Metro), the user must approve all requested permissions or abort installation.[70]
Based on these conditions, a sequential model was proposed for systems to adopt in determining the frequency of displaying notices.
Initial tests proved successful in reducing habituation effects, and this is an important step towards designing and displaying privacy notices.
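To illustrate the shape of such a sequential model, the Kotlin sketch below routes each requested action to the least disruptive mechanism that still gives the user adequate control. The properties and their ordering are one plausible reading of Felt et al.'s categories, assumed for illustration, not the authors' exact decision tree.

```kotlin
enum class Mechanism { AUTOMATIC_GRANT, TRUSTED_UI, CONFIRMATION_DIALOG, INSTALL_TIME_WARNING }

// Illustrative action properties loosely based on the categories above.
data class Action(
    val revertible: Boolean,    // can the user easily undo the effect?
    val userInitiated: Boolean, // does it occur inside a user-driven workflow?
    val severe: Boolean         // could it cause lasting harm (cost, data loss)?
)

// Sequential check: prefer the least disruptive mechanism that still
// gives the user adequate control over the action.
fun selectMechanism(action: Action): Mechanism = when {
    action.revertible && !action.severe -> Mechanism.AUTOMATIC_GRANT
    action.userInitiated -> Mechanism.TRUSTED_UI      // e.g. a platform-drawn "Send SMS" button
    action.severe -> Mechanism.CONFIRMATION_DIALOG    // interrupt only for risky actions
    else -> Mechanism.INSTALL_TIME_WARNING
}

fun main() {
    println(selectMechanism(Action(revertible = true, userInitiated = false, severe = false)))
    // AUTOMATIC_GRANT: harmless and undoable, so no user interruption
    println(selectMechanism(Action(revertible = false, userInitiated = true, severe = true)))
    // TRUSTED_UI: granted through the user's own in-workflow click
}
```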
Contextualization
Florian Schaub, Bastian Könings and others, in their paper "Towards Context Adaptive Privacy Decisions in Ubiquitous Computing" [71], propose a system for supporting a user's privacy decisions in situ, i.e., in the context in which they are required, following the notion of contextual integrity. It approximates the user's privacy preferences and adapts them to the current context. The system can then either recommend sharing decisions and actions or autonomously reconfigure privacy settings. It is divided into the following stages:
Context Model: A distinction is created between the decision level and the system level. The system level enables context awareness but also filters context information and maps it to the semantic concepts required for decisions. Semantic mappings can be derived from a pre-defined or learnt world model. On the decision level, the context model contains only the components relevant to privacy decision-making. For example, an activity involves the user and is assigned a type, i.e., a semantic label such as home or work, based on system-level input.
Privacy Decision Engine: The context model allows the system to reason about which context items are affected by a context transition. When a transition occurs, the privacy decision engine (PDE) evaluates which protection-worthy context items are affected. The protection worthiness (or privacy relevance) of context items in a given context is determined by the user's privacy preferences, which are approximated by the system from the knowledge base. This serves as a basis for adapting privacy preferences and is subsequently further adjusted to the user by learning from the user's explicit decisions, behaviour, and reactions to system actions. [72]
The user's personality type is determined before initial system use to select a basic privacy profile.
It may also be possible that the privacy preference cannot be realized in the current context; in that case, the system would suggest terminating the activity. For each privacy policy variant a confidence score is calculated based on how well it fits the adapted privacy preference. Based on the confidence scores, the PDE selects the most appropriate policy candidate, or triggers user involvement if the confidence is below a certain threshold determined by the user's personality and previous privacy decisions.
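The confidence-based selection step can be sketched as follows. In this minimal Kotlin sketch the engine scores each candidate policy against the adapted preference and involves the user only when no candidate is confident enough; the scores and threshold are hypothetical values, not the paper's algorithm.

```kotlin
// A candidate privacy policy and how well it fits the adapted preference.
data class PolicyCandidate(val name: String, val confidence: Double)

// Select a policy autonomously, or fall back to user involvement.
fun decide(candidates: List<PolicyCandidate>, threshold: Double): String {
    val best = candidates.maxByOrNull { it.confidence }
        ?: return "terminate activity: no policy can realise the preference"
    return if (best.confidence >= threshold)
        "apply '${best.name}' autonomously"
    else
        "involve the user: best candidate '${best.name}' is below threshold $threshold"
}

fun main() {
    val candidates = listOf(
        PolicyCandidate("share city only", 0.82),
        PolicyCandidate("share exact location", 0.40)
    )
    println(decide(candidates, threshold = 0.75)) // applied autonomously
    println(decide(candidates, threshold = 0.90)) // user is involved
}
```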
Realization and Enforcement: The selected privacy policy must be realized on the system level. This is done by combining territorial privacy and information privacy aspects. The private territory is defined by a territorial privacy boundary that separates desired and undesired entities.
Granularity adjustments for specific information items are also defined. For example, instead of the user's exact position, only the street address or city can be provided.
ADVANTAGES: Personalization to a specific user has the advantage of better emulating that user's privacy decision process. It also helps decide when privacy decisions can be realized autonomously and when the user should be involved in the decision process through recommendations only.
DISADVANTAGES: The entire model hinges on the system's ability to accurately determine the user's profile before the user starts using it, rather than after, when preferences can be determined more accurately. There is no provision for the user to pick his own privacy profile; it is entirely system-determined, taking away an element of consent at the very beginning. As all further preferences adapt from this base, the system may fail to deliver. The use of confidence scores is an approximation that can compromise privacy by a small numerical margin of difference.
However, the paper is a useful insight into techniques of contextualization. Depending on the environment, different strategies for policy realization and varying degrees of enforcement are possible[73].
Length
The length of privacy policies is often cited as one reason they are so commonly ignored. Studies show privacy policies are hard to read, read infrequently, and do not support rational decision-making. [74] Aleecia M. McDonald and Lorrie Faith Cranor, in their seminal study "The Cost of Reading Privacy Policies", estimated that the average length of privacy policies is 2,500 words. At a reading speed of 250 words per minute, which is typical for those who have completed secondary education, the average policy would take 10 minutes to read.
The researchers also investigated how quickly people could read privacy policies when they were just skimming it for pertinent details. They timed 93 people as they skimmed a 934-word privacy policy and answered multiple choice questions on its content.
Though some people took under a minute and others up to 42 minutes, the bulk of the subjects of the research took between three and six minutes to skim the policy, which itself was just over a third of the size of the average policy.
The researchers used their data to estimate how much it would cost to read the privacy policy of every site a person visits once a year if that time were charged for, and arrived at a mind-boggling figure of $652 billion.
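The arithmetic behind these estimates is straightforward. The Kotlin sketch below reproduces the per-policy reading time from the study's figures and then scales it up; the number of sites, user population, and hourly value of time are purely illustrative assumptions, not the study's exact parameters.

```kotlin
fun main() {
    // Per-policy figures taken from the study cited above.
    val wordsPerPolicy = 2_500.0
    val wordsPerMinute = 250.0
    val minutesPerPolicy = wordsPerPolicy / wordsPerMinute
    println("Minutes to read one average policy: $minutesPerPolicy") // 10.0

    // Illustrative national scaling; these inputs are assumptions only.
    val sitesPerPersonPerYear = 100.0
    val internetUsers = 200_000_000.0
    val valueOfTimePerHour = 20.0 // hypothetical dollars per hour
    val totalCost = internetUsers * sitesPerPersonPerYear *
            (minutesPerPolicy / 60.0) * valueOfTimePerHour
    println("Aggregate annual cost under these assumptions: %,.0f dollars".format(totalCost))
}
```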
Problems
Though the figure of $652 billion has limited usefulness, because people rarely read whole policies and cannot charge anyone for the time it takes to do so, the researchers concluded that readers who do conduct a cost-benefit analysis might decide not to read any policies.
"Preliminary work from a small pilot study in our laboratory revealed that some Internet users believe their only serious risk online is that they may lose up to $50 if their credit card information is stolen. For people who think that is their primary risk, our point estimates show the value of their time to read policies far exceeds this risk. Even for our lower bound estimates of the value of time, it is not worth reading privacy policies, though it may be worth skimming them," the researchers noted. Seeing their only risk as credit card fraud suggests that Internet users likely do not understand the risks to their privacy. As an FTC report recently stated, "it is unclear whether consumers even understand that their information is being collected, aggregated, and used to deliver advertising."[75]
Recommendations
If the privacy community can find ways to reduce the time cost of reading policies, it may be easier to convince Internet users to do so. For example, if consumers could move from reading policies word-for-word to merely skimming them, aided by useful headings or by ways to hide all but the relevant information in a layered format, the effective length of policies would be reduced and more people might be willing to read them. [76] Apps can also adopt short-form notices that summarize, and link to, the larger and more complete notice displayed elsewhere. These short-form notices need not be legally binding, but must indicate that they do not cover all types of data collection, only the most relevant ones. [77]
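A short-form notice of this kind is easy to represent. The Kotlin sketch below models a one-screen summary of the most relevant practices with a link to the full policy; the content and URL are placeholders, not any vendor's actual notice format.

```kotlin
// A layered, short-form notice: a brief summary plus a link to the full text.
data class ShortFormNotice(
    val headline: String,
    val keyPractices: List<String>, // only the most relevant practices
    val fullPolicyUrl: String       // layered link to the complete notice
)

fun render(notice: ShortFormNotice): String = buildString {
    appendLine(notice.headline)
    notice.keyPractices.forEach { appendLine(" - $it") }
    appendLine("This summary is not exhaustive. Full policy: ${notice.fullPolicyUrl}")
}

fun main() {
    val notice = ShortFormNotice(
        headline = "How this app uses your data",
        keyPractices = listOf(
            "Collects your approximate location while the app is open",
            "Shares usage statistics with analytics providers"
        ),
        fullPolicyUrl = "https://example.com/privacy"
    )
    print(render(notice))
}
```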
Content
In an attempt to gain permission, most privacy policies inform users about: (1) the type of information collected; and (2) the purpose for collecting that information.
Standard privacy notices generally cover the points of:
- Methods Of Collection And Usage Of Personal Information
- The Cookie Policy
- Sharing Of Customer Information [78]
Certified Information Privacy Professionals divide notices into the following sequential sections[79]:
i. Policy Identification Details: Defines the policy name, version and description.
ii. P3P-Based Components: Defines policy attributes that would apply if the policy is exported to a P3P format. [80] Such attributes would include: policy URLs, organization information, PII access and dispute resolution procedures.
iii. Policy Statements and Related Elements (Groups, Purposes and PII Types): Policy statements define the individuals able to access certain types of information, for certain pre-defined purposes.
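These sections map naturally onto a small data model. The Kotlin sketch below is an illustrative schema with hypothetical field names, not a standard format or a P3P implementation:

```kotlin
// i. Policy identification details.
data class PolicyIdentification(val name: String, val version: String, val description: String)

// iii. A policy statement: who may access which PII types, and why.
data class PolicyStatement(
    val group: String,          // individuals able to access the data
    val purpose: String,        // pre-defined purpose of the access
    val piiTypes: List<String>  // categories of PII covered
)

// ii. P3P-style attributes sit alongside the statements.
data class PrivacyPolicy(
    val identification: PolicyIdentification,
    val policyUrl: String,
    val statements: List<PolicyStatement>
)

fun main() {
    val policy = PrivacyPolicy(
        identification = PolicyIdentification("Example App Policy", "1.0", "Illustrative only"),
        policyUrl = "https://example.com/privacy",
        statements = listOf(
            PolicyStatement(
                group = "support staff",
                purpose = "dispute resolution",
                piiTypes = listOf("email address", "order history")
            )
        )
    )
    println(policy.statements.first().purpose) // dispute resolution
}
```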
Problems
Applications tend to define the type of data broadly, in an attempt to strike a balance between providing enough information that the application may gain consent to access a user's data and being broad enough to avoid ruling out specific information.[81]
This leads to usage of vague terms like "information collected may include."[82]
Similarly, the purpose of the data acquisition is also very broad. For example, a privacy policy may state that user data can be collected for anything related to "improving the content of the Service." As the scope of "improving the content of the Service" is never defined, any usage could conceivably fall within that category.[83]
Several apps create social profiles of users based on their online preferences to promote targeted marketing, which is cleverly concealed in phrases like "we may also draw upon this Personal Information in order to adapt the Services of our community to your needs". [84] For instance, Bees & Pollen is a "predictive personalization" platform for games and apps that "uses advanced predictive algorithms to detect complex, non-trivial correlations between conversion patterns and users' DNA signatures, thus enabling it to automatically serve each user a personalized best-fit game options, in real-time." In reality it analyses over 100 user attributes, including activity on Facebook, spending behaviours, marital status, and location.[85]
Notices also often mislead consumers into believing that their information will not be shared with third parties by using the term "unaffiliated third parties." Other affiliated companies within the corporate structure of the service provider may still have access to users' data for marketing and other purposes. [86]
There are very few choices to opt out of certain practices, such as sharing data for marketing purposes. Thus, users are effectively left with a take-it-or-leave-it choice: give up your privacy or go elsewhere.[87] Users almost always grant consent if it is required to receive the service they want, which raises the question of whether this consent is meaningful[88].
Recommendations
The following recommendations have emerged:
- Notice - Companies should provide consumers with clear, conspicuous notice that accurately describes their information practices.
- Consumer Choice - Companies should provide consumers with the opportunity to decide (in the form of opting out) whether the company may disclose personal information to unaffiliated third parties.
- Access and Correction - Companies should provide consumers with the opportunity to access and correct personal information collected about the consumer.
- Security - Companies must adopt reasonable security measures in order to protect the privacy of personal information. Possible security measures include: administrative security, physical security and technical security.
- Enforcement - Companies should have systems through which they can enforce the privacy policy. This may be managed by the company, or an independent third party to ensure compliance. Examples of popular third parties include BBBOnLine and TRUSTe.[89]
- Standardization - Several researchers and organizations have recommended a standardized privacy notice format that covers certain essential points. [90] However, as displaying a privacy notice is itself voluntary, it is unpredictable whether companies would willingly adopt a standardized model. Moreover, with the app market burgeoning with innovations, a standard format may not cover all emergent data practices.
Comprehension
The FTC states that "the notice-and-choice model, as implemented, has led to long, incomprehensible privacy policies that consumers typically do not read, let alone understand. The question is not whether consumers should be given a say over unexpected uses of their data; rather, the question is how to provide simplified notice and choice"[91].
Notably, in a survey conducted by Zogby International, 93% of adults - and 81% of teens - indicated they would take more time to read terms and conditions for websites if they were written in clearer language.[92]
Most privacy policies are in natural language format: companies explain their practices in prose. One noted disadvantage to current natural language policies is that companies can choose which information to present, which does not necessarily solve the problem of information asymmetry between companies and consumers. Further, companies use what have been termed "weasel words" - legalistic, ambiguous, or slanted phrases - to describe their practices [93].
In a study by Aleecia M. McDonald and others[94], it was found that the accuracy of users' comprehension spans a wide range. An average of 91% of participants answered correctly when asked about cookies, 61% answered correctly about opt-out links, 60% understood when their email address would be "shared" with a third party, and only 46% answered correctly regarding telemarketing. Participants found questions harder when vague or complicated terms were substituted for practices, such as referring to telemarketing as "the information you provide may be used for marketing services." Overall accuracy was a mere 33%.
Problems
Natural language policies are often long and require college-level reading skills. Furthermore, there are no standards for which information is disclosed, no standard place to find particular information, and data practices are not described using consistent language. These policies are "long, complicated, and full of jargon and change frequently."[95]
Kent Walker lists five problems that privacy notices typically suffer from -
a) overkill - long and repetitive text in small print,
b) irrelevance - describing situations of little concern to most consumers,
c) opacity - broad terms that reflect the truth that it is impossible to track and control all the information collected and stored,
d) non-comparability - the simplification required to achieve comparability leads to compromised accuracy, and
e) inflexibility - failure to keep pace with new business models. [96]
Recommendations
Researchers advocate a more succinct and simpler standard for privacy notices,[97] such as representing the information in the form of a table. [98] However, studies show only an insignificant improvement in consumer understanding when privacy policies are represented in graphic formats like tables and labels. [99]
There are also recommendations to adopt a multi-layered approach in which the relevant information is summarized in a short notice.[100] This is backed by studies showing that consumers find layered policies easier to understand. [101] However, readers were less accurate with the layered format, especially on parts that were not summarized. This suggests that participants did not continue to the full policy when the information they sought was not available in the short notice. Unless it is possible to identify all of the topics users care about and summarize them on one page, the layered notice effectively hides information and reduces transparency. It has also been pointed out that it is impossible to convey complex data policies in simple and clear language. [102]
Consumers often struggle to map concepts such as third-party access to the terms used in policies. This is also because companies with identical practices often convey different information, and these differences are reflected in consumers' ability to understand the policies. Policies may need an educational component so that readers understand what it means for a site to engage in a given practice[103]. However, readers who do not take the time to read the policy itself are unlikely to read additional educational components.
[1] Amber Sinha http://cis-india.org/internet-governance/blog/a-critique-of-consent-in-information-privacy
[2] Wang et al. (1998); Milberg et al. (1995)
[3] See e.g., White House, Consumer Privacy Bill of Rights (2012) http://www.whitehouse.gov/the-pressoffice/2012/02/23/we-can-t-wait-obama-administration-unveils-blueprint-privacy-bill-rights; Fed. Trade Comm'n, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Business and Policy Makers (2012) http://www.ftc.gov/sites/default/files/documents/reports/federal-trade-commissionreport-protecting-consumer-privacy-era-rapid-change-recommendations/120326privacyreport.pdf.
[4] Fed. Trade Comm'n, Privacy Online: A Report to Congress 7 (June 1998), available at www.ftc.gov/reports/privacy3/priv-23a.pdf.
[5] U.S. Department of Commerce , Internet Policy Task Force, Commercial Data Privacy and Innovation in the Internet Economy: A Dynamic Policy Framework 20 (Dec. 16, 2010) (full-text).
[6] 389 U.S. 347 (1967).
[7] Dow Chem. Co. v. United States, 476 U.S. 227, 241 (1986)
[8] http://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1600&context=iplj
[9] Dow Chem. Co. v. United States, 476 U.S. 227, 241 (1986)
[10] Kyllo, 533 U.S. at 34 ("[T]he technology enabling human flight has exposed to public view (and hence, we have said, to official observation) uncovered portions of the house and its curtilage that once were private.").
[11] Kyllo v. United States, 533 U.S. 27
[12] See Katz, 389 U.S. at 352 ("But what he sought to exclude when he entered the booth was not the intruding eye - it was the uninvited ear. He did not shed his right to do so simply because he made his calls from a place where he might be seen.").
[13] See United States v. Ahrndt, No. 08-468-KI, 2010 WL 3773994, at *4 (D. Or. Jan. 8, 2010).
[14] In re DoubleClick Inc. Privacy Litig., 154 F. Supp. 2d 497 (S.D.N.Y. 2001).
[15] http://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1600&context=iplj
[16] See Michael A. Carrier, Against Cyberproperty, 22 BERKELEY TECH. L.J. 1485, 1486 (2007) (arguing against creating a right to exclude users from making electronic contact to their network as one that exceeds traditional property notions).
[17] See M. Ryan Calo, Against Notice Skepticism in Privacy (and Elsewhere), 87 NOTRE DAME L. REV. 1027, 1049 (2012) (citing Paula J. Dalley, The Use and Misuse of Disclosure as a Regulatory System, 34 FLA. ST. U. L. REV. 1089, 1093 (2007) ("[D]isclosure schemes comport with the prevailing political philosophy in that disclosure preserves individual choice while avoiding direct governmental interference.")).
[18] See Calo, supra note 10, at 1048; see also Omri Ben-Shahar & Carl E. Schneider, The Failure of Mandated Disclosure, 159 U. PA. L. REV. 647, 682 (noting that notice "looks cheap" and "looks easy").
[19] Mark MacCarthy, New Directions in Privacy: Disclosure, Unfairness and Externalities, 6 I/S J. L. & POL'Y FOR INFO. SOC'Y 425, 440 (2011) (citing M. Ryan Calo, A Hybrid Conception of Privacy Harm Draft-Privacy Law Scholars Conference 2010, p. 28).
[20] Daniel J. Solove, Introduction: Privacy Self-Management and the Consent Dilemma, 126 HARV. L. REV. 1879, 1885 (2013) (citing Jon Leibowitz, Fed. Trade Comm'n, So Private, So Public: Individuals, the Internet & the Paradox of Behavioral Marketing, Remarks at the FTC Town Hall Meeting on Behavioral Advertising: Tracking, Targeting, & Technology (Nov. 1, 2007), available at http://www.ftc.gov/speeches/leibowitz/071031ehavior/pdf). Paul Ohm refers to these issues as "information-quality problems." See Paul Ohm, Branding Privacy, 97 MINN. L. REV. 907, 930 (2013). Daniel J. Solove refers to this as "the problem of the uninformed individual." See Solove, supra note 17
[21] See Edward J. Janger & Paul M. Schwartz, The Gramm-Leach-Bliley Act, Information Privacy, and the Limits of Default Rules, 86 MINN. L. REV. 1219, 1230 (2002) (stating that according to one survey, "only 0.5% of banking customers had exercised their opt-out rights").
[22] See Amber Sinha A Critique of Consent in Information Privacy http://cis-india.org/internet-governance/blog/a-critique-of-consent-in-information-privacy
[23] Leigh Shevchik, "Mobile App Industry to Reach Record Revenue in 2013," New Relic (blog), April 1, 2013, http://blog.newrelic.com/2013/04/01/mobile-apps-industry-to-reach-record-revenue-in-2013/.
[24] Jan Lauren Boyles, Aaron Smith, and Mary Madden, "Privacy and Data Management on Mobile Devices," Pew Internet & American Life Project, Washington, DC, September 5, 2012.
[25] http://www.aarp.org/content/dam/aarp/research/public_policy_institute/cons_prot/2014/improving-mobile-device-privacy-disclosures-AARP-ppi-cons-prot.pdf
[26] "Mobile Apps for Kids: Disclosures Still Not Making the Grade," Federal Trade Commission, Washington, DC, December 2012
[27] http://www.aarp.org/content/dam/aarp/research/public_policy_institute/cons_prot/2014/improving-mobile-device-privacy-disclosures-AARP-ppi-cons-prot.pdf
[28] Linda Ackerman, "Mobile Health and Fitness Applications and Information Privacy," Privacy Rights Clearinghouse, San Diego, CA, July 15, 2013.
[29] Margaret Jane Radin, Humans, Computers, and Binding Commitment, 75 IND. L.J. 1125, 1126 (1999). http://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=2199&context=ilj
[30] William Aiello, Steven M. Bellovin, Matt Blaze, Ran Canetti, John Ioannidis, Angelos D. Keromytis, and Omer Reingold. Just fast keying: Key agreement in a hostile internet. ACM Trans. Inf. Syst. Secur., 7(2):242-273, 2004.
[31] Privacy by Design: The 7 Foundational Principles, by Ann Cavoukian, https://www.ipc.on.ca/images/resources/7foundationalprinciples.pdf
[32] G. Danezis, J. Domingo-Ferrer, M. Hansen, J.-H. Hoepman, D. Le Métayer, R. Tirtea, and S. Schiffner. Privacy and Data Protection by Design - from policy to engineering. Report, ENISA, Dec. 2014.
[33] G. Danezis, J. Domingo-Ferrer, M. Hansen, J.-H. Hoepman, D. Le Métayer, R. Tirtea, and S. Schiffner. Privacy and Data Protection by Design - from policy to engineering. Report, ENISA, Dec. 2014.
[34] G. Danezis, J. Domingo-Ferrer, M. Hansen, J.-H. Hoepman, D. Le Métayer, R. Tirtea, and S. Schiffner. Privacy and Data Protection by Design - from policy to engineering. Report, ENISA, Dec. 2014.
[35] John Frank Weaver, We Need to Pass Legislation on Artificial Intelligence Early and Often, SLATE FUTURE TENSE (Sept. 12, 2014),http://www.slate.com/blogs/future_tense/2014/09/12/we_need_to_pass_artificial_intelligence_laws_early_and_often.html
[36] Margaret Jane Radin, Humans, Computers, and Binding Commitment, 75 IND. L.J. 1125, 1126 (1999).
[37] Richard Warner & Robert Sloan, Beyond Notice and Choice: Privacy, Norms, and Consent, J. High Tech. L. (2013). Available at: http://scholarship.kentlaw.iit.edu/fac_schol/568
[39] iOS Application Programming Guide: The Application Runtime Environment, APPLE, http://developer.apple.com/library/ ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/RuntimeEnvironment /RuntimeEnvironment.html (last updated Feb. 24, 2011)
[40] Security and Permissions, ANDROID DEVELOPERS, http://developer.android.com/guide/topics/security/security.html (last updated Sept. 13, 2011).
[41] iOS Application Programming Guide: The Application Runtime Environment, APPLE, http://developer.apple.com/library/ ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/RuntimeEnvironment /RuntimeEnvironment.html (last updated Feb. 24, 2011)
[42] See Katherine Noyes, Why Android App Security is Better Than for the iPhone, PC WORLD BUS. CTR. (Aug. 6, 2010, 4:20 PM), http://www.pcworld.com/businesscenter/article/202758/why_android_app_security_is_be tter_than_for_the_iphone.html; see also About Permissions for Third-Party Applications, BLACKBERRY, http://docs.blackberry.com/en/smartphone_users/deliverables/22178/ About_permissions_for_third-party_apps_50_778147_11.jsp (last visited Sept. 29, 2011); Security and Permissions, supra note 76.
[43] Peter S. Vogel, A Worrisome Truth: Internet Privacy is Impossible, TECHNEWSWORLD (June 8, 2011, 5:00 AM), http://www.technewsworld.com/ story/72610.html.
[44] Privacy Policy, FOURSQUARE, http://foursquare.com/legal/privacy (last updated Jan. 12, 2011)
[45] N. S. Good, J. Grossklags, D. K. Mulligan, and J. A. Konstan. Noticing Notice: A Large-scale Experiment on the Timing of Software License Agreements. In Proc. of CHI. ACM, 2007.
[46] I. Adjerid, A. Acquisti, L. Brandimarte, and G. Loewenstein. Sleights of Privacy: Framing, Disclosures, and the Limits of Transparency. In Proc. of SOUPS. ACM, 2013.
[47] http://delivery.acm.org/10.1145/2810000/2808119/p63-balebako.pdf?ip=106.51.36.200&id=2808119&acc=OA&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E35B5BCE80D07AAD9&CFID=801296199&CFTOKEN=33661544&__acm__=1466052980_2f265a2442ea3394aa1ebab7e6449933
[48] Microsoft. Privacy Guidelines for Developing Software Products and Services. Technical Report version 3.1, 2008.
[49] Microsoft. Privacy Guidelines for Developing Software Products and Services. Technical Report version 3.1, 2008.
[50] S. Egelman, J. Tsai, L. F. Cranor, and A. Acquisti. Timing is everything?: the effects of timing and placement of online privacy indicators. In Proc. CHI '09. ACM, 2009.
[51] R. Böhme and S. Köpsell. Trained to accept?: A field experiment on consent dialogs. In Proc. CHI '10. ACM, 2010.
[52] N. S. Good, J. Grossklags, D. K. Mulligan, and J. A. Konstan. Noticing notice: a large-scale experiment on the timing of software license agreements. In Proc. CHI '07. ACM, 2007.
[53] N. S. Good, J. Grossklags, D. K. Mulligan, and J. A. Konstan. Noticing notice: a large-scale experiment on the timing of software license agreements. In Proc. CHI '07. ACM, 2007.
[54] Microsoft. Privacy Guidelines for Developing Software Products and Services. Technical Report version 3.1, 2008.
[55] A. Kobsa and M. Teltzrow. Contextualized communication of privacy practices and personalization benefits: Impacts on users' data sharing and purchase behavior. In Proc. PETS '05. Springer, 2005.
[56] F. Schaub, B. Könings, and M. Weber. Context-adaptive privacy: Leveraging context awareness to support privacy decision making. IEEE Pervasive Computing, 14(1):34-43, 2015.
[57] E. Choe, J. Jung, B. Lee, and K. Fisher. Nudging people away from privacy-invasive mobile apps through visual framing. In Proc. INTERACT '13. Springer, 2013.
[58] F. Schaub, B. Könings, and M. Weber. Context-adaptive privacy: Leveraging context awareness to support privacy decision making. IEEE Pervasive Computing, 14(1):34-43, 2015.
[59] Article 29 Data Protection Working Party. Opinion 8/2014 on the Recent Developments on the Internet of Things. WP 223, Sept. 2014.
[60] B. Anderson, A. Vance, B. Kirwan, D. Eargle, and S. Howard. Users aren't (necessarily) lazy: Using NeuroIS to explain habituation to security warnings. In Proc. ICIS '14, 2014.
[61] B. Anderson, B. Kirwan, D. Eargle, S. Howard, and A. Vance. How polymorphic warnings reduce habituation in the brain - insights from an fMRI study. In Proc. CHI '15. ACM, 2015.
[62] M. S. Wogalter, V. C. Conzola, and T. L. Smith-Jackson. Research-based guidelines for warning design and evaluation. Applied Ergonomics, 33(3):219-230, 2002.
[63] L. F. Cranor, P. Guduru, and M. Arjula. User interfaces for privacy agents. ACM TOCHI, 13(2):135-178, 2006.
[64] R. S. Portnoff, L. N. Lee, S. Egelman, P. Mishra, D. Leung, and D. Wagner. Somebody's watching me? assessing the effectiveness of webcam indicator lights. In Proc. CHI '15, 2015
[65] M. Langheinrich. Privacy by design - principles of privacy-aware ubiquitous systems. In Proc. UbiComp '01. Springer, 2001
[66] Microsoft. Privacy Guidelines for Developing Software Products and Services. Technical Report version 3.1, 2008.
[67] The Impact of Timing on the Salience of Smartphone App Privacy Notices. Rebecca Balebako, Florian Schaub, Idris Adjerid, Alessandro Acquisti, and Lorrie Faith Cranor.
[68] R. Böhme and J. Grossklags. The Security Cost of Cheap User Interaction. In Workshop on New Security Paradigms, pages 67-82. ACM, 2011
[69] A. Felt, S. Egelman, M. Finifter, D. Akhawe, and D. Wagner. How to Ask For Permission. HOTSEC 2012, 2012.
[70] A. Felt, S. Egelman, M. Finifter, D. Akhawe, and D. Wagner. How to Ask For Permission. HOTSEC 2012, 2012.
[71] Towards Context Adaptive Privacy Decisions in Ubiquitous Computing. Florian Schaub, Bastian Könings, Michael Weber, and Frank Kargl, Institute of Media Informatics, Ulm University, Germany.
[72] M. Korzaan and N. Brooks, "Demystifying Personality and Privacy: An Empirical Investigation into Antecedents of Concerns for Information Privacy," Journal of Behavioral Studies in Business, pp. 1-17, 2009.
[73] B. Könings and F. Schaub, "Territorial Privacy in Ubiquitous Computing," in WONS'11. IEEE, 2011, pp. 104-108.
[74] Aleecia M. McDonald and Lorrie Faith Cranor. The Cost of Reading Privacy Policies.
[75] 5 Federal Trade Commission, "Protecting Consumers in the Next Tech-ade: A Report by the Staff of the Federal Trade Commission," March 2008, 11, http://www.ftc.gov/os/2008/03/P064101tech.pdf.
[76] Aleecia M. McDonald and Lorrie Faith Cranor. The Cost of Reading Privacy Policies. I/S: A Journal of Law and Policy for the Information Society, 2008 Privacy Year in Review issue. http://www.is-journal.org/
[77] IS YOUR INSEAM YOUR BIOMETRIC? Evaluating the Understandability of Mobile Privacy Notice Categories Rebecca Balebako, Richard Shay, and Lorrie Faith Cranor July 17, 2013 https://www.cylab.cmu.edu/files/pdfs/tech_reports/CMUCyLab13011.pdf
[78] https://www.sba.gov/blogs/7-considerations-crafting-online-privacy-policy
[79] https://www.cippguide.org
[80] The Platform for Privacy Preferences Project, more commonly known as P3P was designed by the World Wide Web Consortium aka W3C in response to the increased use of the Internet for sales transactions and subsequent collection of personal information. P3P is a special protocol that allows a website's policies to be machine readable, granting web users' greater control over the use and disclosure of their information while browsing the internet.
[81] Security and Permissions, ANDROID DEVELOPERS, http://developer.android.com/guide/topics/security/security.html (last updated Sept. 13, 2011).
[82] See Foursquare Privacy Policy.
[83] http://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1600&context=iplj
[84] Privacy Policy, FOURSQUARE, http://foursquare.com/legal/privacy (last updated Jan. 12, 2011)
[85] Bees and Pollen, "Bees and Pollen Personalization Platform," http://www.beesandpollen.com/TheProduct. aspx; Bees and Pollen, "Sense6-Social Casino Games Personalization Solution," http://www.beesandpollen. com/sense6.aspx; Bees and Pollen, "About Us," http://www.beesandpollen.com/About.aspx.
[86] CFA on the NTIA Short Form Notice Code of Conduct to Promote Transparency in Mobile Applications July 26, 2013 | Press Release
[87] P. M. Schwartz and D. Solove. Notice & Choice. In The Second NPLAN/BMSG Meeting on Digital Media and Marketing to Children, 2009.
[88] F. Cate. The Limits of Notice and Choice. IEEE Security Privacy, 8(2):59-62, Mar. 2010.
[89] https://www.cippguide.org/2011/08/09/components-of-a-privacy-policy/
[90] https://www.ftc.gov/public-statements/2001/07/case-standardization-privacy-policy-formats
[91] Protecting Consumer Privacy in an Era of Rapid Change. Preliminary FTC Staff Report.December 2010
[92] See Comment of Common Sense Media, cmt. #00457, at 1.
[93] Pollach, I. What's wrong with online privacy policies? Communications of the ACM 50, 9 (September 2007), 103-108.
[94] Aleecia M. McDonald, Robert W. Reeder, Patrick Gage Kelley, and Lorrie Faith Cranor. A Comparative Study of Online Privacy Policies and Formats. Carnegie Mellon University, Pittsburgh, PA; Microsoft, Redmond, WA. http://lorrie.cranor.org/pubs/authors-version-PETS-formats.pdf
[95] Amber Sinha, A Critique of Consent in Information Privacy, http://cis-india.org/internet-governance/blog/a-critique-of-consent-in-information-privacy
[96] Kent Walker, The Costs of Privacy, 2001 available at https://www.questia.com/library/journal/1G1-84436409/the-costs-of-privacy
[97] Annie I. Anton et al., Financial Privacy Policies and the Need for Standardization, 2004 available at https://ssl.lu.usi.ch/entityws/Allegati/pdf_pub1430.pdf; Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf
[98] Allen Levy and Manoj Hastak, Consumer Comprehension of Financial Privacy Notices, Interagency Notice Project, available at https://www.sec.gov/comments/s7-09-07/s70907-21-levy.pdf
[99] Patrick Gage Kelly et al., Standardizing Privacy Notices: An Online Study of the Nutrition Label Approach available at https://www.ftc.gov/sites/default/files/documents/public_comments/privacy-roundtables-comment-project-no.p095416-544506-00037/544506-00037.pdf
[100] The Center for Information Policy Leadership, Hunton & Williams LLP, "Ten Steps To Develop A Multi-Layered Privacy Notice" available at https://www.informationpolicycentre.com/files/Uploads/Documents/Centre/Ten_Steps_whitepaper.pdf
[101] Aleecia M. McDonald, Robert W. Reeder, Patrick Gage Kelley, and Lorrie Faith Cranor. A Comparative Study of Online Privacy Policies and Formats. Carnegie Mellon University, Pittsburgh, PA; Microsoft, Redmond, WA.
[102] Howard Latin, "Good" Warnings, Bad Products, and Cognitive Limitations, 41 UCLA Law Review available at https://litigation-essentials.lexisnexis.com/webcd/app?action=DocumentDisplay&crawlid=1&srctype=smi&srcid=3B15&doctype=cite&docid=41+UCLA+L.+Rev.+1193&key=1c15e064a97759f3f03fb51db62a79a5
[103] Report by Kleimann Communication Group for the FTC. Evolution of a prototype financial privacy notice, 2006. http://www.ftc.gov/privacy/ privacyinitiatives/ftcfinalreport060228.pdf Accessed 2 Mar 2007
Workshop Report - UIDAI and Welfare Services: Exclusion and Countermeasures
Introduction
The Centre for Internet and Society organised a workshop on "UIDAI and Welfare Services: Exclusion and Countermeasures" at the Institution of Agricultural Technologists, Bangalore, on August 27, to discuss, raise awareness of, and devise countermeasures to exclusion caused by the implementation of UID-based verification for, and distribution of, welfare services [1]. This was a follow-up to the workshop "Understanding Aadhaar and its New Challenges" held at the Centre for Studies in Science Policy, JNU, Delhi, on May 26 and 27, 2016 [2]. In this report we summarise the key concerns raised and the case studies presented by the participants at the workshop held on August 27, 2016.
Implementation of the UID Project
Question of Consent: The Aadhaar Act [3] states that the consent of the individual must be taken at the time of enrollment and authentication, and that he/she must be informed of the purpose for which the data will be used. However, the Act does not provide for an opt-out mechanism, and an individual is compelled to give consent in order to continue with the enrollment process or to complete an authentication.
Lack of Adherence to Court Orders: Despite several Supreme Court orders stating that the use of Aadhaar cannot be made mandatory for availing benefits and services, multiple state governments and departments have made it mandatory for a wide range of purposes: booking railway tickets [4], linking below-poverty-line ration cards with Aadhaar [5], school examinations [6], and food security, pension and scholarship schemes [7], to name a few.
Misleading Advertisements: A concern was raised that individuals are being misled about the necessity and purpose of enrollment in the project. For example, people have been asked to enrol on the claim that they might otherwise be excluded from the system and denied services such as passports, banking, NREGA benefits, salaries for government employees, and vaccinations. Furthermore, although the Supreme Court has ordered that Aadhaar not be made mandatory, people are being told that documentation or record keeping cannot be done without a UID number.
Hybrid Governance: The participants pointed out that, with the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (hereinafter the Aadhaar Act, 2016) being partially enforced, the multiple examples of exclusion reported in the news demonstrate how the Aadhaar project is creating a case of hybrid governance, i.e. private corporations playing a significant role in governance. Many private-sector entities, including software and hardware companies, are involved in the project's implementation.
Lack of Transparency around Sharing of Biometric Data: How and why the Government is relying on biometrics for welfare schemes remains unclear. There is no information on how the biometric data collected through the project is being used, or on its reliability as a means of authentication. There is also very little information on the companies that have been enlisted to hold and manage the data and perform authentication.
Possibility of Surveillance: Multiple petitions and ongoing cases have raised concerns regarding the possibility of surveillance, tracking, profiling, convergence of data, and the opaque involvement of private companies in the project.
Denial of Information: An RTI filed by one of the participants requesting the key contract for the project was refused under section 8(1)(d) of the RTI Act, 2005. The participant contended that the provision was not applicable, since the contract had already been awarded, and information that can be disclosed to Parliament cannot be denied to citizens. The Central Information Commission issued a letter stating that the contractual obligation was over and that a copy of the said agreement could be duly shared. However, the participant discovered that certain pages, which contained confidential information, were missing from the copy provided. When the issue went on appeal before the Information Commissioner, the IC ordered the IC in Delhi to comply with the previous order; however, it was communicated that only limited financial information, and not the missing pages, would be given. It was also revealed that the UIDAI was supposed to share biometric data with the NPR (by way of an MoU), but it has refused to provide information on this, since the intention was to discontinue the NPR and have only the UIDAI collect data.
Concerns Arising from the Report of the Comptroller and Auditor General of India (CAG) on Implementation of PAHAL (DBTL) Scheme
A presentation on the CAG compliance audit report of the PAHAL (DBTL) scheme for LPG [8] showed how society was made to believe that UID, and the collection and use of biometric data, would solve the problem of duplication. The report revealed that in the consumer database maintained by the oil marketing companies (OMCs), multiple LPG connections shared the same Aadhaar number or the same bank account number; that the bank account numbers of consumers were not accurately recorded; that Aadhaar numbers were improperly captured; and that IFSC codes were incorrectly seeded. The participants felt that this was an example of how schemes introduced for social welfare do not necessarily benefit society and have, on the contrary, led to exclusion by design. In 2011, by way of the Liquefied Petroleum Gas (Regulation of Supply and Distribution) Amendment Order, 2011 [9], the Ministry of Petroleum and Natural Gas made the Unique Identification (UID) number under the Aadhaar project a must for availing LPG refills. This received considerable public pushback, which led to the non-implementation of the order. In October 2012, despite the UIDAI stating that the number was voluntary, a number of services began requiring an Aadhaar number for accessing benefits. In September 2013, when the first court order on Aadhaar was passed [10], the oil marketing companies and the UIDAI approached the Supreme Court seeking to make Aadhaar mandatory, which the Court refused. Later, in 2014, the use of Aadhaar for subsidies was made mandatory. The participants further noted that the CAG report reveals how linking Aadhaar with welfare schemes has permitted duplication and led to ghost beneficiaries, with no information about who is actually receiving the benefits of the subsidies. In Rajasthan, for example, people are being denied their pensions because, owing to the absence of information in the Aadhaar database, they have been declared dead.
It was observed that the duplication statistics in the report show that the UIDAI (which claims to ensure de-duplication of beneficiaries) is not required for this purpose, and that de-duplication can be achieved without Aadhaar. Moreover, due to the incorrect seeding of Aadhaar numbers, many people are being denied their subsidy, and there is no information on how many have been denied it for this reason. Considering these facts from the audit report, the discussants concluded that the statistics reflect inflated claims by the UIDAI, and that the problems Aadhaar is said to address can be dealt with without it. In this context, it is important to understand that the data in the Aadhaar database may be wrong, and that in e-governance it is the citizens who suffer for such errors. The report also ignores the fact that what is lost is not cash but the use of the LPG cylinder itself, which is meant only for cooking. In addition, there is no data or means to check whether cylinders are being used for commercial purposes, as RTI responses from the oil companies state that no ghost identities have been detected.
UID-linked Welfare Delivery in Rajasthan
One speaker presented findings on people's experiences with UID-linked welfare services in Rajasthan, collected through a 100-day trip organised to speak to people across the state about problems related to welfare governance. The visit revealed that the people who most need the benefits and subsidies are often excluded from the actual services, and it highlighted how the paperless system is proving to be highly dangerous. One of the cases discussed was that of a disabled labourer who was asked to get an Aadhaar card but, during enrolment, had the person standing next to him place his five fingers on the scanner for the biometric data collection. Because of this incorrect data, authentication fails every time he tries to avail his subsidies, and he has stopped receiving his entitlements. Though problems had been anticipated, the misery of the people revealed the true extent of the problems arising from the project. In another case, an elderly woman living alone had not received the ration she is entitled to for the past eight months because she could not go for Aadhaar authentication. When the ration shop was approached on her behalf, the dealers said that they could not provide her ration without her thumb print for authentication. On being persuaded that the ration should be provided since Aadhaar is not mandatory, it emerged that the shop's records showed she had been receiving her ration all along, which was not the case. The lack of awareness that people are entitled to receive benefits irrespective of Aadhaar is thus being misused by dealers. This shows how the system has become a barrier for people, who are also unaware of the grievance redressal mechanism.
Aadhaar and e-KYC
In this session, the use of Aadhaar for e-KYC verification was discussed. The UID strategy document describes the idea of linking the UIDAI with money-enabled Direct Benefit Transfer (DBT) to beneficiaries, without offering any reason or justification for the same. One participant highlighted that the Reserve Bank of India (RBI) believed that making Aadhaar compulsory for e-KYC and several other banking services violated the Prevention of Money Laundering Act as well as its own rules and standards; however, with the Department of Revenue taking a different view, it later relaxed its rules to allow the linking of Aadhaar with bank accounts and accepted Aadhaar-based e-KYC with great reluctance. It was mentioned that allowing bank accounts to be opened remotely using Aadhaar, without the customer being physically present, had been considered a dangerous idea. Yet the restrictions placed by the RBI were suddenly done away with, and opening bank accounts remotely was enabled via e-KYC.
A speaker emphasised that with emerging FinTech services in India being tied with Aadhaar via India Stack, the following concerns are becoming critical:
- With the RBI enabling the creation of bank accounts remotely, it becomes difficult to track who performed the e-KYC and which bank did it, and to hold them accountable.
- The Aadhaar Act, 2016 states that the UIDAI will not track the queries made and will only keep a record of the Yes/No authentication response. For example, the e-KYC to open a bank account can now be done with an Aadhaar number and biometric authentication; however, the request itself is not recorded, and at the time of authentication an individual is simply told whether the request has matched or not, by way of a Yes/No [11]. Though the UIDAI maintains the authentication record, this can act as an obstacle: if the information in the Aadhaar database does not match, the person cannot open a bank account and receives only a Yes/No in response to the request.
- Further, there is a concern that the Aadhaar Enabled Payment System being implemented by the National Payments Corporation of India (NPCI) would effectively allow the source and destination of money flows to be hidden, facilitating money laundering and bribery. This is possible because the NPCI maintains a mapper in which each Aadhaar number is linked to only one bank account (the latest one seeded), even though an Aadhaar number can be linked with multiple bank accounts of an individual. When a transaction is made, the mapper records it only against that one account; if another transaction takes place through a different bank account, that record is not maintained by the NPCI mapper, since it records only transactions of the latest seeded account. This makes money laundering easier, as money now moves from Aadhaar number to Aadhaar number rather than from bank account to bank account, as the sketch below illustrates.
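To make the mapper concern concrete, here is a minimal Python sketch of a last-seeded-account mapper of the kind described above. It is purely illustrative: the class, method and account names are hypothetical and do not reflect the NPCI's actual implementation.

```python
# Hypothetical sketch: a mapper that keeps only the most recently
# seeded bank account per Aadhaar number (all names are illustrative).

class AadhaarMapper:
    def __init__(self):
        self._latest_account = {}  # aadhaar_number -> account_id

    def seed(self, aadhaar_number, account_id):
        # Seeding a new account silently overwrites the previous entry,
        # so the earlier linkage is no longer visible to the mapper.
        self._latest_account[aadhaar_number] = account_id

    def resolve(self, aadhaar_number):
        return self._latest_account.get(aadhaar_number)

mapper = AadhaarMapper()
mapper.seed("XXXX-1234", "BANK-A/001")
mapper.seed("XXXX-1234", "BANK-B/777")  # a second account seeded later
print(mapper.resolve("XXXX-1234"))      # only BANK-B/777 remains recorded
```

If transactions are addressed to the Aadhaar number rather than to a bank account, only the last seeded account is visible at the mapper, which is precisely the audit gap the participants flagged.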
Endnotes
[2] See: http://cis-india.org/internet-governance/blog/report-on-understanding-aadhaar-and-its-new-challenges.
[3] See: https://uidai.gov.in/beta/images/the_aadhaar_act_2016.pdf.
[4] See: http://scroll.in/latest/816343/aadhaar-numbers-may-soon-be-compulsory-to-book-railway-tickets.
[7] See: http://www.dailypioneer.com/state-editions/cs-calls-for-early-steps-to-link-aadhaar-to-ac.html.
[9] See: http://petroleum.nic.in/docs/lpg/LPG%20Control%20Order%20GSR%20718%20dated%2026.09.2011.pdf.
[10] See: http://judis.nic.in/temp/494201232392013p.txt.
[11] Section 8(4) of the Aadhaar Act, 2016 states that "The Authority shall respond to an authentication query with a positive, negative or any other appropriate response sharing such identity information excluding any core biometric information."
Protection of Privacy in Mobile Phone Apps
According to NASSCOM, the Indian fintech market is worth an estimated USD 1.2 billion and is predicted to reach USD 2.4 billion by 2020.[1] The services brought forth by fintech, such as digital wallets, lending, and insurance, have transformed the ways in which businesses and institutions execute day-to-day transactions. The rise of fintech in India has rendered the nation's market a point of attraction for global investment.[2] Fintech in India is perceived both as a catalyst for economic growth and innovation and as a means of financial inclusion for the millions of unbanked individuals and businesses. The government of India, along with regulators such as SEBI (Securities and Exchange Board of India) and the RBI (Reserve Bank of India), has consistently supported the digitalization of the nation's economy and the formation of a strong fintech ecosystem through funding and promotional initiatives.[3]
The RBI has been pivotal in enabling the development of India's fintech sector while adopting a cautious approach to concerns around consumer protection and law enforcement. Its key objective as a regulator has been to create an environment for unimpeded innovation in fintech, expanding the reach of banking services to unbanked populations, regulating an efficient electronic payment system, and providing alternative options for consumers. The RBI's prime focus areas in enabling fintech have been payments, lending, security/biometrics and wealth management. For example, the RBI introduced the "Unified Payments Interface" with the NPCI (National Payments Corporation of India), which has been critical in revolutionizing digital payments and pushing India closer to the objective of a cash-less society. It has also released a consultation paper on regulating the peer-to-peer (P2P) lending market in India, highlighting the advantages and disadvantages of regulating the sector.[4]
The consultation paper offers a definition of P2P lending as well as a general explanation of the activity and the digital platforms that facilitate transactions between lenders and borrowers. It also provides a set of arguments for and against regulating P2P lending. The arguments against regulating the sector mainly pertain to the risk of stifling the growth of an innovative, efficient and accessible avenue for borrowers who either lack access to formal financial channels or are denied loans by them.[5]
This is the general consensus around the positive impact of the fintech sector in India: its facilitation of financial inclusion and economic opportunity. However, the paper lists many more arguments for regulation than against. One of the main points made is with regard to P2P lending's potential to disrupt the financial sector by challenging traditional banking channels. There is also the argument that, if properly regulated, P2P lending platforms can more efficiently and effectively exercise their potential to promote alternative forms of finance.[6]
The paper concludes that the balance of advantage would lie in developing an appropriate regulatory and supervisory toolkit that facilitates the orderly growth of the P2P lending sector, in order to harness its ability to provide an alternative avenue of credit for the right borrowers.[7]
The RBI’s regulatory framework for P2P lending platforms encompasses the permitted activity, prudential regulations on capital, governance, business continuity plan (BCP) and customer interface, apart from regulatory reporting.[8]
The Securities and Exchange Board of India (SEBI) is also a prominent regulator of the Indian fintech sector. It issued a consultation paper on "crowdfunding", defined as the solicitation of funds (in small amounts) from multiple investors through a web-based platform or social networking site for a specific project, business venture or social cause. P2P lending is thus a form of crowdfunding, which can be understood as an umbrella term covering fintech lending practices. SEBI's paper aimed to provide a brief overview of the global crowdfunding scenario, including the various prevalent models, the associated benefits and risks, and the regulatory approaches in different jurisdictions. It also discusses the legal and regulatory challenges in implementing a framework for crowdfunding. The paper proposes a framework for ushering in crowdfunding by giving start-ups and SMEs access to capital markets as an additional channel of early-stage funding, and seeks to balance this with investor protection.[9] Unlike the RBI's consultation paper on P2P lending, SEBI's paper on crowdfunding was intended mainly to invite discussion, not necessarily to implement a regulatory framework.
Some of the benefits cited in SEBI’s crowdfunding paper pertain to the commonly mentioned advantages of fintech: economic opportunity for the SME sector and start-ups, alternative lending systems to keep SMEs alive when traditional banks crash, new investment avenues for the local economy and increased competition in the financial sector.[10]
The paper also lists a set of risks that suggest the need for a regulatory framework for crowdfunding. For example, it mentions the "substitution of institutional risk by retail risk", meaning that individual lenders, whose risk tolerance may be low, bear the risk of low or no returns when they lend to SMEs without an adequate assessment of creditworthiness. There is also the risk that the digital platform that facilitates lending and processes all the transactions may not conduct proper due diligence. If the platform is temporarily shut down or closed permanently, no recourse is available to the investors.[11]
The SEBI paper mentions a long list of other risks associated with crowdfunding, mostly relating to systemic failures, loan defaults, fraudulent practices, and information asymmetry. Information asymmetry refers in part to the chance that lending decisions are made on the basis of incomplete data sets drawn from social networking platforms. There is also a lack of transparency and of reporting obligations for issuers, including with respect to the use of the funds raised.[12]
Similar to the RBI consultation paper, SEBI makes a decent effort to weigh the costs and benefits of crowdfunding practices but only does this from an economic/financial perspective. Most of the cited risks, benefits and concerns tend to overlook information security and risks of privacy breaches of the implicated borrowers.
India Stack is a paperless and cashless service delivery system that has been supported by the Indian government as part of the fintech sector. It is a new technology paradigm that is designed to handle massive data inflows, and is poised to enable entrepreneurs, citizens and governments to interact with one another transparently. It is intended to be an open system to electronically verify businesses, people and services. It allows the smartphone to become the delivery platform for services such as digital payments, identification and digital lockers. The vision of India Stack is to shift India towards a paperless economy.[13]
The central government, based on its experience with the Aadhaar project, decided to launch the open-data initiative in 2012, supported by an open API policy, which would pave the way for private technology solutions built on top of Aadhaar and move India towards a digital cash economy. The Unified Payments Interface (UPI), which will make mobile payments card-less and completely digital, allows consumers to transact directly through their bank account with a unique UPI identity that syncs with Aadhaar verification and connects the merchant, the settlement and the issuing bank to close transactions.[14]
It is expected that India Stack will shift business models in banking from low-volume, high-value, high-cost and high-fee to high-volume, low-value, low-cost and no-fee. This will lead to a drastic increase in accessibility and affordability, and the market force of consumer acquisition and the social purpose of mass inclusion will converge.[15]
India Stack serves as an example of how the Government of India has supported initiatives that promote the fintech sector while facilitating economic growth and financial opportunity for unbanked individuals. However, there is continuing discussion around India Stack's attachment to the Aadhaar system, which can lead to the exclusion of unregistered individuals from the benefits that would otherwise be reaped from the open-data initiative. It can also result in privacy and security breaches, as records of individuals' daily transactions become attached to their Aadhaar numbers, which carry their biometric information and are linked to other personal data held by the government, such as health records.
[1]. KPMG: https://assets.kpmg.com/content/dam/kpmg/pdf/2016/06/FinTech-new.pdf
[2]. Id.
[3]. Id.
[4]. Id.
[5]. RBI P2P Consultation Paper, https://rbidocs.rbi.org.in/rdocs/content/pdfs/CPERR280416.pdf
[6]. Id.
[7]. Id.
[8]. Id.
[9]. SEBI Crowdfunding consultation paper, http://www.sebi.gov.in/cms/sebi_data/attachdocs/1403005615257.pdf
[10]. Id.
[11]. Id.
[12]. Id.
[13]. Krishna, https://yourstory.com/2016/07/india-stack/
[14]. Id.
[15]. Nilekani, http://indianexpress.com/article/opinion/columns/the-coming-revolution-in-indian-banking-2924534/
ISIS and Recruitment using Social Media – Roundtable Report
The objective of this roundtable was to explore the recruitment processes and methods followed by ISIS on social media platforms like Facebook and Twitter, and to understand the difficulties faced by law enforcement agencies and platforms in countering the problem, while also examining existing countermeasures, with a focus on the Indian experience.
Reviewing Existing Literature
To provide context to the discussion, a few key pieces of the existing literature on online extremism were highlighted. Discussing Charlie Winter's "Documenting the Virtual Caliphate", a participant outlined the multiple stages of the radicalisation process: it begins with a person being exposed to general ISIS releases and entering an online filter bubble of like-minded people, followed by initial contact and then persuasion by the contact person to isolate the potential recruit from his/her family and friends, culminating in the assignment of an ISIS task to that person. The takeaway from the paper was the colossal scale of information and events put out by ISIS on social media. It was pointed out that, contrary to popular belief, ISIS publishes content under six broad themes: mercy, belonging, brutality, victimhood, war and utopia, the smallest share of which falls under brutality, even though brutality garners the most attention worldwide. It was further elaborated that ISIS employs positive imagery in the form of nature and landscapes, and appeals to the civilian life within its borders. Its strategy is one of prioritising quantity, quality, adaptability and differentiation in producing media. This strategy of producing media that is precise, adaptable and effective must, according to the author, be emulated by governments in their countermeasures, although there is no universally effective counter-narrative. This effort, he stressed, cannot be exclusively state-driven.
JM Berger's "Making Countering Violent Extremism Work" was also discussed. It identifies a slightly different model of radicalisation, with potential recruits going through four stages: the first is Curiosity, where there is exposure to violent extremist ideology; the second is Consideration, where the potential recruit evaluates the ideology; the third is Identification, where the individual begins to self-identify with the extremist ideology; and the last is Self-Critique, which is revisited periodically. According to Berger, law enforcement need only be involved in the third stage of this taxonomy, through situational awareness programs and investigations. The paper states that counter-messaging policies need not mimic ISIS's pattern of slick messaging. A data-driven study had found that suspending and suppressing the reach of violent extremist accounts and individuals on online platforms was effective in reducing the reach of these ideologies, though not universally so. It also found that the generic counter-strategies used in the US were more effective than the targeted strategies followed in Europe.
Lack of Co-ordination, Fragmentation between the States and Centre
Speaking of the Indian scenario in particular, another participant brought to light the lack of coordination and consensus between the State and Central Governments and law enforcement agencies with respect to countering violent extremism, which leads to a break in the chain of action. Another participant added that the underestimation of the problem at the state level, coupled with the theoretical and abstract nature of the work done at the Centre, is another pitfall. While the fragmentation of agencies was considered ineffective, bringing them under the purview of a single agency was also viewed as an ineffective measure. It was instead suggested that a neutral policy body, not an implementing body, should coordinate the efforts of the multiple groups involved.
Unreliable Intelligence Infrastructure
It was pointed out that countries are presently under-equipped due to the lack of intelligence infrastructure and technical expertise. In India this is primarily because agencies tend to use off-the-shelf hardware and software produced by foreign companies, and such heavy dependence on unreliable parts is necessarily detrimental to building reliable security infrastructure. Emphasis was laid on the significance of collaboration and open-source intelligence in countering online radicalisation. An appeal was made for higher IT proficiency, indigenous production of resources, funding, collaboration, integration of lower-level agencies, and more research in this area.
Proactive Counter Narratives
The importance of proactive counter-narratives to extremist content was stressed, and the possibility of generating inputs from government agencies and private bodies backing the government was discussed. Another solution identified was the creation and internal circulation of a clear strategy to counter the ISIS narrative, along with the public dissemination of research on online radicalisation in the Indian context.
Policies of Social Media Platforms
The conversation then moved towards understanding the policies of social media platforms. One participant shed light on a popular platform's strategies against extremism, pointing out that the platform's policy covers not only directly extremist content but also content created by people who support violent extremism. The platform's engagement with several countries and other platforms to create anti-extremist messaging, and its intention to expand these initiatives, is in furtherance of its philosophy of preventing any celebration of violence. The participant further explained that research shows anti-extremist content that makes use of humour and a lighter tone is more effective than media that relies on gravitas.
Having identified the existing literature and current challenges, the roundtable concluded with suggestions for further areas of research:
- Understanding the use of encrypted messaging services like Whatsapp and Telegram for extremism, and an analysis of these platforms in the Indian context. A deeper understanding of these services is essential to gauge the dimensions of the problem and identify counter measures.
- A lexical analysis of Indian social media accounts to identify ISIS supporters and group them into meta-communities, similar to research done by the RAND Corporation.
- Collation of ISIS media packages was also flagged as an important measure, in order to have a dossier to present to the government. This would help policymakers gain context around the issue and understand the scale of the problem.
Deep Packet Inspection: How it Works and its Impact on Privacy
Background
In the last few years, there has been extensive debate and discussion around network neutrality in India. The online campaign in favour of network neutrality in India was led by Savetheinternet.in. The campaign, captured in detail by an article in Mint, [1] was a spectacular success: it facilitated the sending of over a million emails supporting the cause of network neutrality, eventually leading to a ban on differential pricing. Following in the footsteps of the Shreya Singhal judgement, the fact that the issue of net neutrality has managed to attract wide public attention is an encouraging sign for a free and open Internet in India. Since the debate has focused largely on zero rating, other kinds of network practices impacting network neutrality, and their impact on other values, have yet to be comprehensively explored in the Indian context. In this article, I focus on network management in general, and deep packet inspection in particular, and on how it impacts the privacy of users.
The Architecture of the Internet
The Internet exists as a network acting as an intermediary between providers of content and its users. [2] Traditionally, the network did not distinguish between those who provided content and those who were recipients of the service; in fact, users often also functioned as content providers. The architectural design of the Internet mandated that all content be broken down into data packets, which were transmitted through nodes in the network transparently from the source machine to the destination machine.[3] As discussed in detail later, under the OSI model the network consists of seven layers. We will go into each of these layers below; for now, it is important to understand that at the base is the physical layer of cables and wires, while at the top is the application layer, which contains all the functions that people want to perform on the Internet and the content associated with them. The layers in the middle can be characterised, for the purpose of this discussion, as the protocol layers. What makes the architecture of the Internet remarkable is that these layers are completely independent of, and in most cases indifferent to, each other. The protocol layers are what impact net neutrality: they provide the standards for the manner in which data must flow through the network. The idea was for this layer to be as simple and feature-free as possible, concerned only with transmitting data as fast as possible (the 'best efforts' principle), while innovations are pushed to the layers above or below it.[4]
This aspect of the Internet's architectural design, which mandates that network features are implemented at the end points only (the destination and source machines), i.e. at the application level, is called the 'end-to-end principle'.[5] It means that the intermediate nodes do not differentiate between data packets in any way based on source, application or any other feature, and are concerned only with transmitting data as fast as possible, thus creating what has been described as a 'dumb' or neutral network. [6] This feature of the Internet architecture was also considered essential to what Jonathan Zittrain has termed the 'generative' model of the Internet.[7] Since the Internet Protocol remains a simple layer incapable of discrimination of any form, no additional criteria could be established for what kind of application may access the Internet. Thus, the network remained truly open and ensured that the Internet does not privilege, or become the preserve of, any class of applications, nor does it differentiate between the different kinds of technologies that comprise the physical layer below.
While the above model speaks of a dumb network that does not differentiate between the data packets travelling through it, in truth network operators engage in various practices that prioritise, throttle or discount certain kinds of data packets. In her thesis at the Oxford Internet Institute, Alissa Cooper[8] states that traffic management involves three different sets of criteria: a) the subset of traffic to be managed, identified by criteria based on source, destination, application or user; b) the trigger for the traffic management measure, which could be the time of day, a usage threshold or a specific network condition; and c) the traffic treatment put into practice when the trigger is met. The traffic treatment can be of three kinds. The first is blocking, in which traffic is prevented from being delivered. The second is prioritization, under which identified traffic is sent sooner or later than other traffic; this is usually done in cases of congestion, when one kind of traffic needs to be prioritized. The third is rate limiting, where identified traffic is limited to a defined sending rate.[9] The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and in this way it treats all information sent over it as equal. In such a network, the content of the packets is not examined, and Internet providers act according to the destination of the data as opposed to any other factor. However, in order to perform traffic management in various circumstances, Deep Packet Inspection technology, which does look at the content of data packets, is commonly used by service providers.
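To make Cooper's three-part scheme concrete, the following is a minimal Python sketch of a traffic management rule. The application labels, the congestion threshold and the treatment names are assumptions of mine, not any operator's actual configuration.

```python
# Illustrative sketch of Cooper's criteria: (a) which traffic to match,
# (b) when the measure triggers, and (c) what treatment to apply.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    application: str  # e.g. "p2p", "voip", "web" (hypothetical labels)

def matches(pkt: Packet) -> bool:
    # (a) criterion: identify the managed subset, here by application
    return pkt.application == "p2p"

def triggered(link_utilisation: float) -> bool:
    # (b) trigger: act only under congestion (a usage threshold)
    return link_utilisation > 0.9

def treat(pkt: Packet, link_utilisation: float) -> str:
    # (c) treatment: block, (de)prioritise or rate-limit matched traffic
    if matches(pkt) and triggered(link_utilisation):
        return "rate-limit"
    return "forward"

print(treat(Packet("10.0.0.1", "10.0.0.2", "p2p"), 0.95))  # rate-limit
print(treat(Packet("10.0.0.1", "10.0.0.2", "web"), 0.95))  # forward
```

Note that deciding `pkt.application` at all is where inspection, shallow or deep, enters the picture; the classifier above simply assumes the label is already known.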
Deep Packet Inspection
Deep packet inspection (DPI) enables the examination of the content of data packets being sent over the Internet. Christopher Parsons explains the header and the payload of a data packet with respect to the OSI model. To understand this better, it is more useful to speak of the network in terms of the seven layers of the OSI model, rather than the three layers discussed above.[10]
Under the OSI model, the top layer, the Application Layer, is in contact with the software making a data request. For instance, if the activity in question is accessing a webpage, the web browser makes a request to access a page, which is then passed on to the lower layers. The next layer is the Presentation Layer, which deals with the format in which the data is presented; this layer performs encryption and compression of the data. In the above example, this would involve asking for the HTML file. Next comes the Session Layer, which initiates, manages and ends communication between the sender and receiver. In the above example, this would involve transmitting and regulating the data of the webpage, including its text, images or any other media. These three layers are part of the 'payload' of the data packet.[11]
The next four layers are part of the 'header' of the data packet. It begins with the Transport Layer, which collects data from the payload, creates a connection between the point of origin and the point of receipt, and assembles the packets in the correct order. In terms of accessing a webpage, this involves connecting the requesting computer system with the server hosting the data, and ensuring the data packets are put together in a cohesive arrangement when they are received. Below it sits the Network Layer, which handles the addressing and routing of packets across the network. The next layer is the Data Link Layer, which formats the data packets so that they are compatible with the medium being used for their transmission. The final layer is the Physical Layer, which determines the actual media used for transmitting the packets.[12]
The transmission of the data packet occurs between the client and the server, and packet inspection is performed by equipment placed between the two. There are various ways in which packet inspection has been classified, and various views on the depth an inspection must reach in order to be categorised as deep packet inspection. In this article we rely on Parsons's classification system, under which there are three broad categories of packet inspection: shallow, medium and deep.[13]
Shallow packet inspection involves inspection of only the header, usually checking it against a blacklist. The focus in this form of inspection is on the source and destination (the IP address and the packet's port number). It deals primarily with the Data Link Layer and Network Layer information of the packet. Shallow packet inspection is used by firewalls.[14]
Medium packet inspection involves equipment sitting between the computers running the applications and the ISP or Internet gateways. It uses application proxies, where the header information is inspected against a loaded parse-list in order to look at specific flows. These inspection technologies are used to look for specific kinds of traffic flows and take pre-defined actions upon identifying them. In this case, the header and a small part of the payload are examined.[15]
Finally, deep packet inspection (DPI) enables networks to examine the origin and destination as well as the content of data packets (header and payload). These technologies look for protocol non-compliance, spam, harmful code or any specific kind of data that the network wants to monitor. The feature of DPI technology that makes it an important subject of study is the range of uses it can be put to, varying from real-time analysis of packets to the interception, storage and analysis of their contents.[16]
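The contrast between the categories can be shown with a toy Python sketch. The blacklist, the payload signature and the parsed-packet record below are invented for illustration; real inspection equipment operates on raw wire formats rather than Python dictionaries.

```python
# Toy contrast: shallow inspection reads header fields only,
# deep inspection reads the payload itself.

HEADER_BLACKLIST = {"203.0.113.7"}          # example source IP to drop
PAYLOAD_SIGNATURE = b"BitTorrent protocol"  # marker some P2P handshakes carry

def shallow_inspect(src_ip, dst_port):
    """Header fields only, as a firewall would check them."""
    return src_ip in HEADER_BLACKLIST

def deep_inspect(payload):
    """Looking inside the payload is what makes the inspection 'deep'."""
    return PAYLOAD_SIGNATURE in payload

packet = {"src_ip": "198.51.100.2", "dst_port": 6881,
          "payload": b"\x13BitTorrent protocol..."}

print(shallow_inspect(packet["src_ip"], packet["dst_port"]))  # False: header looks clean
print(deep_inspect(packet["payload"]))                        # True: payload reveals the application
```

The same packet passes a header check but is flagged once its payload is read, which is exactly why DPI raises privacy questions that shallow inspection does not.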
The different purposes of DPI
Network Management and QoS
The primary justification presented for DPI is network management, as a means to guarantee a certain minimum Quality of Service (QoS). QoS, as a value conflicting with the objectives of network neutrality, has emerged as a significant point in this discussion. Much like network neutrality, QoS is a term often used in vague, general and non-definitive ways. The factors that come into play in QoS are network-imposed delay, jitter, bandwidth and reliability. Delay, as the name suggests, is the time taken for a packet to pass from the sender to the receiver; higher levels of delay are characterized by more data packets held 'in transit' in the network. [17] A paper by Paul Ferguson and Geoff Huston described TCP as a 'self-clocking' protocol,[18] meaning that the transmission rate of the sender is adjusted according to the rate of reception by the receiver. As delay, and the consequent stress on the protocol, increases, this feedback ability begins to lose its sensitivity. This becomes most problematic for VoIP and video applications. The idea of QoS generally entails consistent service quality, with low delay, low jitter and high reliability, achieved through preferential treatment of some traffic on criteria built around its greater latency sensitivity. This is where deep packet inspection comes into play. In 1991, Cisco pioneered a new kind of router that could inspect the data packets flowing through the network. DPI is able to look inside packets and their content, enabling it to classify packets according to a formulated policy. DPI, which began as a security tool, is powerful because it allows ISPs to limit or block specific applications, or to improve the performance of applications in telephony, streaming and real-time gaming. Very few scholars believe in an all-or-nothing approach to network neutrality and QoS, and the debate often comes down to which forms of differentiation are reasonable for service providers to practice. [19]
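Delay and jitter can be illustrated with a few lines of Python. The timestamps are invented, and the jitter here is the simple mean of successive delay differences (real systems such as RTP use a smoothed estimator instead).

```python
# Delay is per-packet transit time; jitter is the variation between
# successive delays. Values below are invented for illustration.

delays_ms = [40, 42, 41, 95, 43]  # transit times of five packets

mean_delay = sum(delays_ms) / len(delays_ms)
jitter = sum(abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])) / (len(delays_ms) - 1)

print(f"mean delay = {mean_delay:.1f} ms, jitter = {jitter:.2f} ms")
# mean delay = 52.2 ms, jitter = 27.25 ms
```

The single spike to 95 ms barely moves the mean delay but dominates the jitter, which is exactly the behaviour that degrades VoIP and video far more than bulk transfers.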
Security
Deep packet inspection was initially intended as a measure to manage the network and protect it from transmitting malicious programs. As mentioned above, shallow packet inspection was used to secure LANs and keep out certain kinds of unwanted traffic. [20] DPI is used for identical purposes where a 'deeper' inspection, examining the payload along with the header information, is felt to be useful for enhancing security.
Surveillance
The third purpose of DPI is what concerns privacy theorists the most. The fact that DPI technologies give network operators access to the actual content of data packets puts them in a position of great power, as well as making them susceptible to significant pressure from the state. [21] For instance, in the US, ISPs are required to conform to the provisions of the Communications Assistance for Law Enforcement Act (CALEA), which means they need to have some surveillance capacities designed into their systems. What disturbs privacy theorists more than the use of DPI for surveillance under legislation like CALEA are the other alleged uses by organisations like the National Security Agency, through back-end access to information via the ISPs. Aside from the US government, there have been various reports of the use of DPI by governments in countries like China,[22] Malaysia[23] and Singapore. [24]
Behavioral targeting
DPI also enables very granular tracking of the online activities of Internet users. This information is invaluable for the behavioural targeting of content and advertising. Traditionally, such targeting has been done through cookies and other tracking software; DPI offers ISPs and their advertising partners a new way to do what has so far been possible only through web-based tools. DPI enables ISPs to monitor the contents of data packets and use them to create profiles of users, which can later be employed for purposes such as targeted advertising. [25]
Impact on Privacy
Each of the above use-cases has significant implications for the privacy of Internet users as the technology in question involves access, tracking or retention of their online communication and usage activity.
Alissa Cooper compares DPI with other technologies that carry out content inspection, such as caching services and individual users employing firewalls or packet sniffers. She argues that one of the most distinguishing features of DPI is the potential for "mission creep." [26] Kevin Werbach writes that while networks may deploy DPI for implementation under CALEA or for peer-to-peer traffic shaping, once deployed, DPI techniques can be used for completely different purposes, such as pattern matching of intercepted content and the storage of raw data or of conclusions drawn from it.[27] This scope for mission creep is all the more problematic because it is completely invisible. As opposed to technologies that rely on cookies or other web-based services, the inspection occurs not at the end points but somewhere in the middle of the network, often without leaving any traces on the user's system, thus rendering it virtually undiscoverable.
Much like other forms of surveillance, DPI threatens the sense that the web is a space where people can engage freely with a wide range of people and services. For such a space to continue to exist, it is important for people to feel secure about their communications and transactions on the medium. This trust is severely harmed by a sense that users are being surveilled and their communications intercepted, with an obvious chilling effect on free speech and a potential impact on electronic commerce.[28]
Alissa Cooper also points out another way in which DPI differs from other content tracking technologies. As DPI is deployed by the ISPs, it creates a greater barrier to opting out and choosing another service, since individuals have only limited options as far as ISPs are concerned. Christopher Parsons reviews ISPs using DPI technology in the UK, US and Canada, and notes that various ISPs do state in their terms of service that they use DPI for network management purposes. However, this information is often not as easily accessible as the terms and conditions of online services. Also, as opposed to online services, where it is relatively easy to migrate to another service, owing both to the presence of more options and to the ease of migration, changing one's ISP is a much longer and more difficult process.[29]
Measures to mitigate risk
Currently, there is no regulatory framework in India that governs DPI technology in any way. The International Telecommunications Union (ITU) prescribes a standard for DPI;[30] however, the standard does not engage with any questions of privacy, and it requires all DPI technologies to be capable of identifying payload data and prescribes classification rules for specific applications, thus conflicting with notions of application agnosticism in network management. More importantly, its requirements to identify, decrypt and analyse tunnelled and encrypted data threaten the reasonable expectation of privacy in sending and receiving encrypted communication. In this final section, I look at some principles and practices that may be evolved in order to mitigate the privacy risks posed by DPI technology.
Limiting 'depth' and breadth
It has been argued that what DPI technology inherently does is match patterns in the inspected content against a pre-defined list relevant to the purpose for which DPI is employed. Much like the data minimization principles applicable to data controllers and processors, it is possible for network operators to minimize the depth of the inspection (restricting it to header information only, or to limited payload information) so long as this serves the purpose at hand. For instance, where an ISP is looking to identify peer-to-peer traffic, there are protocols which declare their names in the application header itself. Similarly, a network operator looking to generate usage data about email traffic can do so simply by looking at port numbers and checking them against common email ports.[31] However, this mitigation strategy may not work well for other use cases, such as blocking malicious software or prohibited content, or monitoring for the sake of behavioural advertising.
While depth refers to the degree of inspection within data packets, breadth refers to the volume of packets being inspected. Alissa Cooper argues that for many DPI use cases, it may be possible to rely on pattern matching on only the first few data packets in a flow in order to gather sufficient data to take the appropriate action. Cooper uses the same peer-to-peer example: in some cases, the protocol name may appear in the header of only the first packet of a flow between two peers. In such circumstances, the network operator need not look beyond the header of the first packet in a flow, and can apply the network management rule to the entire flow.[32]
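A minimal Python sketch of the two minimisation strategies follows; the port list and flow record are illustrative assumptions, not taken from any deployed system.

```python
# Limit *depth*: read only the destination port from the header.
# Limit *breadth*: classify from the first packet and apply the
# verdict to the whole flow.

COMMON_EMAIL_PORTS = {25, 110, 143, 465, 587, 993, 995}  # SMTP/POP3/IMAP and TLS variants

def classify_flow(first_packet_dst_port):
    # Depth limited: only a header field is examined, never the payload.
    return "email" if first_packet_dst_port in COMMON_EMAIL_PORTS else "other"

flow = {"dst_port": 993, "packet_count": 4812}
label = classify_flow(flow["dst_port"])  # breadth limited: first packet only
print(f"flow of {flow['packet_count']} packets labelled '{label}' without payload inspection")
```

The whole flow is labelled from one header field of one packet, which is the sense in which depth and breadth minimisation can keep DPI-style classification from touching content at all.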
Data retention
Aside from the depth and breadth of inspection, another important question is whether, and for how long, data needs to be retained. Not all use cases require data retention, and even where DPI is used for behavioural advertising, only the conclusions drawn may be retained, instead of the payload data itself.
Transparency
One of the issues is that DPI technology is developed and deployed outside the purview of standards organizations like the ISO. Hence, there has been no open, transparent standards development process in which participants deliberate on the impact of the technology. It is important for DPI to undergo such processes, and for them to be inclusive, with participation by non-engineering stakeholders who can highlight public policy issues such as privacy. Further, aside from the technology itself, the practices of networks need to be more transparent: [33] the presence of DPI, the level of detail being inspected or retained, and the purpose of its deployment can all be disclosed. Some ISPs provide some of these details in their terms of service and website notices. [34] However, as opposed to web-based services, users have limited interaction with their ISP. It would be useful for ISPs to enable greater engagement with their users and make their practices more transparent.
Conclusion
The very nature of DPI technology renders some aspects of recognized privacy principles, like notice and consent, obsolete. The current privacy frameworks under the FIPPs[35] and the OECD guidelines [36] rely on the idea of empowering individuals by providing them with knowledge that enables them to make informed choices. However, for this liberal conception of privacy to function meaningfully, real and genuine alternatives must be presented to the individual. While principles like data minimisation, necessity and proportionality, and purpose limitation can be instrumental in ensuring that DPI technology is used only for legitimate purposes, without effective opt-out mechanisms, and given the limited capacity of individuals to assess the risks, the efficacy of privacy principles may be far from satisfactory.
The ongoing Aadhaar case and a host of surveillance projects like CMS, NATGRID, NETRA[37] and NMAC [38] have raised concerns about the state conducting mass surveillance, particularly of online content. In this regard, it is all the more important to recognise the potential impact of deep packet inspection technologies on the privacy rights of individuals. Earlier, the Centre for Internet and Society had filed Right to Information applications with the Department of Telecommunications, Government of India regarding the use of DPI, and the government had responded that there was no direction/reference to the ISPs to employ DPI technology. [39] Similarly, MTNL responded to the RTI applications and denied using the technology.[40] It is notable, though, that they did not respond to the questions about the traffic management policies they follow. Thus, so far there has been little clarity on the actual usage of DPI technology by ISPs.
[1] Ashish Mishra, "India's Net Neutrality Crusaders", available at http://mintonsunday.livemint.com/news/indias-net-neutrality-crusaders/2.3.2289565628.html
[3] Vinton Cerf and Robert Kahn, "A protocol for packet network intercommunication", available at https://www.semanticscholar.org/paper/A-protocol-for-packet-network-intercommunication-Cerf-Kahn/7b2fdcdfeb5ad8a4adf688eb02ce18b2c38fed7a
[4] Paul Ganley and Ben Algove, "Network Neutrality-A User's Guide", available at http://wiki.commres.org/pds/NetworkNeutrality/NetNeutrality.pdf
[5] J H Saltzer, D D Clark and D P Reed, "End-to-End arguments in System Design", available at http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf
[6] Supra Note 4.
[7] Jonathan Zittrain, The future of Internet - and how to stop it, (Yale University Press and Penguin UK, 2008) available at https://dash.harvard.edu/bitstream/handle/1/4455262/Zittrain_Future%20of%20the%20Internet.pdf?sequence=1
[8] Alissa Cooper, How Regulation and Competition Influence Discrimination in Broadband Traffic Management: A Comparative Study of Net Neutrality in the United States and the United Kingdom available at http://ora.ox.ac.uk/objects/uuid:757d85af-ec4d-4d8a-86ab-4dec86dab568
[9] Id .
[10] Christopher Parsons, "The Politics of Deep Packet Inspection: What Drives Surveillance by Internet Service Providers?", available at https://www.christopher-parsons.com/the-politics-of-deep-packet-inspection-what-drives-surveillance-by-internet-service-providers/ at 15.
[11] Ibid at 16.
[12] Id .
[13] Ibid at 19.
[14] Id .
[15] Id .
[16] Jay Klein, "Digging Deeper Into Deep Packet Inspection (DPI)", available at http://spi.unob.cz/papers/2007/2007-06.pdf
[17] Tim Wu, "Network Neutrality: Broadband Discrimination", available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=388863
[18] Paul Ferguson and Geoff Huston, "Quality of Service on the Internet: Fact, Fiction, or Compromise?", available at http://www.potaroo.net/papers/1998-6-qos/qos.pdf
[19] Barbara van Schewick, "Network Neutrality and Quality of Service: What a non-discrimination Rule should look like", available at http://cyberlaw.stanford.edu/downloads/20120611-NetworkNeutrality.pdf
[20] Supra Note 14.
[21] Paul Ohm, "The Rise and Fall of Invasive ISP Surveillance," available at http://paulohm.com/classes/infopriv10/files/ExcerptOhmISPSurveillance.pdf
[22] Ben Elgin and Bruce Einhorn, "The great firewall of China", available at http://www.bloomberg.com/news/articles/2006-01-22/the-great-firewall-of-china .
[23] Mike Wheatley, "Malaysia's Web Heavily Censored Before Controversial Elections", available at http://siliconangle.com/blog/2013/05/06/malaysias-web-heavily-censored-before-controversial-elections/
[24] Fazal Majid, "Deep packet inspection rears its ugly head", available at https://majid.info/blog/telco-snooping/.
[25] Alissa Cooper, "Doing the DPI Dance: Assessing the Privacy Impact of Deep Packet Inspection," in W. Aspray and P. Doty (Eds.), Privacy in America: Interdisciplinary Perspectives, Plymouth, UK: Scarecrow Press, 2011 at 151.
[26] Ibid at 148.
[27] Kevin Werbach, "Breaking the Ice: Rethinking Telecommunications Law for the Digital Age", Journal of Telecommunications and High Technology, available at http://www.jthtl.org/articles.php?volume=4
[28] Supra Note 25 at 149.
[29] Supra Note 25 at 147.
[30] International Telecommunications Union, Recommendation ITU-T.Y.2770, Requirements for Deep Packet Inspection in next generation networks, available at https://www.itu.int/rec/T-REC-Y.2770-201211-I/en.
[31] Supra Note 25 at 154.
[32] Ibid at 156.
[33] Supra Note 10.
[34] Paul Ohm, "The Rise and Fall of Invasive ISP Surveillance", available at http://paulohm.com/classes/infopriv10/files/ExcerptOhmISPSurveillance.pdf .
[36] https://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm
[37] "India's Surveillance State" Software Freedom Law Centre, available at http://sflc.in/indias-surveillance-state-our-report-on-communications-surveillance-in-india/
[38] Amber Sinha, "Are we losing our right to privacy and freedom on speech on Indian Internet", DNA, available at http://www.dnaindia.com/scitech/column-are-we-losing-the-right-to-privacy-and-freedom-of-speech-on-indian-internet-2187527
[40] Smita Mujumdar, "Use of DPI Technology by ISPs - Response by the Department of Telecommunications" available at http://cis-india.org/telecom/dot-response-to-rti-on-use-of-dpi-technology-by-isps
ISO/IEC JTC 1 SC 27 Working Group Meetings - A Summary
As a member of Working Group 5 (Information technology – Security techniques – Identity management and privacy technologies), we attended the following meetings:
- WD 29184 Guidelines for online privacy notices and consent: As technological advancement and the wider availability of communication infrastructures have enabled the collection and analysis of information regarding individuals' activities, and as people have become aware of the privacy implications of the same, this standard aims to provide a framework for organizations to give consumers clear and easily understandable information about how the organization will process their PII.
- SP PII Protection Considerations for Smartphone App Providers: A one-year project proposed during the ISO/IEC JTC 1 SC 27 Working Group Meetings in Jaipur in 2015, this group aims to build a privacy framework for mobile applications to guide app developers, along the lines of the ISO/IEC 29100 international standard (which defines a broad privacy framework for information technologies). It responds to excessive data collection by apps in the absence of consent or justification, the lack of comprehensive policies, non-transparent practices, and the lack of adequate choice and consent, and will work towards a harmonized and standardized privacy structure for mobile application data policies and practices in order to protect the rights of individuals.
- WD 20889 Privacy enhancing data de-identification techniques: Given the importance of data de-identification techniques for PII, which enable the benefits of data processing to be exploited while maintaining compliance with regulatory requirements and the relevant ISO/IEC 29100 privacy principles, the selection, design, use and assessment of these techniques need to be performed appropriately in order to effectively address the risks of re-identification in a given context.
- SP Privacy in Smart Cities: A one-year project proposed during the ISO/IEC JTC 1 SC 27 Working Group Meetings in Jaipur, this group saw contributions from Japan, India and PRIPARE in the EU, among others. The proposed scope is to produce a framework addressing data ownership, communication channels, privacy risk and impact assessment in smart cities, and data lifecycle privacy governance for smart cities, and to develop use cases and contexts for privacy controls with respect to the data lifecycle in smart cities, along with detailed documentation of privacy controls for smart cities aligned to the primary controls and associated sub-controls.
Inputs to the Working Group on Enhanced Cooperation on Public Policy Issues Pertaining to the Internet (WGEC)
What are the high level characteristics of enhanced cooperation?
- The Tunis Agenda leaves the term “enhanced cooperation” unclearly defined. What is clear, however, is that enhanced cooperation is distinct from the Internet Governance Forum.
- According to Paragraph 69 of the Tunis Agenda, enhanced cooperation will enable "governments, on an equal footing, to carry out their roles and responsibilities, in international public policy issues pertaining to the Internet, but not in the day-to-day technical and operational matters, that do not impact on international public policy issues." In other words, enhanced cooperation should result in the development and enforcement of international public policy; only "day-to-day technical and operational matters" with no public policy impact, together with national public policy, are exempt from government-to-government enhanced cooperation.
- According to Paragraph 70, enhanced cooperation includes the "development of globally-applicable principles on public policy issues associated with the coordination and management of critical Internet resources." According to the paragraph, "organizations responsible for essential tasks associated with the Internet" should create an environment that facilitates the development of these principles, using "relevant international organizations". In other words, both Internet institutions [ICANN, ISOC and the RIRs] and multilateral organisations [WIPO, ITU, UNESCO etc.] should be used to develop principles.
- Paragraph 71 gives some further clarity. According to this paragraph, the process for enhanced cooperation should 1) be “started by the UN Secretary General” 2) "involve all stakeholders in their respective roles" 3) "proceed as quickly as possible" 4) be "consistent with legal process" 5) "be responsive to innovation".
- Again according to Paragraph 71, enhanced cooperation should be commenced by "relevant organisations" and should involve "all stakeholders". But only the "relevant organisations shall be requested to provide annual performance reports." Enhanced cooperation as envisioned in the Tunis Agenda, therefore, calls for a multistakeholder model in which each constituency leads the process of developing principles and self-regulatory mechanisms: not a model that involves all stakeholders at all stages, but rather one that requires participation from the relevant stakeholders, in accordance with the issue at hand, at the relevant stage.
- For government-to-government enhanced cooperation, governments need to agree on what is within the exclusive realm of "national public policy", for example national security, intellectual property policy, and the protection of children online. Governments also need to agree on what is within the remit of "international public policy", for example cross-border taxation, cross-border criminal investigations, and cross-border hate speech. Once this is done, the governments of the world should pursue the development and enforcement of international law and norms at the appropriate forums where these exist, or alternatively create new forums that are appropriate.
- For enhanced cooperation with respect to non-government "relevant organisations" [different sub-groups within the private sector, technical community and civil society], we believe that the requirements of Paragraph 71 can be understood to mean that enhanced cooperation is the "development of self-regulatory norms" as a complement to the traditional multilateral norm-setting and international law-making envisioned in Paragraph 69. In other words, the real utility of the multi-stakeholder model is self-regulation by the private sector. Besides the government, it is the private sector that has the greatest capacity for harm and is therefore in urgent need of regulation. The multistakeholder model will best serve its purpose if the end result is that the private sector self-regulates. Most of the harm emerging from large corporations can only be addressed if they agree amongst themselves. Having a centralised or homogenous model of enhanced cooperation will not suffice; the model of cooperation should be flexible in accordance with the issue being brought to the table.
Taking into consideration the work of the previous WGEC and the Tunis Agenda, particularly paragraphs 69-71, what kind of recommendations should we consider?
The previous work of the WGEC is useful as a mapping exercise. However, the working group was unable to agree on a definition of Enhanced Cooperation. In our previous response we have clearly indicated that enhanced cooperation is 1) the development of international law and norms by governments at appropriate international/multilateral fora; 2) the articulation of principles by "organizations responsible for essential tasks associated with the Internet" and "relevant international organizations"; and 3) the development of self-regulatory norms and enforcement mechanisms by the private sector, technical community and civil society, with a priority for the private sector because, after governments, it has the greatest potential for harm. To repeat, the Tunis Agenda makes it very clear that enhanced cooperation is distinct from the IGF. If the IGF is only the learning forum, we need a governance forum like ICANN so that different constituencies can develop self-regulatory norms and enforcement mechanisms with inputs from other stakeholder constituencies and the public at large.
The Curious Case of Poor Security in the Indian Twitterverse
The article was originally published in the Wire on December 15, 2016.
The term legion, an oft-referenced identity in popular culture, has recently attained notoriety in Indian cyberspace due to the spate of hacks carried out by a group of hackers calling themselves 'Legion Crew'. The group has compromised four Twitter and/or email accounts in the past two weeks, with confirmed hacks of Rahul Gandhi, Vijay Mallya, Barkha Dutt and Ravish Kumar. Lalit Modi, Apollo Hospitals and the parliament (sansad) have been singled out as future targets, with dire warnings of catastrophic data leaks if the group were to be investigated by the authorities. Ethical opinion on the hacks has been divided, with some segments of the public supporting the supposedly hacktivist outlook of the group while others condemn their actions as reckless and invasive. In the meantime, no individuals or entities have been accused of the hacks by the police, with most reports citing the foreign origin of the hacks as the biggest impediment to the investigations.
A technical and legal perspective
The hacks first began against the politician Gandhi, whose Twitter account was hacked almost two weeks ago, with various demeaning tweets being posted for a few hours before access to the account was restored to the rightful owner. The same hacks were then carried out on business tycoon Mallya's Twitter account last Friday, but this time around, his bank details (apparently obtained from his compromised email accounts) were also leaked to the public via Twitter. Similar hacks targeting both the Twitter and email accounts of Dutt and Kumar were carried out over the past weekend. Sensitive details and data dumps (around 1.5 GB in size) of the journalists were released to the public, along with escalating warnings about future attacks. The data dumps released by the hackers seemed to indicate that the hackers had obtained far more information than they had disclosed via the Twitter hacks and were willing to leverage this data as ransom. Twitter, via both their Indian policy representatives and their international office, has denied any compromise to their systems and has claimed that all accounts were legitimately accessed with valid credentials at the time of the hacks. This leads to three main questions: How were the Twitter and email accounts hacked? What is the recourse, especially in terms of investigation, available to the afflicted parties and the authorities? What can potential targets do to secure their online presence from such attacks?
Regarding their technical nature, all of these hacks were sustained compromises that lasted for a few hours each (a long time in cyberspace) and seemed to reflect only a fragment of the power the hackers held over each individual's online presence. Considering Twitter's denial that the attacks were due to a security flaw on their end, as well as the fact that legitimate login details were used to gain access to the accounts, a rather simple investigation shows that the most likely attack vector used by the Legion Crew for these hacks was a DNS hijacking attack in combination with a Man in the Middle (MITM) attack. These methods abuse the rather simple and (by default) insecure DNS system that is responsible for directing the world's Internet traffic, including email. While the use of DNS to map websites to the IP addresses of the systems where they are physically hosted (for instance, www.thewire.in maps to 52.76.81.135 at the time of writing this article) is fairly well known, the DNS system also directs most of the world's email. Similar to the DNS A and AAAA records used for websites, DNS MX records direct email sent to a domain name to the correct email servers, where it is processed for storage or forwarding, as required. If these MX records are compromised, then hackers can easily redirect emails sent to legitimate email addresses on the domain name (for instance, [email protected]) to whatever system they want, including other compromised email addresses.
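To make the role of MX records concrete, here is a minimal sketch using the third-party dnspython package (the package choice and the queried domain are illustrative assumptions, not part of the original reporting) that lists where a domain's mail is actually delivered:

```python
# Minimal sketch: list a domain's mail exchangers via its DNS MX records.
# Assumes the third-party "dnspython" package (pip install dnspython);
# the domain below is purely illustrative.
import dns.resolver

def mail_servers(domain: str) -> list[str]:
    """Return the domain's mail exchanger hostnames, lowest preference first."""
    answers = dns.resolver.resolve(domain, "MX")
    return [str(r.exchange) for r in sorted(answers, key=lambda r: r.preference)]

if __name__ == "__main__":
    # An unexpected change in this list over time is an early warning sign
    # of the kind of MX-record hijack described in the article.
    print(mail_servers("example.com"))
```

Periodically re-running such a query and comparing the results is one simple way a domain owner can notice the unexpected MX change described above.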
The original operator of the email account is unaware of any email redirected in this way and has no way of knowing the account has been hacked until they notice they are not consistently receiving emails sent to them, which in well-planned hacks can take many weeks or even months. These attacks can be further augmented if the hackers also decide to implement an MITM. In an MITM attack, hackers redirect all traffic attempting to reach an email account to a system they operate, by changing the MX records on the domain name server to point to a malicious system. They can access and store all these emails (along with attachments) on the malicious system and also manipulate the information contained in them. Then, either in bulk or selectively, they can re-send the emails from their own servers to the original email accounts for which they were intended. The owner will then receive the emails in their inbox under the impression that they are private and are being received for the first time. This entire MITM process can be set up so that the emails are rerouted to the compromised servers by MX record changes, stored for future analysis and then forwarded to the original recipient account in a matter of seconds.
Given the reliance placed by most websites on email IDs as a primary form of identity authentication, compromising an email ID can give anyone who controls it access to the owner's login details for most social networking, entertainment and even banking websites. This is because of the password reset or forgotten password feature available in most services, which by default use only email IDs to authenticate account ownership, allowing the user to reset their password via a reset email sent to their registered email account. Once they gain access to the compromised accounts, hackers can perform these resets with impunity, granting them unrestricted access to the online presence of the owner. In fact, hackers can use these attacks to perform password resets on the email accounts themselves, allowing them unlimited access to past conversations, records and login details that may be stored in the email accounts.
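The weakness described above can be summarised in a few lines of code. The sketch below is purely illustrative (the function names, URL and storage are our assumptions) and models a typical email-based reset flow: possession of the emailed token is the only proof of identity, so whoever controls the email path controls the account.

```python
# Illustrative sketch of an email-based password-reset flow.
# Possession of the emailed token is the ONLY proof of identity here.
import secrets

reset_tokens: dict[str, str] = {}  # token -> account email

def request_reset(account_email: str) -> None:
    token = secrets.token_urlsafe(32)
    reset_tokens[token] = account_email
    # The reset link is emailed to account_email. If the domain's MX records
    # are hijacked, this email (and hence the token) goes to the attacker.
    print(f"emailing reset link: https://example.com/reset?token={token}")

def reset_password(token: str, new_password: str) -> str:
    # No second factor: anyone presenting a valid token takes the account.
    account = reset_tokens.pop(token)
    return f"password for {account} changed"
```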
Keeping this background in mind, the most likely methodology behind the hacks is quite simple to explain. The Legion Crew most likely first compromised the email systems of these celebrities by changing the DNS MX records of the domains behind the email IDs registered with Twitter as login IDs for these accounts. This allowed them to redirect emails sent to these email IDs to an alternative system of their choosing. They then used the password reset feature of Twitter, which is similar to those provided by most social networking services, to reset the passwords of these accounts. Due to the compromise of the MX records of the domain names used by these celebrities, the password reset emails, instead of reaching the inboxes of the entities operating the accounts, were sent to the alternative systems set up by the hackers solely for receiving such emails. After receiving each email, it was a simple matter of clicking on the password reset link in the email and changing the passwords of these accounts to unique passwords known only to the hackers.
The hackers then would (and did) have complete control of the accounts until the service provider itself intervened, provided an emergency reset and recommended rectifying the MX records by removing the malicious entries inserted by the hackers. The only question left to be answered in the methodology followed by the hackers is how they gained access to the MX records, as DNS records can only be changed using the dashboard of the domain name provider, which in turn is protected by a login password. Allegations have arisen that most (if not all) of the compromised accounts used 'Net4india' as their domain name provider. It is therefore very possible that a vulnerability in Net4india's systems, or an internal compromise involving Net4india personnel, led to the login details of the domain name accounts being compromised. Such security and personnel breaches could have provided access to the domain name management dashboards for the hacked celebrities' email domains, after which the attack would have followed the methodology described above by changing the MX records to point to a malicious system.
Jurisdictional issues
The legal avenues available to the affected parties are fairly clear within the Information Technology Act, 2000 and the Indian Penal Code, 1860. Section 66 and Section 66C of the IT Act, which govern hacking and misuse of passwords respectively, would apply, along with the possible application of the IPC provisions concerning mischief (Section 425), cheating (Section 420) and extortion (Section 383). However, recent investigations have already begun to show that the jurisdictional problems that plague cybercrime investigations are also hindering the investigation of these hacks. The global nature of the internet ensures that the operating servers, attackers, compromised users and unwitting intermediaries are more often than not all located in different jurisdictions, each with their own set of protections, vulnerabilities and laws. For example, investigations by the Delhi police into the IP addresses that accessed Gandhi's Twitter account during the hack have shown that in the space of a few hours the account was accessed from the US, Sweden, Canada, Thailand and Romania. Of course, given the pervasive availability of IP spoofing tools, none of these countries is indicative of the actual location of the hacker. Gaining information from these different servers, in order to trace the hacker's digital geographical journey, is a bureaucratic and legal nightmare, with long delays, unanswered Mutual Legal Assistance Treaty requests and unresponsive service providers being the norm. As in most cybercrime investigations, if the hackers take certain basic steps to mask their identities and geographical location, their odds of being caught by traditional law enforcement are negligible. Investigations that have successfully caught such hacker groups, such as the FBI's Project Safe Childhood against child pornography on the Tor network, take millions of dollars, months of effort and a high level of skill. Whether these Twitter hacks will generate the sustained, multijurisdictional effort across law enforcement agencies in India required to solve such crimes remains to be seen. Until then, the questions of attribution, liability and justice will remain unanswered, as in a majority of large-scale cyber hacks.
Possible measures
Given that various other targets have already been singled out by the hacker group, the need for vigilance and improved security is greater than ever. One basic measure, easily available within Twitter and most other services, is enabling two-factor authentication (2FA) on both email and social media accounts. 2FA requires the user to input a One Time Password (OTP), generated on a separate device (such as a mobile phone), at the time of logging in or resetting the password for the account. This means that even if the hackers obtain the password or compromise the emails being sent to an account, they will be unable to log into the account without also being in physical possession of the device running the OTP generation application. If this option, which is already available within Twitter, had been enabled for the four accounts that were hacked, for example, they would have remained protected despite the email account compromise. Further, domain name service providers should implement Domain Name System Security Extensions (DNSSEC) and DomainKeys Identified Mail (DKIM) to prevent the kind of DNS and email hijacking carried out via the Net4India servers in these Twitter attacks. Using HTTPS on all pages of a website will also go a long way in preventing spoofing and securing user information in transit. Finally, nothing can replace customer education and awareness as the most effective tool to combat the growing cyber threats faced by the average netizen. The weakest link in a digital system is often the end user. A core set of security measures that can percolate into common practice will serve as the first and best line of defence against such attacks in the future, for the common man and celebrities alike.
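To illustrate the OTP mechanism mentioned above, the following minimal sketch uses the third-party pyotp package (an assumption for illustration; Twitter's actual 2FA implementation is not public) to show the time-based one-time password (TOTP) scheme used by most authenticator apps:

```python
# Minimal sketch of TOTP-based two-factor authentication.
# Assumes the third-party "pyotp" package (pip install pyotp).
import pyotp

# Provisioned once per account, typically shared with the phone via a QR code.
secret = pyotp.random_base32()

totp = pyotp.TOTP(secret)  # 30-second time step, 6 digits by default

code = totp.now()          # the code the authenticator app displays
assert totp.verify(code)   # the check the server performs at login
```

Because the code is derived from a shared secret and the current time, an attacker who has only the password, or only intercepted reset emails, still cannot produce a valid code.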
Incident Response Requirements in Indian Law
A prominent recent example of such a cyber incident in India is the data breach involving an alleged 3.2 million debit cards.[1] In the case of this hack, the payment processing networks, such as the National Payments Corporation of India, Visa and Mastercard, informed the banks regarding the leaks, based on which the banks started the process of blocking and then reissuing the compromised cards. It has also been reported that the banks failed to report this incident to the Indian Computer Emergency Response Team (CERT-In) even though they are required by law to do so.[2] Such risks are increasingly faced by consumers, businesses, and governments. A person who is a victim of a cyber incident usually looks for assistance to the service provider and to government agencies, which are prepared to investigate the incident, mitigate its consequences, and help prevent future incidents. For an effective response to cyber incidents, it is essential that the authorities have as much knowledge about the incident as possible, as soon as possible. It is also critical that this information is communicated to the public. This underlines the importance of reporting cyber incidents as a tool for making the internet and digital infrastructure secure. Like any other crime, an Internet-based crime should be reported to the law enforcement authorities assigned to tackle it at the local, state, national, or international level, depending on the nature and scope of the criminal act. This is the first in a series of blog posts highlighting the importance of incident reporting in the Indian regulatory context, with a view to examining the Indian regulations dealing with incident reporting and the ultimate objective of fostering a more robust incident reporting environment in India.
Incident Reporting under CERT Rules
In India, section 70-B of the Information Technology Act, 2000 (the “IT Act”) gives the Central Government the power to appoint an agency of the government to be called the Indian Computer Emergency Response Team. In pursuance of the said provision, the Central Government issued the Information Technology (The Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules, 2013 (the “CERT Rules”), which provide for the location and manner of functioning of the Indian Computer Emergency Response Team (CERT-In). Rule 12 of the CERT Rules gives every person, company or organisation the option to report cyber security incidents to CERT-In. It also places an obligation on them to mandatorily report the following kinds of incidents as early as possible (a purely illustrative sketch of a structured report follows the list):
- Targeted scanning/probing of critical networks/systems;
- Compromise of critical systems/information;
- Unauthorized access of IT systems/data;
- Defacement of website or intrusion into a website and unauthorized changes such as inserting malicious code, links to external websites, etc.;
- Malicious code attacks such as spreading of virus/worm/Trojan/botnets/spyware;
- Attacks on servers such as database, mail, and DNS and network devices such as routers;
- Identity theft, spoofing and phishing attacks;
- Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks;
- Attacks on critical infrastructure, SCADA systems and wireless networks;
- Attacks on applications such as e-governance, e-commerce, etc.
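Purely as an illustration of how an organisation might structure such reports internally before submitting them, the sketch below models the mandatory categories as a simple data structure. The field and category names are our assumptions; this is not CERT-In's prescribed reporting format.

```python
# Illustrative sketch only: NOT CERT-In's prescribed format.
# Field and category names are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

MANDATORY_CATEGORIES = {
    "targeted_scanning", "critical_system_compromise", "unauthorized_access",
    "website_defacement", "malicious_code", "server_attack",
    "identity_theft_phishing", "dos_ddos", "critical_infrastructure_attack",
    "application_attack",
}

@dataclass
class IncidentReport:
    category: str                 # one of MANDATORY_CATEGORIES above
    description: str
    affected_systems: list[str]
    detected_at: datetime
    reporter_contact: str

    def __post_init__(self) -> None:
        if self.category not in MANDATORY_CATEGORIES:
            raise ValueError(f"unknown incident category: {self.category}")

report = IncidentReport(
    category="dos_ddos",
    description="Sustained DDoS against public web server",
    affected_systems=["www.example.org"],
    detected_at=datetime.now(timezone.utc),
    reporter_contact="[email protected]",
)
```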
The CERT Rules also impose an obligation on service providers, intermediaries, data centres and body corporates to report cyber incidents within a reasonable time, so that CERT-In has scope for timely action. This mandatory reporting obligation casts a fairly wide net in terms of private sector entities; however, it is notable that, prima facie, the provision does not impose any obligation on government entities to report cyber incidents unless they fall under one of the expressions “service providers”, “data centres”, “intermediaries” or “body corporate”. This would mean that if the data kept with the Registrar General & Census Commissioner of India were hacked in a cyber incident, there would be no statutory obligation under the CERT Rules to report the incident. It is pertinent to mention here that although there is no obligation under law on a government department to report such an incident, such an obligation may be contained in its internal rules and guidelines, etc., which are not readily available.
It is pertinent to note that although the CERT Rules provide for a mandatory obligation to report the cyber incidents listed therein, the Rules themselves do not provide for any penalty for non-compliance. However, this does not mean that there are no consequences for non-compliance; it just means that we have to look to the parent legislation, i.e. the IT Act, for the appropriate penalties. Section 70B(6) gives CERT-In the power to call for information and give directions for the purpose of carrying out its functions. Section 70B(7) provides that any service provider, intermediary, data centre, body corporate or person who fails to provide the information called for, or to comply with a direction under sub-section (6), shall be liable to imprisonment for a period of up to 1 (one) year or a fine of up to Rs. 1 (one) lakh, or both.
It is possible to argue here that sub-section (6) only talks about calls for information by CERT-In, and that the obligation under Rule 12 of the CERT Rules is an obligation placed by the central government and not by CERT-In. It can also be argued that sub-section (6) is only meant for specific requests made by CERT-In for information, and that sub-section (7) only penalises those who do not respond to these specific requests. However, even if these arguments were to be accepted and we were to conclude that a violation of the obligation imposed under Rule 12 would not attract the penalty stipulated under sub-section (7) of section 70B, that does not mean that Rule 12 would be left toothless. Section 44(b) of the IT Act provides that where any person is required under any of the Rules or Regulations under the IT Act to furnish any information within a particular time and fails to do so, s/he may be liable to pay a penalty of up to Rs. 5,000/- for every day such failure continues (so a failure persisting for 30 days, for instance, could attract a penalty of up to Rs. 1,50,000/-). Further, section 45 provides for a penalty of Rs. 25,000/- for any contravention of any of the rules or regulations under the Act for which no other penalty has been provided.
Incident Reporting under Intermediary Guidelines
Section 2(1)(w) of the IT Act defines the term “intermediary” in the following manner:
“intermediary” with respect to any particular electronic record, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web hosting service providers, search engines, online payment sites, online-auction sites, online market places and cyber cafes.
Rule 3(9) of the Information Technology (Intermediaries Guidelines) Rules, 2011 (the “Intermediary Guidelines”) also imposes an obligation on every intermediary to report cyber incidents and share information related to cyber security incidents with CERT-In. Since neither the Intermediary Guidelines nor the IT Act specifically provides for any penalty for non-conformity with Rule 3(9), any enforcement action against an intermediary failing to report a cyber security incident would have to be taken under section 45 of the IT Act, which provides for a penalty of Rs. 25,000/-.
Incident Reporting under the Unified License
Clause 39.10(i) of the Unified License Agreement obliges the telecom company to create facilities for the monitoring of all intrusions, attacks and frauds on its technical facilities and provide reports on the same to the Department of Telecom (DoT). Further clause 39.11(ii) provides that for any breach or inadequate compliance with the terms of the license, the telecom company shall be liable to pay a penalty amount of Rs. 50 crores (Rs. 50,00,00,000) per breach.
Conclusion
It is clear from the above discussion that there is a legal obligation on service providers to report cyber incidents to CERT-In. Presently, however, the penalties prescribed under Indian law may not be enough to incentivise companies to adopt comprehensive and consistent incident response programmes, except in the case of telecom companies under the Unified License Agreement. A fine of Rs. 25,000/- appears inconsequential when compared to the possible dangers and damages that may be caused by a security breach of data containing, for example, credit card details. Further, it is imperative that, apart from the obligation to report a cyber incident to the appropriate authority (CERT-In), there should also be a legal obligation to report it to the data subjects whose data is stolen or put at risk by the breach. A provision requiring notice to the data subjects could go a long way in ensuring that service providers, intermediaries, data centres and body corporates implement the best data security practices, since a breach would then become known to general consumers, leading to bad publicity that could negatively impact the business of the data controller; for a business entity, such an economic incentive may be an effective way to ensure compliance.
As we continue to research incident response, the questions and areas we are exploring include: the ecosystem of incident response, including what is reported, how, and when; appropriate incentives for companies and governments to report incidents; various forms of penalties; the role of cross-border sharing of information and jurisdiction; and best practices for incident reporting and citizen awareness.
Published under Creative Commons License CC BY-SA. Anyone can distribute, remix, tweak, and build upon this document, even for commercial purposes, as long as they credit the creator of this document and license their new creations under terms identical to the license governing this document.
[1] http://www.huffingtonpost.in/2016/10/21/atm-card-hack-what-banks-are-saying-about-india-s-biggest-data/
[2] http://tech.economictimes.indiatimes.com/news/internet/cert-in-had-warned-banks-on-oct-7-about-expected-targeted-attacks-from-pakistan/54991025
Mapping of India’s Cyber Security-Related Bilateral Agreements
Download: Infographic (PDF) and data (XLSX)
The data used for the infographic consists of India's MLATs, cyber security-related MoUs and Joint Statements, and Cyber Frameworks. An MLAT is an agreement between two or more countries, drafted for the purpose of gathering and exchanging information in an effort to enforce public or criminal laws. An MoU (Memorandum of Understanding) is a non-binding agreement between two or more states outlining the terms and details of an understanding, including each party's requirements and responsibilities; it is often the first stage in the formation of a formal contract. For the purpose of this research, we have grouped Joint Statements with MoUs, as both generally entail an informal agreement between two states to strengthen cooperation on certain issues. Lastly, a Cyber Framework consists of standards, guidelines and practices to promote the protection of critical infrastructure. The data accounts for agreements centered on cyber security as well as any agreements mentioning cooperation efforts in cyber security, information security or cybercrime.
The Mapping of India's Cybersecurity-related bilateral agreements was updated on April 12, 2017 with the following changes:
- A new MoU was signed between Australia and India in April 2017, focusing on combating terrorism and civil aviation security. Cybersecurity cooperation is mentioned in the MoU[1].
- A new MoU was signed between Bangladesh and India in April 2017. The Indian Computer Emergency Response Team (CERT-In), Indian Ministry of Electronics and Information Technology and the ICT Division of Bangladesh are the signing parties of the MoU. The agreement focuses on Cooperation in the area of Cyber Security[2].
- A preexisting MoU between France and India was added to the mapping, signed in January of 2016. Officials of both countries agreed to intensify cooperation between the Indian and French security forces in the fields of homeland security, cyber security, Special Forces and intelligence sharing to fight against criminal networks and tackle the common threat of terrorism[3].
- A new MoU was signed between Indonesia and India in March 2017. It focuses on enhancing cooperation in cyber security and intelligence sharing[4].
- A new MoU was signed between Kenya and India in January 2017, with “cyber security” mentioned as one of the key areas of cooperation[5].
- A preexisting MoU between Malaysia and India was added to the mapping, signed in November of 2015. Both sides agreed to promote cooperation and the exchange of information regarding cyber security incident management, technology cooperation and cyber attacks, prevalent policies and best practices and mutual response to cyber security incidents[6].
- A preexisting MoU between Mauritius and India, signed July 2016, was added to the mapping. This is a non-governmental MoU. Leading bourse BSE signed an agreement with Stock Exchange of Mauritius (SEM) for collaboration in areas including cyber security[7].
- A new joint statement between India and Portugal was signed in March 2017. The two countries agreed to set up an institutional mechanism to collaborate in the areas of electronic manufacturing, ITeS, startups, cyber security and e-governance.[8]
- A preexisting MoU, signed between Qatar and India in December of 2016, was added to the mapping. The agreement was regarding a protocol on technical cooperation in cyberspace and combatting cybercrime[9].
- A new MoU was signed between Serbia and India in January 2017, focusing on cooperation in the field of IT, Electronics. The MoU itself does not explicitly mention cybersecurity. However, the MoU calls for cooperation and exchanges in capacity building institutions, which should entail cyber security strengthening[10].
- A preexisting MoU between Singapore and India was added to the mapping. The MoU was signed in January 2016, focusing on the establishment of a formal framework for professional dialogue, CERT-CERT related cooperation for operational readiness and response, collaboration on cyber security technology and research related to smart technologies, exchange of best practices, and professional exchanges of human resource development[11].
- A new joint statement was signed between UAE and India in January 2017, following up on their previous Technical Cooperation MoU signed in February 2016. To further deepen cooperation in this area, they agreed to set up joint Research & Development Centres of Excellence[12].
- A preexisting MoU has been included in the mapping, signed in May of 2016. CERT-In agreed with the UK Ministry of Cabinet Office to promote close cooperation between both countries in the exchange in knowledge and experience in detection, resolution and prevention of security related incidents[13].
- A new MoU between India and the US was signed in March 2017. CERT-In and CERT-US signed a MoU agreeing to promote closer co-operation and exchange of information pertaining to cyber security in accordance with relevant laws, rules and regulations and on the basis of equality, reciprocity and mutual benefit[14].
- A new MoU was signed between Vietnam and India in January 2017, agreeing to promote closer cooperation for exchange of knowledge and experience in detection, resolution and prevention of cyber security incidents between both countries[15].
NOTE: Some preexisting MoUs were added because we initially included only the most recent agreements in the mapping. Upon adding newly signed MoUs, we decided to also keep the preexisting ones and to revisit the other entries to include any preexisting MoUs that were initially excluded for not being the most recent. The visualization will be adjusted accordingly to indicate the number of MoUs per country.
[1]http://www.dnaindia.com/india/report-india-australia-sign-mous-on-combating-terrorism-civil-aviation-security-2393843
[2]http://www.theindependentbd.com/arcprint/details/89237/2017-04-09
[3]http://www.thehindu.com/news/resources/Full-text-of-Joint-Statement-issued-by-India-France/article14019524.ece
[4]http://indianexpress.com/article/india/indianhome-ministry-indonesian-ministry-of-security-and-coordination/
[5]https://telanganatoday.news/india-kenya-focus-defence-security-cooperation-pm
[6]http://economictimes.indiatimes.com/news/economy/foreign-trade/india-and-malaysia-sign-3-mous-including-cyber-security/articleshow/49891897.cms
[7]http://indiatoday.intoday.in/story/bse-mauritius-stock-exchange-tie-up-to-promote-financial-mkts/1/723635.html
[8]http://www.tribuneindia.com/news/business/india-portugal-to-collaborate-in-ites-cyber-security/373666.html
[9]http://naradanews.com/2016/12/india-qatar-sign-agreements-on-visa-cybersecurity-investments/
[10]http://ehub.newsforce.in/cabinet-approves-mou-india-serbia-cooperation-field-electronics/
[11]http://www.businesstimes.com.sg/government-economy/singapore-and-india-strengthen-cooperation-on-cyber-security
[12]http://mea.gov.in/bilateral-documents.htm?dtl/27969/India++UAE+Joint+Statement+during+State+visit+of+Crown+Prince+of+Abu+Dhabi+to+India+January+2426+2017
[13]http://www.bestcurrentaffairs.com/india-uk-mou-cyber-security/
[14]http://www.dqindia.com/india-cert-signs-an-mou-with-us-cert/
[15]http://pib.nic.in/newsite/PrintRelease.aspx?relid=157458
Mapping of Sections in India’s MLAT Agreements
Download: Infographic (PDF) and data (XLSX)
We have found that India’s 39 MLAT documents are worded, formatted and sectioned differently. At the same time, many of the same sections exist across several MLATs. This diagram lists the sections found in the MLAT documents and indicates the treaties in which they were included or not included. To keep the list of sections concise and to more easily pinpoint the key differences between the agreements, we have merged sections that are synonymous in meaning but were worded slightly differently. For example: we would combine “Entry into force and termination” with “Ratification and termination” or “Expenses” with “Costs”.
At the same time, some sections that seemed quite similar and possible to merge were kept separate due to potential key differences that could be overlooked as a result. For example: “Limitation on use” vs. “Limitation on compliance” or “Serving of documents” vs. “Provision of (publicly available) documents/records/objects” remained separate for further analysis and comparison.
These differences in sectioning can be analysed to facilitate a thorough comparison between the effectiveness, efficiency, applicability and enforceability of the various provisions across the MLATs. The purpose of this initial mapping is to provide an overall picture of which sections exist in which MLAT documents. There will be further analysis of these sections to produce a more holistic content-based comparison of the MLATs.
Aggregated Analysis of Sections of MLAT Agreements
Comments on the Report of the Committee on Digital Payments (December 2016)
1. Preliminary
1.1. This submission presents comments by the Centre for Internet and Society (“CIS”) [1] in response to the report of the Committee on Digital Payments, chaired by Mr. Ratan P. Watal, Principal Advisor, NITI Aayog, and constituted by the Ministry of Finance, Government of India (“the report”) [2].
2. The Centre for Internet and Society
2.1. The Centre for Internet and Society, CIS, is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, and open access), internet governance, telecommunication reform, digital privacy, and cyber-security.
2.2. CIS is not an expert organisation in the domain of banking in general and payments in particular. Our expertise is in matters of internet and communication governance, data privacy and security, and technology regulation. We deeply appreciate, and are most encouraged by, the Ministry of Finance's decision to invite entities from both the finance and information technology sectors. This submission is consistent with CIS' commitment to safeguarding general public interest, and the interests and rights of the various stakeholders involved, especially citizens and users. CIS is thankful to the Ministry of Finance for this opportunity to provide a general response to the report.
3. Comments
3.1. CIS observes that the decision by the Government of India to withdraw the legal tender character of the old high denomination banknotes (that is, Rs. 500 and Rs. 1,000 notes), declared on November 08, 2016 [3], has generated unprecedented data about the user base and transaction patterns of digital payments systems in India, as these systems were pushed to extreme use by the circumstances. The majority of this data is available with the National Payments Corporation of India and the Reserve Bank of India. CIS requests the authorities concerned to consider opening up this data for analysis and discussion by the public at large and experts in particular, before any specific policy and regulatory decisions are taken towards advancing the proliferation of digital payments in India. This is a crucial opportunity for the Ministry of Finance to embrace (open) data-driven regulation and policy-making.
3.2. While the report makes a reference to the European General Data Protection Directive, it does not refer to any substantive provisions in the Directive that may be relevant to digital payments. Aside from the recommendation that privacy protections around the purpose limitation principle be relaxed to ensure that payment service providers are allowed to process data to improve fraud monitoring and anti-money laundering services, the report is silent on the significant privacy and data protection concerns posed by digital payments services. CIS strongly warns that the existing data protection and security regulations under the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 are woefully inadequate in their scope and application to effectively deal with the potential privacy concerns posed by digital payments applications and services. Some key privacy issues that must be addressed, either under a comprehensive data protection legislation or a sector-specific financial regulation, are as follows. The process of obtaining consent must be specific, informed and unambiguous, and must involve a clear affirmative action by the data subject based upon a genuine choice, along with an option to opt out at any stage. Data subjects should have a clear and easily enforceable right to access and correct their data. Further, data subjects should have the right to restrict the usage of their data in circumstances such as inaccuracy of the data, unlawful purpose, or the data no longer being required to fulfill the original purpose.
3.3. The initial recommendation of the report is to “[m]ake regulation of payments independent from the function of central banking” (page 22). This involves a fundamental transformation of the payment and settlement system in India and its regulation. We submit that a decision regarding a transformation of such scale and implications should be taken only after a more comprehensive policy discussion, especially one involving a wider range of stakeholders. The report itself notes that “[d]igital payments also have the potential of becoming a gateway to other financial services such as credit facilities for small businesses and low-income households” (page 32). Thus, a clear functional, and hence regulatory, separation between the (digital) payments industry and the lending/borrowing industry may be neither effective nor desirable. Global experience tells us that digital transactions data, along with other alternative data, are fast becoming the basis for the provision of financial and other services, by both banking and non-banking (payments) companies. We appeal to the Ministry of Finance to adopt a comprehensive and concerted approach to regulating, enabling competition in, and upholding consumers' rights in the banking sector at large.
3.4. The report recognises that “banking as an activity is separate from payments, which is more of a technology business” (page 154). Contemporary banking and payment businesses are both primarily technology businesses, in which information technology is deployed intimately to extract and process financial transaction data and to drive asset management decisions. Further, with payment businesses (such as pre-paid instruments) offering returns on deposited money via other means (such as cashbacks), and potentially competing and/or collaborating with established banks to use financial transaction data to drive lending decisions, including but not limited to micro-loans, it appears unproductive to create a separation between banking as an activity and payments as an activity merely in terms of the respective technology intensity of these sectors. CIS firmly recommends that regulation of these financial services and activities be undertaken in a technology-agnostic manner, and that similar regulatory regimes be applied to entities offering similar services, irrespective of their technology intensity or choice.
3.5. The report highlights two major shortcomings of the current regulatory regime for payments. Firstly, “the law does not impose any obligation on the regulator to promote competition and innovation in the payments market” (page 153). It appears to us that the regulator's role should not be to promote market expansion and innovation but to ensure and oversee competition. We believe that the current regulator should focus on regulating the existing market, and that the work of expanding the digital payments market in particular, and the digital financial services market in general, should be carried out by another government agency, as it otherwise creates a conflict of interest for the regulator. Secondly, the report mentions that the Payment and Settlement Systems Act does not “focus the regulatory attention on the need for consumer protection in digital payments”, and then notes that a “provision was inserted to protect funds collected from customers” in 2015 (page 153). This indicates that the regulator already has the responsibility to ensure consumer protection in digital payments. The purview and modalities of this function will, of course, need discussion and will change with the growth in digital payments.
3.6. The report identifies the high cost of cash as a key reason for the government’s policy push towards digital payments. Further, it mentions that a “sample survey conducted in 2014 across urban and rural neighbourhoods in Delhi and Meerut, shows that despite being keenly aware of the costs associated with transacting in cash, most consumers see three main benefits of cash, viz. freedom of negotiations, faster settlements, and ensuring exact payments” (page 30). It further notes that “[d]igital payments have significant dependencies upon power and telecommunications infrastructure. Therefore, the roll out of robust and user friendly digital payments solutions to unelectrified areas/areas without telecommunications network coverage, remains a challenge.” CIS much appreciates the discussion of the barriers to universal adoption and rollout of digital payments in the report, and appeals to the Ministry of Finance to undertake a more comprehensive study of the key investments required by the Government of India to ensure that digital payments become ubiquitously viable as well as satisfy the demands of a vast range of consumers that India has. The estimates about investment required to create a robust digital payment infrastructure, cited in the report, provide a great basis for undertaking studies such as these.
3.7. CIS is very encouraged to see the report highlighting that “[w]ith the rising number of users of digital payment services, it is absolutely necessary to develop consumer confidence on digital payments. Therefore, it is essential to have legislative safeguards to protect such consumers in-built into the primary law.” We second this recommendation, and would add that financial transaction data should be governed under a common data protection and privacy regime, without making any distinction between data collected by banking and non-banking entities.
3.8. We are, however, very discouraged to see the patently incorrect use of the term “Open Access” in this report, in the context of a payment system disallowing service when the client wants to transact money with a specific entity [4]. This is not an uncommon anti-competitive measure adopted by various platform players and service providers to prevent users from using competing products (such as not allowing competing apps in the app store controlled by one software company). Not only is “Open Access” an inappropriate term to describe the negation of such anti-competitive behaviour, its usage in this context also undermines its accepted meaning and creates confusion regarding the recommendation being proposed by the report. The closest analogy to the report's recommendation would perhaps be the principle of “network neutrality”, which stands for the network provider not discriminating between the data packets it processes, either in terms of price or speed.
3.9. A major recommendation of the report involves the creation of “a fund from savings generated from cash-less transactions … by the Central Government,” which will use “the trinity of JAM (Jan Dhan, Aadhaar, Mobile) [to] link financial inclusion with social protection, contributing to improved Social and Financial Security and Inclusion of vulnerable groups/ communities” (pages 160-161). This amounts to making Aadhaar a mandatory ID for the financial inclusion of citizens, especially the marginal and vulnerable ones, and is in direct contradiction to the government's statements regarding the optional nature of the Aadhaar ID, as well as the orders of the Supreme Court on this topic.
3.10. The report recommends that “Aadhaar should be made the primary identification for KYC with the option of using other IDs for people who have not yet obtained Aadhaar” (page 163), and further that “Aadhaar eKYC and eSign should be a replacement for paper based, costly, and shared central KYC registries” (page 162). Not only would these measures imply making Aadhaar a mandatory ID for undertaking any legal activity in the country, they also assume that the UIDAI has verified and audited the personal documents submitted by Aadhaar number holders during enrollment. A mandate to replace the paper-based central KYC agencies would only remove a much needed redundancy in the identity verification infrastructure of the government.
3.11. The report suggests that “[t]ransactions which are permitted in cash without KYC should also be permitted on prepaid wallets without KYC” (pages 164-165). This seems to negate the reality that physical verification remains one of the most authoritative identity verification processes for a natural person, apart perhaps from DNA testing. Thus, establishing full equivalence of procedure between a presence-less transaction and one involving a physically present person making the payment will only amount to the removal of the relatively greater security precautions for the former, and will open up possibilities of fraud.
3.12. In continuation of the previous point, the report recommends promoting “Aadhaar based KYC where PAN has not been obtained” and making “quoting Aadhaar compulsory in income tax return for natural persons” (page 163). Both these measures imply a replacement of the PAN by Aadhaar in the long term, and a sharp reduction in the growth of new PAN holders in the short term. We appeal for this recommendation to be reconsidered, as the integration of all functionally separate national critical information infrastructures (such as PAN and Aadhaar) into a single unified and centralised system (such as Aadhaar) engenders massive national and personal security threats.
3.13. The report suggests the establishment of “a ranking and reward framework” to recognise and encourage the best-performing state/district/agency in the proliferation of digital payments. It appears to us that the creation of such a framework will only create an environment of competition among the entities concerned, which, apart from its benefits, may also have its costs. For example, incentivising the quick rollout of digital payment avenues by state governments and various government agencies may lead to implementation without sufficient planning, coordination with stakeholders, and precautions regarding data security and privacy. The provision of central support for digital payments should be carried out in an environment of cooperation, not competition.
3.14. CIS welcomes the report's recommendation to generate greater awareness about the cost of cash, including by ensuring that “large merchants including government agencies should account and disclose the cost of cash collection and cash payments incurred by them periodically” (page 164). It is, however, not clear to whom such periodic disclosures should be made. We would add that this awareness building must simultaneously make public how different entities shoulder these costs. Further, for reasons of comparison and evidence-driven policy making, it is necessary that data for the equivalent variables are also made open for digital payments: the total and disaggregated cost, and what proportion of these costs is shouldered by which entities.
3.15. The report acknowledges that “[t]oday, most merchants do not accept digital payments” and goes on to recommend “that the Government should seize the initiative and require all government agencies and merchants where contracts are awarded by the government to provide at-least one suitable digital payment option to its consumers and vendors” (page 165). This requirement to offer a digital payment option will only introduce an additional economic barrier for merchants bidding for government contracts. We appeal to the Ministry of Finance to reconsider this approach of raising the costs of non-digital payments to incentivise the proliferation of digital payments, and to instead lower the existing economic and other barriers that keep merchants away from digital payments. The adoption of digital payments must not increase costs for merchants and end-users, but must decrease them instead.
3.16. As the report was submitted on December 09, 2016, and was made public only on December 27, 2016, it would have been much appreciated if at least a month-long window had been provided to study and comment on the report, instead of fifteen days. This is especially crucial as the recently implemented demonetisation and the subsequent banking and fiscal policy decisions taken by the government have rapidly transformed the state and dynamics of the payments system landscape in India in general, and of digital payments in particular.
Endnotes
[1] See: http://cis-india.org/.
[2] See: http://finmin.nic.in/reports/Note-watal-report.pdf and http://finmin.nic.in/reports/watal_report271216.pdf.
[3] See: http://finmin.nic.in/cancellation_high_denomination_notes.pdf.
[4] Open Access refers to “free and unrestricted online availability” of scientific and non-scientific literature. See: http://www.budapestopenaccessinitiative.org/read.
Comments on the Proposed ICANN Community Anti-Harassment Policy
We at CIS are grateful for the opportunity to comment on the proposed ICANN Community Anti-Harassment Policy (“Policy”). We provide our specific comments to the Policy below, in three sections. The first section addresses the Terms of Participation, the second deals with the Reporting and Complaint Procedure, and the third places on record our observations on questions and issues for further consideration which have not been covered by the Policy.
Besides various other observations, CIS broadly submitted:
- The attempt to provide an exhaustive definition of “Specified Characteristics” results in its meaning being unclear and exclusionary.
- CIS strongly supports the phrase “including, but not limited to” that is followed by a bulleted list of inappropriate conduct.
- The word “consent” is entirely missing from the draft policy even though the deciding factor in the “appropriateness” of an act or conduct is active and explicit consent to the act by both/ all individuals involved.
- There is a need for clarity regarding the communication platforms covered: the current Policy fails to specify instances of face-to-face and online communications.
- The policy fails to account for a body of persons (as is provided for in the IETF policy) for the redressal of harassment complaints.
- The provision for an informal resolution of a harassment issue is problematic as it could potentially lead to negative consequences for the complainant.
- The Ombudsperson’s discretion in the determination of remedial action is detrimental to transparency and accountability.
- The Policy in its current form lacks provisions for ensuring the privacy and confidentiality of the complainant, as well as interim relief while the Ombudsperson is looking into the complaint.
Read the Complete Submission here
Social Media Monitoring
Social Media Monitoring: Download (PDF)
Introduction
In 2014, the Government of India launched the much lauded and popular citizen outreach website MyGov.in. A press release by the government announced that it had roped in the global consulting firm PwC to assist in a data mining exercise to process and filter key points emerging from debates on MyGov.in. While this was a welcome move, the release also mentioned that the government intended to monitor social media sites in order to gauge popular opinion. Further, earlier this year, the government set up the National Media Analytics Centre (NMAC) to monitor blogs, media channels, news outlets and social media platforms. The tracking software used by NMAC will generate tags to classify posts and comments on social media into negative, positive and neutral categories, paying special attention to “belligerent” comments, and will also look at past patterns of posts. A few years ago, the media also reported on a project called NETRA, which would intercept and analyse internet traffic using pre-defined filters. Alongside, we see other initiatives that intend to use social media data for predictive policing purposes, such as CCTNS and Social Media Labs.
Thus, we see a trend of social media and communication monitoring and surveillance initiatives announced by the government, which have the potential to create a chilling effect on free speech online and raise questions about the privacy of individuals. Various commentators have raised concerns about the legal validity of such programmes and whether they violate the fundamental rights to privacy and free expression, as well as the existing surveillance laws in India. The lack of legislation governing these programmes often translates into an absence of transparency and due procedure. Further, a lot of personal communication now exists in the public domain, which renders futile the traditional principles governing the interception and monitoring of personal communications. In the last few years, the blogosphere and social media websites in India have also changed, becoming platforms for greater dissemination of political content, often accompanied by significant vitriol, ‘trolling’ and abuse. Thus, we see greater policing of public or semi-public spaces online. In this paper, we look at social media monitoring as a tool for surveillance, examine the current state of social media surveillance in India, and evaluate how the existing regulatory framework in India may deal with such practices in the future.
The Design & Technology behind India’s Surveillance Programmes
While the legal and policy avenues of state surveillance in India have been analysed by various organisations, there is very little available information about the technology and infrastructure used to carry out this surveillance. This appears to be largely, according to the government, due to reasons of national security and sovereignty.[1] This blog post will attempt to paint a picture of the technological infrastructure being used to carry out state surveillance in India.
Background
The revelations by Edward Snowden about mass surveillance in mid-2013 led to an explosion of journalistic interest in surveillance and user privacy in India.[2] The reports and coverage from this period, leading up to early 2015, serve as the main authority for the information presented in this blog post. The lack of information from official government sources, together with the decreasing public spotlight on surveillance since then, has meant that little or no new information has turned up about India’s surveillance regime since this period. However, given the long-term nature of these programmes and the vast amounts of time it takes to set them up, it is fairly certain that the programmes detailed below are still the primary bedrock of state surveillance in the country, albeit having become operational and inter-connected only in the past 2 years.
The technology being used to carry out surveillance in India over the past 5 years is largely an upgraded, centralised and substantially more powerful version of the surveillance techniques followed in India since the advent of telegraph and telephone lines: the tapping and recording of information in transit.[3] That none of the modern surveillance programmes detailed below has required any new legislation, law, amendment or policy that was not already in force prior to 2008 is the most telling illustration of this continuity. The legal and policy implications of the programmes illustrated below have been covered in previous articles by the Centre for Internet & Society, which can be found here,[4] here[5] and here.[6] Therefore, this post will concentrate solely on the technological design and infrastructure being used to carry out surveillance, along with any new developments in this field that the three sources mentioned would not have covered from a technological perspective.
The Technology Infrastructure behind State Surveillance in India
The programmes of the Indian Government (in public knowledge) that are being used to carry out state surveillance are broadly six in number. These exclude specific surveillance technology being used by independent arms of the government, which will be covered in the next section of this post. Many of the programmes listed below have overlapping jurisdictions and in some instances are cross-linked with each other to provide greater coverage:
- Central Monitoring System (CMS)
- National Intelligence Grid (NAT-GRID)
- Lawful Intercept And Monitoring Project (LIM)
- Crime and Criminal Tracking Network & Systems (CCTNS)
- Network Traffic Analysis System (NETRA)
- New Media Wing (Bureau of New and Concurrent Media)
The post will look at the technological underpinning of each of these programmes and their operational capabilities, both in theory and practice.
Central Monitoring System (CMS)
The Central Monitoring System (CMS) is the premier mass surveillance programme of the Indian Government, and has been in the planning stages since 2008.[7] Its primary goal is to replace the current on-demand availability of analog and digital data from service providers with “central and direct” access, which involves no third party between the captured information and the government authorities.[8] While the system is currently operated by the Centre for Development of Telematics (C-DOT), the unreleased three-stage plan envisages a centralised location (physically and legally) to govern the programme. The CMS is primarily operated by the Telecom Enforcement and Resource Monitoring (TERM) Cell within the Department of Telecom, which also has a larger mandate of ensuring radiation safety and spectrum compliance.
The technological infrastructure behind the CMS largely consists of Telecom Service Providers (TSPs) and Internet Service Providers (ISPs) in India being mandated to integrate Interception Store & Forward (ISF) servers with the Lawful Interception Systems required by their licences. Once installed, these ISF servers are connected to the Regional Monitoring Centres (RMCs) of the CMS, set up according to geographical location and population. Finally, each Regional Monitoring Centre is connected to the Central Monitoring System itself, essentially allowing the collection, storage, access and analysis of data from all across the country in a centralised manner. The data collected by the CMS includes voice calls, SMS, MMS, fax communications on landlines, CDMA, video calls, GSM and even general, unencrypted data travelling across the internet using the standard TCP/IP protocol.[9]
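To make the three-tier flow described above concrete, here is a minimal sketch in Python of how interception records would move from provider-side ISF servers through a regional centre to a central archive. This is purely illustrative: the class names, record fields and provider names are our own inventions, not details of the actual CMS.

```python
# Illustrative sketch only (not actual CMS code): the three-tier flow
# described above, with interception records moving from provider-side
# ISF servers to a Regional Monitoring Centre and on to the CMS.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterceptRecord:
    provider: str   # TSP/ISP where the record was captured
    channel: str    # e.g. "voice", "SMS", "fax", "ip-traffic"
    payload: str

@dataclass
class ISFServer:
    """Interception Store & Forward server at a TSP/ISP."""
    provider: str
    buffer: List[InterceptRecord] = field(default_factory=list)

    def capture(self, channel: str, payload: str) -> None:
        self.buffer.append(InterceptRecord(self.provider, channel, payload))

@dataclass
class RegionalMonitoringCentre:
    region: str
    isf_servers: List[ISFServer] = field(default_factory=list)

    def collect(self) -> List[InterceptRecord]:
        # Pull everything forwarded by the ISF servers in this region.
        records = [r for s in self.isf_servers for r in s.buffer]
        for s in self.isf_servers:
            s.buffer.clear()
        return records

class CentralMonitoringSystem:
    """Central archive: no third party between capture and access."""
    def __init__(self, rmcs: List[RegionalMonitoringCentre]):
        self.rmcs = rmcs
        self.archive: List[InterceptRecord] = []

    def aggregate(self) -> None:
        for rmc in self.rmcs:
            self.archive.extend(rmc.collect())

isf = ISFServer("example-telecom")
isf.capture("SMS", "hello")
cms = CentralMonitoringSystem([RegionalMonitoringCentre("north", [isf])])
cms.aggregate()
print(len(cms.archive))  # 1
```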
With regard to the analysis of this data, Call Detail Record (CDR) analysis, data mining, machine learning and predictive algorithms have allegedly been implemented in various degrees across this network.[10] This allows state actors to pre-emptively gather a vast amount of information from across the country, perform analysis on this data and then possibly even act on it by directly approaching the entity (currently TERM under C-DOT) operating the system.[11] The system reached full functionality in mid-2016, with over 22 Regional Monitoring Centres functional and the system itself being ‘switched on’ post trials in gradual phases.[12]
National Intelligence Grid (NATGRID)
The National Intelligence Grid (NATGRID) is a semi-functional[13] integrated intelligence grid that links the stored records and databases of several government entities in order to collect data, decipher trends and provide real time (sometimes even predictive) analysis of data gathered across law enforcement, espionage and military agencies. The programme intends to provide 11 security agencies real-time access to 21 citizen data sources to track terror activities across the country. The citizen data sources include bank account details, telephone records, passport data and vehicle registration details, the National Population Register (NPR), the Immigration, Visa, Foreigners Registration and Tracking System (IVFRT), among other types of data, all of which are already present within various government records across the country.[14]
Data mining and analytics are used to process the huge volumes of data generated from the 21 data sources so as to analyse events, match patterns and track suspects, with big data analytics[15] being the primary tool for effectively utilising the project, which was founded to prevent another instance of the November 2008 terrorist attacks in Mumbai. The agencies that will have access to this data collection and analytics platform are the Central Board of Direct Taxes (CBDT), Central Bureau of Investigation (CBI), Defence Intelligence Agency (DIA), Directorate of Revenue Intelligence (DRI), Enforcement Directorate (ED), Intelligence Bureau (IB), Narcotics Control Bureau (NCB), National Investigation Agency (NIA), Research and Analysis Wing (RAW), the Military Intelligence of the Assam and Jammu and Kashmir regions, and finally the Home Ministry itself.[16]
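As an illustration of what such an integrated grid enables, the sketch below shows a toy cross-database lookup that joins records about a single person across several citizen data sources. The source names, person identifiers and fields are hypothetical; nothing here is drawn from NATGRID’s actual design.

```python
# Purely illustrative: a toy cross-database lookup of the kind an
# integrated grid enables. Source names, person IDs and fields are
# hypothetical, not NATGRID's actual design.
from typing import Dict

DATA_SOURCES: Dict[str, Dict[str, dict]] = {
    "bank_accounts":     {"P-100": {"account": "XX1234", "flagged": False}},
    "telephone_records": {"P-100": {"numbers": ["98xxxxxx01"]}},
    "vehicle_registry":  {"P-100": {"vehicle": "KA-01-AB-0000"}},
}

def grid_lookup(person_id: str) -> Dict[str, dict]:
    """Join every record held about person_id across the linked sources."""
    return {name: records[person_id]
            for name, records in DATA_SOURCES.items()
            if person_id in records}

print(grid_lookup("P-100"))
```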
As of late 2015, the project had remained stuck in bureaucratic red tape, with even the first phase of the four-stage project incomplete. The primary reasons for this are the change of government in 2014, along with apprehensions about breaches of security and misuse of information from agencies such as the IB, R&AW, CBI and CBDT.[17] However, the office of the NATGRID is now under construction in South Delhi, and while the agency claims an exemption under the RTI Act as a Schedule II organisation, its scope and operational reach have only increased with each passing year.
Lawful Intercept And Monitoring Project
Lawful Intercept and Monitoring (LIM) is a secret mass electronic surveillance programme operated by the Government of India for monitoring internet traffic, communications, web browsing and all other forms of internet data. It has been run primarily by the Centre for Development of Telematics (C-DoT) in the Ministry of Telecom since 2011.[18]
The LIM programme consists of installing interception, monitoring and storage systems at international gateways, internet exchange hubs and ISP nodes across the country. This is done independently of ISPs, with the entire hardware and software apparatus being operated by the government. The hardware is installed between the Internet Edge Router (PE) and the core network, allowing direct access to all traffic flowing through the ISP. It is the primary programme for internet traffic surveillance in India, allowing indiscriminate monitoring of all traffic passing through the ISP for as long as the government desires, without any judicial oversight and sometimes without the knowledge of the ISPs.[19] One of the most potent capabilities of the LIM project is live, automated keyword search, which allows the government to track all the information passing through the surveilled internet pipe for certain key phrases, in both text and audio. Once these key phrases are matched to the data travelling through the pipe using search algorithms developed uniquely for the project, the system triggers automatic routines that range from targeted surveillance of the source of the data to raising an alarm with the appropriate authorities.
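The keyword-matching capability described above can be illustrated with a short sketch. This is a toy version only, assuming a simple regular-expression match over packet payloads; the watch phrases and the action taken on a match are hypothetical examples, not the project’s actual configuration or algorithms.

```python
# Toy sketch of live keyword matching over a traffic stream. The watch
# phrases and the action taken on a match are hypothetical examples,
# not the LIM project's actual configuration or algorithms.
import re
from typing import Iterable, Iterator, Tuple

WATCH_PHRASES = ["attack", "bomb", "blast"]  # hypothetical examples
PATTERN = re.compile("|".join(re.escape(p) for p in WATCH_PHRASES),
                     re.IGNORECASE)

def scan_stream(packets: Iterable[Tuple[str, str]]) -> Iterator[Tuple[str, str]]:
    """Yield (source, matched_phrase) for each packet whose payload
    contains a watched phrase; a real system would fan out to storage,
    targeted surveillance or alerting routines at this point."""
    for source, payload in packets:
        match = PATTERN.search(payload)
        if match:
            yield source, match.group(0)

traffic = [("10.0.0.5", "weather report"),
           ("10.0.0.9", "planned blast at noon")]
for source, phrase in scan_stream(traffic):
    print(f"alert: '{phrase}' seen in traffic from {source}")
```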
LIM systems are often also operated by the ISPs themselves on behalf of the government: the ISP runs the equipment, including hardware upkeep, and provides direct access to government agencies upon request. Reports have stated that the legal procedures laid down in law (including nodal officers and formal requests for information) are rarely followed[20] in either case, allowing unfettered access to petabytes of user data on a daily basis through these programmes.
Crime and Criminal Tracking Network & Systems (CCTNS)
The Crime and Criminal Tracking Network & Systems (CCTNS) is a planned network that allows for the digital collection, storage, retrieval, analysis, transfer and sharing of information relating to crimes and criminals across India.[21] It is meant to operate primarily at two levels: between police stations, and between the various governance structures involved in crime detection and investigation around the country, with access also being provided to intelligence and national security agencies.[22]
CCTNS aims to integrate all the necessary data and records surrounding a crime (including past records) into a Core Application Software (CAS) that has been developed by Wipro.[23] The software includes the ability to digitise FIR registration, investigation and charge sheets, along with the ability to set up a centralised citizen portal for interacting with relevant information. The project aims to use this CAS interface across 15,000 police stations in the country, with up to 5,000 additional deployments. The project has been planned since 2009, with the first complete statewide implementation going live only in August 2016, in Maharashtra.[24]
While seemingly harmless at face value, the project’s true power lies in two possible uses. The first is its ability to profile individuals using their past conduct, which can now include all stages of an investigation and not just a conviction by a court of law, raising massive privacy concerns. The second is that the CCTNS database will not be an isolated one but will be connected to the NATGRID and other such databases operated by organisations such as the National Crime Records Bureau, allowing the information in the CCTNS to be leveraged into more invasive surveillance of the public at large.[25]
Network Traffic Analysis System (NETRA)
NETRA (NEtwork TRaffic Analysis) is real-time surveillance software developed by the Centre for Artificial Intelligence and Robotics (CAIR) at the Defence Research and Development Organisation (DRDO). The software has apparently been fully functional since early 2014 and is primarily used by India’s intelligence agencies, the Intelligence Bureau (IB) and the Research and Analysis Wing (RAW), with some capacity reserved for domestic agencies under the Home Ministry.
The software is meant to monitor internet traffic on a real-time basis, covering both voice and textual forms of data communication, especially social media, communication services and web browsing. Each agency was initially allocated 1,000 nodes running NETRA, with each node having a capacity of 300 GB, giving each agency a total processing capacity of around 300 TB.[26] This capacity is largely available only to agencies dealing with external threats, with domestic agencies being allocated far lower capacities depending on demand. The software itself is portable, and given sufficient hardware capacity, nothing prevents it from being used in the CMS, the NATGRID or LIM operations.
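A quick back-of-the-envelope check of the reported figures, taking the cited numbers at face value:

```python
# 1,000 nodes at 300 GB each works out to the 300 TB aggregate
# capacity cited in the reports.
nodes = 1_000
per_node_gb = 300
total_gb = nodes * per_node_gb
print(total_gb)          # 300000 (GB)
print(total_gb / 1_000)  # 300.0 (TB)
```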
There has been a sharp and sudden absence of public domain information regarding the software since 2014, making any statements about its current form or evolution mere conjecture.
Analysis of the Collected Data
Independent of the capacity of such programmes, their real-world operations work in a largely similar manner to mass surveillance programmes in the rest of the world, with a majority of the capacity focused on decryption and storage of data, combined with rudimentary data analytics.[27] Real-time keyword searches for hot words like 'attack', 'bomb', 'blast' or 'kill' in the various communication streams are the only real capabilities of the system that have been discussed in the public domain,[28] which, along with the limited capacity of such programmes[29] (300 TB), is indicative of the basic level of analysis carried out on captured data. Any further technical details about how India’s surveillance programmes use their captured data are absent from the public domain, but they can be presumed, at best, to operate to similar standards as global practices.[30]
A Global Comparison of Capacity
As can be seen from the post so far, remarkably little information about India’s surveillance programmes is available in the public domain from a technical or infrastructural perspective. In fact, post late 2014, there is a stark lack of information about any developments in the mass surveillance field. All of the information that is available about the technical capabilities of the CMS, NATGRID or LIM is either antiquated (pre-2014) or concerns (comparatively) mundane details like headquarters construction clearances.[31] Whether this is a result of the general reduction in public and media attention towards mass surveillance[32] or the result of actions taken by the government on “national security” grounds under the Official Secrets Act, 1923[33] can only be a matter of conjecture.
However, given the information available (mentioned previously in this article), a comparison points to a rather lopsided position relative to international mass surveillance programmes. While the legal provisions in India regarding surveillance are among the most wide-ranging, discretionary and opaque in the world,[34] their technical capabilities seem archaic by modern standards. The only real comparison available comes from public reporting surrounding the DRDO NETRA project around 2012 and 2013. The government held a competition between the DRDO’s internally developed software, “Netra”, and NTRO’s “Vishwarupal”, which was developed in collaboration with Paladion Networks.[35] The winning software, NETRA, was said to have a capacity of 300 GB per node, with a total of 1,000 sanctioned nodes.[36] This capacity of 300 TB for the entire system, while seemingly powerful, is a minuscule fraction of the 83 petabytes of traffic predicted to be generated in India per day.[37] In comparison, the PRISM programme run by the National Security Agency in 2013 (the same period in which NETRA was tested) had a capacity of over 5 trillion gigabytes of storage,[38] many orders of magnitude greater than the capacity of the DRDO software. Similar statistics can be seen from the various other programmes of the NSA and the Five Eyes alliance,[39] all of which operated at far greater capacities[40] and were still held to be minimally effective.[41] The questions this poses about the effectiveness, reliability and proportionality of the Indian surveillance programmes can never truly be answered, owing to the lack of information surrounding their capacity and technology, as highlighted in this article. With regard to criminal databases used in surveillance, such as the NATGRID, comparisons with equivalent systems both domestically (especially in the USA) and internationally (such as the one run by Interpol)[42] are impossible because the NATGRID is not yet fully operational.[43]
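Taking the cited figures at face value (and noting that the units in the underlying reports are not strictly comparable, since capacity, storage and daily-traffic figures are mixed), the scale gap can be made explicit with a little arithmetic:

```python
# Scale comparison using the figures cited in this section, taken at
# face value from the reports quoted above.
netra_tb = 300                             # total NETRA capacity
india_daily_traffic_tb = 83 * 1_000        # 83 PB/day, in TB
prism_storage_tb = 5_000_000_000 / 1_000   # 5 trillion GB, in TB

print(f"NETRA vs one day of Indian traffic: "
      f"{netra_tb / india_daily_traffic_tb:.2%}")   # ~0.36%
print(f"PRISM storage vs NETRA capacity: "
      f"{prism_storage_tb / netra_tb:,.0f}x")       # ~16,667x
```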
Conclusion
Even if we were to ignore the issues in principle with mass surveillance, the pervasive, largely unregulated and mass-scale surveillance being carried out in India using the tools and technologies detailed above has various technical and policy failings. It is imperative that transparency, accountability and legal scrutiny be made an integral part of the security apparatus in India. The risks of security breaches, politically motivated actions and foreign state hacking only increase in the absence of public accountability mechanisms. Further, opening up the technologies used for these operations to regular security audits will also improve their resilience to such attacks.
[1] http://cis-india.org/internet-governance/blog/the-constitutionality-of-indian-surveillance-law
[2] http://india.blogs.nytimes.com/2013/07/10/how-surveillance-works-in-india/
[3] https://www.privacyinternational.org/node/818
[4] http://cis-india.org/internet-governance/blog/state-of-cyber-security-and-surveillance-in-india.pdf
[5] http://cis-india.org/internet-governance/blog/security-surveillance-and-data-sharing.pdf
[6] http://cis-india.org/internet-governance/blog/paper-thin-safeguards.pdf
[7] http://pib.nic.in/newsite/PrintRelease.aspx?relid=54679 & http://www.dot.gov.in/sites/default/files/English%20annual%20report%202007-08_0.pdf
[8] http://ijlt.in/wp-content/uploads/2015/08/IJLT-Volume-10.41-62.pdf
[9] http://www.thehindu.com/scitech/technology/in-the-dark-about-indias-prism/article4817903.ece
[10] http://cis-india.org/internet-governance/blog/india-centralmonitoring-system-something-to-worry-about
[11] https://www.justice.gov/sites/default/files/pages/attachments/2016/07/08/ind195494.e.pdf
[12] http://www.datacenterdynamics.com/content-tracks/security-risk/indian-lawful-interception-data-centers-are-complete/94053.fullarticle
[13] http://natgrid.attendance.gov.in/ [Attendance records at the NATGRID Office!]
[14] http://articles.economictimes.indiatimes.com/2013-09-10/news/41938113_1_executive-order-nationalintelligence-grid-databases
[15] http://www.business-standard.com/article/current-affairs/natgrid-to-use-big-data-analytics-to-track-suspects-1
[16] http://sflc.in/wp-content/uploads/2014/09/SFLC-FINAL-SURVEILLANCE-REPORT.pdf
[17] http://indiatoday.intoday.in/story/natgrid-gets-green-nod-but-hurdles-remain/1/543087.html
[18] http://www.thehindu.com/news/national/govt-violates-privacy-safeguards-to-secretly-monitor-internet-traffic/article5107682.ece
[19] ibid
[20] http://www.thehoot.org/story_popup/no-escaping-the-surveillance-state-8742
[21] http://ncrb.gov.in/BureauDivisions/CCTNS/cctns.htm
[22] ibid
[23] http://economictimes.indiatimes.com/news/politics-and-nation/ncrb-to-connect-police-stations-and-crime-data-across-country-in-6-months/articleshow/45029398.cms
[24] http://indiatoday.intoday.in/education/story/crime-criminal-tracking-network-system/1/744164.html
[25] http://www.dailypioneer.com/nation/govt-cctns-to-be-operational-by-2017.html
[26] http://articles.economictimes.indiatimes.com/2012-03-10/news/31143069_1_scanning-internet-monitoring-system-internet-data
[27] Surveillance, Snowden, and Big Data: Capacities, consequences, critique: http://journals.sagepub.com/doi/pdf/10.1177/2053951714541861
[28] http://www.thehindubusinessline.com/industry-and-economy/info-tech/article2978636.ece
[29] See the previous section of this article on NETRA
[30] Van Dijck, José. "Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology." Surveillance & Society 12.2 (2014): 197.
[31] http://www.dailymail.co.uk/indiahome/indianews/article-3353230/Nat-Grid-knots-India-s-delayed-counter-terror-programme-gets-approval-green-body-red-tape-stall-further.html
[32] http://cacm.acm.org/magazines/2015/5/186025-privacy-behaviors-after-snowden/fulltext
[33] https://freedomhouse.org/report/freedom-press/2015/india
[34] http://blogs.wsj.com/indiarealtime/2014/06/05/indias-snooping-and-snowden/
[35] http://articles.economictimes.indiatimes.com/2012-03-10/news/31143069_1_scanning-internet-monitoring-system-internet-data
[36] http://economictimes.indiatimes.com/tech/internet/government-to-launch-netra-for-internet-surveillance/articleshow/27438893.cms
[37] http://trak.in/internet/indian-internet-traffic-8tbps-2017/
[38] http://www.economist.com/news/briefing/21579473-americas-national-security-agency-collects-more-information-most-people-thought-will
[39] http://www.washingtonsblog.com/2013/07/the-fact-that-mass-surveillance-doesnt-keep-us-safe-goes-mainstream.html
[40] http://www.washingtonpost.com/wp-srv/special/politics/prism-collection-documents/
[41] Supra Note 35
[42] http://www.papillonfoundation.org/information/global-crime-database/
[43] http://www.thehindu.com/opinion/editorial/Revive-NATGRID-with-safeguards/article13975243.ece
Privacy after Big Data - Workshop Report
This workshop aimed to build a dialogue around some of the key government-led big data initiatives in India and elsewhere that are contributing significant new challenges and concerns to the ongoing debates on the right to privacy. It was an open event.
In this age of big data, discussions about privacy are intertwined with the use of technology and the data deluge. Though big data possesses enormous value for driving innovation and contributing to productivity and efficiency, privacy concerns have gained significance in the dialogue around the regulated use of data and the means by which individual privacy might be compromised, through means such as surveillance, or protected. Big data creates tremendous opportunities in varied sectors: financial technology, governance, education, health, welfare schemes and smart cities, to name a few. With the UID project re-animating the right to privacy debate in India, and the financial technology ecosystem growing rapidly, striking a balance between the benefits of big data and privacy concerns is a critical policy question that demands public dialogue and research to inform evidence-based decisions. Also, with the advent of potential big data initiatives like the ambitious Smart Cities Mission under the Digital India scheme, which would rely on harvesting large data sets and the use of analytics in city subsystems to make public utilities and services efficient, the tasks of ensuring data security on the one hand and protecting individual privacy on the other become harder.
This workshop sought to discuss some of the emerging problems arising from the advent of big data and possible ways to address them. The workshop began with Amber Sinha of CIS and Sandeep Mertia of Sarai introducing the topic of big data and its implications for privacy. Both speakers tried to define big data, gave a brief history of the evolution of the term and raised questions about how we understand it. Dr. Usha Ramanathan spoke on the right to privacy in the context of the ongoing Aadhaar case, and Vipul Kharbanda introduced the concept of Habeas Data as a possible solution to the privacy problems posed by big data. Amelia Andersdotter discussed national centralised digital ID systems and their evolution in Europe, often operating at a cross-functional scale, and highlighted their implications for discussions on data protection, welfare governance, and exclusion from public and private services. Srikanth Lakshmanan spoke of the issues with technology and privacy, and possible technological solutions. Dr. Anupam Saraph discussed the rise of digital banking and Aadhaar-based payments and their potential use for corrupt practices. Astha Kapoor of Microsave spoke about her experience of implementing digital money solutions in rural India.
Post lunch, Dr. Anja Kovacs and Mathew Rice spoke on the rise of mass communication surveillance across the world and the evolving challenges of regulating surveillance by government agencies. Mathew also spoke of privacy movements by citizens and civil society across different regions. In the final speaking session, Apar Gupta and Kritika Bhardwaj traced the history of jurisprudence on the right to privacy and the existing regulations and procedures. In the final discussion, the participants considered various possible solutions to privacy threats from big data and identity projects, including better regulation, new approaches such as harms-based regulation and privacy risk assessments, and conceiving privacy as a horizontal right. The workshop ended with a vote of thanks from the organizers.
The agenda for the event can be accessed here, and the transcript is available here.
Comparison of General Data Protection Regulation and Data Protection Directive
Download the file here
INTRODUCTION
The GDPR, i.e. the General Data Protection Regulation (REGULATION (EU) 2016/679), was adopted on April 27, 2016. It will come into force after a two-year transition period, on May 25, 2018, and will replace the Data Protection Directive (DPD 95/46/EC). The Regulation intends to empower data subjects in the European Union by giving them control over the processing of their personal data. It is not an enabling legislation. Unlike the previous regime under the DPD, wherein different Member States legislated their own data protection laws, the new Regulation intends uniformity in application, with some room for individual Member States to legislate on procedural mechanisms. While this will ensure a predictable environment for doing business, a number of obligations will have to be undertaken by organizations, which might initially burden them financially and administratively.
2. SUMMARY
The Regulation contains a number of new provisions, modifies several provisions that existed under the DPD, and removes certain requirements of the DPD. Some significant changes are summarized in this section. These changes suggest that the GDPR is a comprehensive law with detailed substantive and procedural provisions. Yet some ambiguities remain with respect to its workability and interpretation, and clarifications will be required.
2.1 Provisions from the DPD that were retained but altered in the GDPR include:
2.1.1 Scope:
The GDPR has an expanded territorial scope and is applicable under two scenarios: 1) when the processor or controller is established in the Union, and 2) when the processor or controller is not established in the Union. The conditions for applicability of the GDPR under the two scenarios are much wider than those provided under the DPD. Also, the criteria under the GDPR are more specific and clearer, making it easier to determine whether the Regulation applies.
2.1.2 Definitions:
Six definitions have remained the same while those of personal data and consent have been expanded.
2.1.3 Consent:
GDPR mentions "unambiguous" consent and spells out in detail what constitutes a valid consent. Demonstration of valid consent is an important obligation of the controller. Further, the GDPR also explains situations in which child's consent will be valid. Such provisions are absent in DPD.
2.1.4 Special categories of data:
Two new categories, biometric and genetic data, have been added under the GDPR.
2.1.5 Rights:
The GDPR strengthens certain rights granted under the DPD. These include:
a. Right to restrict processing: Under the DPD, the data subject can block the processing of data on the grounds of inaccuracy or incompleteness of the data. The GDPR, on the other hand, is more elaborate and defined in this respect: many more grounds are listed, together with the consequences of enforcing this right and the obligations it places on the controller.
b. Right to erasure: This is known as the "right to be forgotten". The DPD merely mentions that the data subject has the right to request erasure of data on grounds of inaccuracy or incompleteness of the data, or in case of unlawful processing. The GDPR has strengthened this right by laying out 7 conditions for enforcing it, including 5 grounds on which a request for erasure shall not be processed. This means that the "right to erasure" is not an absolute right. The GDPR also provides that if the data has been made public, the controller is under an obligation to inform other controllers processing the data about the request.
c. Right to rectification: This right is similar under GDPR and DPD.
d. Right to access: The GDPR has broadened the amount of information the data subject can obtain regarding his or her own data. For example, under the DPD the data subject could know about the purpose of processing, the categories of processing, the recipients or categories of recipients to whom data are disclosed, and the extent of automated decision-making involved. Under the GDPR, the data subject can also know about the retention period, the existence of certain rights, the source of the data and the consequences of processing. It specifically states the controller's obligations in this regard.
e. Automated individual decision-making, including profiling: This is an interesting provision that applies solely to automated decision-making. This includes profiling, which is a process by which personal data is evaluated solely by automated means for the purpose of analyzing a person's personal aspects, such as performance at work, health or location. The intent is that data subjects should have the right to obtain human intervention in such processing. This upholds the philosophy of data safeguards, as the subject gets an opportunity to express his or her point of view, obtain an explanation and challenge the decision. Under the GDPR, such decision-making excludes data concerning a child.
2.1.6 Code of conduct:
A voluntary self-regulating mechanism has been provided under both GDPR and DPD.
2.1.7 Supervisory Authority:
As compared to the DPD, the GDPR lays down detailed and elaborate provisions on Supervisory Authority.
2.1.8 Compensation and Liability:
Although the compensation and liability provisions under the GDPR and DPD are similar, the GDPR specifically frames this as a right with a wider scope. While the Directive imposes liability on the controller only, under the GDPR compensation can be claimed from both the processor and the controller.
2.1.9 Effective judicial remedies:
Provisions in this area are also quite similar between the DPD and the GDPR. The difference is that the GDPR specifically frames this as a "right" while the Directive does not; the use of such wording is bound to bring legal clarity. It is interesting to note that in the DPD, recourse to a remedy is mentioned in the Recitals, and it is the national law of individual Member States that regulates enforceability. The GDPR, on the other hand, mentions this in its Articles, together with the jurisdiction of courts and the exceptions to this right.
2.1.10 Right to lodge complaint with supervisory authority:
The right conferred on the data subject to seek a remedy against unlawful processing has been strengthened under the GDPR. Again, as mentioned above, the GDPR specifically words this as a "right" while the DPD does not.
2.2 New provisions added to the GDPR include:
2.2.1 Data Transfer to third countries:
Provisions under Chapter V of the GDPR regulate data transfers from the EU to third countries and international organizations, as well as onward data transfers. The DPD only provides for data transfers to third countries, without reference to international organizations.
A mechanism for such transfers, known as adequacy decisions, remains the same under both laws. However, in situations where the Commission has not taken an adequacy decision, alternate and elaborate provisions on "Effective Safeguards" and "Binding Corporate Rules" are laid out in the GDPR. Certain other situations have been envisaged under both the GDPR and the DPD for data transfers in the absence of an adequacy decision. These are more or less similar, with only a few modifications.
Significantly, the GDPR brings clarity with respect to the enforceability of judgments and orders of authorities outside the EU concerning such data transfers. Additionally, it provides for international cooperation for the protection of personal data. Neither is mentioned in the DPD.
2.2.2 Certification mechanism:
Just like the code of conduct, this is a voluntary mechanism, which can aid in demonstrating compliance with the Regulation.
2.2.3 Records of processing activities:
This is a mandatory "compliance demonstration" mechanism under GDPR, which is not mentioned under DPD. Organizations are likely to face initial administrative and financial burdens in order to maintain records of processing activities.
2.2.4 Obligations of processor:
The DPD fixes liability on controllers but leaves out processors; the GDPR includes both. Consequently, the GDPR specifies the obligations of the processor, the kinds of processors a controller can use, and what will govern the processing.
2.2.5 Data Protection officer:
This finds no mention in the DPD. Under the GDPR, a data protection officer must mandatorily be appointed where the core business activity of the organization involves processing that requires regular and systematic monitoring of data subjects on a large scale, large-scale processing of special categories of data or data relating to criminal offences, or processing carried out by a public authority or public body.
2.2.6 Data protection impact assessment:
This is a privacy impact assessment for ensuring and demonstrating compliance with the Regulation. Such an assessment can identify and minimize risks. The GDPR mandates that an assessment be carried out when processing is likely to result in high risk. The relevant Article mentions when an assessment must be carried out, the type of information it must contain, and a requirement of prior consultation with the supervisory authority before processing if the assessment indicates high risk.
2.2.7 Data Breach:
Under this provision, the controller is responsible for two things: 1) reporting a personal data breach to the supervisory authority no later than 72 hours after becoming aware of it, with any delay in notifying the authority accompanied by reasons for the delay; and 2) communicating the breach to the data subject in case the breach is likely to cause a high risk to the rights and freedoms of the person. As far as the processor is concerned, in the event of a data breach the processor must notify the controller. This provision is likely to push major changes in the workings of various organizations: a number of detection and reporting mechanisms will have to be implemented, and these mechanisms will have to be extremely efficient given the time limit.
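A minimal sketch of the 72-hour clock this provision imposes, assuming the clock starts when the controller becomes aware of the breach; the function names are our own:

```python
# Sketch of the 72-hour notification clock, assuming it starts when
# the controller becomes aware of the breach. Function names are ours.
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)

def breach_deadline(became_aware: datetime) -> datetime:
    return became_aware + NOTIFICATION_WINDOW

def needs_delay_justification(became_aware: datetime,
                              notified: datetime) -> bool:
    # Late notifications must be accompanied by reasons for the delay.
    return notified > breach_deadline(became_aware)

aware = datetime(2018, 5, 25, 9, 0)
print(breach_deadline(aware))  # 2018-05-28 09:00:00
print(needs_delay_justification(aware, datetime(2018, 5, 29, 9, 0)))  # True
```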
2.2.8 Data Protection by design and default:
This entails a general obligation upon the controller to incorporate effective data protection in internal policies and implementation measures.
2.2.9 Rights:
Under the GDPR, a new right called the "Right to data portability" has been conferred upon data subjects. This right empowers the data subject to receive personal data from one controller and transfer it to another.
2.2.10 New Definitions:
Out of 26 definitions, 18 are new additions. "Pseudonymisation" is one such new concept that can aid data privacy. This data processing technique requires processing in such a way that the personal data can no longer be attributed to a specific data subject without the use of additional information. This additional information is to be stored separately, in a way that it cannot be attributed to an identified or identifiable natural person.
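A minimal sketch of pseudonymisation under this definition: direct identifiers are replaced with random tokens, and the token-to-identity mapping (the "additional information") is held in a separate store. The vault class and record fields are illustrative inventions, not a prescribed mechanism.

```python
# Minimal pseudonymisation sketch: direct identifiers are replaced
# with random tokens, and the token-to-identity mapping (the
# "additional information") is held in a separate store. The vault
# class and record fields are illustrative inventions.
import secrets

class PseudonymisationVault:
    """Holds the re-identification key apart from the working data."""
    def __init__(self):
        self._mapping = {}  # token -> original identifier

    def tokenise(self, identifier: str) -> str:
        token = secrets.token_hex(8)
        self._mapping[token] = identifier
        return token

    def reidentify(self, token: str) -> str:
        return self._mapping[token]

vault = PseudonymisationVault()  # to be kept under separate access controls
record = {"name": "Asha Rao", "diagnosis": "flu"}
processed = {"subject": vault.tokenise(record["name"]),
             "diagnosis": record["diagnosis"]}
print(processed)                               # no direct identifier present
print(vault.reidentify(processed["subject"]))  # only possible with the vault
```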
2.2.11 Administrative fines:
Perhaps much of the concern about the GDPR is due to its provisions on high fines for non-compliance with certain provisions. Organizations simply cannot afford to ignore them: non-compliance can lead to the imposition of very heavy fines of up to EUR 20,000,000 or 4% of total worldwide annual turnover, whichever is higher.
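The cap works as a simple maximum of the two figures, as the arithmetic below shows (a sketch; the turnover figures are invented examples):

```python
# The upper fine tier is the higher of EUR 20 million and 4% of total
# worldwide annual turnover. Turnover figures below are invented.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * worldwide_annual_turnover_eur)

print(max_fine_eur(100_000_000))    # 20000000 -> the EUR 20m floor applies
print(max_fine_eur(2_000_000_000))  # 80000000.0 -> 4% exceeds EUR 20m
```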
2.3 Deleted provisions under DPD include :
2.3.1 Working Party:
The Article 29 Working Party under the DPD has been replaced by the European Data Protection Board provided for by the GDPR. The purpose of the Board is to ensure consistent application of the Regulation.
2.3.2 Notification Requirement:
The general obligation to notify supervisory authorities of processing has been removed. It was observed that this requirement imposed unnecessary financial and administrative burdens on organizations and was not successful in achieving its real purpose, the protection of personal data. Instead, the GDPR now focuses on procedures and mechanisms like the privacy impact assessment to ensure compliance.
3. BRIEF OVERVIEW
The GDPR is the new uniform law that will replace the older regime. A brief overview is given below:
Topic |
GDPR (General Data Protection Regulation) |
DPD (Data Protection Directive) |
Name |
REGULATION (EU) 2016/679 |
DPD 95/46/EC |
Enforcement |
Adopted on 27 April 2016 To be enforced on 25 May 2018 |
Adopted on 24 October 1995 |
Effect of legislation |
It is a Regulation. Is directly applicable to all EU member states without requiring a separate national legislation. |
It is an enabling legislation. Countries have to pass their own separate legislations. |
Objective |
To protect "natural persons" with regard to processing of personal data and on free movement of such data. It repeals DPD 95/46/EC. |
To protect "individuals" with regard to processing of personal data and on free movement of such data. |
Number of Chapters |
XI |
VII |
Number of Articles |
99 |
34 |
Number of Recitals |
173 |
72 |
Applicability |
To processors and controllers |
Same |
4. COMPARATIVE ANALYSIS OF GDPR AND DPD
This section offers a comparative analysis through a set of tables and text analysing and comparing the provisions of General Data Protection Regulation (GDPR) with those of the Data Protection Direction (DPD). Spaces left blank in the tables imply lack of similar provisions under the respective data regime.
4.1 Territorial Scope
The GDPR has an expanded territorial scope. The application of the Regulation is independent of the place where the processing of personal data takes place, under certain conditions. The focus is the data subject and not the location. The DPD made the applicability of national law a criterion for determining the applicability of the Directive. Under the GDPR, the following conditions need to be satisfied for the Regulation to apply.
Sub-topics in the section |
GDPR |
DPD |
Given in Article |
3 |
4 |
When processor or controller is established in the Union, the Regulation/Directive will apply if: (DPD is silent on location of processors) |
1. Processing is of personal data 2. Processing is in "context of activities" of the establishment 3. Processing may or may not take place in the Union |
Processing is of personal data. |
When processor or controller is not established in the Union, the Regulation/Directive will apply if: (DPD is silent on location of processors) |
1. Data subjects are in the Union; and 2. Processing activity is related to: I. Offering of goods or services; or II. Monitoring their behavior within Union 3. Will apply when Member State law is applicable to that place by the virtue of public international law |
1. Like GDPR the DPD mentions that national law should be applicable to that place by virtue of public international law; Or 2. If the equipment for processing is situated on Member state territory unless it is used only for purpose of transit. |
4.2 Material Scope
The Recitals to the GDPR explain that data protection is not an absolute right. The principle of proportionality has been adopted to respect other fundamental rights.
Sub-topics in the section |
GDPR |
DPD |
Given in Article |
2 |
3 |
Applies to |
Processing of personal data; processing by automated means, wholly or partially; where processing is not by automated means, the personal data should form, or be intended to form, part of a filing system |
Same |
Does not apply to |
Processing of personal data: 1. For activities which lie outside the scope of Union law 2. By Member States under Chapter 2, Title V of the TEU 3. By a natural person in the course of a purely personal or household activity 4. By competent authorities in relation to criminal offences, penalties and threats to public security 5. Under Regulation (EC) No 45/2001 (this needs to be adapted for consistency with the GDPR) 6. The Regulation is without prejudice to the E-commerce Directive 2000/31/EC, especially the liability rules of intermediary service providers |
The provisions in DPD are similar to GDPR. In addition to Title V, the DPD did not apply to Title VI of TEU. DPD doesn't mention Regulation (EC) No 45/2001 or the E commerce Directive 2000/31/EC. |
4.3 Definitions
GDPR incorporates 26 definitions as compared to 8 definitions under DPD. There are 18 new definitions in GDPR. Some definitions have been expanded.
Sub-topics in the section |
GDPR |
DPD |
Given in Article |
4 |
2 |
New Definitions under GDPR |
1. Restriction of processing 2. Profiling 3. Pseudonymisation 4. Personal data breach 5. Genetic data 6. Biometric data 7. Data concerning health 8. Main establishment 9. Representative 10. Enterprise 11. Group of undertakings 12. Binding corporate rules 13. Supervisory authority 14. Supervisory authority concerned 15. Cross border processing 16. Relevant and reasoned objection 17. Information society service 18. International organizations |
|
2 definitions that have been expanded under GDPR |
1. Personal data 2. Consent |
|
6 Definitions which have remained same in GDPR and DPD |
1. Processing of personal data 2. Personal data filing system 3. Controller 4. Processor 5. Third party 6. Recipient |
4.3.1 Expanded definition of personal data
Both the DPD and the GDPR apply to 'personal data'. The GDPR gives an expanded definition of 'personal data'. Recital 30 gives examples of online identifiers, such as IP addresses.
Sub-topics in the section |
GDPR |
DPD |
Given in Article |
4(1) |
2(a) |
New term added in the definition |
A new term " online identifier" has been added. Example of online identifier is given under Recital 30. An IP address is one such example. |
4.3.2 Expanded definition of consent
Valid consent must be given by the data subject. A definition of valid consent has been added under the GDPR. Recital 32 further explains that consent can be given by "means of a written statement including electronic means or an oral statement". For example, ticking a box on a website signifies acceptance of processing, while "pre-ticked boxes, silence or inactivity" do not constitute consent.
Sub-topics in the section |
GDPR |
DPD |
Given in Article |
4(11) |
2(h) |
Term added in GDPR |
Consent must be unambiguous, freely given, specific and informed. |
The word "unambiguous" is not contained in DPD. |
Means of signifying assent to processing own data |
Assent can be given by a statement or by clear affirmative action signifying assent to processing. |
DPD merely mentions that freely given, specific and informed consent signifies assent. |
4.4 Conditions for consent
GDPR lays down detailed provisions for valid consent. Such provisions are not given in DPD.
Sub-topics in the section |
GDPR |
DPD |
Article |
7 |
|
Obligation of controller |
Must demonstrate consent has been given |
|
Presentation of written declaration of consent |
It should be in a clearly distinguishable, intelligible and easily accessible form. Language should be clear and plain. |
|
If declaration or any part of it infringes on Regulation |
Declaration will be non-binding. |
|
Right of data subject |
To withdraw consent at any time. |
|
If consent is withdrawn, it will not make processing done earlier unlawful. |
||
For assessing whether consent is freely given |
Must consider whether performance of contract or provision of service is made conditional on consent to processing of data not necessary for performance of contract. |
4.5 Conditions applicable to child's consent in relation to information society services
This article prescribes an age limit for making processing lawful when information society services (direct online services) are offered directly to a child.
Sub Topics in the Section |
GDPR |
DPD |
Given in Article |
8 |
|
Conditions for valid consent in this case |
If the child is at least 16 years old, his or her consent is valid. If the child is below 16 years, consent must be obtained from the holder of parental responsibility over the child. |
|
Age relaxation can be given when |
A Member State provides a law lowering the age. The age cannot be lowered below 13 years. |
|
Controller's responsibility |
Verify who has given the consent |
|
Exceptions |
This law will not affect: the general contract law of Member States, such as the rules on the validity or effect of a contract in relation to a child |
4.6 Processing of special categories of personal data
Like the DPD, the GDPR spells out the data that is considered sensitive and the conditions under which this data can be processed. Two new categories of special data, "genetic data" and "biometric data", have been added to the list in the GDPR.
Sub Topics in the Section |
GDPR |
DPD |
Article |
9 |
8 |
Categories of data considered sensitive |
Racial or ethnic origin |
Same |
Political opinions |
Same |
|
Religious or philosophical beliefs |
Same |
|
Trade union membership |
Same |
|
Health or sex life or sexual orientation |
Same |
|
Genetic data or Biometric data uniquely identifying natural person |
||
Circumstances in which processing of personal data may take place |
If there is explicit consent of data subject provided Member State laws do not prohibit such processing |
|
Necessary for carrying out specific rights of controller or data subject |
Under DPD these rights can be for employment. The GDPR adds social security and social protection to this list. These rights are to be authorized by Member state or Union. The GDPR adds "Collective agreements" to this. |
|
In the vital interest of data subject who cannot give consent due to physical or legal causes. |
Same |
|
In the vital interest of a Natural person physically or legally incapable of giving consent |
Same |
|
For legitimate activities carried on by not-for profit-bodies for political, philosophical or trade union aims subject to certain conditions. |
Same |
|
When personal data is made public by data subject |
Same |
|
For establishment, exercise of defense of legal claims or for courts |
Same |
|
For substantial public interest in accordance with Member State or Union law |
||
Is necessary for: preventive or occupational medicine; assessing the working capacity of an employee; medical diagnosis; health or social care services; a contract with a health professional |
||
Is necessary in Public interest in the area of public health |
||
For public interest, scientific or historical research or statistical purpose |
||
Data for preventive or occupational medicine, medical diagnosis etc. can be processed when: |
Data is processed by or under the responsibility of a professional under an obligation of professional secrecy as stated in law |
Here the processing is done by health professional under obligation of professional secrecy |
4.7 Principles relating to processing of personal data
The principles set out in the GDPR are similar to the ones under the DPD, with some changes. The accountability of the controller has been specifically set out under the GDPR.
Sub-topics in this section |
GDPR |
DPD |
Given in Article |
5 |
6 |
Lawfulness, fairness, transparency |
Processing must be Lawful, fair and transparent |
Does not mention transparent |
Purpose limitation |
Purposes must be specified, explicit and legitimate. |
Same |
Processing for achieving public interest, scientific or historical research or statistical purpose is not to be considered incompatible with initial purpose. |
Same |
|
Data minimization |
Processing is adequate, relevant and limited to what is necessary |
Same |
Accuracy |
Data is accurate, up to date, erased or rectified without delay |
Same |
Storage limitation |
Data is to be stored in a way that data subject can be identified for no longer than is necessary for purpose of processing |
Same |
Data can be stored for longer periods when it is processed solely in public interest, scientific or historical research or statistical purpose |
Same However, public interest is not mentioned. |
|
There must be appropriate technical and organizational measures to safeguard rights and freedoms |
Same Additionally, it specifically states that Member States must lay down appropriate safeguards |
|
Integrity and confidentiality |
Manner of processing must: Ensure security of personal data, Protection against unlawful processing and accidental loss, destruction or damage |
Not mentioned |
Accountability |
Controller is responsible for and must demonstrate compliance with all of the above. |
DPD states it is for the controller to ensure compliance with this Article. Unlike GDPR, DPD doesn't specifically state the responsibility of controller for demonstrating compliance. |
4.8 Lawfulness of processing
The conditions for "lawfulness of processing" under DPD have been retained in the GDPR with certain modifications allowing flexibility for member states to introduce specific provisions in public interest or under a legal obligation. It should be noted that protection given to child's data and rights and freedoms of data subject should not be prejudiced. Additionally, a non-exhaustive list has been laid down in the GDPR for determining if processing is permissible in situations where the new purpose of processing is different from original purpose.
Sub Topics in the Section |
GDPR |
DPD |
Given in Article |
6 |
7 |
Processing is lawful when : |
If at least one of the principles applies: Data subject has given consent to processing for specific purpose(s). |
Same However it mentions "unambiguous" consent. |
Processing is necessary for performance of contract to which data subject is party or at request of data subject before entering into a contract |
Same |
|
Processing is necessary for controller's compliance with legal obligation. |
Same |
|
Is necessary for legitimate interests pursued by controller or by third party subject to exceptions (should not override rights and freedoms of data subject and protections given to child's data.) |
Same |
|
It is necessary for performance of task carried out in public interest or for exercise of official authority vested in controller |
Same It additionally mentions third party: "…exercise of official authority vested in controller or in a third party to whom data are disclosed" |
|
For protections of vital interest of data subject or another natural person |
Same Does not mention natural person. |
|
Member States may introduce specific provisions when: |
When processing is necessary for compliance with a legal obligation or to protect public interest |
|
Basis for processing for shall be laid down by: Union law or Member State law |
||
If processing is done for purpose other than for which data is collected and is without data subject's consent or is not collected under law: |
||
To determine if processing for another purpose is compatible with the original purpose |
Controller shall take into account following factors: |
|
Link between purposes for which data was collected and the other purpose |
||
Context in which personal data have been collected |
||
Nature of personal data |
||
Possible consequences of other purpose |
||
Existence of appropriate safeguards |
4.9 Processing which does not require identification:
This article lays down the conditions under which the controller is exempted from gathering additional data in order to identify a data subject for the purpose of complying with the Regulation. If the controller is able to demonstrate that identification is not possible, the data subject is to be informed, where possible.
Sub Topics in the Section |
GDPR |
DPD |
Given in Article |
11 |
|
Conditions under which the controller is not obliged to maintain process or acquire additional information to identify data subject |
If the purpose of processing does not require identification of the data subject by the controller |
|
Consequence of not maintaining the data |
Articles 15 to 20 shall not apply, provided the controller is able to demonstrate its inability to identify the data subject |
|
Exception to above consequence will apply when : |
Data subject provides additional information enabling identification |
4.10 Rights of the data subject
The General Data Protection Regulation (GDPR) confers 8 rights upon the data subject. These rights are to be honored by the controller:
1. Right to be informed
2. Right of access
3. Right to rectification
4. Right to erasure
5. Right to restrict processing
6. Right to data portability
7. Right to object
8. Rights in relation to automated decision making and profiling
4.10.1 Right to be informed
The controller must provide information to the data subject in cases where personal data has not been obtained from the data subject. A number of exemptions have been listed. Additionally, GDPR lays down the time period within which the information has to be provided.
Sub Topics in the Section |
GDPR |
DPD |
Given in Article |
14 |
10 |
Type of information to be provided |
Identity and contact details of the controller or controller's representative |
Same |
Contact details of the data protection officer |
||
Purpose and legal basis for processing |
Purpose of processing |
|
Recipients or categories of recipients of personal data |
Same |
|
Intention to transfer data to third country or international organization and Information regarding adequacy decision or suitable safeguards or Binding Corporate Rules or derogations. This includes means to obtain a copy of these as well as information on place of availability. |
||
Additional information to be provided by controller to ensure fair and transparent processing |
Storage period of personal data and criteria for determining the period |
|
Legitimate interests pursued by controller or third party |
||
Existence of data subject's rights with regard to access or rectification or erasure of personal data, automated decision making |
||
Where applicable, existence of right to withdraw consent |
||
Time period within which information is to be provided |
Information to be given within a reasonable period, latest within one month. |
|
To be provided latest at the time of first communication to data subject, if personal data are to be used for communication with data subject |
||
In case of intended disclosure to another recipient , at the latest when personal data are first disclosed. |
||
If processing is intended for a new purpose other than original purpose, information to be provided prior to processing on new purpose. |
||
Situations in which exceptions are applicable |
Data subject already has information |
Same |
Provision of information involves disproportionate effort or is impossible or renders impossible or seriously impairs achievement of objective of processing. This is particularly with respect to processing for archiving purposes in public interest, scientific or historical research or statistical purpose. However controller must take measures to protect data subject's rights and freedom and legitimate interests including make information public. |
Provision involves impossible or disproportionate effort, in particular where processing is for historical or scientific research. However, appropriate safeguards must be provided by Member States. |
|
Obtaining or disclosure is mandatory under Union or member law and it provides protection to data subject's legitimate interests |
Where law expressly lays down recording or disclosure provided appropriate safeguards are provided by Member States. This is particularly applicable to processing for scientific or historical research. |
|
Confidentiality of data mandated by professional secrecy under Union or Member State law |
4.10.2 Right to access
Both the Data Protection Directive (DPD) and the General Data Protection Regulation (GDPR) confer upon the data subject the right to access information regarding their personal data.
The CJEU in YS v. Minister voor Immigratie, Integratie en Asiel stated that it is the data subject's right "to be aware of and verify the lawfulness of the processing".
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 15 | 12 |
| Data subject has the right to know about: | Purpose of processing | Same |
| | Categories of the data being processed | Same |
| | Recipients or categories of recipients to whom data are disclosed | Same |
| | Retention period of the data and criteria for determining it | |
| | Existence of right to request erasure, rectification or restriction of processing | |
| | Right to lodge complaint with supervisory authority | |
| | Source of the data | |
| | Any significant and envisaged consequences of processing for the data subject | |
| | Existence of automated decision making and logic involved | Same |
| In case of data transfer to third country | Right to be informed about the safeguards | |
| Controller's obligation | To provide a copy of data undergoing processing; a reasonable fee based on administrative costs can be charged for this | |
4.10.3 Right to rectification
The GDPR and the DPD both give the data subject the right to have their personal data rectified. Under the GDPR, the data subject can also have incomplete data completed by providing a supplementary statement.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 16 | 12(b) |
| Right can be exercised when: | | Processing does not comply with the Directive, i.e. damage is caused due to unlawful processing (Recital 55); or |
| | Data is incomplete | Data is incomplete or inaccurate |
| Obligations of controller | To enforce the right without undue delay | |
| Obligation of controller to give notification when data is disclosed to third party | Given under Art 19: rectification of personal data to be communicated to each recipient of such data | Given under Article 12(c): request must be communicated to third parties |
| | It should not involve an impossible or disproportionate effort | Same |
4.10.4 Right to erasure
This is also referred to as the "right to be forgotten". It empowers the individual to have personal data erased under certain circumstances by requesting the controller to remove the data.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 17 | 12(b) |
| Obligation of the controller | To erase the data without undue delay | |
| Conditions under which the right can be exercised | | When processing does not comply with the Directive, i.e. damage is caused due to unlawful processing (Recital 55), or when data is incomplete or inaccurate |
| | Personal data is no longer necessary for the purpose for which it was collected or processed | |
| | Data subject withdraws consent for processing | |
| | Data subject objects to processing and there are no overriding legitimate grounds for processing | |
| | Data subject objects to processing for direct marketing purposes | |
| | Personal data has been unlawfully processed | |
| | Personal data has to be erased under a legal obligation of Union or Member State law | |
| | Personal data has been collected in the offer of information society services to a child | |
| Conditions of processing under which a request for erasure shall not be granted | For exercising the right of freedom of expression and information | |
| | Processing is done under Union or Member State law in the public interest or in exercise of official authority vested in the controller | |
| | Done for public interest in the area of public health | |
| | For public interest, scientific or historical research or statistical purposes | |
| | For establishment, exercise or defense of legal claims | |
| Controller's obligations when personal data has been made public | Controller to take reasonable steps to inform controllers who are processing the data of the request for erasure; all links to, copies or replications of the personal data to be erased; available technology and cost of implementation to be taken into account | |
| Notification when data is disclosed to third party | Given under obligation of controller under Art 19: request for erasure of personal data to be communicated to each recipient of such data | Given under obligation of controller under 12(c): request must be communicated to third parties |
| | It should not involve an impossible or disproportionate effort | Same |
4.10.5 Right to restrict processing
While the DPD provided for "blocking", the GDPR strengthened this right by specifically conferring the "Right to Restrict Processing" upon the data subject. The Article gives the data subject the right to restrict processing under certain conditions. Recital 67 explains that methods of restriction could include steps like removing published data from a website or temporarily moving the data to another processing system.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 18 | 12(b) |
| About this right | Data subject can restrict processing of data | Data subject is allowed to erase, rectify or block processing of personal data |
| Conditions under which the right can be exercised | When accuracy of personal data is contested | Besides accuracy, the DPD also mentions the "incomplete nature of data" as grounds for exercising this right |
| | When processing is unlawful and the data subject opposes erasure and requests restriction of data use | |
| | When data is no longer needed by the controller but is required by the data subject for establishment, exercise or defense of legal claims | |
| | Data subject objects to processing and the verification by the controller of compelling legitimate grounds for processing is ongoing | |
| Consequences of enforcement of this right | Controller can store data but not process it | |
| | Processing can be done only with the data subject's consent; or | |
| | Processing can be done for establishment, exercise or defense of legal claims; or | |
| | Processing can be done for protecting rights of another natural or legal person; or | |
| | It can be done in the public interest of the Union or a Member State | |
| Obligations of controller under Art 18 | The controller must inform the data subject before the restrictions are lifted | |
| Obligations of controller under Art 19 | Inform each recipient of personal data about the restriction | |
| | This obligation need not be performed if it is impossible or involves disproportionate effort | |
| | Inform data subject about the recipients when requested by the data subject | |
4.10.6 Right to data portability
This right empowers the data subject to receive personal data from one controller and transfer it to another. This gives the data subject more control over his or her own data. The controller cannot hinder this right when the following conditions are met.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 20 | |
| Conditions for data transmission | The data must have been provided to the controller by the data subject himself; and | |
| | Processing is based on consent or on performance of a contract, and is carried out by automated means | |
| | Data transfer must be technically feasible | |
| Format of personal data | It should be in a structured, commonly used and machine-readable format | |
| Time and cost for data transfer | Given in Art 12(3): should be free of charge; information to be provided within one month; further extension by two months permissible under certain circumstances | |
| Circumstances under which this right cannot be exercised | When the exercise of the right prejudices the rights and freedoms of another individual | |
| | When processing is necessarily carried out in the public interest | |
| | When processing is necessarily done in exercise of official authority vested in the controller | |
| | When this right adversely affects the "right to be forgotten" | |
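To make the format requirement concrete, here is a minimal sketch (in Python; the data model, field names and function are invented for illustration and are not prescribed by the Regulation) of how a controller might export a data subject's records in a structured, commonly used, machine-readable format such as JSON:

```python
import json

# Hypothetical records for one data subject; the field names are
# illustrative only, not taken from the Regulation.
subject_records = {
    "name": "A. Example",
    "email": "a.example@example.org",
    "marketing_consent": True,
}

def export_portable(records: dict) -> str:
    """Serialise a data subject's records as JSON, a structured,
    commonly used and machine-readable format (cf. GDPR Art 20)."""
    return json.dumps(records, indent=2)

print(export_portable(subject_records))
```

The Regulation names no specific format, so any comparable structured, machine-readable format (for example CSV or XML) would presumably serve equally well.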
4.10.7 Right to Object
Both the DPD and the GDPR confer upon the data subject the right to object to processing on a number of grounds, and the GDPR strengthens this right. Under the GDPR, there is a visible shift from the data subject to the controller as far as the burden of showing "compelling legitimate grounds" is concerned. Under the DPD, when processing is undertaken in the public interest, in exercise of official authority, or in the legitimate interests of a third party or the controller, the data subject not only has to show the existence of compelling legitimate grounds but also that the objection is justified. The GDPR spares the data subject this exercise and instead places the onus on the controller to demonstrate that "compelling legitimate grounds" exist which override the interests, rights and freedoms of the data subject.
GDPR also provides a new ground for objecting to processing. The data subject can object to processing when it is for scientific or historical research or statistical purpose unless such processing is necessary in public interest.
Under the GDPR the data subject must be informed of this right "clearly and separately" and "at the time of first communication with data subject" when processing is done in public interest/exercise of official authority/legitimate interest of third party or controller or for direct marketing purpose. This right can be exercised by automated means in case of information society service.
The DPD also provides that the data subject must be informed of this right if the controller anticipates processing for direct marketing or disclosure of data to third party. It specifically states that this right is to be offered "free of charge". Additionally, it places responsibility upon the Member States to ensure that data subjects are aware of this right.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 21 | 14 |
| Conditions under which the right can be exercised during processing | When performance of a task is carried out in the public interest or in exercise of official authority vested in the controller (Art 6(1)(e)). Exceptions: if controller demonstrates processing is for compelling legitimate grounds which override the interests of the data subject; or for establishment, exercise or defense of legal claims | Grounds are the same, but the data subject also has to show the existence of compelling legitimate grounds; processing will cease if the objection is justified. Exception: the data subject can object on this ground unless otherwise provided by national legislation |
| | For legitimate interests of controller or third party (Art 6(1)(f)). Exceptions: 1. if controller demonstrates processing is for compelling legitimate grounds that override the interests of the data subject; 2. for establishment, exercise or defense of legal claims | Same as above |
| | When data is processed for scientific or historical research or statistical purposes under Art 89(1). Exception: if processing is necessary for public interest | |
| | When personal data is used for marketing purposes; the data subject can object at any time; no exceptions | Same |
4.10.8 Rights in relation to automated individual decision making including profiling
This Article empowers the data subject to challenge automated decisions under certain conditions. This is to protect individuals from decisions taken without human intervention.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 22 | 15 |
| This right can be exercised when decisions: | Are based only on automated processing, including profiling; and | Same |
| | Produce legal effects or have similarly significant effects on the data subject | Same |
| Conditions under which this right will not be guaranteed | For entering into or performance of a contract | Same |
| | If Member State or Union law authorizes the decision, provided it lays down suitable measures for safeguarding the data subject's rights, freedoms and legitimate interests; or | Same |
| | When the decision is based on the data subject's explicit consent | |
| Controller's obligation | Enforce measures to safeguard rights, freedoms and interests | |
| | Ensure data subject can obtain human intervention, express his point of view and challenge decisions | |
| Automated decision making will not apply when: | "Special categories of personal data" are to be processed; however, if the data subject gives explicit consent or such processing serves a substantial public interest, the restriction can be waived | |
| | The decision concerns a child | |
4.11 Security and Accountability
4.11.1 Data protection by design and default
This is another new concept under GDPR. It is a general obligation on the controller to incorporate effective data protection in internal policies and implementation measures. Measures include: minimization of processing, pseudonymisation, transparency while processing, allowing data subjects to monitor data processing etc. The implementation of organizational and technical measures is essential to demonstrate compliance with Regulation.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Article | 25 | |
| Responsibility of controller when determining means of processing and at the time of processing | Implementation of appropriate technical and organizational measures for data protection | |
| | Ensure that by default only personal data necessary for the purpose of processing is processed | |
| Means of demonstrating compliance with this Article | Approved certification mechanism may be used; data minimization; transparency etc. | |
4.11.2 Security of personal data
Security of processing is dealt with in the GDPR under Article 32. The controller and processor must implement technical and organizational measures to ensure data security. These may include pseudonymisation, encryption, ensuring confidentiality, restoring availability of and access to personal data, and regular testing. Compliance may be demonstrated by adherence to an approved code of conduct or certification mechanism. Further, a natural person acting under the authority of the controller or processor may process data only on instructions from the controller. A minimal sketch of one such measure follows.
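As an illustration of pseudonymisation, one of the measures Article 32 names, the following sketch (in Python; the key, names and helper are invented for this example, not taken from the Regulation) replaces a direct identifier with a keyed hash, so that re-identification requires a secret held separately from the data set:

```python
import hmac
import hashlib

# Hypothetical secret, stored separately from the pseudonymised data set;
# without it, the keyed hash cannot be linked back to the identifier.
PSEUDONYMISATION_KEY = b"example-key-rotate-and-store-apart"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with an HMAC-SHA256 digest.
    The mapping is reproducible only by holders of the key."""
    digest = hmac.new(PSEUDONYMISATION_KEY,
                      identifier.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()

print(pseudonymise("a.example@example.org"))
```

Under the Regulation's definition, such data remains personal data, since it can still be attributed to an individual with the additional information (here, the key).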
4.11.3 Notification of personal data breach
This Article provides the procedure for communicating the personal data breach to supervisory authority. If the breach is not likely to result in risk to rights and freedoms of natural persons, then the controller is not required to notify the supervisory authority.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 33 | |
| Responsibility of controller | Report personal data breach to supervisory authority after becoming aware of it | |
| Time limit for reporting data breach | Must be reported no later than 72 hours after becoming aware of it | |
| In case of delay in reporting | Reasons to be stated | |
| Responsibility of processor | Notify the controller after becoming aware of breach | |
| Description of notification | Describe nature of the personal data breach | |
| | Name and contact details of data protection officer | |
| | Likely consequences of personal data breach | |
| | Measures taken or proposed to be taken by controller to address the breach or mitigate its possible effects | |
| When information cannot be provided at the same time | Provide it in phases without further undue delay | |
| For verification of compliance | Controller has to document any personal data breach; the documentation must contain facts, effects and remedial action taken | |
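The 72-hour window can be made concrete with a small sketch (Python; the variable and function names are illustrative only; the Regulation counts the period from the moment the controller becomes aware of the breach):

```python
from datetime import datetime, timedelta

# Art 33 reporting window, counted from awareness of the breach.
REPORTING_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware_at: datetime) -> datetime:
    """Latest time by which the supervisory authority should be
    notified; later notification must be accompanied by reasons."""
    return became_aware_at + REPORTING_WINDOW

aware = datetime(2018, 5, 25, 9, 0)
print(notification_deadline(aware))  # 2018-05-28 09:00:00
```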
4.11.4 Communication of personal data breach to the data subject
Not only is the supervisory authority to be notified, but data subjects are also to be informed about personal data breaches without undue delay under certain conditions.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 34 | |
| Conditions under which controller is to communicate the breach to data subject | When breach is likely to cause high risk to rights and freedoms of natural persons | |
| Nature of communication | Must be in clear and plain language; must describe the nature of the breach; must contain at least: name and contact details of data protection officer; likely consequences of personal data breach; measures taken or proposed to be taken by controller to address the breach or mitigate its possible effects | |
| Conditions under which communication will not be required | If controller has implemented appropriate technical and organizational measures and these were applied to the affected data, e.g. encryption | |
| | Subsequent measures have been taken by controller to ensure there is no longer a high risk | |
| | If communication involves disproportionate effort; public communication or similar measures can be undertaken under such circumstances | |
| Role of supervisory authority | In case of likelihood of high risk, the authority may require the controller to communicate the breach if the controller has not already done so | |
4.11.5 Data protection impact assessment
This is also known as a Privacy Impact Assessment. While the DPD provides a general obligation to notify processing to the supervisory authorities, the GDPR, taking into account the need for greater protection of personal data, has replaced the notification process with a different set of mechanisms.
To serve the above purpose, the data protection impact assessment (DPIA) has been provided under this Article.
| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 35 | |
| When to carry out assessment | When new technology is used and processing is likely to result in high risk to rights and freedoms of natural persons | |
| | Automated processing, including profiling, involving systematic and extensive evaluation of personal aspects of natural persons, where decisions based on such processing produce legal effects | |
| | Large-scale processing of special categories of data or of personal data relating to criminal convictions and offences | |
| | Large-scale systematic monitoring of publicly accessible areas | |
| Type of information contained in assessment | Description of processing operations and purpose | |
| | Assessment of necessity and proportionality of processing operations | |
| | Assessment of risks to individuals | |
| | Measures to address risks and demonstration of compliance with the Regulation | |

| Sub-topics in the section | GDPR | DPD |
| --- | --- | --- |
| Topic | Prior Consultation | |
| Given in Article | 36 | |
| When should controller consult supervisory authority | Prior to processing, where the DPIA indicates high risk in the absence of risk mitigation measures by the controller | |
Data protection officer
The GDPR mandates that a person with expert knowledge of data protection law and practice be appointed to help the controller or processor comply with data protection laws. A single data protection officer (DPO) may be appointed by a group of undertakings, or where the controller or processor is a public authority or body. The DPO must be accessible from each establishment.
| Sub Topics in the Section | GDPR | DPD |
| --- | --- | --- |
| Article | 37 | |
| Situations in which DPO must be appointed | When processing is carried out by a public authority or body (courts acting in judicial capacity are excluded) | |
| | Core activity involves processing which requires regular and systematic monitoring of data subjects on a large scale; or | |
| | Core activity involves large-scale processing of special categories of data and of data relating to criminal convictions and offences | |
Position of Data Protection Officer
The DPO must directly report to the highest management level of the controller or processor. Data subjects may contact the DPO in case of problems related to processing and exercise of rights.
| Sub Topics in the Section | GDPR | DPD |
| --- | --- | --- |
| Article | 38 | |
| Responsibility of controller and processor | Ensure DPO is involved properly and in a timely manner | |
| | Provide DPO with support, resources and access to personal data and processing operations | |
| | Not dismiss or penalize DPO for performing his tasks | |
| | Ensure independence of working and not give instructions to DPO | |
Tasks of Data Protection Officer
The DPO must be involved in all matters concerning data protection. He is expected to act independently and advise the controllers and processors in order to facilitate the establishment's compliance with the Regulation.
| Sub Topics in the Section | GDPR | DPD |
| --- | --- | --- |
| Article | 39 | |
| Tasks | Inform and advise the controller or processor and employees on data protection laws | |
| | Monitor compliance with data protection laws, including assigning responsibilities, awareness-raising, staff training and audits | |
| | Advise on the data protection impact assessment and monitor its performance | |
| | Cooperate with the supervisory authority | |
| | Act as point of contact for the supervisory authority on processing, prior consultation and consultation on other matters | |
4.11.6 European Data Protection Board
For consistent application of the Regulation, the GDPR envisages a Board that would replace the Working Party on Protection of Individuals With Regard to Processing of Personal Data established under the DPD. This Regulation confers legal personality on the Board.
| Sub Topics in the Section | GDPR | DPD |
| --- | --- | --- |
| Article | 68 | |
| Represented by | Chair | |
| Composition of the Board | Head of one supervisory authority of each Member State and the European Data Protection Supervisor, or their representatives; a joint representative can be appointed where a Member State has more than one supervisory authority | |
| Role of Commission | Right to participate in activities and meetings of the Board without voting rights; Commission to designate a representative for this | |
| Functions of the Board | Ensure consistent application of the Regulation | |
| | Advise the Commission on the level of protection in third countries or international organizations | |
| | Promote cooperation of supervisory authorities | |
| | The Board is to act independently | |
4.11.7 Supervisory Authority
GDPR lays down detailed provisions on supervisory authorities, defining their functions, independence, appointment of members, establishment rules, competence, competence of lead supervisory authority, tasks, powers and activity reports. Such elaborate provisions are absent in DPD.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | Chapter VI, Articles 51-59 | 28 |
4.12 Processor
The Article spells out the obligations of a processor and conditions under which other processors can be involved.
| Sub Topics in the Section | GDPR | DPD |
| --- | --- | --- |
| Article | 28 | |
| What kind of processors can be used by controller | Those which provide sufficient guarantees to implement appropriate technical and organizational measures, and which comply with the Regulation and protect the rights of data subjects | |
| Obligations of processor in case of addition or replacement of processor | Not engage another processor without the controller's authorization; in case of general written authorization, inform the controller | |
| Processing shall be governed by | Contract or legal act under Union or Member State law | |
| Elements of contract | Binding on processor; sets out subject matter and duration of processing, nature of processing, type of personal data, categories of data subjects, and obligations and rights of the controller | |
| Obligations of processor under contract or legal act | Process only under instructions from controller unless permitted under law itself; controller to be informed in the latter case | |
| | Ensure that persons authorized to process have committed themselves to confidentiality | |
| | Undertake all data security measures (mentioned under Art 32) | |
| | Enforce conditions on engaging another processor | |
| | Assist the controller by appropriate technical and organizational measures | |
| | Assist controller in compliance with Art 32 to 36 | |
| | Delete or return all personal data to controller, at the choice of the controller, at the end of processing | |
| | Make information available to controller for demonstrating compliance with obligations; contribute to audits, inspections etc.; inform the controller if it believes that an instruction infringes the Regulation or law | |
| Conditions under which a processor can engage another processor | The same data protection obligations will be applicable to the other processor; if the other processor fails to fulfill its data protection obligations, the initial processor remains fully liable to the controller for such performance | |
4.13 Records of processing activities
The controller or processor must maintain records of processing activities to demonstrate compliance with the Regulation. They are obliged to cooperate with the supervisory authority and make the records available to it upon request. The DPD does not contain similar obligations.
| Sub Topics in the Section | GDPR | DPD |
| --- | --- | --- |
| Article | 30 | |
| Obligation of controller or controller's representative | Maintain a record of processing activities | |
| Information to be contained in the record | Name and contact details of the controller, joint controller or controller's representative, and of the data protection officer | |
| | Purpose of processing | |
| | Categories of data subjects and categories of personal data | |
| | Categories of recipients to whom data has been or will be disclosed | |
| | Transfers of personal data to third party, identification of third party, documentation of suitable safeguards | |
| | Expected time limits for erasure of different categories of data | |
| | Technical and organizational security measures | |
| Obligation of processor | Maintain a record of processing activities carried out on behalf of the controller | |
| Record maintained by processor shall contain: | Name and contact details of the processor or processor's representative, the controller or controller's representative, and the data protection officer | |
| | Categories of processing | |
| | Data transfer to third party, identification of third party, documentation of safeguards | |
| | Technical and organizational security measures | |
| Form in which record is to be maintained | In writing, including in electronic form | |
| Conditions under which exemption will apply | Organizations employing fewer than 250 persons are exempted, provided the processing: is not likely to cause risk to the rights and freedoms of data subjects; is only occasional; and does not include special categories of data | |
4.14 Code of Conduct
This mechanism has been provided under the GDPR to demonstrate compliance with the Regulation. This is important, as the GDPR (under Art 83) provides that adherence to a code of conduct shall be one of the factors taken into account when calculating administrative fines. This is not an obligatory provision.
| Sub Topics in the Section | GDPR | DPD |
| --- | --- | --- |
| Article | 40 | 27 |
| Who will encourage drawing up of code of conduct | Member States, supervisory authorities and the Commission; specific needs of micro, small and medium enterprises to be taken into account | Member States and the Commission; does not mention the rest |
| Who may prepare, amend or extend code of conduct | Associations and other bodies representing categories of controllers or processors | |
| Information contained in the code | Fair and transparent processing | |
| | Legitimate interests of controller | |
| | Collection of personal data | |
| | Pseudonymisation | |
| | Information to public and data subjects | |
| | Exercise of rights of data subject | |
| | Information provided to and protection of children, and manner in which consent of holders of parental responsibility is obtained | |
| | Measures under data protection by design and default, controller responsibilities and security of processing | |
| | Notification of data breach to authorities and communication of the same to data subjects | |
| | Data transfer to third party | |
| | Dispute resolution procedures between controllers and data subjects | |
| | Mechanisms for mandatory monitoring | |
| Mandatory monitoring | A code of conduct containing the above information enables mandatory monitoring of compliance by a body accredited by the supervisory authority (Art 41) | |
4.15 Certification
Like the code of conduct, certification is a voluntary mechanism that demonstrates compliance with the Regulation. The establishment of data protection certification mechanisms and of data protection seals and marks shall be encouraged by Member States, supervisory authorities, the Board and the Commission. As in the case of codes of conduct, the specific needs of micro, small and medium-sized enterprises ought to be taken into account. The DPD does not mention such mechanisms.
| Sub Topics in the Section | GDPR | DPD |
| --- | --- | --- |
| Article | 42 | |
| Who will issue the certificate | Certification bodies or the competent supervisory authority, on the basis of approved criteria | |
| Time period for which certification is issued | Maximum period of three years; can be renewed under the same conditions | |
| Who accredits certification bodies | Competent supervisory authority or national accreditation body | |
| When can accreditation be revoked | When conditions for accreditation are not, or are no longer, met; or where actions taken by the certification body infringe the Regulation | |
| Who can revoke | Competent supervisory authority or national accreditation body | |
4.16 Data Transfer
4.16.1 Transfers of personal data to third countries or international organizations
Chapter V lays down the conditions with which the data controller must comply in order to transfer data for the purpose of processing outside of the EU to third countries or international organizations. The chapter also stipulates conditions that must be complied with for onward transfers from the third country or international organization.
4.16.2 Transfer on the basis of an adequacy decision
Under the GDPR, transfer of data can take place after the Commission decides whether the third country, a territory or specified sector within that third country, or the international organization ensures an adequate level of data protection. This is called an adequacy decision. A list of countries and international organizations which ensure adequate data protection shall be published by the Commission in the Official Journal of the European Union and on its website. Once data transfer conditions are found compliant with the Regulation, no specific authorization from the supervisory authorities is required for the transfer. The Commission decides this by means of an "implementing act" specifying a mechanism for periodic review, its territorial and sectoral application, and the identification of supervisory authorities. Decisions of the Commission taken under Art 25(6) of the DPD remain in force. The DPD also provides parameters for the same.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 45 | 25 |
| Conditions apply when transfers take place to | Third country or international organization | International organization not mentioned |
| Functions of the Commission | Take adequacy decisions | Same |
| | Review the decision periodically, every four years | |
| | Monitor developments on an ongoing basis | |
| | Repeal, amend or suspend the decision | |
| | Inform Member States if a third country doesn't ensure an adequate level of protection; similarly, the Member State has to inform the Commission | |
| Functions of Member State | Inform Commission if third country doesn't ensure adequate level of protection | |
| | Take measures to comply with Commission's decisions | |
| | Prevent data transfer if Commission finds absence of adequate level of protection | |
| Factors, with respect to third country or international organization, to be considered while deciding adequacy of safeguards | Rule of law, human rights, fundamental freedoms, access of public authorities to personal data, data protection rules, rules for onward transfer of personal data to third country or international organization etc. | Circumstances surrounding data transfer operations: nature of data; purpose and duration of processing operation; rule of law, professional rules and security measures in the third country; country of origin and final destination |
| | Functioning of independent supervisory authorities, their powers to enforce compliance with data protection rules and to assist and advise data subjects in exercising their rights | |
| | International commitments entered into; obligations under legally binding conventions | Same |
| When adequate level of protection is no longer ensured | The Commission shall, to the extent necessary, repeal, amend or suspend the decision by means of an implementing act, with no retroactive effect | The Member State will have to suspend data transfer if the Commission finds an absence of adequate level of protection |
| | Commission to enter into consultation with the third country or international organization to remedy the situation | Same |
4.16.3 Transfers subject to appropriate safeguards
This Article provides for the situation where the Commission has taken no adequacy decision (discussed above under "Transfer on the basis of an adequacy decision"). In this case, the controller or processor can transfer data to a third country or international organization subject to certain conditions; some of the safeguards dispense with the need for specific authorization from the supervisory authority, while others require it, as the table below shows.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 46 | |
| When can data transfer take place | When appropriate safeguards are provided by the controller or processor, and on condition that the data subject enjoys enforceable rights and effective legal remedies | |
| Conditions to be fulfilled for providing appropriate safeguards without specific authorization from supervisory authority | Existence of a legally binding and enforceable instrument between public bodies or authorities | |
| | Existence of Binding Corporate Rules | |
| | Adoption of standard data protection clauses adopted by the Commission | |
| | Adoption of standard data protection clauses by a supervisory authority and approved by the Commission | |
| | Approved code of conduct, along with binding and enforceable commitments of the controller or processor in the third country to apply appropriate safeguards and respect data subjects' rights; or an approved certification mechanism along with such commitments | |
| Conditions to be fulfilled for providing appropriate safeguards subject to authorization from competent supervisory authority | Existence of contractual clauses between the controller or processor and the controller, processor or recipient of the personal data (third party) | |
| | Provisions inserted into administrative arrangements between public authorities or bodies, containing enforceable and effective data subject rights | |
| | Consistency mechanism to be applied by the supervisory authority | |
| Unless amended, replaced or repealed, authorization to transfer given under the DPD will remain valid when: | Third country doesn't ensure adequate level of protection but the controller adduces adequate safeguards; or the Commission decides that standard contractual clauses offer sufficient safeguards | |
4.16.4 Binding Corporate Rules
These are agreements that govern transfers between organizations within a corporate group.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 47 | |
| Elements of Binding Corporate Rules | Legally binding | |
| | Apply to and are enforced by every member of the group of undertakings or group of enterprises engaged in joint economic activity, including employees | |
| | Expressly confer enforceable rights on data subjects with regard to the processing of their personal data | |
| What do they specify | Structure and contact details of the group of undertakings | |
| | Data transfers or sets of transfers, including categories of personal data, type of processing, type of data subjects affected and identification of third countries | |
| | Their legally binding nature | |
| | Application of general data protection principles | |
| | Rights of data subjects and means to exercise those rights | |
| | How the information on BCR is provided to data subjects | |
| | Tasks of the data protection officer etc. | |
| | Complaint procedures | |
| | Mechanisms within the group of undertakings or group of enterprises for ensuring verification of compliance with the BCR, e.g. data protection audits; results of verification to be available to the person in charge of monitoring compliance and to the board of the controlling undertaking or group of enterprises, and upon request to the competent supervisory authority | |
| | Mechanisms for reporting and recording changes to the rules and reporting those changes to the supervisory authority | |
| | Cooperation mechanism with the supervisory authority | |
| | Data protection training for personnel having access to personal data | |
| Role of Commission | May specify the format and procedures for exchange of information between controllers, processors and supervisory authorities for BCR | |
4.16.5 Transfers or disclosures not authorized by Union law
This Article lays down enforceability of decisions given by judicial and administrative authorities in third countries with regard to transfer or disclosure of personal data.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 48 | |
| Article concerns | Transfer of personal data under judgments of courts or tribunals and decisions of administrative authorities in third countries | |
| When can data be transferred or disclosed | Where based on an international agreement between the requesting third country and the Member State or Union, e.g. a mutual legal assistance treaty | |
4.16.6 Derogations for specific situations
This Article comes into play in the absence of adequacy decision or appropriate safeguards or of binding corporate rules. Conditions for data transfer to a third country or international organization under such situations have been laid down.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 49 | 26 |
| Conditions under which data transfer can take place | On obtaining explicit consent of the data subject after being informed of possible risks | On obtaining unambiguous consent of the data subject to the proposed transfer |
| | Transfer is necessary for conclusion or performance of a contract; the contract should be in the interest of the data subject and be between the controller and another natural or legal person | Contractual conditions are the same; DPD also includes implementation of pre-contractual measures taken upon the data subject's request |
| | Transfer is necessary in public interest | Same |
| | Is necessary for establishment, exercise or defense of legal claims | Same |
| | To protect vital interests of the data subject or of other persons where the data subject is physically or legally incapable of giving consent | Includes vital interest of the data subject but doesn't include "other persons"; the condition regarding consent is also not included |
| | Transfer made from a register which, under Union or Member State law, is intended to provide information to the public and is open to consultation by the public or by persons demonstrating legitimate interest | Same |
| Conditions for transfer when even the above specific situations are not applicable | Transfer is not repetitive | |
| | Concerns a limited number of data subjects | |
| | Necessary for compelling legitimate interests pursued by the controller | |
| | Legitimate interests are not overridden by interests or rights and freedoms of the data subject | |
| | Controller has provided suitable safeguards after assessing all circumstances surrounding the data transfer | |
| | Controller to inform supervisory authority about the transfer | |
| | Controller to inform data subject of the transfer and the compelling legitimate interests pursued | |
| | | Member State may authorize a transfer of personal data to a third country where the controller adduces adequate safeguards for protection of privacy and fundamental rights and freedoms of individuals |
4.17 International cooperation for protection of personal data
This Article lays down certain steps to be taken by the Commission and supervisory authorities for the protection of personal data.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 50 | |
| Steps will include | Development of international cooperation mechanisms to facilitate enforcement of legislation for protection of personal data | |
| | Provision of international mutual assistance in enforcement of legislation for protection of personal data | |
| | Engagement of relevant stakeholders to further international cooperation | |
| | Promotion of the exchange and documentation of personal data protection legislation and practice | |
4.18 Remedies, Liability and Compensation
4.18.1 Right to lodge complaint with a supervisory authority
This article gives the data subject the right to seek remedy against unlawful processing of data. GDPR strengthens this right as compared to the one provided under DPD.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 77 | 28(4) |
| Right given | Right to lodge complaint | Under GDPR the data subject has been conferred the "right" specifically; this is not so in DPD, which merely obliges the supervisory authority to hear claims concerning rights and freedoms |
| Who can lodge complaint | Data subject | Any person or association representing that person |
| Complaint to be lodged before | Supervisory authority in the Member State of habitual residence, place of work or place of infringement | Supervisory authority |
| When can the complaint be lodged | When processing of personal data relating to the data subject allegedly infringes the Regulation | When rights and freedoms are to be protected while processing; or when national legislative measures restricting the scope of the Directive's provisions have been adopted and processing is alleged to be unlawful |
| Accountability | Complainant to be informed by the supervisory authority of the progress and outcome of the complaint and of the judicial remedy available | Complainant to be informed of the outcome of the claim or that a check on lawfulness has taken place |
4.18.2 Right to an effective judicial remedy against supervisory authority
The concerned Article seeks to make supervisory authorities accountable by allowing proceedings to be brought against the authority before the courts. The GDPR gives a specific right to the individual; the DPD, under Article 28(3), merely provides for appeal against decisions of the supervisory authority in the courts.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 78(1) | |
| Who has the right | Every natural or legal person | |
| When can the right be exercised | Against a legally binding decision of a supervisory authority concerning the complainant | |

| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 78(2) | |
| Who has the right | Data subject | |
| When can the right be exercised | When the competent supervisory authority doesn't handle the complaint, or doesn't inform the data subject about the progress or outcome of the complaint within 3 months | |
The jurisdiction of court will extend to the territory of the Member State in which the supervisory authority is established (GDPR Art 78(3)). The supervisory authority is required to forward proceedings to the court if the decision was preceded by the Board's decision in the consistency mechanism. (GDPR 78(4))
4.18.3 Right to effective judicial remedy against a controller or processor
The data subject has been conferred the right to approach the courts under certain circumstances. The GDPR confers the specific right, while the DPD provides for a judicial remedy without using the word "right".
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in | Art 79 | Recital 55 |
| Right can be exercised when: | 1. Data has been processed; and 2. processing results in infringement of rights; and 3. the infringement is due to non-compliance with the Regulation | Similar provisions under DPD: when the controller fails to respect the rights of data subjects, national legislation provides a judicial remedy; processors are not mentioned |
| Jurisdiction of the courts | Proceedings can be brought before the courts of the Member State wherein: 1. the controller or processor has an establishment; or 2. the data subject has habitual residence | |
| Right cannot be exercised when | The controller or processor is a public authority of a Member State exercising its public powers | |
4.18.4 Right to compensation and liability
The GDPR enables a person who has suffered damage to claim compensation as a specific right, while the DPD merely entitles the person to receive compensation. Although the liability provisions under the GDPR and the DPD are similar, liability under the GDPR is stricter: the DPD imposes liability on controllers only, whereas the GDPR extends it to processors as well.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 82 | 23 |
| Who can claim compensation | Any person who has suffered material or non-material damage | Similar provisions, but DPD doesn't specifically mention "material or non-material damage" |
| Right arises due to | Infringement of the Regulation | Same |
| Right granted | Right to receive compensation | Same |
| Compensation has to be given by | Controller or processor | Compensation can be claimed only from controller |
| Liability of controller arises when | Damage is caused by processing due to infringement of the Regulation | Same |
| Liability of processor arises when | 1. Processor has not complied with directions given to it under the Regulation; or 2. processor has acted outside or contrary to lawful instructions of the controller | |
| Exemptions to controller or processor from liability | If there is proof that they are not responsible | Exemption for controller is the same |
| Liability when more than one controller or processor cause damage | Each controller or processor to be held liable for the entire damage | |
4.19 General conditions for imposing administrative fines
The GDPR makes provision for the imposition of administrative fines by supervisory authorities in case of infringement of the Regulation. Such fines should be effective, proportionate and dissuasive; in case of a minor infringement, a "reprimand may be issued instead of a fine"[1]. Means of enforcing the accountability of the supervisory authority have been provided. If Member State law does not provide for administrative fines, the fine can be initiated by the supervisory authority and imposed by the courts; Member States have to adopt laws complying with this Article by 25 May 2018.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 83 | |
| Who can impose fines | Supervisory authority | |
| Fines to be issued against | Controllers or processors | |
| Parameters to be taken into account while determining administrative fines | Nature, gravity and duration of infringement; nature, scope or purpose of processing; number of data subjects affected; and level of damage suffered | |
| | Intentional or negligent character of infringement | |
| | Action taken by controller or processor to mitigate damage suffered by data subjects | |
| | Degree of responsibility of controller or processor, taking into account the technical and organizational measures implemented | |
| | Relevant previous infringements | |
| | Degree of cooperation with supervisory authority | |
| | Categories of personal data affected | |
| | Manner in which the supervisory authority came to know of the infringement, and extent to which the controller or processor notified the infringement | |
| | Whether corrective orders of the supervisory authority under Art 58(2) have been issued before and complied with | |
| | Adherence to approved code of conduct under Art 40 or approved certification mechanisms under Art 42 | |
| | Other aggravating or mitigating factors, like financial benefits gained or losses avoided | |
| If infringement is intentional or due to negligence of processor or controller | Total amount of administrative fine not to exceed the amount specified for the gravest infringement | |
| Means of checking the power of supervisory authority to impose fines | Procedural safeguards under Member State or Union law, including judicial remedy and due process | |
Article 83 splits the amount of administrative fines according to the obligations infringed by controllers, processors or undertakings. The first set of infringements may lead to imposition of fines of up to 10,000,000 EUR or 2% of total worldwide turnover, whichever is higher.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Article | 83(4) | |
| Fine imposed | Up to 10,000,000 EUR or, in the case of an undertaking, 2% of total worldwide turnover of the preceding financial year, whichever is higher | |
| Infringement of these provisions will cause imposition of fine | Obligations of controller and processor under: | |
| | Art 8: conditions applicable to child's consent in relation to information society services | |
| | Art 11: processing which does not require identification | |
| | Art 25 to 39: general obligations, security of personal data, data protection impact assessment and prior consultation | |
| | Art 42: certification | |
| | Art 43: certification bodies | |
| | Obligations of certification body under Art 42 and Art 43 | |
| | Obligations of monitoring body under Art 41(4) | |
The second set of infringements may cause the authority to impose higher fines of up to 20,000,000 EUR or 4% of total worldwide turnover, whichever is higher.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Article | 83(5) | |
| Fine imposed | Up to 20,000,000 EUR or, in the case of an undertaking, 4% of total worldwide turnover of the preceding financial year, whichever is higher | |
| Infringement of these provisions will cause imposition of fine | Basic principles for processing and conditions for consent under: | |
| | Art 5: principles relating to processing of personal data | |
| | Art 6: lawfulness of processing | |
| | Art 7: conditions for consent | |
| | Art 9: processing of special categories of personal data | |
| | Data subject's rights under Art 12 to 22 | |
| | Transfer of personal data to third country or international organization under Art 44 to 49 | |
| | Obligations under Member State law adopted under Chapter IX | |
| | Non-compliance with the supervisory authority's powers under Art 58, including: | |
| | Imposition of temporary or definitive limitation, including a ban on processing (Art 58(2)(f)) | |
| | Suspension of data flows to third countries or international organization (Art 58(2)(j)) | |
| | Orders to provide access to premises or data processing equipment and means (Art 58(1)(f)) | |
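Since both tiers turn on the same "whichever is higher" comparison, the upper bound of the fine for an undertaking can be expressed as a one-line calculation. The following sketch (in Python; the function name and the example turnover figure are invented for illustration) computes the cap for each tier:

```python
def fine_cap(turnover_eur: float, tier: int) -> float:
    """Upper bound of the administrative fine for an undertaking:
    the fixed ceiling or the turnover percentage, whichever is higher
    (GDPR Art 83(4) for tier 1, Art 83(5) for tier 2)."""
    fixed, rate = {1: (10_000_000, 0.02), 2: (20_000_000, 0.04)}[tier]
    return max(fixed, rate * turnover_eur)

# For a hypothetical undertaking with EUR 2 billion worldwide turnover:
print(fine_cap(2_000_000_000, tier=1))  # 40,000,000.0 (2% exceeds 10M EUR)
print(fine_cap(2_000_000_000, tier=2))  # 80,000,000.0 (4% exceeds 20M EUR)
```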
4.20 Penalties
Article 84 makes provision for penalties in case of infringement of Regulation.
The penalties must be effective, proportionate and dissuasive.
| Sub-topics in this section | GDPR | DPD |
| --- | --- | --- |
| Given in Article | 84 | |
| When will penalty be imposed | In case of infringements that are not subject to administrative fines | |
| Who imposes them | Member State | |
| Responsibility of Member State | To lay down the law and ensure implementation; to notify the adopted law to the Commission by 25 May 2018 | |
Survey on Data Protection Regime
Ranking Digital Rights in India
The report draws upon the methodology of the Ranking Digital Rights project, which analysed 16 of the world's major internet companies, including internet services and telecommunications providers, based on their commitment to upholding human rights through their services – in particular users' freedom of expression and privacy. The report comprehensively assessed the performance of companies on various indicators related to these human rights, as per information made publicly available by the companies or otherwise in the public domain. This report follows the methodology of the proposed 2017 Ranking Digital Rights index, updated as of October 2016.
This report studied Indian companies which have, or have had, a major impact on the use and experience of the Internet in India. The companies range from online social media and micro-blogging platforms to major telecommunications companies providing critical national communications infrastructure. While some of the companies have operations outside of India as well, our study was aimed at how these companies have impacted users in India. This allowed us to study the impact of the specific legal and social context in India upon the behaviour of these firms, and conversely also the impact of these companies on the Indian internet and its users.
VSNL, the company later acquired by and merged into TATA Communications, was the first company to provide public Internet connections in India, in 1996. In 2015, India surpassed the United States of America to become the jurisdiction with the world's second-largest internet user base, with an estimated 338 million users. With the diminishing cost of wireless broadband internet and the proliferation of cheaper internet-enabled mobile devices, India is expected to house a significant number of the next billion internet users.
Concomitantly, the internet service industry in India has grown by leaps and bounds, particularly the telecommunications sector, a large part of whose growth can be attributed to the rising use of wireless internet across India. The telecom/ISP industry in India remains concentrated among a few firms: as of early 2016, just three of the last-mile ISPs studied in this report were responsible for providing end-user connectivity to close to 40% of mobile internet subscribers in India. However, the market seems to be highly responsive to new entrants, as can be seen from the example of Reliance Jio, a new telecom provider which has built its brand specifically around affordable broadband services and is also one of the companies analysed in this report. As the gateway service providers of the internet to millions of Indian users, these corporations remain the focal point of most regulatory concerns around the Internet in India, as well as the intermediaries whose policies and actions have the largest impact on internet freedoms and user experiences.
Besides the telecommunications companies, India has a thriving internet services industry – by some estimates, the Indian e-commerce industry will be worth 119 billion USD by 2020. While the major players in the e-commerce industry are shipping and food aggregation services, other companies have emerged which provide social networking services or mass-communication platforms, including micro-blogging platforms, matrimonial websites, messaging applications and social video streaming services. While localised services, including major e-commerce websites (Flipkart, Snapdeal), payment gateways (Paytm, Freecharge) and taxi aggregators (Ola), remain the most widely utilized internet services among Indians, the services analysed in this report have been chosen for the potential impact they have upon the user rights studied here – namely freedom of speech and privacy. These services provide important alternative spaces of localised social media and communication, as alternatives to the currently dominant services such as Facebook, Twitter and Google, as well as specialised services used mostly within the Indian social context, such as Shaadi.com, a matrimonial match-making website which is widely used in India. The online service providers in this report have been chosen on the basis of the information they collect and the communications they make possible.
Legal and regulatory framework
Corporate Accountability in India
In the last decade, there has been a major push towards corporate social responsibility (“CSR”) in policy. In 2009, the Securities and Exchange Board of India mandated all listed public companies to publish ‘Business Responsibility Reports’ disclosing efforts taken towards, among other things, human rights compliance by the company. The new Indian Companies Act, 2013 introduced a ‘mandatory’ CSR policy which enjoins certain classes of corporations to maintain a CSR policy and to spend a minimum percentage of their net profits on activities mentioned in the Act. However, these provisions do not do much in terms of assessing the impact of corporate activities upon human rights or enforcing human rights compliance.
Privacy and Data Protection in India
There is no explicit right to privacy under the Constitution of India. However, such a right has been judicially recognized as a component of the fundamental right to life and liberty under Article 21 of the Constitution of India, though there have been varying interpretations of the scope of such a right, including who and what it is meant to protect. The precise scope of the right to privacy, or whether a general right to privacy exists at all under the Indian Constitution, is currently being adjudicated by the Supreme Court. Although the Indian Supreme Court has had the opportunity to adjudicate upon telephonic surveillance conducted by the Government, there has been no determination of the constitutionality of government interception of online communications or of bulk surveillance.
As per Section 69 of the Information Technology Act, the primary legislation dealing with online communications in India, the government is empowered to monitor, surveil and decrypt information, “in the interest of the sovereignty or integrity of India, defense of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above or for investigation of any offence.” Moreover, intermediaries, as defined under the act, are required to provide facilities to enable the government to carry out such monitoring. The specific procedure to be followed during lawful interception of information is given under the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, (“Interception Rules”) which provides a detailed procedure for government agencies to issue monitoring directions as well as the obligations of intermediaries to facilitate the same. The Interception Rules require intermediaries who are enlisted for facilitating monitoring of information to maintain strict confidentiality regarding such directions for lawful interception or decryption, as well as to destroy any records of such directions every six (6) months. Intermediaries are required to designate specific authorities (the designated authority) to receive and handle any of the above government directions and also to maintain records and provide proper facilities to the government agencies. The designated authority is also responsible for maintaining the security and confidentiality of all information which ‘affects the privacy’ of individuals. Further, the rules prescribe that no person may intercept any online communication or information, except the intermediary for the limited purposes specified in the rules, which include for tracing persons who may have contravened any provision of the IT Act or rules.
With respect to decryption, besides the government’s power to order decryption of content as described above, the statutory license between telecommunications providers and the Department of Telecommunications (“DoT”) prescribes, among other things, that only encryption “up to 40 bit key length in the symmetric algorithms or its equivalent in others” may be utilized by any person, including an intermediary. If any person utilizes encryption stronger than what is prescribed, the decryption key must be deposited with the DoT. At the same time, the license prescribes that ISPs must not utilize any hardware or software which makes the network vulnerable to security breaches, placing intermediaries in a difficult position regarding communications privacy. Moreover, the license (as well as the Unified Access Service License) prohibits the use of bulk encryption by the ISP for its network, effectively proscribing efforts towards user privacy on the ISP’s own initiative.
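To illustrate why the 40-bit ceiling is widely considered inadequate, a back-of-the-envelope calculation of the keyspace is instructive. The sketch below is purely illustrative; the assumed key-testing rate is a hypothetical figure, not a benchmark of any particular hardware.

```python
# Back-of-the-envelope estimate of the strength of a 40-bit symmetric key.
# The keys-per-second rate is an assumed, illustrative figure.

keyspace_40 = 2 ** 40          # ~1.1 trillion possible keys
keyspace_128 = 2 ** 128        # modern baseline for comparison
rate = 10 ** 9                 # assumed: one billion key trials per second

seconds_40 = keyspace_40 / rate
years_128 = keyspace_128 / rate / (3600 * 24 * 365)

print(f"40-bit worst case:  {seconds_40 / 60:.1f} minutes")   # roughly 18 minutes
print(f"128-bit worst case: {years_128:.2e} years")           # ~1.08e22 years
```

In other words, under these assumptions a single commodity machine could exhaust the permitted keyspace in well under a day, which is the context for criticism of this license condition.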
There is no general statute in India governing data protection or the protection of privacy. However, statutory rules address privacy concerns across different sectors, such as banking and healthcare. A more general regulation for data protection was enacted under Section 43A of the Information Technology Act, 2000 (“IT Act”) and the rules made thereunder, in particular the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (“Rules”). Section 43A requires body corporates (defined as any company) handling sensitive personal information (as defined under the IT Act and Rules) to maintain reasonable security practices for handling such information, and penalises failure to maintain such practices where it causes ‘wrongful loss or wrongful gain to any person.’ The Rules prescribed under Section 43A detail more comprehensively the general obligations of body corporates that handle sensitive personal information.
The Rules specify that any body corporate which “collects, receives, possess, stores, deals or handle information”, directly from the holder of such information through a lawful contract, shall provide a privacy policy, which must: (a) be clearly accessible; (b) specify the data collected; (c) specify the purpose for collection and the disclosure of such information; and (d) specify the reasonable security practices for the protection of such data. There are also specific requirements for body corporates which handle sensitive personal information, which include obtaining consent from the data subject, and permitting data collection only for a specified and limited purpose and for a limited time. The body corporate is also required to ensure the data subject is aware of: (a) the fact that the information is being collected; (b) the purpose for which the information is being collected; (c) the intended recipients of the information; and (d) the name and address of the agency that is collecting the information as well as the agency that will retain it. The Rules also require the body corporate to provide an explicit option for users to opt out of having their personal information collected, and this permission may be withdrawn at any time.
Apart from the above, the IT (Intermediary Guidelines) Rules, 2011 (“Guidelines”) also contain a prescription for providing information to government agencies, although the rules have been enacted under the safe-harbour provisions of the IT Act. Rule 3(7) of the Guidelines states that “…When required by lawful order, the intermediary shall provide information or any such assistance to Government Agencies who are lawfully authorised for investigative, protective, cyber security activity. The information or any such assistance shall be provided for the purpose of verification of identity, or for prevention, detection, investigation, prosecution, cyber security incidents and punishment of offences under any law for the time being in force, on a request in writing stating clearly the purpose of seeking such information or any such assistance.” While this regulation is arguably outside the scope of the rule-making power under Section 79 of the IT Act, it continues to remain in force, although the extent to which it is utilized to obtain information is unknown.
Content Restriction, Website Blocking and Intermediary Liability in India
Section 79 of the IT Act contains the safe harbour provision for intermediaries, sheltering them, under specific circumstances, from liability for information, data, or communication links made available by any third party. For the safe harbour to apply, the role of the intermediary must be limited to (a) providing access to a communication system over which information made available by third parties is transmitted or temporarily stored or hosted; or (b) a platform which does not initiate the transmission, modify it or select the receiver of the transmission. Moreover, the safe harbour does not apply when the intermediary has received actual knowledge, or been notified by the appropriate government agency, of potentially unlawful material over which it has control, and fails to act on such knowledge by disabling access to the material.
The Central Government has further prescribed guidelines under Section 79 of the IT Act, with which intermediaries must comply to enjoy the shelter of the safe harbour provisions. The guidelines contain prescriptions for all intermediaries to inform their users, through terms of service and user agreements, of information and content which is restricted, including vague prescriptions against content which is “…grossly harmful, harassing, blasphemous defamatory, obscene, pornographic, paedophilic, libellous, invasive of another's privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful in any manner whatever;” or that infringes any proprietary rights (including intellectual property rights).
Rule 3(4) is particularly important, and provides the procedure to be followed for content removal by intermediaries. The rule provides that any intermediary who hosts, publishes or stores information belonging to the above specified categories shall remove such information within 36 hours of receiving ‘actual knowledge’ of such information from any ‘affected person’. Further, any such flagged content must be retained by the intermediary itself for a period of 90 days. The breadth of this rule led to frequent misuse of the provision for the removal of content: since non-compliance would make intermediaries liable for potentially illegal conduct, intermediaries were eager to remove any content flagged as objectionable by any individual. However, the scope of the rule received some clarification from the Supreme Court judgement in Shreya Singhal v Union of India. While the Supreme Court upheld the validity of Section 79 and the Guidelines framed under that section, it interpreted the requirement of ‘actual knowledge’ to mean knowledge obtained through the order of a court asking the intermediary to remove specific content. Further, the Supreme Court held that any such court order for the removal or restriction of content must conform to Article 19(2) of the Constitution of India, which details the permissible restrictions on the freedom of speech and expression.
For the enforcement of the above rules, Rule 11 directs intermediaries to appoint a Grievance Officer to address any complaints of violations of Rule 3, which must be redressed within one month. However, there is no specific mention of any remedies against wrongful removal of content, or of mechanisms to address such concerns.
Apart from the above, there is a parallel mechanism for imposing liability on intermediaries under the Copyright Act, 1957. According to various High Courts in India, online intermediaries fall under the definition of Section 51(a)(ii), which includes as an infringer, “…any person who permits for profit any place to be used for the communication of the work to the public where such communication constitutes an infringement of the copyright in the work, unless he was not aware and had no reasonable ground for believing that such communication to the public would be an infringement of copyright.”
Section 52(1) provides for exemptions from liability for infringement. The relevant part of S.52 states –
“(1) The following acts shall not constitute an infringement of copyright, namely:
(b) the transient or incidental storage of a work or performance purely in the technical process of electronic transmission or communication to the public;
(c) transient or incidental storage of a work or performance for the purpose of providing electronic links, access or integration, where such links, access or integration has not been expressly prohibited by the right holder, unless the person responsible is aware or has reasonable grounds for believing that such storage is of an infringing copy:
Provided that if the person responsible for the storage of the copy has received a written complaint from the owner of copyright in the work, complaining that such transient or incidental storage is an infringement, such person responsible for the storage shall refrain from facilitating such access for a period of twenty-one days or till he receives an order from the competent court refraining from facilitating access and in case no such order is received before the expiry of such period of twenty-one days, he may continue to provide the facility of such access;”
While Section 52 of the Act provides safe harbour for certain kinds of online intermediaries, this does not apply where the intermediary has ‘reasonable grounds for believing’ that the stored material is an infringing copy, language similar to that used in Section 51(a)(ii), which has been broadly interpreted by the high courts. The procedure for notifying the intermediary to take down infringing content is given in the rules prescribed under the Copyright Act, which require that the holder of the copyright give written notice to the intermediary, including a description of the work sufficient for identification, proof of ownership of the original work, proof of infringement by the work sought to be removed, the location of the work, and details of the person responsible for uploading the potentially infringing work. Upon receipt of such a notice, the intermediary must disable access to the content within 36 hours. Further, intermediaries are required to display the reasons for disabling access to anyone trying to access the content. The intermediary may restore the content after 21 days if no court order endorsing its removal is received, although it is not required to do so. After this notice period, the intermediary may choose not to respond to further notices from the same complainant about the same content at the same location.
Besides the safe harbour provisions, which require intermediaries to meet certain conditions to avoid liability for content hosted by them, intermediaries are also required to comply with government blocking orders for the removal of content, as per Section 69A of the IT Act. This section specifies that the government may, according to the prescribed procedure, order any intermediary to block access to any information “in the interest of sovereignty and integrity of India, defense of India, security of the State, friendly relations with foreign states or public order or for preventing incitement to the commission of any cognizable offence relating to above.” An intermediary’s failure to comply results in criminal penalties for its personnel.
The procedure for blocking has been prescribed in the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. Under these Rules, any person may make a blocking request to specific departmental representatives of Central or State Government ministries or departments, known as ‘nodal officers’. The nodal officers forward such requests for the blocking of access to the ‘designated officer’, an officer of the Central Government not below the rank of joint secretary, nominated by the Central Government. The blocking request is then considered by a committee, which recommends whether the designated officer should approve the request. Once approved, the request is forwarded to the intermediary, who must nominate at least one person to handle all such requests. In case of non-compliance, the designated officer may initiate action under Section 69A against the intermediary.
The rules contain some safeguards to ensure due process before blocking orders are made. The designated officer is required to make ‘reasonable efforts’ to locate the user or intermediary who has hosted the content, and to allow such person or intermediary to appear before the committee to submit their reply and clarifications. Rule 9 lays down an emergency procedure for blocking, in which case the above procedural safeguards, such as committee deliberation or the provision of a hearing, are dispensed with. However, Rule 16 requires confidentiality of all requests and actions taken under the rules, which defeats any attempt at transparency or fairness in the process.
Finally, the ISP and Unified Services License (USL) issued by the DoT prescribe further obligations to block content. Under Clause 38 of the USL, for example, ISPs must take measures to prevent the “flow of obscene, objectionable, unauthorised or any other content infringing copy-rights, intellectual property right and international & domestic Cyber laws in any form” over their network. Moreover, as per Clause 7 of the USL, the licensee is obliged to block subscribers as well as content, as identified by the Licensor (DoT). Failure to comply with license conditions can lead to the cancellation of the telecommunication operator’s license with the DoT, without which it is not permitted to operate in India.
Findings and Recommendations
General
- Most companies’ policies are only tailored towards minimum compliance with national regulations;
- As detailed in the above sections, companies are mandated by law to comply with certain procedures, including data protection and content restriction policies. While compliance with these regulations varies from company to company, there are barely any instances of companies taking the initiative to ensure better privacy procedures than those mandated by law, or going beyond the human rights reporting requirements detailed in corporate social responsibility regulations. For example, Vodafone was the only company in this index to disclose (even in a limited manner) government requests for user information or for content restriction.
- While compliance with regulations is an understandable threshold for companies to maintain, companies should make efforts to at least explain the import of the regulations to their users and explain how their policies are likely to affect their users’ rights.
- Company policies are usually tailored towards regulations in specific jurisdictions;
- Jurisdiction is a major issue in regulating internet services. Internet service providers may operate and have users in several jurisdictions, but their policies do not always meet the requirements of each jurisdiction in which they operate or in which their services are accessed. Even in the case of large ISPs which operate across jurisdictions, policies may be tailored to specific jurisdictions. Tata Communications Ltd., for example, specifically references the law of the United States of America in its policies, though the same policies may apply to users in other jurisdictions. This is problematic since most company policies make accession to the terms a condition of service, which means that restrictions (or protections, as the case may be) on user rights framed in one jurisdiction can produce similar restrictions across the board in several jurisdictions.
- Companies do not seek meaningful consent from their users before subjecting them to their policies;
- The study highlights the importance of company policies to users’ rights. These policies define the relationship between the service provider and the user, including delimiting the rights available to users and their control over the information collected from them (often automatically). However, most companies make very little effort to obtain meaningful user consent to their policies, including efforts towards educating users about the import of those policies. In many cases, mere use of the service is deemed a sufficient condition for making the policies binding upon users. Even in other cases, where notice of policies is more prominent, few efforts are made to ensure that users fully understand the scope and effect of the policies.
- Further, while most companies have committed to informing users of changes to their policies in some form, only Reliance Jio disclosed that it directly informs users of changes to its policies, subject to its discretion; the others did not maintain any clear standard for notice of changes to policies. None of the companies provided access to any archives where changes to company policies could be reviewed.
- It is apparent that most companies make little effort to maintain robust or meaningful terms and conditions or privacy policies that include an explanation of how the service could potentially affect a user’s privacy or freedom of expression. Nor do most companies attempt to adopt safeguards for protecting such freedoms beyond complying with regulations. Only Shaadi.com commits to informing users about data protection and how to take reasonable steps to ensure their online privacy, above and beyond the regulations.
- Finally, a study of TCL’s policy indicates that in some cases, the actions or policies of upstream providers (backbone internet providers such as TCL) can affect users’ experience of the internet without their consent or even notice, since these terms must be complied with by the last-mile provider to whom the users connect.
- The formalistic manner in which these policies are framed and worded effectively prevents many users from understanding their import for online freedoms. Companies which are serious about committing to human rights should take steps to make their policies easily accessible, and to clearly explain the scope of their policies and their impact on users’ online human rights in an easy and understandable manner, instead of presenting a formalistic legal statement which is not accessible to lay users. Companies should also take steps towards educating users about how to protect their online freedoms while utilizing the services of the company.
- Indian regulations hinder transparency and prevent companies from being accountable to their users;
- The regulations outlined in Part – I of this report are telling in the broad restrictions they place on company transparency, in particular on disclosing any information about government requests for user information, or government or third party requests for content restriction. The regulations are vaguely worded and broad in their confidentiality requirements, which potentially causes a chilling effect around the release of even aggregate or depersonalized information by companies.
- Government regulations often provide the framework around which company policies operate. Regulators must include principles for safeguarding online freedom of expression and privacy as a fundamental part of their regulations. This includes clearly specifying the scope of the confidentiality requirements attached to government requests, so as to enable some form of transparency and oversight.
Commitment
- Most companies do not adequately disclose efforts towards assessing their impact on online freedoms or compliance with the same;
- Except Vodafone India (through Vodafone plc, its parent company), none of the companies surveyed in this report has disclosed any assessment of the impact of its services on online freedom of speech or privacy. The lack of such disclosures indicates the companies’ lack of concern for ensuring transparency on these issues.
- Although no legal framework exists for such assessment, companies must independently assess the impact of their services upon basic online freedoms as the first step towards committing to protecting those freedoms, possibly through a third party such as the Global Network Initiative. The findings from these assessments should, to the extent possible, be made public.
- Some companies have implemented internal policies to provide training on, and monitor compliance with, online freedoms;
- Some companies have disclosed internal mechanisms which emphasise protecting online freedoms, for example, through employee training on such issues. These internal policies are an important aspect of accountability for company processes which are generally outside public oversight. Four of the eight companies surveyed, for example, have whistle-blower policies protecting the internal reporting of violations of ‘ethical conduct’. In addition, some companies, for example Tata Communications and Aircel, disclose an internal code of ethics and measures for ensuring compliance with it. Similarly, Vodafone discloses the existence of a Privacy Management System for training employees on the importance of customer privacy.
- While some companies have robust internal processes for accountability, companies should also specify that these processes explicitly deal with concerns about user privacy or censorship, above and beyond general requirements for ethical conduct.
- Companies do not disclose direct efforts to lobby against regulatory policies which negatively impact online freedoms;
- None of the companies disclosed efforts towards directly lobbying for clearer regulations on government censorship or online privacy. However, this lack of transparency could possibly be attributed to the nature of the public consultation process run by Indian regulators. In fact, where the consultation process is made public and transparent, companies have shown efforts at engaging with regulators. For example, several of the companies studied in this report, including TCL, Airtel, Aircel and Vodafone India, have responded to the TRAI’s call for public comments on the network neutrality framework for the Indian internet.
- The obvious implication for regulators is to improve the public consultation process and attempt to engage stakeholders in a more transparent manner. Companies should also push back against regulations which stifle free speech or user privacy, if not through legal challenges, then through public statements against regulatory overreach in these areas.
- However, companies are making efforts towards better regulation through industry groups, particularly for privacy and data protection;
- Most telecommunication companies surveyed in this report are members of some industry body which advocates in favour of protecting online freedoms. In particular, the companies are members of associations such as the Data Security Council of India or the Internet Service Providers Association of India, which commit to protecting different aspects of users’ rights. The DSCI, for example, is an influential industry association which lobbies for better regulations for data protection. However, there are few such associations actively committed to tackling private or governmental censorship online.
- While industry bodies are a growing voice in lobbying efforts towards better regulation, companies should also participate in civil society forums which advocate for protecting online freedoms.
- All companies disclose some forum for grievance redressal; however, none of these specifically addresses freedom of speech and privacy issues;
- All the companies surveyed have disclosed some forum for grievance redressal. As indicated above, this forum is also a statutory requirement under both the Reasonable Security Practices Rules and the Intermediaries Guidelines Rules under the IT Act. In most cases, however, these policies do not specify whether and to what extent the grievance redressal forum addresses issues of online censorship or privacy concerns, although some companies, such as Vodafone, have specifically designated Privacy Officers. Only Aircel, TCL and RCL disclosed an appellate process or timelines for resolution of complaints. Further, Aircel is the only company in this report which disclosed aggregate data of complaints received and dealt with.
- Companies must take steps towards improving customer protection, particularly in cases involving violations of online freedoms. Grievance redressal by the company is generally the first step towards addressing rights violations, and can also prevent future legal problems for the company. Further, companies should be transparent in their approach to resolving customer grievances, and should publish aggregate data including complaints received and resolved, and, to the extent possible, classify the nature of the complaints received.
Freedom of Speech
- Most companies do not disclose processes or safeguards in case of content restriction requests by private third parties or by the government;
- Few of the companies surveyed have any mechanism for checking misuse, by governments or third parties, of the blocking procedures prescribed under their terms and conditions. Some policies, such as TCL’s acceptable use policy, specify that the company shall attempt to contact the owner of the content upon notice of private requests for content restriction; however, this requirement is entirely discretionary.
- Some companies, such as Rediff, have a well-defined procedure for content restriction based on intellectual property claims, but not for general content restriction measures.
- However, there is evidence that at least some of the companies do provide some notice to users when information they attempt to access has been removed or blocked by court order. TCL, for example, redirects users to a notice stating that the information has been blocked as per the provisions of a specific law. However, this is not reflected in its policies.
- Companies must have internal procedural safeguards to ensure the authenticity of content restriction claims and their compliance with regulations. Companies must commit to objecting to overbroad requests for restriction. One important step in this regard is to clarify the scope of companies’ liability as intermediaries for actions taken in good faith.
- Companies must also provide clear and detailed notice both to users attempting to access blocked content and to the person whose content has been restricted. Such notice must specify whether the removal was due to a judicial, executive or private request, and, to the extent possible, should specify the law, regulation or company policy under which the content has been restricted.
- Companies do not disclose internal processes on content restriction or termination of services taken independently of third party requests;
- None of the companies disclosed their processes for removing content independently of third-party requests, in enforcement of their terms. None of the company policies disclose processes for the identification or investigation of violations of their terms. In fact, many companies, including Rediff, Hike Messenger and Vodafone, expressly state that services may be terminated without notice and entirely at the discretion of the service provider.
- Further, none of the companies surveyed disclose their network management principles or make any public commitments against throttling or blocking of specific content or differential pricing, although some of the telecommunications companies did vouch for some form of network neutrality in their responses to the TRAI’s public consultation on network neutrality regulations. As an outcome of those consultations, regulations now effectively prevent telecoms from imposing discriminatory tariffs based on the nature of content.
- Company processes for enforcement of their terms of use must be disclosed. Further, companies should commit to transparency in the enforcement of the terms of use, to the extent possible.
Privacy
- Company practices on data protection vary widely – most companies show some commitment towards users’ privacy, but fall short on many grounds
- Despite the existence of a privacy regulation (the Reasonable Security Practices Rules), company practices on data collection vary. Some companies, such as TCL, have robust commitments to important privacy principles, including user consent and collection limitation; at the other end of the spectrum, RCL does not have a publicly available privacy policy governing the use of its internet services. In fact, none of the companies has a data collection policy containing the minimum safeguards expected of such policies, such as compliance with the OECD Privacy Principles, or the National Privacy Principles laid out in the A.P. Shah Committee Report on Privacy.
- Most of the companies surveyed make some form of commitment to notifying users of the collection and use of their data, including specifying the purposes for which information will be used, specifying the third parties with whom such information may be shared, and providing an option to opt out of sharing data with third parties. However, none of the policies explicitly commits to limiting the collection of data to that which is necessary for the service. Further, while companies generally specify that data may be shared with ‘third parties’, usually for commercial purposes, these parties are usually not explicitly named in the policies.
- Some of the companies, including TCL and Reliance Jio, also explicitly allow individuals to access, amend or delete the information the companies have stored about them. In other cases, however, users can only delete specific information upon account termination. Moreover, other companies do not specify whether they continue to hold user information beyond the period for which services are provided. In fact, none of the companies except Hike Messenger discloses that it limits the storage of information to a specified time period.
- Companies must follow acceptable standards for data protection and user privacy, which, at the very least, require them to commit to collection and use limitations, specify time periods for retaining data, allow users to access, amend and delete data, and ensure that stored data is not outdated or inaccurate. These policies must clearly specify the third parties with whom information may be shared, and should specify whether and how user consent is obtained before sharing this information.
- Companies’ processes for sharing of user information upon request by private third parties or governments are not transparent
- With the exception of the Vodafone Transparency Report (undertaken by Vodafone India’s holding company), none of the companies studied attempts to disclose any information about its processes for sharing user information with governments. Even in the case of private third parties, only some companies expressly commit to notifying users before sharing information.
- Companies should be more transparent about third-party requests for user data. While regulations regarding confidentiality could be clearer, companies should at least indicate that governments have requested user data and present this information in aggregate form.
- Some companies disclose specific measures taken to secure information collected through the use of their services, including the use of encryption
- While all companies collecting sensitive personal information are required to comply with the reasonable security standards laid down under the Rules, companies’ disclosures about measures taken to secure data are generally vague. Rediff, for example, merely specifies that it uses the SSL encryption standard for securing financial data and ‘accepted industry standards’ for securing other data, and Vodafone discloses that it takes ‘reasonable steps’ to secure data. (Such claims are externally verifiable only in a limited way; see the sketch following this list.)
- None of the companies surveyed discloses the existence of security audits by independent professionals, or the procedure followed in case of a security breach. Further, none of the companies commits to encrypting communications with or between users end-to-end.
- Companies should specify the safety standards utilized for the handling, transmission and storage of personal information. They must specify that the security used is in compliance with acceptable industry standards or legally prescribed standards. Further, they should ensure, wherever possible, that end-to-end encryption is used to secure the information of their users.
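Where a provider claims to use SSL/TLS, one limited external check is to observe the protocol version and cipher suite its servers actually negotiate. The following is a minimal sketch using Python’s standard library; the hostname is a placeholder chosen for illustration, and a real assessment would need to examine many endpoints, certificate chains and configurations:

```python
# Minimal sketch: report the TLS protocol version and cipher negotiated by a host.
import socket
import ssl

def tls_summary(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # platform defaults; verifies certificates
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version()            # e.g. "TLSv1.2"
            cipher, _proto, bits = tls.cipher()
            print(f"{host}: {version}, cipher={cipher} ({bits} bits)")

tls_summary("www.rediff.com")  # placeholder target; any HTTPS endpoint can be checked
```

A check of this kind only shows what a server offers to one client at one moment; it cannot substitute for the independent security audits recommended above.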
RDR Company Reports
Tata Communications Limited
www.tatacommunications.com
Industry: Telecommunications
Services evaluated: Tier-1 Internet Backbone Services, VSNL Mail
Market Capitalization: INR 194 Billion
TATA Communications Ltd. (TCL) is a global telecommunications company, headquartered in Mumbai and Singapore. A part of the TATA group of companies, TCL was founded as Videsh Sanchar Nigam Limited (VSNL), which was the first public-access gateway internet provider in India. VSNL was later acquired by the TATA group, and entirely merged with TATA Communications in 2008. TATA continues to retain the VSNL domain for its personal and enterprise email service.
According to its latest annual report, TCL provides backbone connectivity to over 240 countries and territories and carries close to 24% of the world’s Internet routes. TCL also owns three of the ten submarine cable landing stations in India, responsible for India’s connectivity to the global internet.
Commitment
TCL scores moderately on disclosure of its commitment to human rights on the internet, including on disclosures relating to freedom of expression and privacy. Although TCL maintains a corporate social responsibility policy as well as a business responsibility report, which include policy commitments to protecting human rights (as mandated by Indian law), none of its publicly available policies makes any reference to a commitment to the freedom of expression of its users.
The TATA group maintains a code of conduct, applicable to all of its group companies, including TCL. The code makes an explicit reference to data security and privacy of TATA’s customers. As per that code, the Managing Director and Group CEO is the Chief Ethics Officer, responsible for the implementation of the Code of Conduct.
TCL’s internal policies concerning the internal implementation of human rights, as well as grievance redressal, are more robust than its public policy commitments to the same. As per the TATA group code of conduct, which is applicable to its group companies, TCL provides employee training and conducts ethics awareness workshops at frequent intervals, and also takes other initiatives to ensure compliance with the code of conduct, which includes a commitment to customer privacy and data protection. Further, TCL has a well-articulated whistleblower policy which states the processes to be followed in case any employee observes unethical conduct within the company, including violations of the TATA code of conduct. The whistleblower policy commits to protecting any employee who reports unethical conduct under the policy, but contains no explicit reference to freedom of speech or censorship issues, or to issues of user privacy.
Concerning stakeholder engagement, TCL seems to be somewhat involved in engaging with issues of privacy, but makes no commitments on issues of freedom of expression. TCL is a member of the Data Security Council of India, an industry body which makes public commitments towards user privacy and data security, which includes guiding the Indian IT industry on self-regulation on issues of privacy and data security.
TCL maintains various grievance redressal forums, evidenced through different policies. For example, their consumer charter provides a general forum for addressing grievances, which include complaints regarding service outages. However, this does not refer specifically to complaints about censorship or privacy-related concerns. TCL’s Acceptable Use Policy and privacy policy also guide users to specific grievance redressal forums, for complaints under those policies. Besides this, there are recorded instances where TCL has advertised grievance redressal mechanisms relating to cases of private or judicial requests for blocking of content. However, TCL does not make any public disclosures about the inputs to or outcomes of its grievance redressal mechanisms.
Freedom of Expression
General
TCL’s Acceptable Use Policy (“AUP”) governs the use of TCL services by its customers, which include the downstream providers with which TCL, as a backbone internet provider, is responsible for interconnection. VSNL mail maintains its own terms and conditions for users, which are available on its website. Both TCL’s AUP and VSNL’s terms and conditions are easily locatable through their websites, are presented in a clear and understandable manner, and are available in English.
TCL does not commit to notifying users of important changes to its terms of use, stating that it may choose to notify its customers of changes to the AUP either directly or by posting such modifications on its website. VSNL’s policy states that the terms and conditions of the use of the webmail service may change without any notice to users. Although TCL is an Indian company and its terms are applicable to its customers worldwide, the AUP contains several references to laws and procedures of the United States of America, such as the US PATRIOT Act, ostensibly due to TATA’s heavy presence in the US market coupled with stricter disclosure requirements in that jurisdiction.
Content Restrictions and Termination of Services
The AUP does not place any obligation on TCL to ensure a fair judgement before sanctions such as removal of content, or termination or suspension for violations of the terms of use. Although the AUP identifies categories of content which are prohibited on the service, it also states that TCL may suspend or terminate a user’s account for any action it deems inappropriate or abusive, whether or not stated in its policies. The AUP clearly states that TCL may remove or edit content in violation of the AUP, or content which is harmful or offensive. Although it states that TCL shall attempt to first contact a user who is suspected of violations, TCL may suspend or terminate the services of the customer at its sole discretion. There is evidence, although not stated explicitly in its policies, that TCL provides general notice when content is taken down on its network through judicial order. However, there is no disclosure of any requirement to contact the relevant user in case of takedown of user-generated content in compliance with a judicial order.

Although TCL has voiced its opinion on network neutrality, for example by issuing public comments to the Telecom Regulatory Authority of India, it does not disclose its policies regarding throttling or degrading of content over its network, or its network management principles.

As a backbone connection provider, TCL’s major customers include downstream ISPs who connect through TCL’s network. The AUP therefore states that the downstream provider shall ensure that its customers comply with the AUP, failing which TCL may terminate the services of the downstream provider. Further, and importantly, TCL treats violations of the AUP by the end-user as violations by the downstream ISP, making the ISP directly liable for the violations and subject to any actions TCL may take in that regard. The AUP further expressly states that TCL shall co-operate with appropriate law enforcement agencies and other parties investigating claims of illegal or inappropriate conduct, but does not mention whether this involves taking down content or disconnecting users.
Technical observations on TCL’s blocking practices in 2015 showed that TCL appeared to be using a proxy server to inspect and modify traffic to certain IP addresses.
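The kind of network measurement behind such observations can be approximated with a simple probe that looks for response headers commonly injected by intermediate proxies. The sketch below is a simplified, hypothetical illustration, not the methodology of the cited observations; the target URL and header names are assumptions, and a rigorous study would compare responses across many vantage points.

```python
# Simplified probe: fetch a URL over plain HTTP and flag response headers that
# intermediate proxies commonly inject.
import urllib.request

PROXY_MARKERS = ("via", "x-cache", "proxy-connection", "x-forwarded-for")

def probe(url: str) -> None:
    req = urllib.request.Request(url, headers={"User-Agent": "measurement-probe"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        suspects = {k: v for k, v in resp.getheaders() if k.lower() in PROXY_MARKERS}
        if suspects:
            print(f"Possible middlebox on path to {url}: {suspects}")
        else:
            print(f"No obvious proxy markers in response from {url}")

probe("http://example.com/")  # placeholder URL; results vary by network vantage point
```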
Privacy
General
TCL has one privacy policy which covers all services provided by the company, with the exception of VSNL mail, which has its own privacy policy. The policy is easily accessible and available in English. The policy partially discloses that users are informed of any changes to it; however, notification of changes is made only on the website and not directly. In addition, TCL also has a separate cookie policy, which contains information about its use of cookies for the collection of user information on its websites. Use of TCL’s services entails acceptance of its privacy policy.
Disclosure of Collection, Use and Sharing of Personal Information
TCL, as well as VSNL mail, discloses that it collects users’ personal information, based on the service utilized by them, both as solicited information and as information collected automatically through the use of technologies such as cookies, or through third parties. TCL’s privacy policy states the various purposes for which such personal information might be used, including for the investigation of fraud or unlawful activity, and for the provision of services, including marketing. TCL discloses that it may combine this information prior to use. VSNL does not clearly state the purpose for which information may be collected, nor how it is shared.
TCL discloses that it may share personal information with affiliates, marketing partners and service providers, as well as in response to legal process including court orders or subpoenas, or in any other case which TCL deems necessary or appropriate. Where personal information is shared with third parties, TCL commits to ensuring that those third parties (which include third-party downstream carriers) also have appropriate data protection policies. TCL does not disclose its process for responding to orders for interception or requests for user information from private parties or governmental agencies, nor does it provide any specific or aggregate data regarding the same.
User control over information
The policy discloses that TCL explicitly seeks user consent before it transfers data across legal jurisdictions. Although the policy states that TCL may share user information with law enforcement agencies in compliance with legal requests, it does not disclose any process for vetting such requests, nor does it disclose any data (specific or aggregate) about such requests received. Except for users in California, USA, TCL does not permit users to access data about requests for their personal information which may have been received or granted by TCL to private third parties. On the other hand, in contrast to most companies studied in this index, TCL discloses that it permits users to access, amend or delete information which the company stores about them. VSNL does not disclose that it allows users to access, amend or delete their personal information collected by VSNL.
Security
TCL does not disclose that it uses or permits the use of encryption for any communications transmitted through its network, nor does it provide users any training or guidance on data protection.
Rediff.com India Ltd.
www.rediff.com
Industry: Internet Software Services and Media
Services evaluated: Rediff.com, Rediff Mail, Rediff iShare, Rediff Shopping
Market Capitalization: USD 6.07 Million
Rediff.com is a company operating several internet services, including personal and enterprise email services, news services, a media-sharing platform and a shopping platform. It is headquartered in Mumbai, India. According to the Alexa Index, Rediff.com is the 47th most visited website in India, and the 407th overall. Approximately 87% of its traffic originates from Indian users.
Commitment
Of the companies studied in this survey, Rediff.com (“Rediff”) received the lowest scores on commitment indicators. None of Rediff’s publicly available policies, including government-mandated filings, discloses efforts towards protecting online freedoms. Rediff also does not disclose that it maintains a whistleblower policy or a company ethics policy. As a major online media and internet services provider in India, Rediff makes no public commitment to freedom of speech and user privacy, and has not disclosed any efforts at engaging with stakeholders in this regard. Although the terms of use for various services provided by Rediff disclose the existence of a grievance redressal mechanism, it exists only within the bounds of Rule 3 of the Intermediary Guidelines Rules, 2011. The terms of use do not explicitly mention grievances related to online freedoms, nor does the company release any specific or aggregate data about the complaints mechanism. Rediff does not disclose that it undertakes any assessment of how its services may impact online freedoms.
Freedom of Expression
General
Rediff has an umbrella policy covering the use of all services offered by Rediff.com, as well as separate policies governing the use of its video sharing platform, its blogging platform and its messaging boards. The use of any Rediff service is construed as acceptance of its terms of use. Rediff discloses that it may change any of its terms of use without prior notification to its users. Rediff’s services are accessible through a Rediffmail account, which does not require verification through any government-issued identification linking online users to their offline identity. The existence of various disparate policies, and the manner and format of the policies, somewhat decrease their accessibility.
Content Restriction and Termination of Services
Rediff’s General Terms of Use specify content which is prohibited on its various services, which is materially similar to the content prohibited under the guidelines issued under the Information Technology Act. Further, Rediff’s messaging board policy lists a number of vague and broad categories which are prohibited and may be restricted on the forums, including “negatively affecting other participants, disrupt the normal flow of the posting.”
As per the General Terms of Use, Rediff reserves the right to remove any content posted by users, solely at its own discretion. Rediff’s General Terms of Use do not disclose any process for responding to requests by law enforcement, judicial or other government bodies for the takedown of content. However, the terms of Rediff’s video sharing platform specify that written substantiation of any complaint from the complaining party is required. Rediff’s process for responding to complaints regarding intellectual property infringement is well detailed in this policy, although the policy does not set out the process for responding to other requests for restriction of content from private parties or law enforcement agencies.
Rediff further reserves the right to terminate the services offered to its users, with or without cause and without notice. Like most companies surveyed, Rediff does not disclose its process for responding to requests for restriction of content or services by private parties or government agencies, nor does it publish specific or aggregate data about restriction of content, the number of takedown requests received or the number complied with.
Privacy
General
Rediff’s performance on privacy indicators is marginally better than its performance on freedom of expression. A single privacy policy is applicable to all of Rediff’s services, and is easily accessible through its various websites, including its homepage. Rediff discloses that any material changes to its privacy policy will be notified prominently. Use of Rediff’s services entails acceptance of its privacy policy.
Disclosure of Collection, Use and Sharing of Personal Information
Rediff specifies that it collects both anonymous and personally identifiable information, automatically as well as solicited through its services, including financial information and ‘user preferences and interests’. Rediff does not disclose whether any information so collected is combined for any purpose. It also specifies the purposes for which such information may be used, which include its use ‘to preserve social history as governed by existing law or policy’, and the investigation of violations of Rediff’s terms of use. The policy further specifies that Rediff may share information with third parties, including law enforcement agencies, or in compliance with court orders or legal process. Rediff discloses that it notifies users in case any personal information is being used for commercial purposes, and gives users the option to opt out of such use. Rediff does not disclose its process for responding to orders for interception or requests for user information from private parties or governmental agencies, nor does it provide any specific or aggregate data regarding the same.
User Control over Information
Rediff discloses that its users may choose to correct, update or delete their information stored with Rediff if they choose to discontinue the use of its services. However, unless users specifically choose to do so, Rediff continues to store user information even after termination of their account.
Security
Rediff discloses that it encrypts sensitive information (including financial information) through SSL encryption, and uses ‘accepted industry standards’ to protect other personal information submitted by users, although it does not define what these standards are.
Vodafone India Limited
www.vodafone.in
Industry: Telecommunications
Services evaluated: Broadband and Narrowband mobile internet services
Vodafone India Limited is a wholly owned subsidiary of the Vodafone Group Plc., the world’s second largest telecommunications provider. As of March 2016, Vodafone India was the second largest telecommunications provider in India, with a market share of 19.71% of internet subscribers (broadband and narrowband). Vodafone entered the Indian market after acquiring Hutchison Telecom in 2007.
This survey has only examined the policies of Vodafone India and those policies of Vodafone plc. which may be applicable specifically to Vodafone India.
Commitment
Vodafone India Limited (“Vodafone”) scores the highest on commitment indicators among the companies examined in this survey. While the Vodafone Group (the Group/holding company), examined as part of the global Ranking Digital Rights Index, discloses its compliance with the UN Guiding Principles on Business and Human Rights, Vodafone India does not independently make any such disclosures. The company’s annual report, corporate responsibility policies and business responsibility reports do not disclose any commitments towards online freedoms. However, Vodafone India does disclose the existence of a Privacy Management Framework, under which employees are provided training regarding the data privacy of users. Moreover, Vodafone’s public statements disclose the existence of a privacy impact assessment procedure to ensure ‘data minimisation’ and reduce the risk of breaches of privacy. Vodafone is also a member of the Data Security Council of India, an industry body which makes public commitments towards user privacy and data security, including guiding the Indian IT industry on self-regulation on issues of privacy and data security, as well as of the Cellular Operators Association of India, another industry organization which commits to protecting consumer rights, including consumers’ right to privacy.
Vodafone also discloses a multi-tiered grievance redressal mechanism, which includes an appellate authority as well as a timeline of 39 days for the resolution of the complaint. However, the mechanism does not specify if grievances related to online freedoms may be reported or resolved. In addition, Vodafone has designated a Privacy Officer for redressing concerns under its privacy policy.
Freedom of Expression
General
Of the companies surveyed, Vodafone scored the lowest on disclosures under this head. The terms of use for Vodafone India’s services are not available on its homepage or site map, nor are they presented in a clear or easily accessible manner. They may be accessed through the Vodafone Telecom Consumers Charter, with different terms of use for pre-paid and post-paid customers. There is no policy specific to the use of internet services over the Vodafone network, nor do these policies make reference to the use of internet services by Vodafone users. Vodafone does not disclose that it provides its users any notification of changes to the policies.
Content Restriction and Termination of Services
While the Terms of Use do not specifically refer to online content, Vodafone’s Terms of Use prohibit users from “sending messages” under various categories, which include messages which infringe upon or affect “national or social interest”. Vodafone reserves the right to terminate, suspend or limit the service upon any breach of its Terms of Use or for any reason which Vodafone believes warrants such termination, suspension or limitation. Vodafone does not disclose its process for responding to violations of its terms of use.
Vodafone does not disclose its process for responding to requests for restriction of content or services by private parties or by government agencies, nor does it publish specific or aggregate data about restriction of content, the number of requests for takedown received or the number complied with. Although the Vodafone group internationally publishes a comprehensive law enforcement disclosure report (making it one of few major internet companies to do so), the report does not contain information on orders for blocking or restricting services or content.
Vodafone has made public statements of its commitment to network neutrality and against any kind of blocking or throttling of traffic, although it does not have any policies in place for the same.
As with all telecommunications companies in India, users must be authenticated through valid government-issued identification in order to use Vodafone’s telecommunication services.
Privacy
General
Vodafone India’s privacy policy, which is applicable to all users of its services, is not as comprehensive as some other policies surveyed. It is accessible through the Vodafone India website, and available in English. Vodafone merely discloses that the policy may change from time to time, and does not disclose that it provides users any notice of these changes. Use of Vodafone’s services entails acceptance of its privacy policy.
Collection, Use and Sharing of Personal Information
Vodafone’s policy discloses the personal information collected, the purposes for which such information is used, and the purposes for which it may be shared with third parties, including law enforcement agencies. However, Vodafone does not disclose how such information is collected or for what duration it is retained.
Vodafone India’s privacy policy does not disclose its process for responding to government requests for user information, including for monitoring or surveillance. The Vodafone law enforcement disclosure report, however, elaborates upon this, including the principles Vodafone follows upon receiving requests for user information or for monitoring of its network in compliance with legal orders. In line with the applicable laws in India, Vodafone does not publish any aggregate or specific data about such requests, although it states that the Indian government has made such requests.
User Control over Personal Information
Vodafone does not disclose that it allows users to access, amend, correct or delete any information it stores about them. Nor does it disclose whether user information is automatically deleted after account termination.
Security
Vodafone only discloses that it takes ‘reasonable steps’ to secure user information. Vodafone does not disclose that it employs encryption over its network, or whether it allows users to encrypt communications over the network. Vodafone also does not disclose that it provides any guidance to users on securing their communications over its network.
Reliance Communications Limited
Industry: Telecommunications
Services evaluated: Broadband and Narrowband mobile internet services
Market Capitalization: INR 118.35 Billion
Reliance Communications Limited ("RCL") is an Indian telecommunications service provider, and a part of the Reliance Anil Dhirubhai Ambani group of companies. RCL is the fourth largest telecommunications provider in India, with a market share of 11.20% of Indian internet subscribers. Reliance also owns one of the ten submarine cable landing stations in India which are responsible for India's connectivity to the global internet.
Commitment
RCL does not disclose any policy commitment towards the protection of online freedoms. Although RCL has filed business responsibility reports which include a report on the company's commitment towards human rights, these make no reference to the privacy or freedom of expression of its users. RCL does not disclose that it undertakes any impact assessment of how its services may affect online freedoms.
While RCL does maintain a whistle-blower policy for reporting any unethical conduct within the company, the policy does not expressly mention that it covers conduct in violation of user privacy or freedom of expression. RCL is a member of at least three industry bodies which work towards stakeholder engagement on the issues of privacy and consumer protection and welfare, namely the Data Security Council of India, the Internet Service Providers Association of India and the Association of Unified Telecom Service Providers of India (although none of these bodies expressly mention that they advocate for freedom of expression).
RCL maintains a comprehensive manual of practice for the redressal of consumer complaints. The manual specifies the procedure for grievance redressal, the timelines within which grievances should be resolved, and the appellate authorities which can be approached; however, it does not specify whether complaints regarding privacy or freedom of expression are covered under this policy.
Freedom of Expression
General
RCL's terms of use for its internet services are contained in its Telecom Consumer's Charter, its Acceptable Use Policy ("AUP") and its Consumer Application Form ("CAF"), which are not easily accessible through the RCL website. The Charter contains the terms for its post-paid and pre-paid services as well as the terms for broadband internet access. RCL discloses that it may change the terms of use of its services without any prior notification to its users.
Content Restriction and Termination of Services
RCL's AUP lists certain categories of content which are not permitted, including vague categories such as 'offensive', 'abusive' or 'indecent' content, which are not clearly defined. In the event that a user fails to comply with its terms of use, RCL discloses that their services may be terminated or suspended. Further, as per the CAF, RCL reserves the right to terminate, suspend or vary its services at its sole discretion and without notice to users. The terms of use also require the subscriber/user to indemnify RCL for any costs or damages arising out of a breach of the terms by any person, with or without the consent of the subscriber.
RCL discloses that upon receiving any complaints or upon any intimation of violation of its terms of use, RCL shall investigate the same, which may also entail suspension of the services of the user. RCL does not disclose that it provides users any notice of such investigation or reasons for suspension or termination of the services. RCL does not disclose specific or aggregate data regarding restriction of content upon requests by private parties or governmental authorities.
RCL does not disclose its network practices relating to throttling or prioritization of any content or services on its network. In its comments to the Telecom Regulatory Authority of India ("TRAI"), however, RCL supported regulation prohibiting throttling or prioritization of traffic. At the same time, RCL was the network partner for Facebook's Free Basics platform, which was to provide certain services free of cost through the RCL network. The Free Basics initiative was abandoned after the TRAI prescribed regulations prohibiting price discrimination by ISPs.
Privacy
RCL scores the lowest on this indicator of the companies surveyed. RCL does not disclose that it has a privacy policy governing the use of its internet services. RCL's AUP only discloses that it may access and use personal information collected through its services in connection with any investigation of violations of its AUP, and may share such information with third parties for this purpose, as it deems fit. RCL's terms of use further disclose that it may provide user information to third parties, including security agencies, subject to statutory or regulatory factors, without any intimation to the user.
Security
RCL does not disclose any information on the security mechanisms in place in its network, including whether communications over the network are encrypted or whether end-to-end encrypted communications are allowed.
Shaadi.Com
www.shaadi.com
Industry: Internet Marriage Arrangement
Services evaluated: Online Wedding Service
Shaadi.com, a subsidiary of the People Group, is an online marriage arrangement service launched in 1996. While India is its primary market, the service also operates in the USA, UK, Canada, Singapore, Australia and the UAE. As of 2017, it was reported to have a user base of 35 million.
Governance
Shaadi.com makes no explicit commitment to freedom of expression and privacy, and does not disclose whether it has any oversight mechanisms in place. The company also does not disclose whether it has any internal mechanisms such as employee training on freedom of expression and privacy issues, or a whistleblower policy. Further, there are no disclosures as to any process of impact assessment for privacy and freedom of expression related concerns. The company does not disclose if it is part of any multi-stakeholder initiatives, or other organizations that engage with freedom of expression and privacy issues, or groups that are impacted by the company’s business. While details of a Grievance Officer are provided in the company’s Privacy Policy, it is not clearly disclosed if the mechanism may be used for freedom of expression or privacy related complaints. The company makes no public report of the complaints that it receives, and provides no clear evidence that it responds to them.
Freedom Of Expression
General
The Terms of Service are easily locatable on the website, and are available in English. The Terms are presented in an understandable manner, with section headers, but provide no additional guidance such as summaries, tips or graphics to explain the terms. Shaadi.com makes no disclosure about whether it notifies users of changes to the Terms, or how it may do so. Shaadi.com also does not maintain any public archives or change log.
Content Restriction and Termination of Services
Shaadi.com discloses an indicative list of prohibited activities and content, but states that it may terminate services for any reason. Shaadi.com makes no disclosures about the process it uses to identify violations and enforce rules, or whether any government or private entity receives priority consideration in flagging content. Shaadi.com does not disclose data about the volume and nature of content and accounts it restricts. Shaadi.com makes no disclosures about its process for responding to requests from any third parties to restrict any content or users. The Terms do not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to the requests. Shaadi.com makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities. Shaadi.com discloses that it notifies users via email when restricting their accounts.
Shaadi.com also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts or URLs affected, the types of subject matter associated with the requests, etc. Registration for the service requires a mobile number, which may be tied to offline identity.
Privacy
General
The Privacy Policy is easily locatable on the website, and is available in English. The Policy is presented in an understandable manner, with section headers, but provides no additional guidance such as summaries, tips or graphics to explain the terms.
Shaadi.com discloses that material changes to the Privacy Policy will be notified by posting a prominent link on the Homepage. Further, if personally identified information is used in a materially different manner from that stated at the time of collection, Shaadi.com commits to notify users by email. However, Shaadi.com does not disclose a time frame within which it notifies users prior to the changes coming into effect. Shaadi.com also does not maintain any public archives or change log.
Collection, Use and Sharing of Personal Information
Shaadi.com clearly discloses the types of personal and non-personal information it may collect, but does not explicitly disclose how it collects the information. There is no commitment to limit collection only to information that is relevant and necessary to accomplish the purpose of the service.
While the Privacy Policy states the terms of sharing information, it makes no type-specific disclosures about how different types of user information may be shared, or the purposes for which they may be shared. Shaadi.com also does not disclose the types of third parties with which information may be shared. Shaadi.com clearly discloses that it may share user information with government or legal authorities.
The Privacy Policy discloses the purposes for which the information is collected, but does not disclose if user information is combined from different services. Shaadi.com makes no commitment to limit the use of information to the purpose for which it was collected. Shaadi.com makes no disclosures about how long it retains user information. It does not disclose whether it retains de-identified information, or its process for de-identification.
Shaadi.com does not disclose whether it collects information from third parties through technical means, how it does so, or its policies about use, sharing, retention etc. Shaadi.com does not make any disclosures about its processes for responding to third party requests for user information. The Privacy Policy does not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to the requests. Shaadi.com makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities.
Shaadi.com also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, the type of authority or legal process through which the request was made, etc.
User Control over Information
Shaadi.com does not disclose the time frame within which it may delete user information, if at all, after users terminate their account. Shaadi.com does not disclose whether users can control the collection of information by Shaadi.com. The Policy states that users are allowed to remove both public and private information from the database. However, certain (unspecified) financial information and account related information submitted at the time of registration may not be removed or changed.
Shaadi.com does not disclose if users are provided options to control how their information is used for targeted advertising, or if targeted advertising is off by default.
Shaadi.com does not disclose whether users may access a copy of their information, or what information may be available. Shaadi.com does not disclose whether it notifies users when their information is sought by government entities or private parties.
Security
Shaadi.com discloses that it follows generally accepted industry standards to protect personal information. Employees are granted access on a need to know basis. Shaadi.com does not disclose whether it has a security team that audits the service for security risk, or whether it commissions third party audits.
Shaadi.com does not disclose whether it has any process, policy or mechanism in place for researchers to submit security vulnerabilities, and how it would respond to them. Shaadi.com does not explicitly commit to notify the relevant authorities without undue delay in case of a data breach. Shaadi.com does not disclose whether it notifies affected users about breaches, and any steps it may take to minimize impact.
Shaadi.com discloses that sensitive information, such as card numbers, is transmitted using the Secure Socket Layer protocol, but not whether all user communications are encrypted by default. Shaadi.com does not disclose whether it uses advanced authentication methods to prevent unlawful access. Shaadi.com does not disclose whether users can view their recent account activity, or if it notifies users about unusual activity and possibly unauthorized access.
Shaadi.com publishes privacy and security tips on its website which provide guidance about risks associated with the service, and how they may be avoided.
Hike Messenger
www.get.hike.in
Industry: Internet Instant Messaging
Services evaluated: Instant Messaging and VoIP application
Hike Messenger is an Indian cross-platform messaging application for smartphones. Users can exchange text messages, communicate over voice and video calls, and exchange pictures, audio, video and other files. Hike launched in November 2012 and, as of January 2016, became the first Indian internet company to cross 100 million users in India. It logs a monthly messaging volume of 40 billion messages. Hike's parent, Bharti SoftBank, is a joint venture between Bharti Enterprises and SoftBank, a Japanese telecom firm. As of August 2016, Hike was valued at $1.4 billion.
Governance
Hike makes no explicit commitment to freedom of expression and privacy, and does not disclose whether it has any oversight mechanisms in place. Hike also does not disclose whether it has any internal mechanisms such as employee training on freedom of expression and privacy issues, or a whistleblower policy. Further, there are no disclosures as to any process of impact assessment for privacy and freedom of expression related concerns. Hike does not disclose if it is part of any multi-stakeholder initiatives, or other organizations that engage with freedom of expression and privacy issues, or groups that are impacted by Hike's business.
Hike’s Terms of Use provide contact details for submitting queries and complaints about the usage of the application. It notes that the complaints will be addressed in the manner prescribed by the Information Technology Act, 2000 and rules framed thereunder. The Terms do not disclose if the mechanism may be used for freedom of expression or privacy related issues. Hike makes no public report of the complaints that it receives, and provides no clear evidence that it responds to them.
Freedom of Expression
General
The Terms of Service are easily locatable on the website, and are available in English. The terms are presented in an understandable manner, with section headers, and often provide examples to explain the terms. Hike may make changes to the Terms at its discretion without any prior notice to the users. Hike does not disclose whether users are notified after changes have been made, or whether it maintains a public archive or change log.
Content Restriction and Termination of Services
Though the Terms disclose a range of content and activities prohibited by the service, Hike may delete content for any reason at its sole discretion. Further, Hike may terminate or suspend the use of the Application at any time without notice to the user.
Hike makes no disclosures about the process it uses to identify violations and enforce its rules, or whether any government or private entity receives priority consideration in flagging content. Hike does not disclose data about the volume and nature of content and accounts it restricts.
Hike makes no disclosures about its process for responding to requests from any third parties to restrict any content or users. The Terms do not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to the requests. Hike makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities.
Hike also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, etc.
Identity Policy
A mobile number is required to sign up for the service, which could potentially be connected to offline identity.
Privacy
General
The Privacy Policy is easily locatable on the website, and is available in English. The Policy is presented in an understandable manner, with section headers, and often provides examples to explain the terms.
Hike discloses that changes to the Privacy Policy will be posted on Hike website, and does not commit to directly notifying users of changes. Users are advised to review the website from time to time to remain aware of the terms. Hike does not disclose a time frame within which it may notify changes prior to them coming into effect. Hike also does not disclose whether it maintains a public archive or change log.
Collection, Use and Sharing of Information
Hike clearly discloses the types of user information it collects. However, Hike makes no explicit commitment to limit collection only to information that is relevant and necessary to accomplish the purpose of the service.
Hike discloses that user information may be shared for a variety of purposes, but does not disclose the type, or names of third parties that may be given access to the information. Hike discloses that it may share user information with government entities and legal authorities.
The Privacy Policy states the purposes for which user information is collected and shared, but makes no commitment to limit the use of information to the purpose for which it was collected.
Hike discloses that undelivered messages are stored with Hike’s servers till they are delivered, or for 30 days, whichever is earlier. Messages or files sent through the service also reside on Hike’s servers for a short (unspecified) period of time till the delivery of the messages or files is complete. Hike does not disclose the duration for which it retains information such as profile pictures and status updates. Hike does not disclose whether it retains de-identified information, or its process for de-identification. Hike discloses that, subject to any applicable data retention laws, it does not retain user information beyond 30 days from deletion of the account.
Hike does not disclose whether it collects information from third parties through technical means, and how it does so, or its policies about use, sharing, retention etc.
Hike does not make any disclosures about its processes for responding to third party requests for user information. The Privacy Policy does not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to the requests. Hike makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities.
Hike also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, the type of authority or legal process through which the request was made, etc.
Hike does not disclose whether it notifies users when their information is sought by government entities or private parties.
User Control over Information
Hike discloses that the user may choose not to submit certain user information, but also notes that this may hinder use of the application. Hike makes no disclosure about whether users may request deletion of their user information.
Hike discloses that users may opt out or opt in for specific services or products which may allow user information to be used for marketing or advertising purposes. Hike does not disclose if targeted advertising is on by default.
Hike does not disclose whether users may obtain a copy of their user information.
Security
Hike discloses that it has security practices and procedures to limit employee access to user information on a need to know basis only. Hike does not disclose whether it has a security team that audits the service for security risk, or whether it commissions third party audits. Hike does not disclose whether it has any process, policy or mechanism in place for researchers to submit security vulnerabilities, and how it would respond to them.
Hike does not explicitly commit to notify the relevant authorities without undue delay in case of a data breach, but discloses that it may attempt to notify the user electronically. However, the company does not disclose the types of steps it would take to minimize the impact of a data breach.
Hike does not disclose if transmission of user information is encrypted by default, or whether it uses advanced authentication methods to prevent unlawful access. Hike does not disclose whether users can view their recent account activity, or if it notifies users about unusual activity and possibly unauthorized access.
Hike does not publish any materials that educate users about cyber risks relevant to its service.
Aircel
www.aircel.com
Industry: Telecommunications
Services evaluated: Broadband and Narrowband Mobile Internet Services
The Aircel group is a joint venture between Maxis Communications Berhad of Malaysia and Sindya Securities & Investments Private Limited. It is a GSM mobile service provider with a subscriber base of 65.1 million users. The company commenced operations in 1999 and has since become a pan India operator providing a host of mobile voice and data telecommunications services.
Governance
Aircel’s Terms and Conditions state that it is a duty of all service providers to assure that the privacy of their subscribers (not affecting national security) shall be scrupulously guarded. However, the company makes no similar commitment to freedom of expression.
Aircel also does not disclose whether it has any oversight mechanisms in place. However, Aircel does disclose that it has established a Whistleblower Policy and an Ethics Hotline. Further, the Privacy Policy states that employees are expected to follow a Code of Conduct and Confidentiality Policies in their handling of user information. There are no disclosures as to any process of impact assessment for privacy and freedom of expression related concerns. Aircel does not disclose if it is part of any multi-stakeholder initiatives, or any other organizations that engage with freedom of expression and privacy issues, or groups that are impacted by Aircel's business.
Aircel has a process for receiving complaints on its website under the Customer Grievance section. However, it is not clearly disclosed whether this process may be used for freedom of expression and privacy related issues. Though Aircel does disclose information such as the number of complaints received and redressed and the number of appeals filed, it makes no disclosure as to whether any complaints were specifically related to freedom of expression or privacy.
Freedom Of Expression
General
The Terms and Conditions are not easily locatable, and are found as part of a larger document titled Telecom Consumers Charter, which is itself posted as an inconspicuous link on the Customer Grievance page. The Terms are provided only in English, although it is likely that Aircel has a large Hindi-speaking user base. The Terms are presented in an understandable manner, with section headers, but provide no additional guidance such as summaries, tips or graphics to explain the terms.
Aircel discloses that it may make changes to the Terms without notice to users, or with written notice addressed to the last provided address, at its sole discretion. Aircel does not disclose if it maintains a public archive or change log.
Content Restriction and Termination of Services
The Terms prohibit certain activities, but Aircel discloses that it may terminate services for a user at its sole discretion for any reason, including a violation of its Terms.
Aircel makes no disclosures about the process it uses to identify violations and enforce its rules, or whether any government or private entity receives priority consideration in flagging content. Aircel does not disclose data about the volume and nature of content and accounts it restricts.
Aircel makes no disclosures about its process for responding to requests from third parties to restrict content or users. The Terms do not disclose the basis under which Aircel may comply with government or private party requests, nor whether any due diligence is conducted before responding to the requests. Aircel makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities. Aircel does not disclose if it notifies users when they try to access content that has been restricted, and the terms expressly waive users’ right to notice if their services are suspended/terminated.
Aircel does not disclose its policy on network management, or whether it prioritizes, blocks, or delays certain types of traffic, applications, protocols, or content for reasons beyond assuring quality of service and reliability. Notably, in its comments to the Telecom Regulatory Authority of India on the issue of regulation of Over-The-Top services, it argued for the right of telecom service providers to negotiate commercial agreements with OTT providers, as well as the right to employ non-price differentiation and network management practices.
Aircel discloses that it may terminate its services, in whole or in part, at its sole discretion and for any reason, including directions from the government. Aircel does not disclose its process for responding to requests for network shutdowns, or the legal authority that makes such requests, nor does it commit to push back on such requests. The terms waive the users' right to notice when services are suspended. Aircel also provides no data about the number of requests received or complied with.
Aircel discloses that it requires government approved identification in order to perform verifications.
Privacy
General
The Privacy Policy is easily locatable on the website, and is available in English. It is likely that Aircel has a large Hindi- and vernacular-speaking user base; however, the website does not provide any other language versions of the Privacy Policy. The Policy is presented in an understandable manner, with section headers, but provides no additional guidance such as summaries, tips or graphics to explain the terms.
The Privacy Policy states that changes will be reflected on the website, and makes no disclosure about whether it will directly notify users. Aircel does not disclose a time frame within which it may notify users prior to the changes coming into effect. Aircel also does not maintain any public archives or change log.
Collection, Use and Sharing of Information
Though Aircel discloses the types of user information it may collect, it does not explicitly disclose how it collects the information. Aircel makes no commitment to limit collection only to information that is relevant and necessary to accomplish the purpose of the service.
While the Privacy Policy states the terms of sharing information, it makes no type-specific disclosures about how different types of user information may be shared. Further, while Aircel broadly discloses the type of third parties with which it may share information, it does not provide a specific list of names. Aircel clearly discloses that it may share user information with government or legal authorities.
The Privacy Policy broadly states the purposes for which the information is collected, but does not disclose in more specific terms the purposes for which various types of user information may be collected. Aircel also does not disclose if user information is combined from different services. Aircel makes no commitment to limit the use of information to the purpose for which it was collected.
Aircel makes no disclosures about how long it retains user information, and the Privacy Policy states that it may retain information for as long as it requires. Aircel does not disclose whether it retains de-identified information, or its process for de-identification. Aircel does not disclose the time frame within which it may delete user information, if at all, after users terminate their account.
Aircel does not disclose whether it collects information from third parties through technical means, how it does so, or its policies about use, sharing, retention etc. Aircel does not make any disclosures about its processes for responding to third party requests for user information. The Privacy Policy does not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to the requests. Aircel makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities.
Aircel also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, the type of authority or legal process through which the request was made, etc.
Aircel does not disclose whether it notifies users when their information is sought by government entities or private parties.
User Control over Information
Aircel does not disclose whether users can control the collection of information by Aircel. The Privacy Policy discloses that if information is not provided, or consent for usage is withdrawn, Aircel reserves the right to discontinue the service for which the information is sought. Aircel does not disclose if users can request the deletion of information.
Aircel discloses that users can opt in or opt out of receiving telemarketing communications, and that such communications require a specific opt-in. However, Aircel does not disclose any options with respect to the use of user information for such purposes. Users may only choose to opt in or opt out of receiving commercial communications, and have no control over whether their information is used in the first place.
Aircel does not disclose whether users may access a copy of their information, or what information may be available.
Security
Aircel discloses that it has adopted measures to protect information from unauthorized access and to ensure that personal information is accessible to employees or partners' employees strictly on a need-to-know basis. Aircel discloses that its employees are bound by a Code of Conduct and Confidentiality Policies. Aircel does not disclose whether it has a security team that audits the service for security risk, or whether it commissions third party audits.
Aircel does not disclose whether it has any process, policy or mechanism in place for researchers to submit security vulnerabilities, or how it would respond to them.
Aircel does not explicitly commit to notify the relevant authorities without undue delay in case of a data breach. Aircel does not disclose whether it notifies affected users about breaches, or any steps it may take to minimize impact.
Aircel discloses that highly confidential information, such as passwords and credit card numbers, is transmitted using the Secure Socket Layer protocol. However, Aircel does not disclose if all user communications are encrypted by default. Aircel also does not disclose whether it uses advanced authentication methods to prevent unlawful access. Aircel does not disclose whether users can view their recent account activity, or if it notifies users about unusual activity and possibly unauthorized access.
Aircel publishes information about Security Awareness and Alerts that details various threats on the internet, and how they may be countered.
Reliance Jio
www.jio.com
Industry: Telecommunications
Services evaluated: Broadband and Narrowband mobile internet services
Reliance Jio Infocomm Ltd. is a wholly owned subsidiary of Reliance Industries Ltd., and operates a wireless 4G LTE network across all 22 telecom circles in India. It does not offer 2G/3G based services, making it India's only 100% VoLTE network. Jio began a massive rollout of its service in September 2016, and was reported to have reached 5 million subscribers in its first week. As of October 25, 2016, Jio is reported to have reached 24 million subscribers.
Governance
Jio does not score well in the Governance metrics. It makes no explicit commitment to freedom of expression and privacy, and does not disclose whether it has any oversight mechanisms in place. The company also does not disclose whether it has any internal mechanisms in place such as employee training on freedom of expression and privacy issues, or a whistleblower policy. Further, there are no disclosures as to any process of impact assessment for privacy and freedom of expression related concerns. The company does not disclose if it is part of any multi-stakeholder initiatives, or other organizations that engage with freedom of expression and privacy issues, or groups that are impacted by the company’s business.
Jio's website discloses a process for grievance redressal, along with the contact details for its Grievance Officer. The Regulatory Policy also lays down a Web Based Complaint Monitoring System for customer care. However, neither mechanism clearly discloses that the process may be used for freedom of expression and privacy issues. In fact, the grievance redressal process under the Terms and Conditions seems primarily meant for copyright owners alleging infringement. Jio makes no public report of the complaints it receives, and provides no clear evidence that it responds to them.
Freedom of Expression
General
The Terms of Service are easily locatable on the website, and are available in English. It is likely that Jio has a large Hindi and vernacular speaking user base. However, the website does not have any other language versions of the Terms of Service.
The Terms are presented in an understandable manner, with section headers, but provide no additional guidance such as summaries, tips or graphics to explain the terms.
Jio discloses that changes to the Terms of Service may be communicated through a written notice to the last address given by the Customer, or through a public notice in print media. However, this may be at Jio’s sole discretion. Further, Jio does not disclose a time frame within which it notifies users prior to the changes coming into effect. Jio also does not maintain any public archives or change log.
Content Restriction and Termination of Services
The Terms of Service disclose a range of proscribed activities, and state that any violation of the Terms may be grounds to suspend or terminate services. However, Jio makes no disclosures about its process for identifying violations and enforcing rules, or whether any government or private entity receives priority consideration in flagging content. There are no clear examples provided to help users understand the provisions.
Jio does not disclose data about the volume and nature of content and accounts it restricts.
Jio makes no disclosures about its process for responding to requests from third parties to restrict content or users. The Terms do not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to requests. Jio makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities. Jio does not disclose if it notifies users when they try to access content that has been restricted, or if it notifies users when their account has been restricted.
Jio also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts or URLs affected, the types of subject matter associated with the requests, etc.
Jio does not disclose its policy on network management, or whether it prioritizes, blocks, or delays certain types of traffic, applications, protocols, or content for reasons beyond assuring quality of service and reliability.
Jio makes no disclosures about its policy on network shutdowns, or why it may shut down service to a particular area or group of users. Jio does not disclose its process for responding to such requests, or the legal authority that makes the requests, or whether it notifies users directly when it restricts access to the service. It also provides no data about the number of requests received or complied with.
Jio requires that users verify their identity with government issued identification such as Passport, Driver’s License or Aadhaar.
Privacy
General
The Privacy Policy is easily locatable on the website, and is available in English. It is likely that Jio has a large Hindi- and vernacular-speaking user base. However, the website does not have any other language versions of the Privacy Policy.
The Policy is presented in an understandable manner, with section headers, but provides no additional guidance such as summaries, tips or graphics to explain the terms.
Jio commits to make all efforts to communicate significant changes to the policy, but does not disclose its process for doing so. The policy recommends that users periodically review the website for any changes. Jio does not disclose a time frame within which it notifies users prior to the changes coming into effect. Jio also does not maintain any public archives or change log.
Collection, Use and Sharing of Information
Jio clearly discloses the types of personal and non-personal information it may collect, but does not explicitly disclose how it collects the information. There is no commitment to limit collection only to information that is relevant and necessary to accomplish the purpose of the service.
Jio commits to not sell or rent user information to third parties, but discloses that it may use and share non personal information at its discretion.
Jio discloses the broad circumstances in which it may share personal information with third parties and the types of entities it may disclose such information to. The policy states that such partners operate under contract and strict confidentiality and security restrictions. However, it does not specifically disclose the names of third parties it shares information with. Jio clearly discloses that it may share user information with government or legal authorities.
Jio discloses that it may share user information with third party websites or applications at the behest of the user (for instance, when logging into services with a Jio account). It discloses that Jio will provide notice to the user, and obtain consent regarding the details of the information that will be shared. In such a situation, the third party’s privacy policy would be applicable to the information shared.
The Privacy Policy broadly states the purposes for which the information is collected, but does not disclose if user information is combined from different services. In detailing the types of third parties that Jio may share user information with, Jio also discloses the respective purposes for sharing. However, Jio makes no commitment to limit the use of information to the purpose for which it was collected.
Jio does not disclose whether it collects information from third parties through technical means, and how it does so, or its policies about use, sharing, retention etc.
Jio does not make any disclosures about its processes for responding to third party requests for user information. The Privacy Policy does not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to the requests. Jio makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities.
Jio also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, the type of authority or legal process through which the request was made, etc.
Jio does not disclose whether it notifies users when their information is sought by government entities or private parties.
User Control over Information
Jio makes no disclosures about how long it retains user information. It does not disclose whether it retains de-identified information, or its process for de-identification. Jio does not disclose the time frame within which it may delete user information, if at all, after users terminate their account.
Jio does not disclose whether users can control the collection of information by Jio. The Privacy Policy does allow requests for access, correction or deletion of user information, but also notes that deletion of certain (unspecified) information may lead to termination of the service. However, deletion of information would be subject to any applicable data retention laws, law enforcement requests, or judicial proceedings. Further, a request may be rejected if there is extreme technical difficulty in implementing it, or if it may risk the privacy of others.
Though the Privacy Policy allows for access requests, it does not disclose what user information may be obtained, or whether it may be made available in a structured data format. Jio does not disclose if targeted advertising is on by default, or whether users can control how their information is used for these purposes.
Security
Jio discloses that it has adopted measures to protect information from unauthorized access and to ensure that personal information is accessible to employees or partners' employees strictly on a need-to-know basis. Jio does not disclose whether it has a security team that audits the service for security risk, or whether it commissions third party audits.
Jio discloses that it has reasonable security practices and procedures in place in line with international standard IS/ISO/IEC 27001, to protect data and information. Jio does not disclose whether it has any process, policy or mechanism in place for researchers to submit security vulnerabilities, and how it would respond to them. Jio does not explicitly commit to notify the relevant authorities without undue delay in case of a data breach. Jio does not disclose whether it notifies affected users about breaches, and any steps it may take to minimize impact.
Jio does not disclose if transmission of user information is encrypted by default, or whether it uses advanced authentication methods to prevent unlawful access. Jio does not disclose whether users can view their recent account activity, or if it notifies users about unusual activity and possibly unauthorized access.
Jio does not publish any materials that educate users about cyber risks relevant to its service.
For more information about the detailed methodology followed, please see - https://rankingdigitalrights.org/wp-content/uploads/2016/07/RDR-revised-methodology-clean-version.pdf.
Internet Users Per 100 People, World Bank, available at http://data.worldbank.org/indicator/IT.NET.USER.P2.
Telecommunications Indicator Report, Telecom Regulatory Authority of India, available at http://www.trai.gov.in/WriteReadData/PIRReport/Documents/Indicator_Reports.pdf.
The upstaging of extant telcos did, however, lead to allegations of anti-competitive practices by both Jio as well as existing telcos such as Vodafone and Airtel. See http://thewire.in/64966/telecom-regulator-calls-time-out-as-reliance-jio-coai-battle-turns-anti-consumer/.
Get Ready for India’s Internet Boom, Morgan Stanley, available at http://www.morganstanley.com/ideas/rise-of-internet-in-india.
Circular on Business Responsibility Reports, Securities and Exchange Board of India, (August 13, 2012), available at http://www.sebi.gov.in/cms/sebi_data/attachdocs/1344915990072.pdf.
FAQ on Corporate Social Responsibility, Ministry of Corporate Affairs, available at https://www.mca.gov.in/Ministry/pdf/FAQ_CSR.pdf.
Govind vs. State of Madhya Pradesh, (1975) 2 SCC 148; R. Rajagopal vs. State of Tamil Nadu, (1994) 6 S.C.C. 632; PUCL v. Union of India, AIR 1997 SC 568; Distt. Registrar & Collector vs. Canara Bank, AIR 2005 SC 186.
Justice K.S. Puttaswamy (Retd.) & Another v. Union of India & Others, available at http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841.
PUCL v Union of India, AIR 1997 SC 568.
According to Section 2(w) of the IT Act, “Intermediary” with respect to any particular electronic records, means “…any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web hosting service providers, search engines, online payment sites, online-auction sites, online market places and cyber cafes.”
See http://cis-india.org/internet-governance/resources/it-procedure-and-safeguards-for-interception-monitoring-and-decryption-of-information-rules-2009
Rule 19 & 20, Interception Rules.
See http://tikona.in/sites/default/files/pdf_using_mpdf/1-ISP%20Agreement%20Document.pdf.
Pranesh Prakash and Jarpreet Grewal, How India Regulates Encryption, Centre for Internet and Society, (October 30, 2015) available at http://cis-india.org/internet-governance/blog/how-india-regulates-encryption.
See http://www.wipo.int/edocs/lexdocs/laws/en/in/in098en.pdf.
As clarified in a Central Government Press Note, this does not apply to corporates collecting data from other corporations, but only to those handling data directly from natural persons. See http://meity.gov.in/sites/upload_files/dit/files/PressNote_25811.pdf.
Section 79 – 'Exemption from liability of intermediary in certain cases - (1) Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-sections (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link hosted by him.
(2) The provisions of sub-section (1) shall apply if-
(a) the function of the intermediary is limited to providing access to a communication system over which information made available by third parties is transmitted or temporarily stored; or
(b) the intermediary does not-
(i) initiate the transmission,
(ii) select the receiver of the transmission, and
(iii) select or modify the information contained in the transmission;
(c) the intermediary observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe in this behalf.
(3) The provisions of sub-section (1) shall not apply if-
(a) the intermediary has conspired or abetted or aided or induced, whether by threats or promise or otherwise, in the commission of the unlawful act (ITAA 2008);
(b) upon receiving actual knowledge, or on being notified by the appropriate Government or its agency, that any information, data or communication link residing in or connected to a computer resource controlled by the intermediary is being used to commit the unlawful act, the intermediary fails to expeditiously remove or disable access to that material on that resource without vitiating the evidence in any manner.
Explanation:- For the purpose of this section, the expression "third party information" means any information dealt with by an intermediary in his capacity as an intermediary.'
Information Technology (Intermediaries guidelines) Rules, 2011, available at http://dispur.nic.in/itact/it-intermediaries-guidelines-rules-2011.pdf.
See http://cis-india.org/internet-governance/resources/information-technology-procedure-and-safeguards-for-blocking-for-access-of-information-by-public-rules-2009.
License Agreement For Unified License, available at http://www.dot.gov.in/sites/default/files/Amended%20UL%20Agreement_0_1.pdf?download=1.
http://www.trai.gov.in/WriteReadData/WhatsNew/Documents/Regulation_Data_Service.pdf.
OECD Privacy Principles, available at http://oecdprivacy.org/; Report of the Group of Experts on Privacy, Planning Commission of India, available at http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf.
TATA Communications Annual Report 2016, available at https://www.tatacommunications.com/sites/default/files/FIN-AnnualReport2015-16-AR-20160711.pdf.
Submarine Cable Networks Data, available at http://www.submarinenetworks.com/stations/asia/india.
National Voluntary Guidelines on Social, Environmental and Economic Responsibilities of Business, Ministry of Corporate Affairs, Government of India; SEBI Amendment to Listing Agreement, (August 13, 2012) available at http://www.sebi.gov.in/cms/sebi_data/attachdocs/1344915990072.pdf.
Employee Code of Conduct, TATA Group, available at http://www.tata.com/pdf/tcoc-booklet-2015.pdf.
TATA Communications Business Responsibility Policies, available at http://www.tatacommunications.com/sites/default/files/Business_Responsibility_Policies.pdf.
TATA Communications Whistleblower Policy, available at https://www.tatacommunications.com/sites/default/files/Whistleblower%20Policy%20-%20Designed%20Version.pdf.
Kamlesh Bajaj, DSCI: A self-regulatory organization, available at https://www.dsci.in/sites/default/files/DSCI%20Privacy%20SRO.pdf.
Customer Charter, TATA Communications, available at https://www.tatacommunications.com/legal/customer-charter.
AUP Violations Grievances Portal, available at http://www.tatacommunications.com/reporting-aup-violations; Privacy Policy, TATA Communications, available at https://www.tatacommunications.com/policies/privacy-policy.
Shamnad Basheer, Busting a Baloney: Merely Viewing Blocked Websites Will Not Land You in Jail, Spicy IP, (August 23, 2016), available at http://spicyip.com/2016/08/busting-a-baloney-merely-viewing-blocked-websites-will-not-land-you-in-jail.html.
Acceptable Use Policy, TATA Communications, available at https://www.tatacommunications.com/policies.
See http://login.vsnl.com/terms_n_conditions.html.
This includes inappropriate content, which may be threatening, hateful or abusive content; content that infringes any intellectual property right; transfer of viruses or harmful content, fraudulent content (such as credit card fraud) and spam or unsolicited email.
Response to Consultation Paper on Regulatory Framework for Over-the-top (OTT) Services, TATA Communications, available at http://trai.gov.in/Comments/Service-Providers/TCL.pdf.
Kaustabh Srikanth, Technical Observations about Recent Internet Censorship in India, Huffington Post, (January 6, 2015) available at http://www.huffingtonpost.in/kaustubh-srikanth/technical-observations-about-recent-internet-censorship-in-india/
See https://www.tatacommunications.com/policies/privacy-policy; http://login.vsnl.com/privacy_policy.html (VSNL). However, there are other documents available on the TCL website purporting to be the Privacy Policy. Since the policies are not dated, it is not entirely clear which is applicable. (See http://www.tatacommunications.com/downloads/Privacy-Policy-for-TCL-and-Indian-Subs.pdf).
The disclosure of governmental requests may be affected by laws which require such information to remain confidential, as explained in detail in Section I of this report.
See http://www.alexa.com/siteinfo/rediff.com.
See http://www.rediff.com/terms.html.
See http://ishare.rediff.com/templates/tc.html.
See http://blogs.rediff.com/terms/.
See http://www.rediff.com/news/disclaim.htm.
See http://blogs.rediff.com/terms/.
Performance Indicator Report, Telecom Regulatory Authority of India, (August 2016) available at http://www.trai.gov.in/WriteReadData/PIRReport/Documents/Indicator_Report_05_August_2016.pdf.
See https://www.vodafone.com/content/sustainabilityreport/2015/index/operating-responsibly/human-rights.html.
Vodafone Sustainability Report, See http://static.globalreporting.org/report-pdfs/2015/ffaa6e1f645aa009c2af71ab9505b6b0.pdf.
Amit Pradhan, CISO, on Data Privacy at Vodafone, DSCI Blog, (July 15, 2015), available at https://blogs.dsci.in/interview-amit-pradhan-vodafone-india-on-privacy/.
See http://www.coai.com/about-us/members/core-members.
Process for registration of a complaint, Vodafone India Telecom Consumers’ Charter, available at https://www.vodafone.in/documents/pdfs/IndiaCitizensCharter.pdf.
Vodafone India: We are Pro Net Neutrality, Gadgets Now, (May 20, 2015), available at http://www.gadgetsnow.com/tech-news/vodafone-wont-toe-zero-rating-plan-of-airtel/articleshow/47349710.cms; Vodafone Response to TRAI Consultation Paper on Regulatory Framework for Over-the-Top (OTT) services, Vodafone India, (March 27, 2015) available at http://trai.gov.in/Comments/Service-Providers/Vodafone.pdf.
See http://www.vodafone.in/privacy-policy.
Vodafone Law Enforcement Disclosure Report, available at https://www.vodafone.com/content/sustainabilityreport/2014/index/operating_responsibly/privacy_and_security/law_enforcement.html.
Performance Indicator Report, Telecom Regulatory Authority of India, (August 2016) available at http://www.trai.gov.in/WriteReadData/PIRReport/Documents/Indicator_Report_05_August_2016.pdf.
Business Responsibility Reports, Reliance Communications Ltd., available at http://www.rcom.co.in/Rcom/aboutus/ir/pdf/Business-Responsibility-Report-2015-16.pdf.
Manual of Practice, Reliance Communications Ltd., available at http://www.rcom.co.in/Rcom/personal/customercare/pdf/Manual_of_Practice.pdf.
See http://www.rcom.co.in/Rcom/personal/home/pdf/1716-Telecom-Consumer-Charter_TRAI-180412.pdf.
See http://www.rcom.co.in/Rcom/personal/pdf/AUP.pdf.
See http://myservices.relianceada.com/ImplNewServiceAction.do#.
Prohibition Of Discriminatory Tariffs For Data Services Regulations, Telecom Regulatory Authority of India, February 8, 2016), available at http://www.trai.gov.in/WriteReadData/WhatsNew/Documents/Regulation_Data_Service.pdf.
Shaadi.com Terms of Use/Service Agreement, available at http://www.shaadi.com/shaadi-info/index/terms (Last visited on November 10, 2016).
Shaadi.com Privacy Policy, available at http://www.shaadi.com/shaadi-info/index/privacy (Last visited on November 10, 2016).
Shaadi.com Privacy Tips, available at http://www.shaadi.com/customer-relations/faq/privacy-tips (Last visited on November 10, 2016).
https://blog.hike.in/hike-unveils-its-incredible-new-workplace-3068f070af08#.zagtgq5lk
http://economictimes.indiatimes.com/small-biz/money/hike-messaging-app-raises-175-million-from-tencent-foxconn-and-others-joins-unicorn-club/articleshow/53730336.cms
https://medium.com/@kavinbm/175-million-tencent-foxconn-d9cc8686821f#.7w6yljaii
Hike Terms of Use, available at http://get.hike.in/terms.html (Last visited on November 10, 2016).
Hike Privacy Policy, available at http://get.hike.in/terms.html (Last visited on November 10, 2016).
Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).
Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).
Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).
Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).
Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).
Aircel National Customer Preference Registry, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=customercare_ndnc_page (Last visited on November 10, 2016).
Aircel National Customer Preference Registry, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=customercare_ndnc_page (Last visited on November 10, 2016).
http://www.counterpointresearch.com/reliancejio/
http://economictimes.indiatimes.com/tech/internet/gujarat-andhra-top-circles-for-jio-subscribers-cross-24mn-mark/articleshow/55040351.cms
Jio Terms and Conditions, available at https://www.jio.com/en-in/terms-conditions (Last visited on November 10, 2016).
Jio Terms and Conditions, available at https://www.jio.com/en-in/terms-conditions (Last visited on November 10, 2016).
Jio Terms and Conditions, available at https://www.jio.com/en-in/terms-conditions (Last visited on November 10, 2016).
Big Data in Governance in India: Case Studies
This work has been made possible by a grant from the John D. and Catherine T. MacArthur Foundation. The conclusions, opinions, or points of view expressed in the report are those of the authors and do not necessarily represent the views of the John D. and Catherine T. MacArthur Foundation.
Introduction
The research was conducted over a duration of 12 months as an exploratory study that sought to understand the potential opportunities and harms of big data, as well as to identify best practices and relevant policy recommendations. Each case study has been chosen based on the use of big data in that area and the opportunity it presents for policy recommendation and reform. Each case study seeks to answer a similar set of questions to allow for analysis across case studies.
What is Big Data
Big data has been ascribed a number of definitions and characteristics. Any study of big data must begin by conceptualizing and defining what big data is. Over the past few years, the term has become a buzzword, used to refer to any number of characteristics of a dataset, ranging from size to rate of accumulation to the technology in use.[1]
Many commentators have critiqued the term big data as a misnomer, misleading in its emphasis on size. We have surveyed various definitions and understandings of big data and document the significant ones below.
Computational Challenges
Data sets so large that they tax the capacities of main memory, local disk, and remote disk have been seen as the problem that big data technologies address. While this understanding focusses on only one feature, size, other characteristics posing a computational challenge to existing technologies have also been examined. The (US) National Institute of Standards and Technology has defined big data as data which “exceed(s) the capacity or capability of current or conventional methods and systems.”[2]
These challenges are not merely a function of size. Thomas Davenport provides a cohesive definition of big data in this context. According to him, big data is “data that is too big to fit on a single server, too unstructured to fit into a row-and-column database, or too continuously flowing to fit into a static data warehouse.”[3]
Data Characteristics
The most popular definition of big data was put forth in a 2001 report by the META Group (now Gartner), which looks at it in terms of the three V’s: volume,[4] velocity and variety. Big data is high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.[5]
Aside from volume, velocity and variety, other defining characteristics of big data articulated by different commentators are exhaustiveness,[6] granularity (fine-grained and uniquely indexical),[7] scalability,[8] veracity,[9] value[10] and variability.[11] It is highly unlikely that any data set satisfies all of the above characteristics. It is therefore important to determine which permutations and combinations of this gamut of attributes lead us to classify something as big data.
Qualitative Attributes
Prof. Rob Kitchin has argued that big data is qualitatively different from traditional, small data. Small data sets have relied on sampling techniques for the collection of data, have been limited in scope, temporality and size, and are “inflexible in their administration and generation.”[12]
In this respect there are two qualitative attributes of big data which distinguish it from traditional data. First, the ability of big data technologies to accommodate unstructured and diverse datasets which were hitherto of no use to data processors is a defining feature. This allows the inclusion of many new forms of data from new and data-heavy sources such as social media and digital footprints. The second attribute is the relationality of big data.[13]
Relationality relies on the presence of common fields across datasets, which allow different databases to be conjoined. This attribute is usually a feature not of the size but of the complexity of data, enabling a high degree of permutation and interaction within and across data sets.
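To make relationality concrete, the sketch below joins two hypothetical datasets on a shared identifier field. The library choice, column names and records are illustrative assumptions, not drawn from any dataset discussed in this report.

```python
# Minimal sketch of "relationality": joining two datasets on a common field.
# All names and values here are hypothetical, invented for illustration.
import pandas as pd

telecom = pd.DataFrame({
    "subscriber_id": [101, 102, 103],
    "data_usage_gb": [1.2, 8.5, 0.4],
})
commerce = pd.DataFrame({
    "subscriber_id": [101, 103, 104],
    "monthly_spend": [450, 120, 900],
})

# The shared "subscriber_id" field lets otherwise unrelated datasets be
# conjoined, which is what gives big data its relational character.
linked = telecom.merge(commerce, on="subscriber_id", how="inner")
print(linked)
```

Joined records of this kind are what allow profiles to be assembled across otherwise separate databases.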
Patterns and Inferences
Instead of focussing on the ontological attributes or computational challenges of big data, Kenneth Cukier and Viktor Mayer-Schönberger define big data in terms of what it can achieve.[14]
They defined big data as the ability to harness information in novel ways to produce useful insights or goods and services of significant value. Building on this definition, Rohan Samarajiva has categorised big data into non-behavioral big data and behavioral big data. The latter leads to insights about human behavior.[15]
Samarajiva believes that transaction-generated data (commercial as well as non-commercial) in a networked infrastructure is what constitutes behavioral big data.
Scope of Research
The initial scope arrived at for this case study on the role of big data in governance in India focussed on the UID Project, the Digital India Programme and the Smart Cities Mission. Digital India is a programme launched by the Government of India to ensure that Government services are made available to citizens electronically by improving online infrastructure and increasing Internet connectivity, making the country digitally empowered in the field of technology.[16]
The Programme has nine components, two of which focus on e-governance schemes.
Read More [PDF, 1948 Kb]
[1]. Thomas Davenport, Big Data at Work: Dispelling the Myths, Uncovering the Opportunities, Harvard Business Review Press, Boston, 2014.
[2]. The Big Data Conundrum: How to Define It?, MIT Technology Review, available at https://www.technologyreview.com/s/519851/the-big-data-conundrum-how-to-define-it/.
[3]. Supra note 1.
[4]. What constitutes high volume remains an unresolved matter. Intel, for instance, has described big data volumes as emerging in organizations generating a median of 300 terabytes of data a week.
[5]. See http://www.gartner.com/it-glossary/big-data/.
[6]. Viktor Mayer-Schönberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work and Think, John Murray, London, 2013.
[7]. Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences, Sage, London, 2014.
[8]. Nathan Marz and James Warren, Big Data: Principles and Best Practices of Scalable Realtime Data Systems, Manning Publications, New York, 2015.
[9]. Bernard Marr, Big Data: The 5 Vs Everyone Should Know, available at https://www.linkedin.com/pulse/20140306073407-64875646-big-data-the-5-vs-everyone-must-know.
[10]. Id.
[11]. Eileen McNulty, Understanding Big Data: The 7 Vs, available at http://dataconomy.com/seven-vs-big-data/.
[12]. Supra note 7.
[13]. Danah Boyd and Kate Crawford, Critical Questions for Big Data, Information, Communication and Society 15(5): 662–679, available at https://www.researchgate.net/publication/281748849_Critical_questions_for_big_data_Provocations_for_a_cultural_technological_and_scholarly_phenomenon.
[14]. Supra note 6.
[15]. Rohan Samarajiva, What is Big Data, available at http://lirneasia.net/2015/11/what-is-bigdata/.
[16]. See http://www.digitalindia.gov.in/content/about-programme.
Mapping MAG: A study in Institutional Isomorphism
This paper delves into the history of the formation of the Multi-Stakeholder Advisory Group (MAG) and the Internet Governance Forum (IGF), including the lessons from the past that should be applied in strengthening their present structures. The paper covers three broad areas:
- History of the formation of the MAG, its role within the IGF structure, the influences that have impinged on its scope of work, and the manner in which its evolution has deviated from its conceptualization
- Analysis of MAG membership (2006-2015): Trends in the selection and rotation of the MAG membership
- Recommendations to reform MAG/IGF
Jyoti Panday[1]
The recent renewal of the Internet Governance Forum[2] (IGF) mandate at the World Summit on the Information Society (WSIS)+10 High-Level Meeting[3] was something of a missed opportunity. The discussions focused narrowly on the periphery of the problem, the renewal of the mandate itself, leaving aside questions of vital importance such as strengthening and improving the structures and processes associated with the IGF. The creation of the IGF as a forum for governments and other stakeholders to discuss policy and governance issues related to the Internet was a watershed moment in the history of the Internet.
In the first decade of its existence the IGF has proven to be a valuable platform for policy debates, a space that fosters cooperation by allowing stakeholders to self-organise to address common areas of concern. But the IGF remains merely a platform for multistakeholder dialogue and is yet to realise its potential under its mandate to “find solutions to the issues arising from the use and misuse of the Internet” as well as to “identify emerging issues […] and, where appropriate, make recommendations”.[4]
From the information available in the public domain, it is evident that the IGF is not crafting solutions and recommendations or setting the agenda on emerging issues. Even if unintended, this raises the disturbing possibility that alternative processes and forums are filling the vacuum created by the unrealised IGF mandate and helming policy development and agenda setting on Internet use and access worldwide. This sits uneasily with the fact that currently there is no global arrangement that serves or could be developed as an institutional home for global internet governance issues.
Moreover, the economic importance of the internet, as well as its impact on national security, human rights and global politics, has drawn a wide range of actors who seek to exert their influence over its governance. Given the lack of a centralized global body with the authority to enforce norms and standards across political and functional boundaries, control of the internet is an important challenge for both developed and emerging economies. As the infrastructure over which the internet runs is governed by nation states and their laws, national governments continue to seek to exert their influence on global issues.
Divergent approaches to regulation and differences in the capacity to engage in these processes have led to a fragmentation of approaches to common challenges.[5] Importantly, not all governments are democratic, and some may impose restrictions on content and access that conflict with the open and global nature of the internet. Alongside national governments, transnational private corporations play a critical role in the security and stability of the internet. Much like the state, they too raise the niggling question of how to guard against the guardians.
Corporations’ control of sensitive information, their institutional identity, and the secrecy of their operations are all essential to their functioning but could also erode the practice of democratic governance and the rights and liberties of users online. Additionally, as issues of human rights, access and local content have become interlinked with public policy issues, civil society and academia have become relevant to traditionally closed policy spaces. Considering the variety of stakeholders and their competing interests, concerns about ensuring the stability and security of the Internet have led the international community to pursue a range of governance initiatives.
Implementing a Multistakeholder Approach
At the broadest level, the debate about the appropriate way forward has evolved as a contestation between two models. On the one hand is the state-centric ‘multilateral’ model of participation, and on the other a ‘multistakeholder’ approach that aims for bottom-up participation by all affected stakeholders. The multistakeholder approach sees resonance across several quarters,[6] including a high-level endorsement from the Indian government last year.[7] An innovative concept, the multistakeholder approach fits well within the wider debate about rethinking governance in a globalized world.
Proponents of the multistakeholder approach see it as a democratic process that allows for a variety of views to be included in decision making.[8] Nevertheless, the intertwining of the Internet and society pitches actors and interests at opposing ends. While a multistakeholder approach broadens the scope for participation, it also raises serious issues of representation and accountability. Since multistakeholder processes fall outside the traditional paradigm of governance, establishing legitimacy of processes and structures becomes all the more important.
The multistakeholder concept is only beginning to be critically studied or evaluated. There have been growing concerns, particularly from emerging economies,[9] about a lack of representation in policy development bodies and about issues affecting marginalised communities being overlooked in the policy development process. From this view, the multistakeholder model has created ‘transnational and semi privatized’ structures and ‘transnational elites’.[10] Such critics describe emerging and existing platforms derived from the multistakeholder concept as ‘an embryonic form of transnational democracy’ occupied by elite actors.[11]
Elite actors may include the state, private and civil society organisations, technical and academic communities and intergovernmental institutions. In the context thus sketched out, the key question that the WSIS+10 Review should have addressed is whether the IGF provides the space for the development of institutions and solutions that are capable of responding to the challenges of applying the multistakeholder concept to internet governance. The existing body of work on the role of the IGF has yet to identify, let alone come to terms with, this problem.
Applying critical perspectives to the essential structures and processes associated with the IGF becomes even more relevant given its recently renewed mandate. Already, however, the forum’s first planning meeting, scheduled to take place in Geneva this week, is mired in controversy[12] after a new Chair was named by the UN Secretary General.
The decision to appoint a new Chair was made without any form of public process or any indication of the selection criteria. Moreover, the membership of the "multistakeholder advisory group" (MAG), which decides the content and substance of the forum, was also renewed recently. Problematically, most of the nominations put forth by different constituent groups to represent them were rejected, and individuals were instead appointed through a parallel, top-down and secretive UN process. Of the 55 MAG members, 21 are new but only eight were officially selected by their respective groups.[13]
This paper focuses on the MAG’s structure and functioning and highlights issues and challenges in its working, so as to pave the way for strategic thinking on its improvement. A tentative beginning towards identifying the levers for change can be made by sifting through the eddies of history to uncover how the MAG has evolved and become politicised.
The paper makes two separate, but interrelated claims: first, it argues that, as the MAG is the de facto bureau essential to the functioning of the IGF, there is an urgent need to introduce transparency and accountability into the selection procedure for MAG members. Striking an optimum balance between expertise and legitimacy in the MAG composition is essential to ensure that workshops and sessions are not dominated by certain groups or interests and that the IGF remains an open, well-functioning circuit of information and robust debate.
Second, it argues for an immediate evaluation of the MAG’s operations given the calls for the production of tangible outcomes. There has been ongoing discussion within the broader community about the role of the IGF, with divisions between those who prefer a narrow interpretation of its mandate and those who want to broaden its scope to provide policy recommendations and solutions.[14]
The interpretation of the IGF mandate and whether the IGF should make recommendations has been a sticking point and is closely linked to the question of IGF’s legitimacy and relevance. Be that as it may, the intersessional work, best practices forum and dynamic coalitions over the last ten years have led to the creation of a vast repository of information that should feed into the pursuit of policy options and identification of best practices.
The true test of the multistakeholder model is not only to bring together a wide range of views but also to ensure that accumulated knowledge is applied to address common problems. Implementing a multistakeholder approach and developing solutions necessitates enhanced coordination amongst stakeholder groups and, in the context of the IGF, is contingent on the strength and stability of the MAG to facilitate such cooperation.
The paper is organised in three parts: in the first section I delve into the history of the formation of the MAG. To understand the MAG’s role within the IGF structure it is essential to revisit the influences that shaped its conceptualisation and subsequent evolution over the decade. A critical historical perspective provides the context of the multiple considerations that have impinged on MAG’s scope of work, of the manner in which MAG’s evolution has deviated from intentions, and the lessons from the past that should be applied in strengthening its present structure.
The second section analyses trends in the selection and rotation of the MAG membership and traces out the elite elements in the composition of the MAG. The analysis reveals two distinct stages in the evolution of the MAG membership which has remained significantly homogeneous across stakeholder representation. The final section of the paper focuses on a set of recommendations to ensure that the MAG is strengthened, becomes sustainable and provides the impetus for IGF reform in the future.
Origins of the IGF
The WSIS process was divided into two phases; the Geneva phase focused on principles of internet governance. The outcome documents of the first phase included a Declaration of Principles and a Plan of Action, adopted by 175 countries. Throughout the process, developing countries such as China, Brazil and Pakistan opposed the prevailing regime that allowed US dominance and control of ‘critical infrastructure’. As the first phase of the WSIS could not resolve these differences, the Working Group on Internet Governance (WGIG) was set up by the UN Secretary General to deliberate and report on the issues.
The establishment of the WGIG is an important development in the WSIS process not only because of the recommendations it developed to feed into the second phase of the negotiations, but also because of the procedural legitimacy the WGIG established through its working. The WGIG embodied the multistakeholder principle in its membership and open consultation processes. WGIG members were selected and appointed in their personal capacity through an open and consultative process. As a result the membership demonstrated diversity in the geography, stakeholder groups represented and gender demographics.
The consultations were open, transparent and allowed for a diverse range of views in the form of oral and written submissions from the public to feed into the policy process. At its final meeting the WGIG membership divided into smaller working groups to focus on specific issues, and reassembled at the plenary to review, discuss and consolidate sections which were then approved in a public forum. As the WGIG background paper notes “The WGIG agreed that transparency was another key ingredient to ensure ownership of the process among all stakeholders.”[15]
The WGIG final report[16] identified a vacuum within the context of existing structures and called for the establishment of a forum linked to the UN. The forum was to be modelled on the best practices and open format of the WGIG consultative processes allowing for the participation of diverse stakeholders to engage on an equal footing. It was in this context that the IGF was first conceptualised as a space for global multistakeholder ‘dialogue’ which would interface with intergovernmental bodies and other institutions on matters relevant to Internet governance.
The forum was conceived as a body that would connect different stakeholders involved in the management of the internet, as well as contribute to capacity-building in governance for developing countries, drawing on local sources of knowledge and expertise. Importantly, the forum was to promote and assess on an ongoing basis the embodiment of WSIS principles in Internet governance processes and make ‘recommendations’ and ‘proposals for action’ addressing emerging and existing issues not being dealt with elsewhere. However, as things turned out, exercises of power between states and institutional arrangements ultimately led to the development of a subtly altered version of the original IGF mandate.
Aftermath of the WGIG Report
The WGIG report garnered much attention and was welcomed by most stakeholders with the exception of the US government which along with private sector representatives such as Coordinating Committee of Business Interlocutors (CCBI) disagreed with the recommendations.[17] Pre-empting the publication of the report, the National Telecommunications and Information Administration (NTIA) issued a statement in June 2005 affirming its resolve to “maintain its historic role in authorizing changes or modifications to the authoritative root zone file.”[18]
The statement reiterated US government’s intention to fight for the preservation of the status quo, effectively ruling out the four alternative models for internet governance put forward in the WGIG report. The statement even referenced the WGIG report stating, “Dialogue related to Internet governance should continue in relevant multiple fora. Given the breadth of topics potentially encompassed under the rubric of Internet governance there is no one venue to appropriately address the subject in its entirety.”[19]
The final report was presented to PrepCom 3 of the second phase in July 2005, and the subsequent negotiations were by far the most significant in the context of the role and structure that the IGF would take in the future. The US stance on its role with regard to the root zone garnered pushback from civil society as well as other governments including Russia, Brazil, Iran and China. However, the most significant reaction came from the European Union, which issued a statement after the commencement of PrepCom 3 in September.
The EU’s position recognised that adjustments were needed in the institutional arrangements for internet governance and called for a new model of international cooperation which would include “the development and application of globally applicable public policy principles.”[20] The US had not anticipated this “shocking and profound change”. Now isolated in its position on the international governance of the internet, it sent a strongly worded letter[21] invoking its long-standing relationship with the EU and urging it to reconsider its stance.
The pressure worked, since the US was in a strong position to stymie any resolution from the WSIS process. Moreover, introducing reforms to the internet naming and numbering arrangements was not possible without US cooperation. The letter resulted in the EU backing away from its aggressive stance, and with it the push for the establishment of global policy oversight over domain names and numbers lost its momentum.
The letter significantly impacted the WSIS negotiations and shaped the role of the IGF. By creating a deadlock and applying pressure, the US was able to negotiate a favourable outcome for itself. The last-minute negotiations led to the status quo continuing; in exchange, the US provided an undertaking that it would not interfere with other countries’ ccTLDs. The weakened mandate meant that even though the creation of the IGF under the WSIS process moved forward, its direction changed from its conceptualisation and origins in the WGIG report.
Institutionalizing the IGF
In 2006, the UN Secretary General appointed Markus Kummer to assist with the establishment of the IGF. The newly formed IGF Secretariat initiated an open consultation to be held in Geneva in February 2006 and issued an open call to stakeholders seeking written submissions as inputs into the consultation.[22] Notably, neither the US government nor the EU sent in a response, and the submissions made by other stakeholders were largely a repetition of the views expressed at WSIS.
The division on the mandate of the IGF was evident in this very first consultation. Private sector representatives such as the CCBI and ICC-Basis, government representatives from OECD countries like Canada, and the technical community represented by the likes of Nominet and ISOC[23] opposed the development of the IGF as a platform for policy development. On the other hand, civil society representatives such as APC called for the IGF to produce specific recommendations on issues where there is sufficient consensus.[24]
With reference to the MAG structure, again there was division on whether the “effective and cost-efficient bureau” referred to in the Tunis Agenda should have a narrow mandate limited to setting the agenda for plenary meetings or a more substantial role. Civil society stakeholders envisioned a more substantial role for the bureau, notably in the Internet Governance Project (IGP) discussion paper released in advance of the February 2006 Geneva consultations.[25]
The paper offered design criteria for the Forum, including specific organizational structures and processes, proposing “a small, quasi-representational decision making structure” for the IGF Bureau.[26] It recommended the formation of a twelve-member bureau with five representatives from governments (one from each UN geographic region) and two each from the private sector, civil society, and academic and technical communities. The bureau would set the agenda for the plenary meeting not arbitrarily through private discussions but driven by working group proposals, and it would also have the power to approve or reject applications for forming working groups.
The proposed structure in the IGP paper, had it been implemented, would have developed the bureau along the lines of the IETF, where working groups develop recommendations that feed into the deliberation process. However, there was a clear divide on the proposal, with many stakeholders opposing the establishment of sub-groups or committees under the IGF.[27]
Following the written submissions, the first open consultations on the establishment of the IGF were held in Geneva on 16 and 17 February 2006, chaired by Nitin Desai.[28] The consultation was well attended, with more than 300 participants including 40 representatives from governments, and the proceedings were webcast. Further, the two-day consultation was structured as a moderated roundtable event at which most interventions were read from prepared statements, many of which were also tabled as documents and later made available from the IGF Web site. This of course meant that there was a repetition of the views expressed in response to the questionnaire or the WGIG report and, as a consequence, there was little opportunity for consensus-building.
Once again there was conflict over whether the IGF should be conceptualised as an annual ‘event’ that would provide space for policy dialogue or as a ‘process’ of engaging with policy issues which would culminate in an annual event. The CCBI reiterated that “[t]he Tunis Agenda is clear that the IGF does not have decision-making or policy-making authority,” and the NRO emphasised that the “IGF must be a multi-stakeholder forum without decision-making attributions.”[29]
William Drake argued for the IGF “as a process, not as a series of one-off meetings, but as a process that would promote collective dialogue, learning, and mutual understanding on an ongoing basis.”[30] Government representatives were split: El Salvador, for example, stated “that the Internet Governance Forum will come up with recommendations built on consensus on specific issues,” while Brazil even characterised the first meeting as “an excellent opportunity to initiate negotiations on a framework treaty to deal with international Internet public policy issues.”[31]
Although a broad consensus was declared on the need for a lightweight multi-stakeholder bureau, there was no consensus on its size, composition or mandate. Nitin Desai held the issue over for further written input; the subsequent consultation received twelve submissions, with most respondents recommending a body of between ten and twenty-five members. The notable exceptions were submissions from the Group of 77 and China that sought a combined total of forty members, half of whom would be governmental representatives.
The discussions during the February consultations and the input received from the written submissions paved the way for what eventually became the MAG. The IGF Secretariat announced the formation of a bureau with forty members and, while not expressly stated, half of these would be governmental representatives. It has been speculated that the decision on a large membership was a result of political wrangling among governments, especially the G77 governments insisting on a large group that would accommodate all the political and regional differences among its members.[32]
IGF Secretariat - Set to Fail?
The unwieldy size of the MAG meant that it would have to rely on the newly constituted Secretariat for organization, agenda-setting, and results. This structure empowered the Secretariat while limiting the scope of the MAG, a group that was already divided in its interests and agenda. However, the Secretariat was constrained in the services it could offer stakeholders: it was not funded by the United Nations and relied upon voluntary donations to a trust fund.[33]
Early donors included the Swiss Agency for Development and Cooperation (SWADC), ICANN and Nominet.[34] Due to these disjointed sources of funding, the Secretariat was vulnerable to the influence of its donors. For example, the decision to base the Secretariat in Geneva was taken to meet a condition attached to the SWADC’s contribution. Distressingly, of the 20 non-governmental positions in the MAG, most were directly associated with the ICANN regime.
The over-representation of ICANN representatives in MAG selection was problematic since the IGF was conceptualised to address the lack of acceptance of ICANN’s legitimacy in the WSIS process. The lack of independent funding also led to a deficit of accountability, demonstrated in instances where it was possible for a MAG member to quietly insinuate that private sector support for the IGF and its Secretariat would be withdrawn if reforms unacceptable to that stakeholder group went ahead.[35]
As might perhaps be expected from a Secretariat with such limited resources, its services to stakeholders were confined to maintaining a rudimentary website and responding to queries and requests. The transparency of the Secretariat’s activities was also very limited, most clearly exemplified by the process by which the Advisory Group was appointed.
Constituting the MAG
Following the announcement of the establishment of the MAG, a call for membership of the advisory group was made in March 2006. From the beginning the nomination process was riddled with a lack of transparency: the nominations received from stakeholders were not acknowledged by the IGF Secretariat, nor were the selection criteria made available. The legitimacy of the exercise was also marred by a top-down approach, where the first that nominees heard of the outcome was the Secretariat’s announcement of the selected nominees. This lack of transparency and accountability resulted in a selection and appointment procedure driven by patronage and lobbying.
The political wrangling was evident in the composition of the first MAG, which was expanded to accommodate six regional coordinators personally appointed by Chair Nitin Desai to the Special Advisory Group (SAG). Of the twenty non-governmental positions, most were associated with the naming and numbering regime, including sitting and former Board members and ICANN staff.[36] Participation from civil society was limited, as the composition did not recognise[37] the technical community as a distinct group, including it, along with the academic community, as part of civil society.
The political struggles at play were visible in the appointment of Michael D. Gallagher, the former head of the US Commerce Department's NTIA. This appointment was all the more relevant since it was Gallagher who, only a few months earlier, had stated that the US government owned the DNS root and had no intention of giving it up. His presence signalled that the US government took the forum seriously enough to ensure its interests were voiced and received attention on the MAG.
Beyond issues of representation, the working of the MAG suffered from a serious lack of transparency: meetings of the Advisory Group were closed, and no reports or minutes were released. The Advisory Group met in May and September in Geneva before the inaugural IGF meeting in Athens. Coordination between members in preparation for Athens was done using a closed mailing list that was not publicly archived. Consequently, the details of the operations of the Advisory Group ahead of the first IGF meeting were known only to its members.
Whatever little has been reported suggests that the Advisory Group possessed little formal authority, operating like a forum where members expressed views and debated issues without the object of taking formal decisions. Decisions were settled by rough consensus as declared by the Chair, and on all matters where there was no agreement the issues were summarised by the Chair in a report to the UN Secretary-General. The Secretary-General would take the report summary into consideration but retained the ultimate authority to make a formal decision.[38]
The UN’s clear deciding role was not so obvious in the early years of the MAG’s existence because of the relatively novel nature of the IGF. Moreover, Nitin Desai, Chair of the MAG, and Markus Kummer, head of the IGF Secretariat, were appointed by the UN Secretary General, were on good terms with then-Secretary General Kofi Annan, and working together acted as de facto selectors of the members of the MAG. Most of the MAG’s core membership in the first five years of its existence was made up of leaders from across the different stakeholder groups, and self-selection within those groups was encouraged to lend broader stability.
Over the last decade, changes in institutional arrangements led the IGF to be moved as a ‘project’ under the UNDESA umbrella, where it is not a core mission but simply one of many conferences that it handles across the world every year. The core personnel that shepherded the MAG and the IGF from their early days retired, allowing for the creation of a new core membership. The new group of leaders in the MAG membership has emerged partly as the result of the selection and rotation process instituted by UNDESA in appointing a ‘program committee’.
The history presented above helps explain how the MAG was established under the UN umbrella and highlights the key developments that shaped its scope and working. Importantly, the weakened IGF mandate created divergences on whether the MAG should function as a ‘program committee’ limited to selecting proposals and planning the IGF or as an ‘advisory committee’ with a more substantial role in developing the forum as an innovative governance mechanism. In its conception the IGF was a novel idea; empowering the MAG and introducing transparency into the selection of members and their workings could perhaps have led to a more democratic and accountable IGF. However, that possibility was stemmed early on.
The opacity of the appointment processes meant that patronage and lobbying became key to being selected as a member of the MAG. It established the worrying trend of diversity and representation taking precedence over the necessity of ensuring that representatives were appointed through a bottom-up multistakeholder process. Further, distributing the composition to ensure geographic representation severely limited the participation of the technical, academic and civil society communities. In the next section, I focus on the rotation of members of the MAG over the last ten years to identify and highlight trends that have emerged in its composition.
Analysis of MAG Composition (2006 - 2015)
The primary data for the analysis of the MAG membership has been collected from the membership lists for 2010-2015 available on the IGF website. The membership lists for 2005, 2006, 2007 and 2008 were provided by the UN IGF Secretariat during the course of this research. To the best of my knowledge, this data is yet to be made publicly available and may be accessed here.[39] The Secretariat notes that the MAG membership did not change between 2008 and 2009; its confirmation is the only account of the list of members for both years, as the records were poorly maintained and are therefore unavailable in the public domain.
It is also worth noting that to the best of my knowledge, no data has been made available by the IGF Secretariat regarding the nomination process and the criteria on which a particular member has been re-elected to the MAG. The stakeholder groups identified for this analysis include government, civil society, industry, technical community and academia. Any overlap between two or more of these groups or movements of individuals between stakeholder groups and affiliations has been taken into account.
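As a rough illustration of the kind of tabulation that underlies the composition analysis that follows, the sketch below computes stakeholder-group shares and member tenure from a membership table. The file name and column names are assumptions made for illustration; they are not the actual dataset described above.

```python
# Illustrative sketch of the tabulation behind the composition analysis.
# The file name and column names are hypothetical, not the actual dataset.
import pandas as pd

members = pd.read_csv("mag_members_2006_2015.csv")  # assumed columns:
# member_id, year, stakeholder_group, region, gender

# Share of unique members per stakeholder group, as a percentage.
group_share = (
    members.groupby("stakeholder_group")["member_id"]
    .nunique()
    .pipe(lambda s: (100 * s / s.sum()).round(2))
)
print(group_share)

# Years served per unique member, to examine rotation and tenure trends.
tenure = members.groupby("member_id")["year"].nunique()
print(tenure.value_counts().sort_index())
```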
Over the decade of its existence, the MAG has had 196 unique members from various stakeholder groups. As per the Terms of Reference[40] (ToR) of the MAG, it is the prerogative of the UN Secretary General to select MAG members. There also exists a policy of rotating one-third of the MAG's members every year for diversity and to take new viewpoints into consideration. Diversity within the UN is an ingrained process whereby every group is expected to be evenly balanced in geographic and gender representation. However, ensuring a diverse membership often comes at the cost of legitimate expertise. Further, it may often lead to top-down decision making where individuals are appointed based on their characteristics rather than their qualifications.
The complexity of the selection process is further compounded by the fact that the IGF Secretariat provides an initial set of recommendations identifying which members should be appointed to the MAG, but the selection and appointment are undertaken by UNDESA civil servants based in New York. Notably, while the IGF Secretariat staff is familiar with and interacts with stakeholder representatives at internet governance meetings and forums that are regularly held in Geneva, the New York-based UN officials do not share such relationships with constituent groups.
Consequently, they end up selecting members who meet all their diversity requirements and have put themselves forward through the standard UN open nomination process. The practice of ensuring that UN diversity criteria are met creates tension within the MAG membership, as representatives nominated by different stakeholder groups, who have more legitimacy within their respective constituencies, are not appointed to the MAG.
The stress on maintaining diversity is evident in the MAG membership’s gradual expansion from an initial group of 46 members in 2006 to a total of 56 members as of 2015. However, the increase in membership has not improved the representation of the technical, academic and civil society constituencies, with only 56 members having been appointed from the three groups over the last decade.
This is problematic considering that, at the time of the MAG’s constitution, the composition did not recognise[41] the technical community as a distinct group, including it, along with the academic community, as part of civil society. Consequently the three stakeholder groups have been represented collectively in the MAG, and yet account for only 24.77% of the total membership, compared to the government’s share of 39.3% and industry’s share of 35.7%. At the regional level too, membership across the three groups has ranged between 20% and 25% of the total membership.
The technical community is the least represented constituency, accounting for only 5% of the total membership, with only 10 members having been appointed over ten years. Of the 10, 6 were appointed from the WEOG region and there were no representatives appointed from the GRULAC region. Representatives from academia accounted for only 6% of the total membership, with 13 representatives from the group having been appointed to the MAG. Technical community representation from the US was also low, with only two members appointed to the MAG, each serving for a period of three years.
Civil society accounted for only 17% of the total membership, with a total of 33 members, and representation from the constituency was abysmally low across all regions. Civil society representation from the US included a total of five members, of which one served for one year, three served for two years each and only one representative continued for more than three years. Notably, there have been no academics from the US, which is surprising given that most of the scholarship on internet governance is dominated by US scholars.
Industry was the second largest represented group, with a total of 64 members appointed to the MAG, of which a whopping 30 were appointed from the WEOG region. Representation was the highest across WEOG countries at 39.47% of the total membership, and the group accounted for 32.4% and 32.5% of the total members from Africa and Asia Pacific respectively. Across Eastern European and GRULAC countries industry representation was very low, accounting for merely 11.53% and 18.18% of the total membership respectively. Industry representation from the US included two members serving one year each, five members who served two years each, two members who continued for three years each, one member who was appointed for five years, and one member who completed the maximum MAG term of eight years.
It is also interesting to note that the industry membership base expanded steadily, spiking in 2012 with a total of 40 representatives from the industry on the MAG. When assessed against the trend of the core leadership trickling out in 2012, the sudden increase in industry representation may point to attempts at capture by the stakeholder group in 2012. Industry representation from the US in the MAG was by far the most consistent over the years and had the most evenly distributed appointment terms for members within a group.
Government has been the most dominant group within the MAG, averaging a consistent 40% of the total membership over the last 10 years. At a regional level, representation on the MAG was highest from Eastern Europe, with more than 61% of its total membership comprising individuals from the government constituency. GRULAC countries’ appointments to the MAG also demonstrate a preference for government representation, with almost 58% of the total members appointed from within this group. The share of government representation in the total membership was 47.5% from Asia Pacific and 32.43% across Africa.
Another general policy followed in the selection procedure is that members are appointed for a period of one year, automatically extendable for two more consecutive years depending on their engagement in MAG activities. Members serving one-year terms are inevitable under the rotation policy, as new members replace existing members, often simply to fill slots to ensure stakeholder group, geographic and gender diversity. Given the limited resources made available for coordination between MAG members, one-year appointments may not allow sufficient time for integrating new members into the procedures and workings of UN institutions.
Over the last decade, 24.36% of all appointed MAG members have been limited to serving a term of one year. Of the total of 55 one-year appointments, 26 individuals served their first term in 2015 alone. This includes all nine representatives of civil society, and it could be argued that for a stakeholder group with only 11% of the total membership share, such an overhaul weakens the ability of members to develop linkages, severely limiting their ability to exert influence on decision making within the MAG.
Interestingly, the analysis reveals that one-year terms were a trend in the early years of the MAG, when a core group took on the leadership role and guided activities for newcomers, including negotiating often conflicting agendas. The pattern of one-year appointments was hardly visible from 2008 to 2012 but picked up again in 2013 and has continued ever since. The trend is perhaps indicative of the movement in the core MAG leadership as many of the original members retired or moved on to other engagements from 2010.
Importantly, the MAG ToR note that where there is a lack of candidates fitting the desired area, or under exceptional circumstances, a member may continue beyond three years. However, in the formative years of the MAG this exception was the norm, with most members continuing for more than three years. An analysis of the membership reveals that between 2006 and 2012 an elite core emerged which guided and was responsible for shaping the MAG and the IGF in their present-day format. No doubt some of these members were exceptional talents and difficult to replace; however, the lack of transparency in the nomination system makes it difficult to determine the basis on which these people continued beyond the stipulated term.
The analysis also suggests a shift in the leadership core over the last three years and points to a new leadership group emerging, distinguishable in that most of its members have served on the MAG for three or four years. Members serving for one, two or three years make up more than 75% of the total membership, and 111 individual members have served more than two years on the MAG. This could be the result of the depletion in membership of those familiar with internal workings and power structures within the UN, and of the selection and rotation criteria and procedures that have weakened the original composition over the last decade.
Rotating membership might be necessary to prevent capture by any particular constituency or group; on the other hand, more than half of the total members have spent less than three years on the MAG, which makes the composition a shifting structure that limits long-term engagement. Regular rotation of members can also lead to power struggles as continuing members exercise their influence to ensure that more members from within their constituency groups are appointed. Only seven individuals have completed the maximum term of eight years on the MAG, while 23 individuals have completed five years or more.
Finally, in terms of gender diversity, the ratio of male to female members in the total membership is approximately 13:7, or roughly 65% and 35% respectively. Female representatives from WEOG countries dominate, with a total of 29 women having been appointed from the region. Participation of women was the lowest across Asia Pacific and Eastern Europe, with only nine and five representatives having been appointed respectively. There was a better gender balance across countries from Africa and GRULAC, with 12 and 14 women having been appointed from those regions respectively.
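As a quick sanity check of the arithmetic, a 13:7 ratio over 20 parts works out to the 65% and 35% shares reported above:

```python
# Verify that a 13:7 male-to-female ratio corresponds to 65% and 35%.
male_parts, female_parts = 13, 7
total_parts = male_parts + female_parts
print(f"male: {100 * male_parts / total_parts:.0f}%")     # 65%
print(f"female: {100 * female_parts / total_parts:.0f}%") # 35%
```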
Further analysis and visualisations derived from the MAG composition, identifying trends in the appointment of individual members, are available on the CIS website. The visualisations include MAG membership distribution across regions[42] and stakeholder groups[43], the evolution of stakeholder groups over the years[44], stakeholder group distribution across countries[45] and a timeline of the total number of years served by individual members[46]. The visualisations also include a comparison of stakeholder group representatives appointed from India and the USA.[47]
Recommendations: Reforming MAG & the IGF
Between April 4 and 6, 2016, the MAG convened in Geneva for the IGF's first planning meeting of the year.[48] The meeting marks the beginning of the MAG's work in planning and delivering the forum, the first under its recently renewed and now extended mandate. This report is a much-needed documentation of its working and processes, undertaken as an attempt to scrutinize whether the MAG is truly a multi-stakeholder institution or whether it has evolved as a closed group of elite members cloaked in a multi-stakeholder name.
There is very little literature on the evolution of, or critiquing, the MAG structure, partly because it is a relatively new structure and partly because its workings are shrouded in secrecy. The above analysis has been conducted with the aim of understanding the MAG's functioning and the selection of its membership. The paper explores the history of the formation of the IGF and the MAG to identify the geo-political influences that have contributed to the MAG's evolution and role in shaping the IGF over the last decade.
In this section I apply the theory of institutional isomorphism developed by DiMaggio and Powell in their seminal paper[49] on organizational theory and social change. The paper posits that as organisations emerge as a field, a paradox arises: rational actors make their organizations increasingly similar as they try to change them. A focus on institutional isomorphism can add a much-needed perspective on the political struggle for organizational power and survival that is missing from much of the discourse and literature around the IGF and the MAG.
A consideration of isomorphic processes also leads to a bifocal view of power and its application in modern politics. I believe that there is much to be gained by attending to similarity as well as to variation between organisations within the same field and, in particular, to change in the degree of homogeneity or variation over time. In this paper I have attempted to study the incremental change in the IGF mandate as well as in the selection of the MAG members.
Applying the theoretical framework proposed by DiMaggio and Powell I identify possible areas of concern and offer recommendations for improvement of the IGF and reform of the MAG. I detail these recommendations through the impact of resource centralization and dependency, goal ambiguity, professionalization and structuration on isomorphic change. There is variability in the extent to and rate at which organizations in a field change to become more like their peers. Some organizations respond to external pressures quickly; others change only after a long period of resistance.
DiMaggio and Powell hypothesize that the greater the extent to which an organizational field is dependent upon a single (or several similar) source of support for vital resources, the higher the level of isomorphism. Their organisational theory also posits that the greater the extent to which the organizations in a field transact with agencies of the state, the greater the extent of isomorphism in the field as a whole. As my analysis reveals both hypotheses hold true for the IGF which is currently defined as a ‘project’ of the UNDESA. Since the IGF and the MAG are dependent on the UN for their existence, it is not surprising that both structures emulate the UN principles for diversity and governmental representation.
It is also worth noting that UN projects are normally not permanent and require regular renewal of their mandate and reallocation of resources and budgets. When budget cuts take place, as during the global economic crisis, project funding is jeopardized; the IGF, for instance, was left without an executive coordinator or a secretariat due to UN budget cuts.
This led to constituent groups coming together to directly fund the IGF Secretariat through a special IGF Trust Fund created under an agreement with the United Nations and administered by UNDESA.[50] The fund was drawn up to expire on 31 December 2015; efforts to renew contributions to the fund for 2016 are being opposed, and questions about the legality of the arrangement are being raised.[51]
It is widely rumoured that the third party opposing the contribution is UNDESA itself. Securing guaranteed, stable and predictable funding for the IGF, including through a broadened donor base, is essential for the forum’s long-term stability and its ability to realize its underutilized potential. There have been several suggestions from the community in this regard, including IT for Change’s suggestion that a part of the domain name tax collected by ICANN should be dedicated to IGF funding through statutory or constitutional arrangements. Centralisation of resources may lead to power structures being created; therefore any future attempt at IGF and MAG reform must weigh the choice of incorporating the IGF as a permanent body with institutional funding under the UN against the implications of that for the forum’s structure.
There are four other hypotheses in DiMaggio and Powell’s framework that may be helpful in identifying levers for improvement of the IGF and the MAG. The first states that the greater the extent to which goals are ambiguous within a field, the greater the rate of isomorphic change. As my analysis suggests, there is an urgent need to address the decade-long debate on whether the MAG’s scope is that of a programme committee limited to planning an annual forum.
The question is linked to the broader need to clarify if the IGF will continue to evolve as an annual policy-dialogue forum or if it can take on a more substantive role that includes offering recommendations and assisting with development of policy on critical issues related to internet governance. Even the MAG is divided in its interpretation of its roles and responsibilities. A resurgence of the IGF necessitates that the global community reassess the need of the forum not only on the mandate assigned to it at the time of its conceptualisation but also in light of the newer and more complex challenges that have emerged over the decade.
The second hypothesis holds that the greater the extent of professionalization in a field, the greater the amount of institutional isomorphic change. DiMaggio and Powell measure professionalization by the universality of credential requirements, the robustness of training programs, and the vitality of professional associations. As the MAG composition analysis reveals, the structure has evolved in a manner that gives preference to participation from government and industry over participation from the civil society, technical and academic communities.
Since the effect of institutional isomorphism is homogenization, the best indicator of isomorphic change is a decrease in variation and diversity, which could be measured by lower standard deviations of the values of selected indicators in a set of organizations. Such professionalization is evident in the functioning of the MAG, which has taken on a bureaucratic structure akin to other UN bodies, where governmental approval weighs down an otherwise lightweight structure. Further, the high level of industry representation creates distrust amongst other stakeholders and may be one reason the forum lacks legitimacy as a mechanism for governance, as it could be perceived as susceptible to capture.
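As a concrete illustration of that measurement idea, the sketch below compares the dispersion of a single indicator across a set of organizations at two points in time. The indicator values are invented purely for illustration and are not drawn from the MAG data.

```python
# Sketch: measuring isomorphic change as declining dispersion of an indicator.
# The indicator values below are hypothetical, invented for illustration.
import statistics

# Hypothetical values of one indicator (say, the share of industry members)
# across five organizations, at two points in time.
indicator_2006 = [0.10, 0.45, 0.25, 0.60, 0.30]
indicator_2015 = [0.32, 0.38, 0.35, 0.41, 0.34]

# A lower standard deviation in the later period would suggest the
# organizations have grown more alike on this indicator.
print(f"2006 std dev: {statistics.stdev(indicator_2006):.3f}")
print(f"2015 std dev: {statistics.stdev(indicator_2015):.3f}")
```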
The third hypothesis states that the fewer the number of visible alternative organizational models in a field, the faster the rate of isomorphism in that field. The IGF occupies a special place in the UN pantheon of semi-autonomous groups and is often held up as a shining example of the ‘multistakeholder model’, where all groups have an equal say in decisions. Currently, there is no global definition of the multistakeholder model, which at best remains a consensus framework for legitimizing Internet institutions.
It is worth noting that the system of sovereignty, where authority is imposed, is at odds with the earned authority of Internet institutions. Given the various interpretations of the approach, if multistakeholderism is to survive as a concept then it needs to be understood as a legitimizing principle that is strictly at odds with state sovereignty-based conceptions of legitimacy.[52] Under a true multistakeholder system, states can have roles in Internet governance, but they cannot unilaterally declare authority, or collectively assert it, without the consent of the rest of the Internet.
Unfortunately, as the MAG membership reveals, the composition is dominated by governmental representatives who seek to enforce territorial authority over issues of global significance. Further, while alternative approaches to its application exist within the ecosystem, they are context-specific and have evolved within unique environments.[53] As critics note, emerging and existing platforms derived from the multistakeholder concept create ‘an embryonic form of transnational democracy’. It is therefore important to recognise that the IGF is a physical manifestation of a much larger ideal, one where individuals and organizations have the ability to help shape the Internet and the information society to which it is intrinsically connected. This points to the need to study and develop alternative models of multistakeholder governance while continuing to strengthen existing practices and platforms.
As such, the IGF and its related local, national and regional initiatives represent a critical channel for expression, especially for countries where such conversations are not otherwise pursued adequately, and they keep discussions of the internet in the public space. However, interaction between the global IGF and national IGFs is yet to be established. The MAG can play a critical role in developing and establishing mechanisms to improve the global forum’s coordination with national and regional initiatives. A strengthened IGF could better serve national initiatives by providing formal backing and support so that they develop as platforms for engaging with long-standing and emerging issues and identifying possible ways to address them.
DiMaggio and Powell’s final hypothesis holds that the greater the extent of structuration of a field, the greater the degree of isomorphism. As calls for creating structures to govern cyberspace pick up pace, and given the extension of the IGF mandate, the forum’s structure and working are in need of an overhaul. More research and analysis are needed to understand whether a preferred approach to multistakeholder participation and engagement is emerging within both the IGF and the MAG.
For example, if a stakeholder group, or particular countries and regions, are not engaging in common dialogue, does the MAG have the mandate to promote and encourage participation? Has a process been established for ensuring the right balance when engaging different stakeholders, and if so, how is that process initiated and promoted? The data shared by the IGF Secretariat confirmed that there were no records of the nomination procedure, that the membership list was missing for a year, and that in some cases there was confusion over whom the nominees were actually representing.
This opens up glaring questions about the legitimacy of the MAG: on what criteria were MAG members selected and rotated? Was this evaluation undertaken against objective criteria? It is also important to assess whether selection took place following an open call for nominations or whether members were handpicked by the UN. Such analysis will help determine whether there is scope within the current selection procedure to reach out to the wider multistakeholder community or whether all MAG activities and discussions are restricted to its constituent membership. Clarifying the role of the IGF in the internet governance and policy space is inextricably linked to reforms in the MAG’s structure and processes, and the questions raised above need urgent attention.
While these issues have been well known and documented for a number of years, there has been no progress on resolving them. Currently there is no website or document that lists the activities conducted by the MAG in furtherance of its ToR, nor does the group produce an annual report or maintain a publicly archived mailing list. Important recommendations for strengthening the IGF were made by the UN CSTD working group on IGF improvements.
The group took two years to produce its report identifying problems and offering recommendations that were to be implemented by the end of 2015, and yet many of the problems identified within it have yet to be addressed. Worryingly, an internal MAG proposal to set up a working group to dig into the delays is being bogged down in discussions over scope and membership, and a similar effort six months ago was also shot down.[54]
The ineffectiveness of the MAG in instituting reform has led to calls for a new oversight body with established bylaws, as the MAG in its present form does not seem up to the task. Further, the opaque decision-making process and the lack of clarity on the scope of the MAG mean that each time it undertakes efforts at improvement, these are thwarted as being outside its mandate. Much work remains to be done in strengthening the MAG, the group that undertakes the day-to-day work of the IGF, and in addressing the many issues that plague the role and function of the forum. A tentative beginning can be made by introducing transparency and accountability in MAG member selection.
[1] This paper has been authored as part of a series on internet governance and has been made possible through a grant from the MacArthur Foundation.
[2] The Internet Governance Forum See: http://www.intgovforum.org/cms/
[3] World Summit on the Information Society (WSIS)+10 High-Level Meeting See: https://publicadministration.un.org/wsis10/
[4] The mandate and terms of reference of the IGF are set out in paragraphs 72 to 80 of the Tunis Agenda for the Information Society (the Tunis Agenda). See: http://www.itu.int/net/wsis/docs2/tunis/off/6rev1.html
[5] Samantha Bradshaw, Laura DeNardis, Fen Osler Hampson, Eric Jardine and Mark Raymond ‘The Emergence of Contention in Global Internet Governance’, the Centre for International Governance Innovation and Chatham House, 2015 See: https://www.cigionline.org/sites/default/files/no17.pdf
[6] Mikael Wigell, ‘Multi-Stakeholder Cooperation in Global Governance’, The Finnish Institute of International Affairs. June 2008, See: https://www.ciaonet.org/attachments/6827/uploads
[7] Arun Mohan Sukumar, ‘India’s New “Multistakeholder” Line Could Be a Game Changer in Global Cyberpolitics’, The Wire, 22 June 2015 See: http://thewire.in/2015/06/22/indias-new-multistakeholder-line-could-be-a-gamechanger-in-global-cyberpolitics-4585/
[8] Background Note on Sub-Theme Principles of Multistakeholder/Enhanced Cooperation, IGF Bali 2013 See: https://www.intgovforum.org/cmsold/2013/2013%20Press%20Releases%20and%20Articles/Principles%20of%20Multistakeholder-Enhanced%20Cooperation%20-%20Background%20Note%20on%20Sub%20Theme%20-%20IGF%202013-1.pdf
[9] Statement by Mr. Santosh Jha, Director General, Ministry of External Affairs, at the First Session of the Review by the UN General Assembly on the implementation of the outcomes of the World Summit on Information Society in New York on July 1, 2015 See: https://www.pminewyork.org/adminpart/uploadpdf/74416WSIS%20stmnt%20on%20July%201,%202015.pdf
[10] Jean-Marie Chenou, ‘Is Internet governance a democratic process? Multistakeholderism and transnational elites’, IEPI – CRII Université de Lausanne, ECPR General Conference 2011, Section 35 Panel 4 See: http://ecpr.eu/filestore/paperproposal/1526f449-d7a7-4bed-b09a-31957971ef6b.pdf
[11] Supra note 9.
[12] Kieren McCarthy, ‘Critics hit out at 'black box' UN internet body’, The Register 31 March 2016 See: http://www.theregister.co.uk/2016/03/31/black_box_un_internet_body/?page=3
[13] Ibid.
[14] Jeremy Malcolm, ‘Multi-Stakeholder Governance and the Internet Governance Forum’, Terminus Press, 2008
[15] Background Report of the Working Group on Internet Governance June 2005 See: https://www.itu.int/net/wsis/wgig/docs/wgig-background-report.pdf
[16] Report of the Working Group on Internet Governance, Château de Bossey June 2005 http://www.wgig.org/docs/WGIGREPORT.pdf
[17] Compilation of Comments received on the Report of the WGIG, PrepCom-3 (Geneva, 19-30 September 2005) See: http://www.itu.int/net/wsis/documents/doc_multi.asp?lang=en&id=1818%7C2008
[18] U.S. Principles on the Internet's Domain Name and Addressing System June 30, 2005 See: https://www.ntia.doc.gov/other-publication/2005/us-principles-internets-domain-name-and-addressing-system
[19] Supra note 16.
[20] Tom Wright, ‘EU Tries to Unblock Internet Impasse’, International Herald Tribune, 30 September 2005 See: http://www.nytimes.com/iht/2005/09/30/business/IHT-30net.html
[21] Kieren McCarthy, ‘Read the letter that won the internet governance battle’, The Register, 2 Dec 2005 See: http://www.theregister.co.uk/2005/12/02/rice_eu_letter/
[22] United Nations Press Release, ‘Preparations begin for Internet Governance Forum’, 2 March 2006 See: http://www.un.org/press/en/2006/sgsm10366.doc.htm
[23] The Internet Society’s contribution on the formation of the Internet Governance Forum, February 2006 See: http://www.internetsociety.org/sites/default/files/pdf/ISOC_IGF_CONTRIBUTION.pdf
[24] APC, Questionnaire on the Convening of the Internet Governance Forum (IGF) See: http://igf.wgig.org/contributions/apc-questionnaire.pdf
[25] Milton Mueller, John Mathiason, Building an Internet Governance Forum, 2 February 2006 See: http://www.internetgovernance.org/wordpress/wp-content/uploads/igp-forum.pdf
[26] Ibid.
[27] Supra note 11.
[28] Supra note 20.
[29] Consultations on the convening of the Internet Governance Forum, Transcript of Morning Session 16 February 2006. See: http://unpan1.un.org/intradoc/groups/public/documents/igf/unpan038960.pdf
[30] Ibid.
[31] Ibid.
[32] Milton Mueller, ICANN Watch, ‘The Forum MAG: Who Are These People?’ May 2006 See: http://www.icannwatch.org/article.pl?sid=06/05/18/226205&mode=thread
[33] IGF Funding, See: https://intgovforum.org/cmsold/funding
[34] Supra note 12.
[35] Ibid.
[36] ICANN’s infiltration of the MAG was evident in the composition of the first advisory group, which included Alejandro Pisanty and Veni Markovski, who were sitting ICANN Board members; one staff member (Theresa Swinehart); two former ICANN Board members (Nii Quaynor and Masanobu Katoh); two representatives of ccTLD operators (Chris Disspain and Emily Taylor); and two representatives of the Regional Internet Address Registries (RIRs) (Raul Echeberria and Adiel Akplogan). Even the “civil society” representatives appointed were all associated with either ICANN’s At-Large Advisory Committee or its Noncommercial Users Constituency (or both): Adam Peake of Glocom, Robin Gross of IP Justice, Jeanette Hofmann of WZ Berlin, and Erick Iriarte of Alfa-Redi.
[37] United Nations Press Release, Secretary General establishes Advisory Group to assist him in convening Internet Governance Forum, 17 May 2006 See: http://www.un.org/press/en/2006/sga1006.doc.htm
[38] Jeremy Malcolm, Multi-Stakeholder Public Policy Governance and its Application to the Internet Governance Forum See: https://www.malcolm.id.au/thesis/x31762.html
[39] MAG Spreadsheet CIS Website https://docs.google.com/spreadsheets/d/1uZzfBz9ihj1M0QSvlnORE0nRD62TCRxhA5d1E_RKfhc/edit#gid=1912343648
[40] Terms of Reference for the Internet Governance Forum (IGF) Multistakeholder Advisory Group (MAG) Individual Member Responsibilities and Group Procedures See: http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference
[41] United Nations Press Release, Secretary General establishes Advisory Group to assist him in convening Internet Governance Forum, 17 May 2006 See: http://www.un.org/press/en/2006/sga1006.doc.htm
[42] IGF MAG Membership Analysis, 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_Treemap.html
[43] IGF MAG Membership - Stakeholder Types and Regions - 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Regions.html
[44] IGF MAG Membership - Stakeholder Types across Years - 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Years.html
[45] IGF MAG Membership - Stakeholder Types and Countries - 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Country.html
[46] IGF MAG Membership Timeline, 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_Member-Timeline.html
[47] MAG Membership - India and USA - 2006-2015
[48] MAG Meetings in 2016 See: http://www.intgovforum.org/cms/open-consultations-and-mag-meeting
[49] Paul J. DiMaggio and Walter W. Powell, ‘The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields’, Yale University, American Sociological Review 1983, Vol. 48 (April: 147-160)
[50] United Nations Funds-in-Trust Project Document, Project number: GLO/11/X01, Project title: Internet Governance Forum, Country/area: Global, Start date: 1 April 2011, End date: 31 December 2015, Executing agency: UNDESA, Funding: Multi-donor, extrabudgetary, Budget: Long-term project framework, budget “A”. See: http://www.intgovforum.org/cms/2013/TrustFund/Project%20document%20IGF.pdf
[51] Kieren McCarthy, Critics hit out at 'black box' UN internet body, The Register 31 March 2016 See: http://www.theregister.co.uk/2016/03/31/black_box_un_internet_body/?page=2
[52] Eli Dourado, ‘Too Many Stakeholders Spoil the Soup’, Foreign Policy, 15 May 2013 See: http://foreignpolicy.com/2013/05/15/too-many-stakeholders-spoil-the-soup/
[53] IANA Transition, NetMundial are some of the other examples of multi-stakeholder engagement.
[54] Ibid.
Privacy Gaps in India's Digital India Project
Introduction
The Central and State governments in India have been increasingly taking steps to fulfill the goal of a ‘Digital India’ by undertaking e-governance schemes. Numerous schemes have been introduced to digitize sectors such as agriculture, health, insurance, education, banking, police enforcement, etc. With the introduction of the e-Kranti program under the National e-Governance Plan, we have witnessed the introduction of forty-four Mission Mode Projects.[1]
The digitization process is aimed at reducing the human handling of personal data and enhancing the decision making functions of the government. These schemes are postulated to make digital infrastructure available to every citizen, provide on demand governance and services and digital empowerment.[2]
In every scheme, personal information of citizens is collected in order to deliver their welfare benefits. While the efforts of the government are commendable, the efficacy of these programs in the absence of sufficient security infrastructure raises various concerns. Increased awareness among citizens and stronger security measures by governments are necessary to combat the serious threats to data privacy arising from the increasing rate of cyberattacks.[3]
The schemes identified for the purpose of this paper have been introduced by the following government agencies:
| S.No. | Scheme | Government Agency Involved |
|---|---|---|
| 1 | SOIL HEALTH CARD: A scheme designed to provide complete soil information to farmers. | Department of Agriculture and Cooperation (DACNET) |
| 2 | CRIME AND CRIMINAL TRACKING NETWORK & SYSTEMS (CCTNS): A scheme that seeks to facilitate the functioning of the criminal justice system through online records, with proposed data analysis for trend analysis, crime analysis, disaster and traffic management, etc. | National Crime Records Bureau (NCRB) |
| 3 | U-DISE: Serves as the official data repository for educational information. | Ministry of Human Resource Development (MHRD) |
| 4 | PROJECT PANCHDEEP: The use of a Unified Information System for the implementation of health insurance facilities under the ESIC (Employees’ State Insurance Corporation). | Ministry of Labour & Employment |
| 5 | ELECTRONIC HEALTH RECORDS: A scheme to digitally record all health data of a citizen from birth to death. | Ministry of Health and Family Welfare (MoHFW) |
| 6 | NRHM SMART CARD: Under the Rashtriya Swasthya Bima Yojana (RSBY) scheme, every beneficiary family is issued a biometric-enabled smart card for providing health insurance to persons covered under the scheme. | Ministry of Health and Family Welfare (MoHFW) |
| 7 | MYGOV: An online platform for government and citizen interaction. | Department of Electronics and Information Technology (DeitY) |
| 8 | EDISTRICTS: Common Service Centres are being established under the scheme to provide multiple services to citizens at the district level. | DeitY |
| 9 | MOBILE SEVA: A centralized mobile app store used to host various mobile applications. | DeitY |
| 10 | DIGILOCKER: A scheme that provides a secure, dedicated personal electronic space for storing documents. | DeitY |
| 11 | eSIGN FRAMEWORK FOR AADHAAR: eSign is an online electronic signature service that enables an Aadhaar holder to digitally sign a document. | Ministry of Electronics and Information Technology |
| 12 | PAYGOV: A centralized platform for all citizen-to-government payments. | DeitY and NSDL Database Management Limited (NDML) |
| 13 | PASSPORT SEVA: An online scheme for passport application and documentation. | Ministry of External Affairs |
| 14 | NATIONAL LAND RECORDS MODERNIZATION PROGRAMME (NLRMP): The scheme seeks to modernize the land records system through digitization and computerization of land records. | DeitY and NDML |
| 15 | AADHAAR: A scheme for unique identification of citizens for the purpose of targeted delivery of welfare benefits. | Unique Identification Authority of India (UIDAI) |
[1]. Introduction to Digital India, available at http://www.governancenow.com/news/regular-story/securing-digital-india
[2]. Id.
[3]. GN Bureau, Securing Digital India, Governance Now (June 11, 2016) available at http://www.governancenow.com/news/regular-story/securing-digital-india
Can the Judiciary Upturn the Lok Sabha Speaker’s Decision on Aadhaar?
Jairam Ramesh has said Lok Sabha speaker Sumitra Mahajan’s decision to pass the Aadhaar Act as a money Bill is unconstitutional. It remains to be seen what the court will say. Credit: PTI
This article was published in The Wire on February 21, 2017.
In an earlier article, I had argued that the characterisation of the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act as a money Bill by Sumitra Mahajan, speaker of the Lok Sabha, was erroneous. Specifically, I had argued that upon perusal of Article 110 (1) of the constitution, the Aadhaar Act does not satisfy the conditions required of a money Bill. For a legislation to be classified as a money Bill, it must comprise ‘only’ provisions dealing with the following matters: (a) imposition, regulation and abolition of any tax; (b) borrowing or other financial obligations of the government of India; (c) custody of, withdrawal from or payment into the Consolidated Fund of India (CFI) or the Contingency Fund of India; (d) appropriation of money out of the CFI; (e) expenditure charged on the CFI; (f) receipt, custody or audit of money into the CFI or the public account of India; or (g) any matter incidental to any of the matters specified in sub-clauses (a) to (f).
Article 110 is modelled on Section 1(2) of the UK’s Parliament Act, 1911, which also defines money Bills as those only dealing with certain enumerated matters. The use of the word ‘only’ was brought up by Ghanshyam Singh Gupta during the constituent assembly debates. He pointed out that the word ‘only’ limits the scope of money Bills to those legislations which do not deal with other matters. His amendment to delete the word ‘only’ was rejected, clearly establishing the intent of the framers of the constitution to keep the ambit of money Bills extremely narrow. G.V. Mavalankar, the first speaker of the Lok Sabha, had stated that the word ‘only’ must not be construed so as to give it an overly restrictive meaning. For instance, a Bill which deals with taxation could have provisions which deal with the administration of the tax. The finance minister, Arun Jaitley, referred to these words by Mavalankar in justifying the classification of the Aadhaar Act as a money Bill.
While the Aadhaar Bill does make references to benefits, subsidies and services funded by the CFI, even a cursory reading of the Bill reveals its main objectives to be creating a right to obtain a unique identification number and providing a statutory apparatus to regulate the entire process. Any reasonable reading of the legislation would be hard pressed to view all provisions in the Aadhaar Act, aside from the one creating a charge on the CFI, as merely administrative provisions incidental to the creation of such a charge. The mere fact of establishing the Aadhaar number as the identification mechanism for benefits and subsidies funded by the CFI does not give it the character of a money Bill. The Bill merely speaks of facilitating access to unspecified subsidies and benefits; their creation and provision is not the primary object of the legislation. Erskine May’s seminal textbook, Parliamentary Practice, is instructive in this respect and makes it clear that a legislation which simply makes a charge on the consolidated fund does not become a money Bill if its character is otherwise not that of one. Further, the subordinate regulations notified under the Aadhaar Act deal almost entirely with enrolment, updation and authentication of the Aadhaar number and related matters such as data security and the sharing of information collected, rather than with the provision of benefits or subsidies or the disbursal of funds from the CFI.
However, in the context of the petition filed by former Union minister Jairam Ramesh challenging the passage of the law on Aadhaar as a money Bill, the more important question is whether the judiciary has a right to question the speaker’s decision in such a matter. If not, any other questions about whether the legislation is a money Bill will remain merely academic in nature.
Irregularity vs illegality
Article 110 (3) clearly states that with regard to the question whether a legislation is a money Bill or not, the decision of the speaker is final and binding. The question is whether such a clause completely excludes any judicial review. Further, Article 122 prohibits the courts from questioning the validity of any proceedings in parliament on the ground of any alleged irregularity of procedure.
During the arguments in the court, the attorney general questioned the locus standi of Ramesh. The petition has been made under Article 32 of the constitution and the government argued that no fundamental rights of Ramesh were violated. However, the court has asked Ramesh to make his submission and adjourned the hearing to July. The petition by Ramesh would hinge largely on the powers of the judiciary to question the decision of the speaker of the Lok Sabha.
The powers of privilege that parliamentarians enjoy are integral to the principle of separation of powers. The rationale behind parliamentary privilege is to prevent interference in the lawmakers’ powers to perform essential functions. The ability to speak and vote inside the legislature without the fear of punishment is certainly essential to the role of a lawmaker. However, the extent of this protection lies at the centre of this discussion. During the constituent assembly debates, H.V. Kamath and others had argued for a schedule to exhaustively codify the existing privileges. However, B.R. Ambedkar pointed to the difficulty of doing so and parliamentary privilege on the lines of the British parliamentary practice was retained in the constitution. In the last few decades, a judicial position has emerged that courts could exercise a limited degree of scrutiny over privileges, as they are primarily responsible for interpreting the constitution.
In the matter of Raja Ram Pal vs The Hon’ble Speaker, Lok Sabha, it had been clarified that proceedings of the legislature were immune from questioning by courts in the case of procedural irregularity but not in the case of illegality. In this case, the Supreme Court while dealing with Article 122 stated that it does not oust review by the judiciary in cases of “gross illegality, irrationality, violation of constitutional mandate, mala fides, non-compliance with rules of natural justice and perversity.”
In 1968, the speaker of the Punjab legislative assembly adjourned the proceedings for a period of two months following rowdy behaviour. Subsequently, an ordinance preventing such a suspension was promulgated and the legislature was summoned by the governor to consider some expedient financial matters. The speaker disagreed with the decision and, after some confusion, the deputy speaker passed a few Bills as money Bills. While looking into the question of what was protected from judicial review, the court stated that the protection did not extend to breaches of mandatory provisions of the constitution, only to directory provisions. By that logic, if Article 110 (1) is seen as a mandatory provision, a breach of its provisions could lead to an interpretation that the Supreme Court may well question an erroneous decision by the speaker of the Lok Sabha to certify a legislation as a money Bill. The use of the word “shall” in Article 110 (1), the nature and design of the provision, and its overriding impact on the other constitutional provisions granting the Rajya Sabha powers are ample evidence of its mandatory nature. Based on the above, Anup Surendranath has argued that the passage of the Aadhaar Act as a money Bill, when it does not satisfy the constitutional conditions for one, does amount to a gross illegality.
The judicial precedent in Mohd. Saeed Siddiqui vs State of Uttar Pradesh, where the matter of the court’s power to question the decision of a speaker was considered, though, leans in the other direction. In 2012, the Uttar Pradesh Lokayukta and Up-Lokayuktas (Amendment) Act, 2012 was passed as a money Bill by the Uttar Pradesh state legislature. Subsequently, a writ petition was filed challenging its constitutional validity. A three-judge bench of the Supreme Court looked into the application of Article 212, the provision corresponding to Article 122, which deals with the power of the courts to inquire into the proceedings of the state legislature. The court held that Article 212 makes “it clear that the finality of the decision of the Speaker and the proceedings of the State Legislature being important privilege of the State Legislature, viz., freedom of speech, debate and proceedings are not to be inquired by the Courts.” Importantly, ‘proceedings of the legislature’ were deemed to include within their scope everything done in transacting parliamentary business, including the passage of a Bill. While the court did acknowledge the limitations on parliamentary privilege established in the Raja Ram Pal case, it did not adequately take into account the reasoning in that judgment.
The Aadhaar Act is a legislation which makes it mandatory for all residents to enrol in a biometric identification system in order to avail of certain subsidies, benefits and services. It poses huge potential risks to individual privacy and national security, and it has been the subject of an extremely high-profile public interest litigation. Its passage as a money Bill, without any oversight from the Rajya Sabha or an opportunity for substantial debate and discussion, is a fraud on the Constitution. Whether or not the court chooses to see it that way remains to be seen.
Comments on Information Technology (Security of Prepaid Payment Instruments) Rules, 2017
1. Preliminary
1.1. This submission presents comments by the Centre for Internet and Society[1] in response to the Information Technology (Security of Prepaid Payment Instruments) Rules 2017 (“the Rules”).[2] The Ministry of Electronics and Information Technology (MEIT) issued a consultation paper on March 8, 2017, calling for the development of a framework for the security of digital wallets operating in the country. The proposed rules have been drafted under provisions of the Information Technology Act, 2000, and comments have been invited from the general public and stakeholders before the enactment of the rules.
2. The Centre for Internet and Society
2.1. The Centre for Internet and Society, (“CIS”), is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, and open access), internet governance, telecommunication reform, digital privacy, and cyber-security.
2.2. This submission is consistent with CIS’ commitment to safeguarding general public interest, and the interests and rights of various stakeholders involved, especially the privacy and data security of citizens. CIS is thankful to the MEIT for this opportunity to provide feedback to the draft rules.
3. Comments
3.1 General Comments
Penalty
There is no penalty for not complying with these rules. Even the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 do not prescribe penalties. Under section 43A of the Information Technology Act (under which the 2011 Rules were promulgated), a wrongful gain or a wrongful loss needs to be demonstrated. This should not be a requirement for the financial sector.
Expansion to Contractual Parties
A majority of these rules, in order to be effective and realistically protect consumer interest, should also be extended to third parties, agents, contractors and any other relevant parties to whom an e-PPI issuer may delegate functions.
3.2 Rule 2: Definitions
Certain key terms relevant to the field of e-PPI-based digital payments, such as authorisation, metadata, etc., are not defined in the rules; they should be both defined and accounted for in the rules to ensure that modern developments such as big data and machine learning, digital surveillance, etc. do not violate human rights and consumer interest.
3.3 Rule 7: Definition of personal information
Rule 7 provides an exhaustive list of data that will be deemed to be personal information for the purposes of the Rules. While information collected at the time of issuance of the pre-paid payment instrument and during its use is included within the scope of Rule 7, it makes no reference to metadata generated and collected by the e-PPI issuer.
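To illustrate the gap, consider a hypothetical wallet transaction record (the field names below are invented for this sketch and do not come from the Rules or any real e-PPI system): only the first two fields resemble the information collected at issuance that Rule 7 enumerates, while the remaining fields are metadata generated during use and can be just as revealing.

```python
# Hypothetical wallet transaction record; all field names are invented
# for illustration and are not drawn from the Rules or a real system.
transaction_record = {
    # Collected at issuance -- the kind of data Rule 7 names:
    "customer_name": "A. Kumar",
    "mobile_number": "98xxxxxx01",
    # Generated during use -- metadata the Rules do not mention:
    "device_id": "a1b2c3d4",             # stable device fingerprint
    "ip_address": "203.0.113.7",         # approximate location
    "timestamp": "2017-03-20T22:41:05",  # behavioural patterns over time
    "merchant_category": "pharmacy",     # enables sensitive inferences
}

# Metadata alone can profile a user even where 'personal information'
# as defined in Rule 7 is protected.
for field in ("device_id", "ip_address", "timestamp", "merchant_category"):
    print(field, "->", transaction_record[field])
```

Bringing such metadata explicitly within the definition of personal information would close this gap.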
3.4 Rule 4: Inadequate privacy protections
Rule 4(2) specifies the details that the privacy policies of each e-PPI issuer must contain. However, these specifications are highly inadequate and fall well below the recommendations under the National Privacy Principles in the Report of the Group of Experts on Privacy chaired by Justice A.P. Shah.
Suggestions: The Rules should include clearly specified rights to access, correction and opt-in/opt-out, continuing obligations to seek consent in case of a change in policy or purpose, and deletion of data after the purpose is achieved. Additionally, issuers must be required to maintain a log of each past version of the privacy policy along with its period of applicability.
3.5 Rule 10: Reasonable security practices
Problem: Financial information (“such as bank account or credit card or debit card or other payment instrument details”) is already invoked in an inclusive manner in the definition of ‘personal information’ in Rule 7. Given this, there is no need to make the Reasonable Security Practices Rules applicable to financial data through this provision: they already apply, and it is best to avoid unnecessary redundancy.
Solution: This entire rule should be removed.
3.6 Rule 12: Traceability
Problem: This rule requires that payment-related interactions with customers or other service providers be “appropriately trace[able]”. But it is unclear what that would mean in practice: Would IP logging suffice? Would the IMEI need to be captured for mobile transactions? What is “appropriately” traceable? None of these questions is answered.
Suggestion: The NPCI’s practices and RBI regulations, for instance, seek to limit the amount of information that entities like e-PPI providers have. These rules need to be brought in line with those practices and regulations.
3.7 Rule 5: Risk Assessment
Rule 5 requires e-PPI issuers to carry out risk assessments of the security of their payment systems at least once a year and after any major security incident. However, there are no transparency requirements, such as publication of the details of such reviews, a summary of the analysis, any security vulnerabilities discovered, etc.
Suggestion:
- Broaden the scope of this provision to include not just risk assessments but also security audits.
- Mandate publication of risk assessment and security audit reports.
3.8 Rule 11: End-to-End Encryption
The rule concerning end-to-end encryption (E2E) needs significantly greater detail to be effective in ensuring the protection of information both in storage and in transit.
Suggestions: Elements such as a Secure Element (or a secured server) and a Trusted User Interface, both concepts that enable secure payments, can be detailed in the rule, and a timeline can be established requiring hardware, e-PPI practices and security standards to realistically account for such best practices, ensuring a modern, secure and industry-accepted implementation of the rule.
3.9 Rule 13: Retention of Information
Problem: Rule 13 leaves the question of retention entirely unanswered by deferring the future rulemaking to the Central Government.
Suggestions: Rule 13 should be expanded to:
- specify the categories of information that may be stored, with guidelines for the short-term (fast-access) and long-term storage of information retained under the rule;
- prescribe the security standards to be followed in storing such information;
- require that access logs be maintained whenever this information is accessed by individuals;
- detail secure destruction practices at the end of the retention period; and
- mandate that end users be notified by the e-PPI issuer whenever such retained information is accessed, barring exceptional circumstances such as national security or the risk of compromising an ongoing criminal investigation.
3.10 Rule 14: Reporting of Cyber Incidents
Rule 14 is an excellent opportunity to uphold transparency, accountability and consumer rights by mandating time-bound and information-bound notification of cyber incidents to customers, including intrusions, database breaches and any other compromise of the integrity of the financial system. While the requirement of reporting such incidents to CERT-In is already present in Rule 12 of the CERT Rules, this rule retains the optional nature of notifying customers. The rule should include an exhaustive list of the categories or kinds of cyber incidents that must be reported to affected end users, without compromising the investigation of such breaches by private organisations and public authorities. Further, the rule should include penalties for non-compliance with this requirement (both towards CERT-In and towards the consumer) to serve as an incentive for e-PPI issuers to uphold consumer and public interest. The rule should be expanded to include a detailed mechanism for such reporting, including when e-PPI issuers and CERT-In may withhold information from consumers, and should require that withheld information be disclosed once the investigation has been completed. Finally, the rule should require that such disclosures be public in nature and that consumers not be barred from disseminating the information, enabling informed choice by the end-user community.
Suggestion:
(1) In Rule 14(3) “may” should be substituted by “shall”.
(2) Penalties of up to 5 lakh rupees may be imposed for each day that the e-PPI issuer fails to report any severe vulnerability that could likely result in harm to customers.
3.11 Rule 15: Customer Awareness and Education
Problem: Rule 15 on customer awareness and education by e-PPI issuers does not take into account the vast linguistic diversity and varied socio-economic demographics of e-PPI end users in India; the rule should mandate that actions taken under it account for these factors before being rolled out.
Solutions: The rule must ensure that e-PPI issuers’ track record in carrying out awareness activities is regularly held to account, both by the government and through public disclosures on issuers’ websites. Further, the rule can be made more concrete and effective by including mobile operating systems in its scope (along with equipment), mandating awareness of best practices for inclusive technologies like USSD banking, specifying that notifications include SMS reports of financial transactions, etc.
3.12 Rule 16: Grievance Redressal
Problem: Rule 16 lays down the requirement of grievance redressal without specifying appellate mechanisms (both within the organisation and at the regulatory level), without providing accountability (via penalties) for non-compliance, and without requiring a clear hierarchy of responsibility within the e-PPI organisation. These omissions seriously compromise the efficacy of a grievance redressal framework.
Solutions: Similar rules for grievance redressal that have been enacted by the Insurance Regulatory and Development Authority for the insurance sector and the Telecom Regulatory Authority of India for the telecom sector can and should serve as a reference point for this rule. Their effectiveness and real world operation should also be monitored by the relevant authorities while ensuring sufficient flexibility exists in the rule to uphold consumer rights and the public interest. Proper appellate mechanisms at the regulatory level are essential along with penalties for non-compliance.
3.13 Rule 17: Security Standards
Problem: Rule 17 empowers the Central Government to mandate security standards to be followed by e-PPI issuers operating in India. While appreciable in its overall outlook of ensuring a minimum standard of security, the rule needs to be improved to make it more effective. This can be done by specifying certain minimum security standards so that all e-PPI issuers maintain a baseline level of security, instead of leaving the standards to be intimated at a later date.
Solutions: Standards that can either be made mandatory or be used as a reference point to create a new standard under Rule 17(2) include ISO/IEC 14443, IS 14202, ISO/IEC 7816, PCI DSS, etc. Further, the rule should include penalties for non-compliance with these standards, to make them effectively enforceable by government and end users alike. Additional details, such as the maximum time period within which such security standards must be implemented after notification, requirements for regular third-party audits to ensure continuing compliance and effectiveness, and a requirement that updated standards be adopted upon release, would go a long way in ensuring e-PPI issuers fulfil their mandate under these Rules.
[2] http://meity.gov.in/sites/upload_files/dit/files/draft-rules-security%20of%20PPI-for%20public%20comments.pdf
Benefits, Harms, Rights and Regulation: A Survey of Literature on Big Data
The survey was edited by Sunil Abraham, Elonnai Hickok and Leilah Elmokadem
Introduction
In 2011, it was estimated that the quantity of data produced globally surpassed 1.8 zettabytes. By 2013, it had increased to 4 zettabytes. With the nascent development of the so-called ‘Internet of Things’ gathering pace, these trends are likely to continue. This expansion in the volume, velocity and variety of data available, together with the development of innovative forms of statistical analytics, is generally referred to as “big data”, though there is no single agreed-upon definition of the term. Although still in its initial stages, big data promises to provide new insights and solutions across a wide range of sectors, many of which would have been unimaginable even a decade ago.
Despite enormous optimism about the scope and variety of big data’s potential applications, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits. Most notable are the concerns about the inevitable threats to privacy associated with the generation, collection and use of large quantities of data. Concerns have also been raised regarding, for example, the lack of transparency around the design of the algorithms used to process the data, over-reliance on big data analytics as opposed to traditional forms of analysis, and the creation of new digital divides. The existing literature on big data is vast. However, many of the benefits and harms identified by researchers tend to focus on sector-specific applications of big data analytics, such as predictive policing or targeted marketing. Whilst these examples can be useful in demonstrating the diversity of big data’s possible applications, they do not offer a holistic perspective on its broader impacts.
How Aadhaar compromises privacy? And how to fix it?
The op-ed was published in the Hindu on March 31, 2017.
When assessing a technology, don’t ask “what use is it being put to today?” Instead, ask “what use can it be put to tomorrow, and by whom?” The original noble intentions of the Aadhaar project will not constrain those in the future who want to take full advantage of its technological possibilities. However, rather than frame the surveillance potential of Aadhaar negatively as three problem statements, I will propose three modifications to the project that will reduce, though not eliminate, its surveillance potential.
Shift from biometrics to smart cards: In January 2011, the Centre for Internet and Society had written to the parliamentary finance committee that was reviewing what was then called the “National Identification Authority of India Bill 2010”. We provided nine reasons for the government to stop using biometrics and instead use an open smart card standard. Biometrics allow for the identification of citizens even when they don’t want to be identified; even unconscious and dead citizens can be identified using biometrics. Smart cards, on the other hand, require PINs and thus citizens’ conscious cooperation during the identification process. Once you flush your smart card down the toilet, nobody can use it to identify you. Consent is baked into the design of the technology. If the UIDAI adopts smart cards, we can destroy the centralized database of biometrics just as the UK government did in 2010, during Theresa May’s tenure as Home Secretary. This would completely eliminate the risk of foreign governments, criminals and terrorists using the biometric database to remotely, covertly and non-consensually identify Indians.
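To make the point that consent is baked into the design concrete, here is a toy model, assuming a hypothetical card interface (real smart cards use ISO/IEC 7816 commands running on tamper-resistant hardware; this sketches only the consent property, not an actual protocol): the card’s secret key never leaves the card, and the card answers a verifier’s challenge only after the holder enters a PIN.

```python
# Toy model of PIN-gated smart card authentication. Illustrative only:
# the class and method names are invented, not an actual card standard.
import hashlib
import hmac
import os


class SmartCard:
    def __init__(self, pin: str):
        self._pin = pin
        self._key = os.urandom(32)   # secret key; never leaves the card
        self._unlocked = False

    def enter_pin(self, pin: str) -> None:
        # The holder's conscious act of consent.
        self._unlocked = hmac.compare_digest(pin, self._pin)

    def sign_challenge(self, challenge: bytes) -> bytes:
        # Refuses to identify the holder without a prior PIN entry.
        if not self._unlocked:
            raise PermissionError("cardholder consent (PIN) required")
        return hmac.new(self._key, challenge, hashlib.sha256).digest()


card = SmartCard(pin="4321")
challenge = os.urandom(16)           # verifier's random challenge
card.enter_pin("4321")               # the consent step
print(card.sign_challenge(challenge).hex())
```

A fingerprint or an iris, by contrast, can be captured and matched without any such step that the citizen can withhold.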
Destroy the authentication transaction database: The Aadhaar (Authentication) Regulations, 2016 specify that transaction data will be archived for five years after the date of the transaction. Even though the UIDAI claims that this is a zero-knowledge database from the perspective of “reasons for authentication”, any big data expert will tell you that it is trivial to guess what is going on using the unique identifiers of the registered devices and the timestamps used for authentication. That is how Rajat Gupta and Raj Rajaratnam were put in prison: there was nothing in the payload, i.e. the voice recordings of the tapped telephone conversations; the conviction was based on metadata. Smart cards based on open standards allow for decentralized authentication by multiple entities and therefore eliminate the need for a centralized transaction database.
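A minimal sketch of why the zero-knowledge claim is weak, using invented device IDs and timestamps: if the identities or locations of registered authentication devices are known or guessable, the bare transaction log reconstructs sensitive facts without any “reason for authentication” field.

```python
# Illustrative metadata-inference sketch; the device IDs, timestamps and
# registry below are invented, not drawn from any real UIDAI dataset.
from collections import Counter

# Archived authentication log for one Aadhaar number:
# (registered_device_id, timestamp) -- no 'reason' field is stored.
auth_log = [
    ("DEV-4412", "2017-01-03T09:14"),
    ("DEV-4412", "2017-02-07T09:21"),
    ("DEV-9031", "2017-02-07T11:02"),
    ("DEV-4412", "2017-03-02T09:09"),
]

# Registered-device metadata is known to the system and often guessable
# from public enrolment and partner lists.
device_registry = {
    "DEV-4412": "HIV clinic, Ward 12",
    "DEV-9031": "Microfinance lender, Main Road",
}

visits = Counter(device_registry[device] for device, _ in auth_log)
for place, count in visits.most_common():
    print(f"{count} authentications at: {place}")
# The pattern of where and when a person authenticates is itself
# sensitive, even though no 'reason for authentication' was recorded.
```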
Prohibit the use of the Aadhaar number in other databases: We must, as a nation, get over our obsession with Know Your Customer [KYC] requirements. For example, most developed countries have no KYC requirement for SIM cards. Our insistence on KYC has only slowed Internet adoption, created a black market for ID documents and caused unnecessary wastage of resources by telecom companies. It has not prevented criminals and terrorists from using phones. Where we absolutely must have KYC, for the purposes of security, elimination of ghosts and regulatory compliance, we must use a token issued by the UIDAI instead of the Aadhaar number itself. This would make it harder for unauthorized parties to combine databases while at the same time enabling law enforcement agencies to combine databases using the appropriate authorizations and infrastructure like NATGRID. The NATGRID, unlike Aadhaar, is not a centralized database; it is a standard and platform for the express assembly of subsets of up to 20 databases, which can then be accessed by up to 12 law enforcement and intelligence agencies.
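One common way to implement such tokens, sketched below under the assumption of a UIDAI-held secret key (the key, function and identifiers are hypothetical, not an actual UIDAI API), is a keyed hash that derives a different stable token per agency: two agencies then cannot join their records on the token column, while the key holder can still re-link tokens under lawful authorization.

```python
# Sketch of per-agency tokenisation via HMAC; the master key, agency IDs
# and sample number below are all hypothetical.
import hashlib
import hmac

UIDAI_MASTER_KEY = b"held-only-by-uidai"  # hypothetical secret key


def agency_token(aadhaar_number: str, agency_id: str) -> str:
    """Derive a stable, agency-specific token in place of the raw number.

    The same person receives a different token at each agency, so two
    agencies cannot combine their databases on this column; only the
    key holder can re-link tokens under appropriate authorization.
    """
    message = f"{agency_id}:{aadhaar_number}".encode()
    return hmac.new(UIDAI_MASTER_KEY, message, hashlib.sha256).hexdigest()[:16]


print(agency_token("0000-1111-2222", "TELCO-A"))  # token seen by a telco
print(agency_token("0000-1111-2222", "BANK-B"))   # different token at a bank
```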
To conclude, even as a surveillance project, Aadhaar is very poorly designed. The technology needs fixing today; the law can wait for tomorrow.
Analysis of Key Provisions of the Aadhaar Act Regulations
This blog post was edited by Elonnai Hickok
Introduction
At the outset it is important to note a concerning feature of these regulations: they intend to govern the processes of a body that has been in existence for over six years and has already engaged, at massive scale, in all the activities sought to be governed by these policies, considering the claims of over one billion Aadhaar number holders. However, the regulations do not acknowledge, let alone address, past processes, practices, enrolments, authentications, use of technology, etc., and there are no provisions that effectively address the past operations of the UIDAI. Below is an analysis of the five regulations issued thus far by the UIDAI.
Unique Identification Authority of India (Transactions of Business at Meetings of the Authority) Regulations[1]
These regulations, framed under clause (h) of sub-section (2) of section 54 read with sub-section (1) of section 19 of the Aadhaar Act, deal with the meetings of the UIDAI, the process leading up to each meeting, and the manner in which all meetings are to be conducted.
Provision: Sub-Regulation 3.
Meetings of the Authority– (1) There shall be no less than three meetings of the Authority in a financial year on such dates and at such places as the Chairperson may direct and the interval between any two meetings shall not in any case, be longer than five months
Observations:
The number of times that the UIDAI is required to meet in a year is far too low, taking into account the significance of its responsibilities as the sole policy-making body for all issues related to Aadhaar. In contrast, the Telecom Regulatory Authority of India is required to meet at least once a month. Other bodies such as SEBI and IRDAI are also required to meet at least four times[2] and six times[3] a year respectively.
Provision: Sub-Regulation 8 (5)
Decisions taken at every meeting of the Authority shall be published on the website of Authority unless the Chairperson determines otherwise on grounds of ensuring confidentiality.
Observations:
The Chairperson has the power to withhold publication of the decisions of a meeting on the broad ground of ‘confidentiality’. Given that decisions taken by the UIDAI, a public body, can have very real implications for the rights of residents, the ground of confidentiality is not sufficient to warrant withholding publication. It is curious that instead of referring to clearly defined exceptions, such as those laid down in Section 8 of the Right to Information Act, 2005, the rules merely refer to the vague and undefined criterion of ‘confidentiality’.
Provision: Sub-Regulation 14 (4)
Members of the Authority and invitees shall sign an initial Declaration at the first meeting of the Authority for maintaining the confidentiality of the business transacted at meetings of the Authority in Schedule II.
Observations:
The above provision, combined with the absence of any provision regarding publication of the minutes of the meetings of the UIDAI, raises serious questions about the transparency of its functioning.
Unique Identification Authority of India (Enrolment and Update) Regulations[4]
These regulations, framed under sub-section (1), and sub-clauses (a), (b), (d), (e), (j), (k), (l), (n), (r), (s) and (v) of sub-section (2), of Section 54 of the Aadhaar Act, deal with the enrolment process, the generation of an Aadhaar number, and the updation of information, and govern the conduct of enrolment agencies and associated third parties.
Provisions:
Sub-Regulation 8 (2), (3) and (4)
The standard enrolment/update software shall have the security features as may be specified by the Authority for this purpose.
All equipment used in enrolment, such as computers, printers, biometric devices and other accessories shall be as per the specifications issued by the Authority for this purpose.
The biometric devices used for enrolment shall meet the specifications, and shall be certified as per the procedure, as may be specified by the Authority for this purpose.
Sub-Regulation 3 (2)
The standards for collecting the biometric information shall be as specified by the Authority for this purpose.
Sub-Regulation 4 (5)
The standards of the above demographic information shall be as may be specified by the Authority for this purpose.
Sub-Regulation 6 (2)
For residents who are unable to provide any biometric information contemplated by these regulations, the Authority shall provide for handling of such exceptions in the enrolment and update software, and such enrolment shall be carried out as per the procedure as may be specified by the Authority for this purpose.
Sub-Regulation 14 (2)
In case of rejection due to duplicate enrolment, resident may be informed about the enrolment against which his Aadhaar number has been generated in the manner as may be specified by the Authority.
Observations:
Though in February 2017 the UIDAI published technical specifications for registered devices[5], the regulations leave unaddressed issues such as the lack of appropriately defined security safeguards in the Aadhaar project. There is a general trend of continued deferral in the regulations, which state that matters will be specified later on important aspects such as the rejection of applications, the uploading of the enrolment packet to the CIDR, the procedure for enrolling residents with biometric exceptions, the procedure for informing residents about acceptance or rejection of an enrolment application, the convenience fee for updating residents’ information, the procedure for authenticating individuals across services, etc. There is a clear failure to exercise the mandate delegated to the UIDAI, leaving key matters to be determined at a future, unspecified date. The delay and ambiguity around when regulations will be defined is all the more problematic in light of the fact that the project has been implemented since 2010 and the Aadhaar number is now mandatory for availing of a number of services.
Further, it is important to note that a number of policies put out by the UIDAI predate these regulations, on which the regulations are completely silent, thus neither endorsing the previous policies nor suggesting that they may be revisited. The regulations also choose not to engage with the question of the operation of the Aadhaar project, enrolment, storage of data, etc. prior to their notification, or with the policies which these regulations may regularise. For instance, the regulations do not specify any measures to deal with issues arising out of enrolment devices used prior to the development of the February 2017 specifications.
Provision: Sub-Regulation 32
The Authority shall set up a contact centre to act as a central point of contact for resolution of queries and grievances of residents, accessible to residents through toll free number(s) and/ or e-mail, as may be specified by the Authority for this purpose.
(2) The contact centre shall:
- Provide a mechanism to log queries or grievances and provide residents with a unique reference number for further tracking till closure of the matter;
- Provide regional language support to the extent possible;
- Ensure safety of any information received from residents in relation to their identity information;
- Comply with the procedures and processes as may be specified by the Authority for this purpose.
(3) Residents may also raise grievances by visiting the regional offices of the Authority or through any other officers or channels as may be specified by the Authority.
Observations:
While the setting up of a grievance redressal mechanism under the regulations is a welcome move, there is little clarity about the procedure to be followed, nor is a timeline specified. The chapter on grievance redressal is in fact one of the shortest in the regulations. Its only provision deals with the setting up of a ‘contact centre’, a curious choice of term for what is supposed to be the primary quasi-judicial grievance redressal body for the Aadhaar project. In line with the indifferent and insouciant terminology of ‘contact centre’, the chapter restricts itself to the logging of queries and grievances by the contact centre, and does not address procedure or timelines, or even the substantive question of the nature of redress available. Furthermore, the obligation on the contact centre to protect information received is limited to ‘ensuring safety’, an ambiguous standard that does not correspond to any other standard in Indian law.
Aadhaar (Authentication) Regulations, 2016[6]
These regulations, framed under sub-section (1), and sub-clauses (f) and (w) of sub-section (2), of Section 54 of the Aadhaar Act, deal with the authentication framework for Aadhaar numbers, the governance of authentication agencies, and the procedure for the collection and storage of authentication data and records.
Provisions:
Sub-Regulation 5 (1)
At the time of authentication, a requesting entity shall inform the Aadhaar number holder of the following details:—
(a) the nature of information that will be shared by the Authority upon authentication;
(b) the uses to which the information received during authentication may be put; and
(c) alternatives to submission of identity information
Sub-Regulation 6 (2)
A requesting entity shall obtain the consent referred to in sub-regulation (1) above in physical or preferably in electronic form and maintain logs or records of the consent obtained in the manner and form as may be specified by the Authority for this purpose.
Observations:
Sub-regulation 5 mentions that at the time of authentication, requesting entities shall inform the Aadhaar number holder of alternatives to the submission of identity information for the purpose of authentication. Similarly, sub-regulation 6 mentions that the requesting entity shall obtain the consent of the Aadhaar number holder for the authentication. However, in neither case do the regulations specify the clearly defined options that must be made available to Aadhaar number holders who do not wish to submit identity information, nor do they specify the procedure to be followed if the Aadhaar number holder does not provide consent.
Most significantly, this provision does little by way of allaying the fears raised by the language in Section 8 (4) of the Aadhaar Act which states that UIDAI “shall respond to an authentication query with a positive, negative or any other appropriate response sharing such identity information.” This section gives a very wide discretion to UIDAI to share personal identity information with third parties, and the regulations do not temper or qualify this power in any way.
Provision: Sub-Regulation 11 (1) and (4)
The Authority may enable an Aadhaar number holder to permanently lock his biometrics and temporarily unlock it when needed for biometric authentication.
The Authority may make provisions for Aadhaar number holders to remove such permanent locks at any point in a secure manner.
Observations:
A welcome provision in the regulations is that of biometric locking, which allows Aadhaar number holders to permanently lock their biometrics and temporarily unlock them only when needed for biometric authentication. However, in the same breath, the regulations also provide for the UIDAI to remove such locks without specifying any grounds for doing so.
Provision: Sub-Regulation 18 (2), (3) and (4)
The logs of authentication transactions shall be maintained by the requesting entity for a period of 2 (two) years, during which period an Aadhaar number holder shall have the right to access such logs, in accordance with the procedure as may be specified.
Upon expiry of the period specified in sub-regulation (2), the logs shall be archived for a period of five years or the number of years as required by the laws or regulations governing the entity, whichever is later, and upon expiry of the said period, the logs shall be deleted except those records required to be retained by a court or required to be retained for any pending disputes.
The requesting entity shall not share the authentication logs with any person other than the concerned Aadhaar number holder upon his request or for grievance redressal and resolution of disputes or with the Authority for audit purposes. The authentication logs shall not be used for any purpose other than stated in this sub-regulation.
Observations:
While it is specified that the authentication logs collected by the requesting entities shall not be shared with any person other than the concerned Aadhaar number holder upon their request, or for grievance redressal and resolution of disputes, or with the Authority for audit purposes, and that the authentication logs may not be used for any other purpose, the maintenance of the logs for a total period of seven years (two years with the requesting entity, followed by archival for at least five more) seems excessive. Similarly, the UIDAI is also supposed to store authentication transaction data for over five years. This violates the widely recognised data minimisation principle, which requires that data collectors and data processors delete personal data once the purpose for which it was collected is fulfilled. While the retention of data for audit and dispute-resolution purposes is legitimate, the lack of specified security standards, the overall lack of transparency, and the inadequate grievance redressal mechanism greatly exacerbate the risks associated with data retention.
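To make the arithmetic behind the seven-year figure concrete, here is a minimal sketch in Python of the retention timeline described in sub-regulations 18(2) and 18(3); the function name and the sample dates are our own, purely for illustration.

```python
from datetime import date

def retention_schedule(txn_date: date, sector_minimum_years: int = 0):
    """Sketch of the Regulation 18 timeline: two years of active logs with
    the requesting entity, then archival for five years (or longer, if the
    entity's sectoral law requires it), after which the logs are deleted
    absent a court order or pending dispute. Leap-day edge cases ignored."""
    active_until = txn_date.replace(year=txn_date.year + 2)
    archive_years = max(5, sector_minimum_years)
    delete_after = active_until.replace(year=active_until.year + archive_years)
    return active_until, delete_after

active, deletion = retention_schedule(date(2017, 4, 1))
print(active)    # 2019-04-01: end of the two-year active window
print(deletion)  # 2024-04-01: seven years after the transaction
```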
Aadhaar (Sharing of Information) Regulations, 2016 and Aadhaar (Data Security) Regulations, 2016[7]
Framed under the powers conferred by sub-section (1), and sub-clause (o) of sub-section (2), of Section 54, read with sub-clause (k) of sub-section (2) of Section 23 and sub-sections (2) and (4) of Section 29, of the Aadhaar Act, the Sharing of Information regulations deal with the restrictions on the sharing of identity information collected by the UIDAI and requesting entities. The Data Security regulations, framed under the powers conferred by clause (p) of sub-section (2) of Section 54 of the Aadhaar Act, deal with the security obligations of all service providers engaged by the UIDAI.
Provision: Sub-Regulation 6 (1)
All agencies, consultants, advisors and other service providers engaged by the Authority, and ecosystem partners such as registrars, requesting entities, Authentication User Agencies and Authentication Service Agencies shall get their operations audited by an information systems auditor certified by a recognised body under the Information Technology Act, 2000 and furnish certified audit reports to the Authority, upon request or at time periods specified by the Authority.
Observations:
The regulation states that audits shall be conducted by an information systems auditor certified by a recognised body under the Information Technology Act, 2000. However, there is no such certifying body under the Information Technology Act. This suggests a lack of diligence in framing the rules, and will inevitably lead either to inordinate delays or to the absence of a clear procedure for appointing an auditor. Further, instead of prescribing a regular and proactive process of audits, the regulation limits audits to when they are requested or deemed appropriate by the UIDAI. This is another in a long line of provisions that concentrate power in the hands of the UIDAI, with little scope for accountability and transparency.
Conclusion
In conclusion, it must be stated that the regulations promulgated by the UIDAI leave a lot to be desired. Some of the most important issues raised against the Aadhaar Act, which were delegated to the UIDAI’s rule-making powers, have not been addressed at all. Crucial matters such as data security policies, the right of Aadhaar number holders to access their records, the procedure to be followed by the grievance redressal bodies, the uploading of the enrolment packet to the CIDR, the procedure for enrolling residents with biometric exceptions, and the procedure for informing residents about the acceptance or rejection of their enrolment application have been left unaddressed, to be ‘specified’ at a later date. These failures leave a gaping hole, especially in light of the absence of a comprehensive data protection legislation in India, the speed and haste with which enrolment and seeding have been carried out by the UIDAI, and the number of services, both private and public, which are using or planning to use the Aadhaar number and the authentication process as a primary identifier for residents.
[1] Available at https://uidai.gov.in/legal-framework/acts/regulations.html
[2] https://www.irda.gov.in/ADMINCMS/cms/frmGeneral_Layout.aspx?page=PageNo62&flag=1
[3] http://www.sebi.gov.in/acts/boardregu.html
[4] Available at https://uidai.gov.in/legal-framework/acts/regulations.html
[5] Available at: https://uidai.gov.in/images/resource/aadhaar_registered_devices_2_0_09112016.pdf
[6] Available at https://uidai.gov.in/legal-framework/acts/regulations.html
[7] Available at https://uidai.gov.in/legal-framework/acts/regulations.html
Aadhaar marks a fundamental shift in citizen-state relations: From ‘We the People’ to ‘We the Government’
The article was published in the Hindustan Times on April 3, 2017.
Until recently, people were allowed to opt out of Aadhaar and withdraw consent to have their data stored. This is no longer going to be an option.
Imagine you’re walking down the street and you point the camera on your phone at a crowd of people in front of you. An app superimposes on each person’s face a partially-redacted name, date of birth, address, whether she’s undergone police verification, and, of course, an obscured Aadhaar number.
OnGrid, a company that bills itself as a “trust platform” and offers “to deliver verifications and background checks”, used that very imagery in an advertisement last month. Its website notes that “As per Government regulations, it is mandatory to take consent of the individual while using OnGrid”, but that is a legal requirement, not a technical one.
Since every instance of use of Aadhaar for authentication or for financial transactions leaves behind logs in the Unique Identification Authority of India’s (UIDAI) databases, the government can potentially have very detailed information about everything from your medical purchases to your use of video-chatting software. The space for digital identities divorced from legal identities disappears. Clearly, Aadhaar has immense potential for profiling and surveillance. Our only defence: law that is weak at best and non-existent at worst.
The Aadhaar Act and Rules don’t limit the information that can be gathered from you by the enrolling agency; they don’t limit how Aadhaar can be used by third parties (a process called ‘seeding’) if they haven’t gathered their data from UIDAI; they don’t require your consent before third parties use your Aadhaar number to collate records about you (eg, a drug manufacturer buying data from various pharmacies and creating profiles using Aadhaar).
The Act even allows your biometrics to be shared if it is “in the interest of national security”. The law offers provisions for UIDAI to file cases (eg, for multiple enrolments), but it doesn’t allow citizens to file a case against private parties or the government for misuse of Aadhaar, identity fraud, or data breach.
It is also clear that the government opposes any privacy-related improvements to the law. After debating the Aadhaar Bill in March 2016, the Rajya Sabha passed an amendment by MP Jairam Ramesh that allowed people to opt out of Aadhaar, and withdraw their consent to UIDAI storing their data, if they had other means of proving their identity (thus allowing Aadhaar to remain an enabler).
But that amendment, as with all amendments passed in the Rajya Sabha, was rejected by the Lok Sabha, allowing the government to make Aadhaar mandatory, and depriving citizens of consent. While the Aadhaar Act requires a person’s consent before collecting or using Aadhaar-provided details, it doesn’t allow for the revocation of that consent.
In other countries, data security laws require that a person be notified if her data has been breached. In response to an RTI application asking whether UIDAI systems had ever been breached, the Authority responded that the information could not be disclosed for reasons of “national security”.
The citizen must be transparent to the state, while the state will become more opaque to the citizen.
How Did Aadhaar Change?
How did Aadhaar become the behemoth it is today, with it being mandatory for hundreds of government programmes, and even software like Skype enabling support for it? The first detailed look one had at the UID project was through an internal UIDAI document marked ‘Confidential’ that was leaked through WikiLeaks in November 2009. That 41-page dossier is markedly different from the 170-page ‘Technology and Architecture’ document that UIDAI has on its website now, but also similar in some ways.
In neither of those is the need for Aadhaar properly established. Only in November 2012 — after scholars like Reetika Khera pointed out UIDAI’s fundamental misunderstanding of leakages in the welfare delivery system — was the first cost-benefit analysis commissioned, by which time UIDAI had already spent ₹28 billion. That same month, Justice KS Puttaswamy, a retired High Court judge, filed a PIL in the Supreme Court challenging Aadhaar’s constitutionality, wherein the government has argued privacy isn’t a fundamental right.
Even today, whether the ‘deduplication’ process — using biometrics to ensure the same person can’t register twice — works properly is a mystery, since UIDAI hasn’t published data on this since 2012. Instead of welcoming researchers to try to find flaws in the system, UIDAI recently filed an FIR against a journalist doing so.
At least in 2009, UIDAI stated it sought to prevent anyone from “[e]ngaging in or facilitating profiling of any nature for anyone or providing information for profiling of any nature for anyone”, whereas the 2014 document doesn’t. As OnGrid’s services show, the very profiling that the UIDAI said it would prohibit is now seen as a feature that all, including private companies, may exploit.
UID has changed in other ways too. In 2009, it was envisaged as a system that never sent out any information other than ‘Yes’ or ‘No’, in response to queries like ‘Is Pranesh Prakash the name attached to this UID number?’, ‘Is April 1, 1990 his date of birth?’, or ‘Does this fingerprint match this UID number?’.
With the addition of e-KYC (wherein UIDAI provides your demographic details to the requester) and Aadhaar-enabled payments to the plan in 2012, the fundamentals of Aadhaar changed. This has made Aadhaar less secure.
Security Concerns
With Aadhaar Pay, due to be launched on April 14, a merchant will ask you to enter your Aadhaar number into her device, and then for your biometrics — typically a fingerprint, which will serve as your ‘password’, resulting in money transfer from your Aadhaar-linked bank account.
Basic information security theory requires that even if the identifier (username, Aadhaar number, etc.) is publicly known — millions of people’s names and Aadhaar numbers have been published on dozens of government portals — the password must be secret. That’s how most logins work; that’s how debit and credit cards work. How are you or UIDAI going to keep your biometrics secret?
In 2015, researchers at Carnegie Mellon captured the iris scans of a driver in a car’s side-view mirror from distances of up to 40 feet. In 2013, German hackers fooled Apple iOS’s fingerprint sensors by replicating a fingerprint from a photograph of a glass an individual had held. They even replicated the German Defence Minister’s fingerprints from photographs she herself had put online. Your biometrics can’t be kept secret.
In the US, in a security breach of 21.5 million government employees’ personnel records in 2015, 5.2 million employees’ fingerprints were copied. If that breach had happened in India, those fingerprints could be used in conjunction with Aadhaar numbers not only for large-scale identity fraud, but also to steal money from people’s bank accounts.
All ‘passwords’ should be replaceable. If your credit card gets stolen, you can block it and get a new card. If your Aadhaar number and fingerprint are leaked, you can’t change them and you can’t block them.
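To see why replaceability matters in engineering terms, consider this minimal Python sketch; the function and the scrypt parameters are our own illustrative choices, not any system’s actual design. A conventional secret is stored only as a salted hash and can simply be rotated after a breach, while a biometric offers nothing to rotate.

```python
import hashlib
import secrets

def store_credential(secret: bytes) -> tuple[bytes, bytes]:
    """Keep only a salted hash of the secret, never the secret itself."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(secret, salt=salt, n=2**14, r=8, p=1,
                            maxmem=2**25)
    return salt, digest

# A password is replaceable: after a breach, issue a new one and the
# stolen hash becomes worthless.
salt, stored = store_credential(b"old password")
salt, stored = store_credential(b"new password issued after the breach")

# A fingerprint cannot be rotated. Worse, because biometric matching is
# probabilistic, the template cannot even be salted and hashed like this:
# the raw features must be compared, and once leaked they are leaked forever.
```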
The answer for Aadhaar too is to choose not to use biometrics alone for authentication and authorisation, and to remove the centralised biometrics database. And this requires a fundamental overhaul of the UID project.
Aadhaar marks a fundamental shift in citizen-state relations: from ‘We the People’ to ‘We the Government’. If the rampant misuse of electronic surveillance powers and wilful ignorance of the law by the state is any precedent, the future looks bleak. The only way to protect against us devolving into a total surveillance state is to improve rule of law, to strengthen our democratic institutions, and to fundamentally alter Aadhaar. Sadly, the political currents are not only not favourable, but dragging us in the opposite direction.
Right to be Forgotten: A Tale of Two Judgements
Two Indian High Courts have recently ruled on pleas seeking the removal of personal information from published judgments. While one High Court (Karnataka) ordered the removal of personal details from the judgment,[1] the other (Gujarat) dismissed the plea.[2] In this post, we try to understand the global jurisprudence on the right to be forgotten, and how the contrasting judgments in India may be located within it.
Background
The ‘right to be forgotten’ has gained prominence since a matter was referred to the Court of Justice of the European Union (CJEU) in 2014 by a Spanish court.[3] In this case, Mario Costeja González had disputed the fact that a Google search of his name continued to show results leading to an auction notice for his repossessed home. That Google continued to make available in its search results an event in his past, which had long been resolved, was claimed by González to be a breach of his privacy. He filed a complaint with the Spanish Data Protection Agency (AEPD in its Spanish acronym) to have the online newspaper reports about him, as well as the related search results appearing on Google, deleted or altered. While the AEPD did not agree to his demand to have the newspaper reports altered, it ordered Google Spain and Google, Inc. to remove the links in question from their search results. The case was brought in appeal before the Spanish High Court, which referred the matter to the CJEU. In a judgement with far-reaching implications, the CJEU held that where information is ‘inaccurate, inadequate, irrelevant or excessive,’ individuals have the right to ask search engines to remove links containing personal information about them. The court also ruled that even if the physical servers of the search engine provider are located outside the jurisdiction of the relevant EU Member State, these rules apply if the provider has a branch office or subsidiary in the Member State.
The ‘right to be forgotten’ is something of a misnomer: when we speak of it in the context of the proposed laws in the EU, we refer to the right of individuals to seek the erasure of certain data that concerns them. The basis of what has now evolved into this right is contained in the 1995 EU Data Protection Directive, with Article 12 of the Directive allowing a person to seek the deletion of personal data once it is no longer required.
Critical to our understanding of the rationale for how the ‘right to be forgotten’ is being framed in the EU is an appreciation of how European laws perceive the privacy of individuals. Unlike the United States (US), where privacy may be seen as a corollary of personal liberty protecting against unreasonable state intrusions, European laws view privacy as an aspect of personal dignity, and are more concerned with protection from third parties, particularly the media. The most important way in which this manifests itself is in where the burden to protect privacy rights lies. In Europe, privacy policy often dictates intervention from the state, whereas in the US, in many cases it is up to individuals to protect their own privacy.[4]
Since the advent of the Internet, both the nature and the quantity of information existing about individuals have changed dramatically. This personal information is no longer limited to newspaper reports and official or government records. Our use of social media, micro-discussions on Twitter, photographs and videos uploaded by us or by others tagging us, every page or event we like, favourite or share — all contribute to our digital footprint. Add to this the information created not by us but about us, by both public and private bodies storing data about individuals in databases, and our digital shadows begin to far exceed the data we create ourselves. It is abundantly clear that we exist in a world of Big Data, which relies on algorithms tracking the repeated behaviour of our digital selves. It is in this context that a mechanism enabling the purging of some of this digital shadow makes sense.
Further, it is not only the nature and quantity of information that has changed, but also the means through which this information can be accessed. In the pre-internet era, access to records was often made difficult by procedural hurdles. Permissions or valid justifications were required to access certain kinds of data. Even for information available in the public domain, the process of gaining access was often far too cumbersome. Now digital information not only continues to exist indefinitely, but can also be accessed readily through search engines. It is in this context that, in a 2007 paper, Viktor Mayer-Schönberger pioneered the idea of memory and forgetting for the digital age.[5] He proposed that all forms of personal data should carry additional metadata in the form of an expiration date, switching the default from information existing endlessly to information having a temporal limit after which it is deleted. While this may be a radical suggestion, we have since seen proposals to allow individuals some control over information about them.
In 2016, the EU released the final version of the General Data Protection Regulation. The regulation provides for a right to erasure under Article 17, which would enable a data-subject to seek deletion of data.[6] Notably, except in the heading of the provision, Article 17 makes no reference to the word ‘forgetting.’ Rather the right made available in this regulation is in the form of making possible ‘erasure’ and ‘abstention from further dissemination.’ This is significant because what the proposed regulations provide for is not an overarching framework to enable or allow ‘forgetting’ but a limited right which may be used to delete certain data or search results. Providing a true right to be forgotten would pose issues of interpretation as to what ‘forgetting’ might mean in different contexts and the extent of measures that data controllers would have to employ to ensure it. The proposed regulation attempts to provide a specific remedy which can be exercised in the defined circumstances without having to engage with the question of ‘forgetting’.
The primary arguments against the ‘right to be forgotten’ stem from its conflict with the right to freedom of speech. Jonathan Zittrain has argued against the rationale that, because the right to be forgotten merely alters results on search engines without deleting the actual source, it does not curtail the freedom of expression.[7] He has compared this altering of search results to letting a book remain in the library but making the catalogue unavailable. According to Zittrain, a better approach would be to allow data subjects to provide their side of the story and more context to the information about them, rather than allowing any kind of erasure. Unlike in the US, the European approach is to balance free speech against other concerns. So while one of the exceptions in sub-clause (3) of Article 17 provides that information may not be deleted where it is necessary to exercise the right to free speech, free speech does not completely trump privacy as the value that must be protected. On the other hand, US constitutional law would tend to give more credence to First Amendment rights and allow them to be compromised only in very limited circumstances. As per the position of the US Supreme Court in Florida Star v. B.J.F., publication of lawfully obtained information may be restricted only in cases involving a ‘state interest of the highest order’. This position would allow any potential right to be forgotten to be exercised only in the most limited of circumstances, and privacy and reputational harm would not satisfy the standard. For these reasons, the right to be forgotten as it exists in Article 17 may be unworkable in the US.
Issues in application
Significant technical challenges remain in the effective and consistent application of Article 17. One key issue concerns how ‘personal data’ is defined and understood, and how its interpretation will affect this right in different contexts. Under the regulation, the term ‘personal data’ includes any information relating to an individual. Some ambiguity remains about whether information which may not uniquely identify a person, but identifies them as part of a small group, falls within the scope of personal data. This becomes relevant, for instance, where one seeks the erasure of information which, without referring to an individual, points towards a family. At the same time, the piece of information sought to be erased may often contain personal information about more than one individual. There is no clarity over whether the consensus of all the individuals concerned should be required, and if not, on what parameters the wishes of one individual should prevail over the others’. Another important question, as yet unanswered, is whether the same standards for removal of content should apply to private individuals and to those in public life.
The question of what constitutes personal data, and can therefore be erased, gets further complicated in cases of derived data about individuals used in statistics and other forms of aggregated content. While it would be difficult to argue that the right to be forgotten should extend to such forms of information, not erasing such derived content poses the risk of the primary information being inferred from it. In addition, Article 17(1)(a) provides for deletion where the data is no longer necessary for the purposes for which it was collected or used. The standards for the circumstances which satisfy this criterion are as yet unclear, and may only be fully understood through a consistent application of the law.
Finally, once there are reasonable grounds to seek the erasure of information, it is not clear how this erasure will be enforced practically. It may not be prudent to require that all copies of the impugned data be deleted irrecoverably, even to the extent technologically possible. A more reasonable solution might be to permit the data to remain available in encrypted form, much like certain records are sealed and made subject to the strictest confidentiality obligations. In most cases, it may be sufficient to ensure that records of the impugned data are removed from search results and database reports without actually tampering with the information as it exists. These are some of the challenges which the practical application of this right will face, and it is necessary to take them into account in enforcing the proposed regulations.
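As a minimal sketch of what ‘sealing’ a record in encrypted form might look like, here is an illustration using the third-party Python `cryptography` package; the custodian and the record contents are hypothetical.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key is held by a trusted custodian (say, a court registry);
# destroying the key later is functionally equivalent to erasure.
key = Fernet.generate_key()
sealer = Fernet(key)

record = b"order text containing the data subject's personal details"
sealed = sealer.encrypt(record)  # the ciphertext replaces the plaintext record

# The record stays recoverable, but only for authorised access:
assert sealer.decrypt(sealed) == record
```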
The two Indian judgments
In the first case (before the Gujarat High Court), the petitioner entered a plea for “permanent restraint [on] free public exhibition of the judgment and order.” The judgment in question concerned proceedings against the petitioner for a number of offences, including culpable homicide amounting to murder. The petitioner was acquitted, both by the Sessions court and by the High Court before which he was pleading. The petitioner’s primary contention was that despite the judgment being classified as ‘unreportable’, it was published by an online repository of judgments and was also indexed by Google search. The decision of the High Court to dismiss the petition rested on the following factors: a) the failure of the petitioner to show any provisions in law which are attracted, or any threat to the constitutional right to life and liberty; b) publication on a website does not amount to ‘reporting’, as reporting refers only to that by law reports.
While the second point of reasoning is problematic, given the function of precedent served by reported judgments and the lack of any basis for reducing the scope of ‘reporting’ to law reports alone, the first point is of direct relevance to our discussion. The lack of available legal provisions points to the absence of data protection legislation in India. Had there been a privacy legislation addressing how personal information may be dealt with, it might well have had instructive provisions for situations like these. In the absence of such a law, the only recourse an individual has is to seek constitutional protection under one of the fundamental rights, most notably Article 21, which over the years has emerged as the infinite repository of unenumerated rights. However, rights under Article 21 are typically vertical in nature, i.e., available only against the state. Their application in cases involving a private party remains questionable, at best.
In contrast, in the second case, the Karnataka High Court ruled in favour of the petitioner. Here, the petitioner’s daughter had instituted both criminal and civil proceedings against a person; the parties later arrived at a compromise, one of the conditions of which was the quashing of all the proceedings that had been initiated. The petitioner raised concerns that his daughter’s name appeared in the cause title and was easily searchable. The court made vague references to the “trend in the Western countries where they follow this as a matter of rule ‘Right to be forgotten’ in sensitive cases involving women in general and highly sensitive cases involving rape or affecting the modesty and reputation of the person concerned”, held in the petitioner’s favour, and ordered that the name be redacted from the cause title and the body of the order before release to any service provider. This second judgment is all the more problematic because, while it refers to jurisprudence in other countries, it bases the relief not on a fundamental right to privacy but on the modesty and reputation of women, which has no clear legal basis in either Indian or comparative jurisprudence.
Conclusion
The above two cases demonstrate the problem of the lack of a clear legal basis for the judiciary to draw on in interpreting the right to be forgotten. Not only did the courts fail to rely on clear legal provisions in Indian law while ruling on the existence of this right, they also did not engage in any analysis of comparative jurisprudence such as the GDPR or the Costeja judgment. Such ad-hoc jurisprudence underlines the need for a data protection legislation; in its absence, divergent views are likely to be taken on this issue, without clear legal direction. Most matters concerning the right to erasure are likely to involve private parties as data controllers, and in such cases the existing jurisprudence on the right to privacy under Article 21 may also be of limited value. Further, as pointed out above, the right to be forgotten needs to be qualified by clearly defined conditions, and its conflict with the right to freedom of expression under Article 19 must be addressed. It is therefore imperative that a comprehensive data protection law address these issues.
[1] Sri Vasunathan vs The Registrar, available at http://www.iltb.net/2017/02/karnataka-hc-on-the-right-to-be-forgotten/
[2] Dharmraj Bhanushankar Dave v. State of Gujarat, available at https://drive.google.com/file/d/0BzXilfcxe7yueXFJWG5mZ1pKaTQ/view.
[3] Google Spain et al v. Mario Costeja González, available at http://curia.europa.eu/juris/document/document_print.jsf?doclang=EN&docid=152065.
[4] http://www.europarl.europa.eu/RegData/etudes/STUD/2015/536459/IPOL_STU(2015)536459_EN.pdf
[5] Mayer-Schoenberger, Viktor, Useful Void: The Art of Forgetting in the Age of Ubiquitous Computing (April 2007). KSG Working Paper No. RWP07-022. Available at SSRN: https://ssrn.com/abstract=976541 or http://dx.doi.org/10.2139/ssrn.976541.
[6] Article 17 (1) states: The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay where one of the following grounds applies:
(a) the personal data are no longer necessary in relation to the purposes for which they were collected or otherwise processed;
(b) the data subject withdraws consent on which the processing is based according to point (a) of Article 6(1), or point (a) of Article 9(2), and where there is no other legal ground for the processing;
(c) the data subject objects to the processing pursuant to Article 21(1) and there are no overriding legitimate grounds for the processing, or the data subject objects to the processing pursuant to Article 21(2);
(d) the personal data have been unlawfully processed;
(e) the personal data have to be erased for compliance with a legal obligation in Union or Member State law to which the controller is subject;
(f) the personal data have been collected in relation to the offer of information society services referred to in Article 8(1).
[7] Zittrain, Jonathan, “Don’t Force Google to ‘Forget’”, The New York Times, May 14, 2014. Available at https://www.nytimes.com/2014/05/15/opinion/dont-force-google-to-forget.html.
It’s the technology, stupid
The article was published in Hindu Businessline on March 31, 2017.
Aadhaar is insecure because it is based on biometrics. Biometrics is surveillance technology, a necessity for any State. However, surveillance is much like salt in cooking: essential in tiny quantities, but counterproductive even if slightly in excess. Biometrics should be used for targeted surveillance, but this technology should not be used in e-governance for the following reasons:
One, biometrics is becoming a remote technology. High-resolution cameras allow malicious actors to steal fingerprints and iris images from unsuspecting people. In a couple of years, governments will be able to identify citizens in a crowd more accurately with iris recognition than with the current generation of facial recognition technology.
Two, biometrics is covert technology. Thanks to sophisticated remote sensors, biometrics can be harvested without the knowledge of the citizen. This increases effectiveness from a surveillance perspective, but diminishes it from an e-governance perspective.
Three, biometrics is non-consensual technology. There is a big difference between the State identifying citizens and citizens identifying themselves to the State. With biometrics, the State can identify citizens without seeking their consent. With a smart card, the citizen has to allow the State to identify them. Once you discard your smart card, the State cannot easily identify you; but you cannot discard your biometrics.
Four, biometrics is very similar to symmetric cryptography. Modern cryptography is asymmetric: there are both a public and a private key, and the user always holds the private key, which is never in transit and which intermediaries therefore cannot intercept (a short code sketch illustrating this follows the eleventh point below). Biometrics, on the other hand, needs to be secured during transit. The UIDAI’s (Unique Identification Authority of India, the body overseeing the rollout of Aadhaar) current fix for its erroneous choice of technology is the use of “registered devices”; but, unfortunately, the encryption is only at the software layer and cannot prevent hardware interception.
Five, biometrics requires a centralised network; in contrast, cryptography for smart cards does not require a centralised store for all private keys. All centralised stores are honey pots — targeted by criminals, foreign States and terrorists.
Six, biometrics is irrevocable. Once compromised, it cannot be secured again. Smart cards are based on asymmetric cryptography, which even the UIDAI uses to secure its servers from attacks. If cryptography is good for the State, then surely it is good for the citizen too.
Seven, biometrics is based on probability. Cryptography in smart cards, on the other hand, allows for exact matching. Every biometric device comes with ratios for false positives and false negatives. These ratios are determined in near-perfect lab conditions. Going by press reports and even UIDAI’s claims, the field reality is unsurprisingly different from the lab. Imagine going to an ATM and not being sure if your debit card will match your bank’s records.
Eight, biometric technology is proprietary and opaque. You cannot independently audit the proprietary technology used by the UIDAI for effectiveness and security. On the other hand, open smart card standards like SCOSTA (Smart Card Operating System for Transport Applications) are based on globally accepted cryptographic standards and allow researchers, scientists and mathematicians to independently confirm the claims of the government.
Nine, biometrics is cheap and easy to defeat. Any Indian citizen, even a child, can make gummy fingers at home using Fevicol and wax. You can buy fingerprint-lifting kits from a toy store. To clone a smart card, on the other hand, you need a skimmer, a printer and knowledge of cryptography.
Ten, biometrics undermines human dignity. In many media photographs — even on @UIDAI’s Twitter stream — you can see the biometric device operator pressing the applicant’s fingers against the reader, especially in the case of underprivileged citizens. Imagine service providers — say, a shopkeeper or a restaurant waiter — having to touch you every time you want to pay. Smart cards offer a more dignified user experience.
Eleven, biometrics enables the shirking of responsibility, while cryptography requires a chain of trust. Each legitimate transaction carries the non-repudiable signatures of all parties responsible. With biometrics, the buck will be passed to an inscrutable black box every time things go wrong. Citizens and courts will have nobody to hold to account.
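As promised under the fourth point, here is a minimal sketch of the asymmetric model in Python, using the third-party `cryptography` package’s Ed25519 implementation; the challenge text and the smart-card framing are our illustrative assumptions, not a description of any deployed system.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric import ed25519

# The private key lives on the citizen's smart card and never leaves it,
# so there is nothing for an intermediary to intercept in transit.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Only the signature travels; the verifier needs just the public key.
challenge = b"challenge sent by the service provider"
signature = private_key.sign(challenge)

# Raises cryptography.exceptions.InvalidSignature if anything was tampered with.
public_key.verify(signature, challenge)
```

The contrast with biometrics is that a fingerprint must itself be captured and transmitted to the matcher on every transaction, so there is always something in transit worth intercepting.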
The precursor to Aadhaar was called MNIC (Multipurpose National Identification Card). Initiated by the NDA government headed by Atal Bihari Vajpayee, it was based on the open SCOSTA standard. This was the correct technological choice.
Unfortunately, the promoters of Aadhaar chose biometrics in the belief that newer, costlier and more complex technology is superior to an older, cheaper and simpler alternative.
This erroneous technological choice is not a glitch or teething problem that can be dealt with through legislative fixes such as an improved Aadhaar Act or an omnibus Privacy Act. It can only be fixed by destroying the centralised biometric database, as the UK did, and shifting to smart cards.
In other words, you cannot fix using the law what you have broken using technology.
Privacy in the Age of Big Data
The sum of these interconnected parts will lead to a complete loss of anonymity, greater surveillance and impact free speech and individual choice.
The article was published in the Asian Age on April 10, 2017.
In 2011, it was estimated that the quantity of data produced globally surpassed 1.8 zettabytes. By 2013, it had increased to 4 zettabytes. This is a result of digital services that leave constant data trails of human activity. This expansion in the volume, velocity and variety of data available, together with the development of innovative forms of statistical analytics on the data collected, is generally referred to as “Big Data”. Despite significant (though largely unrealised) promises about Big Data, ranging from improved decision-making, increased efficiency and productivity to greater personalisation of services, concerns remain about the impact of such datafication of all human activity on an individual’s privacy. Privacy has evolved into a sweeping concept, including within its scope matters pertaining to control over one’s body, physical space in one’s home, protection from surveillance and from search and seizure, and the protection of one’s reputation as well as one’s thoughts. This generalised and vague conception of privacy not only invites great judicial discretion, it also thwarts a fair understanding of the subject. Robert Post called privacy a concept so complex and “entangled in competing and contradictory dimensions, so engorged with various and distinct meanings”, that he sometimes “despairs whether it can be usefully addressed at all”.
This also leaves the idea of privacy vulnerable to considerable suspicion and ridicule. However, while there is a lack of clarity over the exact contours of what constitutes privacy, there is general agreement over its fundamental importance to our ability to lead whole lives. In order to understand the impact of datafied societies on privacy, it is important to first delve into the manner in which we exercise our privacy. The prevalent ideas of privacy and data management can be traced to the Fair Information Practice Principles (FIPP). These principles are the forerunners of most privacy regimes internationally, such as the OECD Privacy Guidelines, the APEC Framework, and the nine National Privacy Principles articulated by the Justice A.P. Shah Committee Report. All of these frameworks have notice, consent, correction, and limits on how data may be used as their fundamental principles. This model makes the data subject the decision-making agent about where and when her/his personal data may be used, by whom, and in what way. The individual needs to be notified and his consent obtained before his personal data is used. If the scope of usage extends beyond what he has agreed to, his consent is required afresh for the increased scope.
In theory, this system sounds fair. Privacy is a value tied to the personal liberty and dignity of an individual. It is only appropriate that the individual should be the one holding the reins and taking the large decisions about the use of his personal data. This makes the individual empowered and allows him to weigh his own interests in exercising his consent. The allure of this paradigm is that in one elegant stroke, it seeks to ensure that consent is informed and free, and to implement an acceptable trade-off between privacy and competing concerns. This approach worked well when the number of data collectors was small and the uses of data were narrower and more defined. Today’s infinitely complex and labyrinthine data ecosystem is beyond the comprehension of most ordinary users. Despite a growing willingness to share information online, most people have no understanding of what happens to their data.
The quantity of data being generated is expanding at an exponential rate. From smartphones and televisions, trains and airplanes, sensor-equipped buildings and even the infrastructures of our cities, data now streams constantly from almost every sector and function of daily life, “creating countless new digital puddles, lakes, tributaries and oceans of information”. The inadequacy of existing regulatory approaches and the absence of a comprehensive data protection regulation are exacerbated by the emergence of data-driven business models in the private sector and the adoption of a data-driven governance approach by the government. The Aadhaar project, with over a billion registrants, is intended to act as a platform for a number of digital services, all of which produce enormous troves of data. The original press release by the Central Government reporting the Cabinet’s approval of the Digital India programme speaks of a “cradle to grave” digital identity as one of its vision areas.
While the very idea of the government wanting to track its citizens’ lives from cradle to grave is creepy enough in itself, let us examine for a minute what this form of datafied surveillance will entail. A host of schemes under Digital India will collect and store information through the life cycle of an individual. The result, as we can see, is the building of databases on individuals which, when combined, will provide a 360-degree view into their lives. Alongside this, India Stack, a set of APIs built on top of Aadhaar, conceptualised by iSPIRT (a consortium of select IT companies from India) and to be deployed and managed by several agencies, including the National Payments Corporation of India, promises to provide a platform over which different private players can build their applications.
The sum of these interconnected parts will lead to a complete loss of anonymity, greater surveillance, and an impact on free speech and individual choice. The move towards a cashless economy — with sharp nudges from the government — could lead to a loss of financial agency in case of technological failures, as has been the case in experiments with digital payments in Africa. Lack of regulation in emerging data-driven sectors such as fintech can enable predatory practices, where the right to remotely deny financial services may effectively be granted to private sector companies. An architecture such as India Stack datafies financial transactions in a way that produces linked and structured data, allowing continued use of the transaction data collected. It is important to recognise that at the stage of giving consent, there are too many unknowns for us to make informed decisions about the future uses of our personal data. Despite blanket approvals allowing any kind of use granted contractually through terms of use and privacy policies, there should be legal obligations overriding this consent for certain kinds of uses, which may require renewed consent.
Biometrics-based identification in the UK: In 2005, researchers from the London School of Economics and Political Science came out with a detailed report on the UK Identity Cards Bill (‘UK Bill’), the proposed legislation for a national identification system based on biometrics. The project also envisaged a centralised database (as in India) that would store personal information along with the entire transaction history of every individual. The report argued strongly against the centralised storage of information and suggested alternatives such as a system based on smartcards (where biometrics are stored on the card itself) or offline biometric-reader terminals.
As per the report, the alternatives would also have been cheaper, as neither required real-time online connectivity. In India, online authentication is a far greater challenge: according to the Network Readiness Index, 2016, India ranks 91st, whereas the UK is placed eighth. Poor Internet connectivity can raise many problems in the future, including the paralysis of transactions. The UK identification project was subsequently discarded as a result of the privacy and cost considerations raised in this report.
Aadhaar: Privacy concerns
- Once the data is collected through National Information Utilities, it will be privatised and controlled by private utilities.
- Once an individual’s data is entered in the system, it cannot be deleted. That individual will have no control over it.
- Aadhaar data (demographic details along with photographs) are shared with or transferred to private entities, including telecom companies, under the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016, with the consent of the Aadhaar number holder, to fulfil their e-KYC requirements. The data is shared in encrypted form through a secured channel.
- The Aadhaar Enabled Payment System (AEPS), on which 119 banks are live.
- More than 33.87 crore transactions have taken place through AEPS, up from only 46 lakh in May 2014.
- As on 30-9-2016, 78 government schemes were linked to Aadhaar.
- The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016, provides that no core-biometric information (fingerprints, iris scan) shall be shared with anyone for any reason whatsoever (Sec 29) and that the biometric information shall not be used for any purpose other than generation of Aadhaar and authentication.
- Access to the data repository of UIDAI, called the Central Identities Data Repository (CIDR), is provided to third parties or private companies.
Central Monitoring System (CMS) is already live in Delhi, New Delhi and Mumbai. Union minister Ravi Shankar Prasad revealed this in one of his replies in the Lok Sabha last year. CMS has been set up to automate the process of Lawful Interception & Monitoring of telecommunications.
Lawful Intercept and Monitoring (LIM) systems are used by the Indian Government to intercept records of voice, SMSes, GPRS data, details of a subscriber’s application and recharge history and call detail record (CDR) and monitor Internet traffic, emails, web-browsing, Skype and any other Internet activity of Indian users.
Brainstorming Session on the Global Conference on Cyberspace (GCCS 2017)
This year the conference will be held in India in November. The theme for the Global Conference on Cyber Space 2017, as decided by the Government of India, is Cyber for all - A Connected, Sustainable and Secure Cyberspace. Broadly, the Ministry’s objective is to curate discussion around global connectedness, sustaining the current state of cyberspace and securing the constantly evolving cyber world.
The brainstorming session was attended by diverse stakeholders: civil society groups such as CIS, CCG and ORF; the private sector, including intermediaries such as Facebook and industry bodies such as CII and ICA; start-ups; and individual legal and technical experts.
The discussion revolved around choice of themes, sub-themes and topics of discussion at the conference, potential speakers for the discussions and possible outcomes of the conference. The Ministry also requested inputs for potential events before and after the event and those that could run parallel to the conference.
Various topics suggested by the Ministry included digital inclusion, digital governance, digital business, cyber security, cyber warfare and terrorism, cyber law, digital entertainment, digital innovation and entrepreneurship, digital sustainability, and emerging digital technologies. On behalf of CIS, I provided inputs on the choice of themes and on potential side events. The themes we suggested included the following:
- the international dimension of cybercrime, focussing on both the prevention and the investigation of crime;
- privacy within the emerging but uneven context of big data innovations, algorithmic decision-making and the global digital economy;
- the State’s responsibility to ensure open, secure and enabling online spaces: regulating hate speech and countering violent extremism while upholding freedom of expression and undisrupted access to the internet for all users, especially marginalised communities.
CIS also emphasized that the conference must be inclusive and representative of the global internet user community, specifically in terms of social identities, regions of operation, and stakeholder types.
CIS also suggested a simulation exercise around a hypothetical problem related to an issue of deep concern such as countering violent extremism as a parallel event at the conference and offered to organise this event. We also made a suggestion about organising a session on presentation of academic papers on selected topics as a pre-conference event.
CIS has been invited by the Ministry to submit formal inputs for the conference. The deadline for submitting these comments is April 17, 2017.
Regulating Bitcoin in India
While most currencies in the real world have the backing of a central authority of some kind (such as a sovereign or a Central Bank) infusing them with an air of legitimacy, Bitcoin has no such central authority which issues or controls it. Additionally, the distributed and decentralised nature of the Bitcoin network makes regulation a tricky issue. This article seeks to touch upon the issue of Bitcoin regulation and makes certain broad suggestions for the future. It is a follow-up to a previous article by this author discussing the legal treatment of Bitcoin under Indian law, available at http://cis-india.org/internet-governance/bitcoin-legal-regulation-india.
The Reserve Bank of India (RBI) has not exactly been shy in recognising and even regulating technological advances in the financial sector, as is evident from its detailed guidelines on Internet Banking,[1] Prepaid Payment Instruments,[2] Account Aggregator Regulations,[3] and the consultation paper on proposed regulations for P2P lending platforms,[4] among others. However, though the RBI has acknowledged the existence of Bitcoin (it issued notes cautioning the public against dealing in virtual currencies including Bitcoin as far back as 2013[5] and again in 2017[6]), there have been no clear guidelines regarding it. Nevertheless, Bitcoin has come a long way since its inception, and a consensus is emerging amongst the more technically inclined that Bitcoin is in fact here to stay.
Even if one takes the sceptical view that Bitcoin may not last long, that does not make regulation useless: a large amount of money is already invested in Bitcoin entities in India, and Bitcoin exchanges seem to be betting big on this sector taking off, especially in the backdrop of the government’s recent push towards a more digital and less cash-dependent economy.
While the Indian government is trying to hard sell the idea of digital payments, primarily using existing banking channels as well as the relatively new National Payments Corporation of India (NPCI) and the various applications that are cropping up around the NPCI’s UPI platform, one must note that going digital could involve high administrative costs. These costs are typically charged by banks and intermediary merchants, and may not be palatable to all stakeholders, as was evident in the recent fracas between petrol pump owners and banks over proposed transactional charges on card payments.[7]
It is this vacuum that alternatives such as prepaid payment instruments and virtual currencies can fill while addressing the concern of high administrative charges, which is likely to be a major hurdle in going digital. Administrative charges for most of these instruments are significantly lower than what existing payment channels charge for digital transactions.[8]
Legality of Bitcoin and the need for Regulation
Bitcoin technology is being widely embraced all over the world, including in neighbouring China, which has become one of the biggest markets for the uniquely decentralised currency. However, the biggest hurdle that Bitcoin enthusiasts see in mainstreaming this technology is that most countries are treading too cautiously around Bitcoin and therefore have no regulation governing it.
The creation and transfer of Bitcoin is based on an open-source cryptographic protocol and is not managed by any central authority.[9] It is the decentralised nature of this virtual currency that makes regulation a major challenge. This does not mean that regulators are incapable of regulating Bitcoin; in fact, attempts have been made in several jurisdictions, though these are mostly at the discussion stage. For example, the Washington Department of Financial Institutions (DFI) introduced a bill in December 2016 proposing amendments to certain portions of the Washington Uniform Money Services Act, including provisions specific to digital currencies;[10] and the U.S. District Court for the Southern District of New York, in a decision in September 2016, took the view that Bitcoin is money under the plain meaning of Section 1960, the federal money transmission statute.[11]
This article does not intend to discuss how Bitcoin is dealt with in various jurisdictions; instead, it aims to suggest a possible way forward for Indian regulators to regulate Bitcoin in a manner that satisfies the regulatory zeal for security while ensuring that the technology does not get stifled through overregulation. It is important that regulators create a balanced regulation, because an impractical ecosystem for Bitcoin exchanges and their users may lead traders to seek alternative methods of purchasing Bitcoin, such as P2P trading, over-the-counter (OTC) markets and underground trading platforms, which are significantly more difficult to regulate.[12]
Suggestions for Regulation
Since Bitcoin is a decentralised cryptocurrency, it is impossible to regulate it through one single centralised point for all transactions. Neither is it feasible to regulate each and every Bitcoin user. A pragmatic compromise between these two extremes could be to regulate the points at which fiat currency or valuable goods enter the Bitcoin system, i.e. the Bitcoin exchanges where people may buy and sell Bitcoin for actual real world money, or websites which offer Bitcoin as a means of payment. Such an approach would reduce the number of points of supervision and lead to effective enforcement of the regulations. The regulations may require any entity providing services such as buying and selling of Bitcoin for actual money, trading in Bitcoin (such as non-cash exchanges) or providing other Bitcoin related services (such as Bitcoin wallets, merchant gateways, remittance facilities, etc.) to be registered with a central government agency, preferably the Reserve Bank of India.
One legal issue regarding the regulation of companies transacting in Bitcoin is whether the RBI has the authority or jurisdiction to regulate Bitcoin in the first place. Without getting into the arguments regarding whether it is a dangerous trend or not, an easy way in which the RBI could ensure it has the authority to regulate Bitcoin would be to follow the path that the RBI adopted while regulating Account Aggregators under the Non-Banking Financial Company - Account Aggregator (Reserve Bank) Directions, 2016 wherein the RBI declared Account Aggregators as Non Banking Finance Companies under section 45-I(f)(iii) thereby getting the authority to regulate and supervise them under section 45JA of the Reserve Bank of India Act, 1934.
The Regulations, once issued by the Reserve Bank of India, can prescribe mandatory registration, capital adequacy provisions, corporate governance conditions, minimum security protocols, Know Your Customer (KYC) requirements and most importantly provide for regular and ongoing reporting requirements as well as supervision of the Reserve Bank of India over the activities of Bitcoin companies.
Any proposed Bitcoin regulatory framework would seek to address certain issues; for the purposes of this article, we will assume that the following three issues are the ones that must necessarily be addressed:
- Security of the consumer’s property and prevention of fraud on the consumer. In the technology sector, this translates into a specific emphasis on increased security (against hacking) for the accounts that consumers maintain with the service provider.
- India has robust exchange control laws and the inherently decentralised and digital nature of Bitcoin can enable transfer of value from one jurisdiction to another without any oversight by a central agency, potentially violating the exchange control laws of India.
- Bitcoin has long been associated with criminal and nefarious activities; in fact, many believe that the infamous black market website “Silk Road” played a big role in making Bitcoin famous.[13] Preventing Bitcoin from being used for illegal activities (or creating a mechanism to ensure a digital trail to help investigations post facto) would therefore be a major issue that the regulations would seek to tackle.
Given the above assumptions, let us examine whether the Regulations suggested above can satisfactorily address the concerns of security of consumers, exchange control, and keeping a tab on criminal activities.
If the regulations provide for minimum capital adequacy requirements as well as registration with the RBI or some other central agency, the chances of consumers being duped by “fly-by-night” operators would be significantly reduced. The Regulations can also provide for minimum security protocols to be maintained by the companies, which can themselves be developed in concert with Bitcoin experts. Critics may point to the hacking of various Bitcoin exchanges in the recent past, including that of MtGox, in which Bitcoin worth millions of dollars was siphoned off, and argue that the security protocols may not be enough to prevent future instances of hacking. But the same is true of current security protocols for online banking, and that has not prevented a large number of banks from providing online banking facilities or the RBI from regulating them. The other vital issue that legally mandated security protocols would address (and potentially solve) is that of liability in the event of hacking. Regulations may provide clarity on this issue and protect innocent customers from negligent companies, while at the same time protecting entrepreneurs by defining and limiting the liability of bona fide and vigilant companies.
The other issue that may be of major concern to the authorities is exchange control. India has extremely specific exchange control laws, and if any person in India wants to transfer any amount to any person overseas, the only legal way to do so is through a bank transfer, which requires filling out paperwork giving the reason for the transfer (although the RBI and banks usually don’t ask for any proof for small amounts up to a few lakhs). This means that all transfers outside India are done through proper banking channels and are therefore under the supervision of the RBI. However, the decentralised nature of Bitcoin enables individuals to transfer money outside the borders of India without going through any banking channels, staying completely outside the purview of the RBI’s supervision. A system which lets users transfer money beyond national borders outside legal banking channels could easily be misused by nefarious actors, and this is exactly what has happened, as international drug cartels turned to Bitcoin and other digital currencies to move their ill-gotten wealth beyond the borders of various countries.[14] Regulating the entities which provide Bitcoin wallets and Bitcoin exchanges will ensure that the RBI can exercise its supervisory jurisdiction over the Bitcoin transactions of individual customers even though these transactions do not go through the regular banking channels. The Regulations could impose an obligation on the companies to provide information on any suspicious activities, or greater information about accounts which see very high volumes, to ensure that Bitcoin is not used to finance organised crime. Thus, the regulations could require the companies providing Bitcoin wallets or exchanges to flag and monitor customers whose trading accounts or Bitcoin wallets have transactions of an amount greater than a specified limit, as sketched below. This would provide the RBI with the ability to enquire into the reasons for such high volumes and weed out illegal transactions, while allowing bona fide transactions to continue.
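To make this concrete, here is a minimal sketch, in Python, of the kind of threshold-based flagging rule such Regulations could mandate. The threshold figure, wallet identifiers and function names are illustrative assumptions, not drawn from any existing rulebook:

from collections import defaultdict

# Hypothetical reporting threshold in rupees; an actual limit would be
# prescribed by the Regulations, much like existing cash-reporting limits.
REPORTING_THRESHOLD_INR = 1_000_000

def flag_high_volume_wallets(transactions):
    # `transactions` is an iterable of (wallet_id, amount_inr) pairs
    # recorded by the exchange or wallet provider over a reporting period.
    totals = defaultdict(int)
    for wallet_id, amount_inr in transactions:
        totals[wallet_id] += amount_inr
    # Wallets whose aggregate volume crosses the limit are reported to the
    # regulator for enquiry; bona fide customers below it are left alone.
    return [wallet for wallet, total in totals.items() if total > REPORTING_THRESHOLD_INR]

txns = [("wallet_a", 600_000), ("wallet_a", 700_000), ("wallet_b", 50_000)]
print(flag_high_volume_wallets(txns))  # ['wallet_a']

In practice a provider would record far richer data (counterparties, dates, fiat on-ramps), but the regulatory principle is as simple as this: aggregate per customer, compare against a prescribed limit, and report what crosses it.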
Very closely linked to the issues of exchange control and supervision of transactions is the issue of checking the furtherance of criminal activities using the apparent anonymity offered by Bitcoin. If the RBI has regulatory oversight over all the Bitcoin companies operating in India, it would be possible for it to keep an eye on most Bitcoin transactions in India, as long as the wallet that originates or terminates the transaction has been provided by a Bitcoin service provider located in India. An argument may be made that a criminal could use the Bitcoin wallet services of companies outside India, and therefore outside the purview of the RBI and its regulations. However, this argument is not as plausible as it may seem at first look. If we assume that for any criminal activity the ultimate goal is to get the money in the form of recognisable legal tender (preferably cash or money in a bank account), then it stands to reason that the Bitcoin in the wallet would be exchanged for currency at some point in the chain. For transactions of fairly high value (which most criminal transactions are), this can only be done through a Bitcoin exchange, and these exchanges, as well as the accounts maintained with them, will be under the purview of the RBI, providing law enforcement agencies with the final link in the chain of transactions. Further, the public nature of the blockchain (the ledger where each Bitcoin trade is registered and verified) also makes it possible for enforcement agencies to follow the trail of money for each and every Bitcoin or part thereof, as the sketch below illustrates.
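The trail-following idea can be illustrated with a toy traversal of a transaction graph. This is a deliberate simplification under stated assumptions — real Bitcoin transactions have multiple inputs and outputs, change addresses and mixing services that complicate analysis — but it captures why a public ledger helps investigators:

def trace_funds(ledger, start_address, max_hops=4):
    # Breadth-first walk over outgoing transfers, returning every address
    # reachable from `start_address` within `max_hops` transfers.
    seen = {start_address}
    frontier = {start_address}
    for _ in range(max_hops):
        frontier = {dst for src in frontier for dst in ledger.get(src, [])} - seen
        if not frontier:
            break
        seen |= frontier
    return seen - {start_address}

# Toy ledger: funds move from a suspect wallet through an intermediary
# before being cashed out at an exchange-hosted (and hence regulated) address.
ledger = {"suspect_wallet": ["intermediary"], "intermediary": ["exchange_account"]}
print(trace_funds(ledger, "suspect_wallet"))
# {'intermediary', 'exchange_account'} (set order may vary)

Once the trail reaches an exchange-hosted account, the RBI-supervised entity holding that account can identify its customer, closing the loop for investigators.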
Conclusion
From the discussion above, we see that the major arguments made by sceptics regarding Bitcoin and its attractiveness to criminals due to its decentralised nature do not hold up on closer examination. Bitcoin and the blockchain technology are extremely important steps towards better and more efficient financial transactions in the global economy, which is why a number of mainstream banks are also showing a keen interest in the blockchain technology.[15] Regulations governing Bitcoin or virtual currencies would clear the air regarding their legal status, so that consumers as well as entrepreneurs and investors can invest more money in a technology which could potentially change the way financial transactions are carried out across jurisdictions.
[1] https://www.rbi.org.in/scripts/NotificationUser.aspx?Id=414&Mode=0
[2] https://rbi.org.in/scripts/NotificationUser.aspx?Id=10799&Mode=0
[3] https://www.rbi.org.in/scripts/BS_ViewMasDirections.aspx?id=10598
[4] https://rbidocs.rbi.org.in/rdocs/content/pdfs/CPERR280416.pdf
[5] https://rbi.org.in/scripts/BS_PressReleaseDisplay.aspx?prid=30247
[6] https://rbi.org.in/Scripts/BS_PressReleaseDisplay.aspx?prid=39435
[7] http://timesofindia.indiatimes.com/business/india-business/petrol-pumps-wont-accept-cards-from-monday-to-protest-banks-transaction-fee/articleshow/56402253.cms
[8] For example, currently the network fee for a person-to-person Bitcoin transfer is 0.0001 Bitcoin, which comes to roughly Rs. 6 per transaction, irrespective of the amount involved.
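(As a rough check, assuming an exchange rate of about Rs. 60,000 per Bitcoin — approximately the prevailing rate in early 2017 — 0.0001 BTC × Rs. 60,000 = Rs. 6.)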
[9] The processing of Bitcoin transactions is secured by servers called Bitcoin “miners”. These servers communicate over an internet-based network and confirm transactions by adding them to a ledger, known as the “blockchain”, which is updated and archived periodically using peer-to-peer filesharing technology. The integrity and chronological order of the blockchain is enforced with cryptography. In addition to archiving transactions, each new ledger update creates some newly-minted Bitcoins.
[10] https://www.virtualcurrencyreport.com/2017/01/washington-department-of-financial-institutions-proposes-virtual-currency-regulation/
[11] https://www.virtualcurrencyreport.com/2016/09/sdny-opinion-re-bitcoin/. For a discussion of how different States and agencies in the United States deal with Bitcoin, please see Misha Tsukerman, “The Block is Hot: A Survey of the State of Bitcoin Regulation and Suggestions for the Future”, Berkeley Technology Law Journal, Vol. 30:385, 2015, p. 1127, available at http://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=2084&context=btlj .
[12] http://themerkle.com/why-china-isnt-interested-in-banning-bitcoin-importance-of-regulation/
[13] See generally, Nathaniel Popper, “Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money”, Harper Collins, 2015.
[14] https://www.bloomberg.com/view/articles/2013-11-18/are-bitcoins-the-criminal-s-best-friend-
[15] http://www.morganstanley.com/ideas/big-banks-try-to-harness-blockchain.
Killing of Yameen Rasheed Reveals Worsening Human Rights Situation in the Maldives
The fight for freedom of expression is often abstract. On Sunday, it became personal for me: Yameen Rasheed, a courageous human rights defender and blogger in the Maldives, was brutally murdered just outside his apartment. Yameen ran the popular blog The Daily Panic in which he sought to "cover and comment upon the news, satirize the frequently unsatirizable politics of Maldives, and also provide a platform to capture and highlight the diversity of Maldivian opinion". In this blog he often ended up rubbing the powerful the wrong way, with politicians and religious bigots often finding themselves at the receiving end of his satire.
Yameen wasn't the first human rights activist to be attacked. He also led the campaign to force the police to conduct a proper investigation into the forced disappearance, in August 2014, of journalist Ahmed Rilwan (@moyameehaa), whom he counted as his closest friend. This campaign made him a target as well.
When there was a crackdown on the largest pro-democracy rally in Malé on 1st May 2015, Yameen became a political prisoner: he was remanded in jail for 17 days, and then moved to house arrest. Hundreds of others were also arrested then. Some opposition leaders continue to remain in jail. Sheikh Imran Abdulla, the leader of the Adhaalath Party who spoke at that rally, was convicted on charges of terrorism and sentenced to 12 years' imprisonment.
As a result of his advocacy for freedom of religion and freedom of expression in the Maldives, Yameen received death threats on multiple occasions that he reported to the police, who refused to do anything about those complaints.
Why, despite receiving death threats, did Yameen continue to voice his opinions fearlessly? When asked, "Do you have a death wish?", he replied: "No. I have a dignified life wish."
Amnesty International has called upon the Maldivian authorities to conduct a full investigation into this killing. I, however, believe that there is no hope for justice from the very police that refused to protect Yameen, and whom he held to be complicit in the disappearance of Rilwan. As Yameen said in 2015, it is time for the international community to act. I hope each of you reading this contacts your external affairs ministry and asks them to apply pressure on the Maldivian authorities, and push for an international investigation into the breakdown of human rights in the Maldives.
Internet Shutdowns in 2016
Download the report: PDF
There is no consolidated research on internet shutdowns worldwide and government policies relating to this phenomenon. Access, however, has been tracking instances of internet shutdowns here. According to this tracker, there were 56 internet shutdowns worldwide in 2016.
In this report, we have identified countries where shutdowns took place more than once in the past year. We were able to identify these countries from the tracker operated by Access, and have looked at the internet shutdown practices and government policies on shutdowns in each of them. The countries are Brazil, Egypt, Ethiopia, Gambia, India, Iraq, Pakistan, Syria, Turkey and Uganda.
We have greatly relied on media coverage of internet shutdowns in the aforementioned countries and reports by various organisations including Freedom House, Amnesty International, Human Rights Watch, Article 19, Access, Electronic Frontier Foundation, Brookings Institution, Annenberg School of Communication, OSCE, Centre for Communication Governance, OONI and Dyn documenting and/or analysing internet censorship in these countries.
While documenting internet shutdown practices in the countries identified above, we have looked at the geographical coverage of a shutdown, i.e. whether it was carried out nationwide or in specific regions, and the type of internet services that were restricted, i.e. whether access was restricted to the whole internet, mobile internet services, or specific messaging services and/or applications. In this regard, we have referred to a report published by the Brookings Institution, "Internet shutdowns cost countries $2.4 billion last year", which analysed the economic impact of internet shutdowns. In that report, the author identified six categories of disruptions: national internet, subnational internet, national mobile internet, subnational mobile internet, national app/service, and subnational app/service. We have used this classification to understand and document internet shutdown practices in the aforementioned countries.
This report was featured on the website of the Keep Us Online campaign led by the Internet Freedom Foundation.
Aadhaar Case: Beyond Privacy, An Issue of Bodily Integrity
The article was published in the Quint on May 1, 2017.
The Finance Act, 2017, among its various sweeping changes, also inserted a new provision, Section 139AA, into the Income Tax Act, which makes Aadhaar numbers mandatory for:
(a) applying for PAN and
(b) filing income tax returns
In case one does not have an Aadhaar number, one is required to submit the enrolment ID of one’s Aadhaar application. The overall effect of this provision is that it makes Aadhaar mandatory for filing tax returns and applying for a PAN. The Supreme Court hearings began on 26 April. In order to properly appreciate the tough task at hand for the counsel for the petitioners, it is important to do a quick recap of the history of the Aadhaar case.
Case Over Constitutional Validity
Back in August 2015, the Supreme Court had referred the question of the constitutional status of the fundamental right to privacy to a larger bench.
This development came after the Union government pointed out that the judgements in MP Sharma vs Satish Chandra and Kharak Singh vs State of UP (decided by eight and six judge benches respectively) rejected a constitutional right to privacy.
The reference to a larger bench has since delayed the entire Aadhaar case, while an alarming number of government schemes have made Aadhaar mandatory in the meantime.
Since then, the Supreme Court has not entertained any arguments related to privacy in the court proceedings on Aadhaar, pending the resolution of this issue by a constitutional bench, which is yet to be set up. The petitioners have had to navigate this significant handicap in the current proceedings as well.
Ongoing Hearing in Aadhaar Case
At the beginning of Advocate Shyam Divan’s arguments on behalf of the petitioners, the Attorney General objected to the petitioners making any argument related to the right to privacy. Anticipating this objection, Divan assured the court, right at the outset, that they “will not argue on privacy issue at all”.
In the course of his arguments, Divan referred to at least three rights which may otherwise have been argued as facets of the right to privacy – personal autonomy, informational self-determination and bodily integrity. However, in this hearing those rights were strategically not couched as dimensions of privacy.
Divan consistently maintained that these rights emanate from Article 21 and Article 19 of the Constitution and are different from the right to privacy.
Many Layers of the Right to Privacy
If one follows the courtroom exchanges in the original Aadhaar matter (not the one being argued now), the debates around the privacy implications of Aadhaar have focussed on simplistic balancing exercises of “security vs privacy” and “efficient governance vs privacy”.
These observations depict the right to privacy as a monolithic concept, i.e. a single right which captures a unified set of harms within itself. In other words, all privacy harms are considered to be on the same footing. "Privacy harms" here mean the undesirable effects of the violation of the right to privacy.
This monolithic conception was clearly reflected in the Supreme Court’s decision to refer the constitutionality of “right to privacy” to a larger bench.
In MP Sharma vs Satish Chandra, the Supreme Court had rejected certain dimensions of what is generally understood as the right to privacy in a specific context (and hence dealing with a specific kind of privacy harm). A monolithic conception of the right to privacy would mean that MP Sharma should be applicable to all kinds of privacy claims.
Prof Daniel Solove, a privacy law expert, in his landmark paper “A Taxonomy of Privacy”, argues that the right to privacy captures multiple kinds of harms within itself. The right to privacy is not a monolithic concept but a plural one; there is no one right to privacy, but multiple hues of the right to privacy.
Sidestepping ‘Privacy’ in the Current Case
The plural conception of the right to privacy not only makes our privacy jurisprudence more nuanced and comprehensive, but also guides us to analyse differential privacy harms according to the standards appropriate for them.
Therefore, the refusal of the Supreme Court in MP Sharma to recognise a specific construction of privacy read into a specific constitutional provision should not have precluded a bench, even one smaller in number, from reading other conceptions of privacy into the same or other constitutional provisions.
As a lawyer, Divan was severely constrained by being unable to argue the right to privacy, which, in my opinion, cuts at the heart of the constitutional issues with the Aadhaar project.
He refrained from couching any of his arguments on bodily integrity, informational self-determination, and personal autonomy as privacy arguments. What the approach reveals is that far from being a monolithic notion, the harms that privacy, as we understand it, addresses, are capable of being broken into multiple and distinct rights.
Moving Beyond Article 21
Divan further argued that coercing someone to give personal information is compelled speech and hence violative of Article 19(1)(a) (the right to freedom of speech and expression). Once again, the harm described here – compelling someone to part with personal data – is conventionally a privacy harm.
However, it is important to note here that a privacy harm may also be a speech harm. Therefore, Article 21 is not the sole repository of these rights. They may also be located under other articles. The practical consequence of these rights being located under multiple constitutional provisions could be added protection of these rights.
For instance, if it can be shown that compelling an individual to part with personal data results in a violation of Article 19(1)(a), the State will have to show which ground laid down under Article 19(2) the specific restriction falls under.
This might be more challenging than the vague “compelling state interest” test, which has been the constitutional standard for privacy violations under Article 21.
Changing the Definition of Right to Privacy
The arguments presented by Divan, if accepted by the Supreme Court, could represent a two-pronged shift in the landscape of the values popularly understood under the right to privacy in India:
1) first, the idea of the rights of bodily integrity, informational self-determination, and personal autonomy as part of a plural concept (whether arising from the right to privacy or another right) that encompasses several harms within it, and
2) second that some of these rights may be read into other Articles in the Constitution.
Under the circumstances, Mr Divan’s performance was nothing short of heroic. Whether his arguments pass muster and impact the course of this long-drawn legal battle remains to be seen.
(Amber Sinha is a lawyer and works as a researcher at the Centre for Internet and Society. Aradhya Sethia is a final year law student at the National Law School of India University, Bangalore. This is an opinion piece and the views expressed above are the author’s own. The Quint neither endorses nor is responsible for the same.)
Comments from the Centre for Internet and Society on Renewal of .NET Registry Agreement
With inputs from Sunil Abraham and Pranesh Prakash
CIS would like to express its strong opposition to the proposed renewal. This is for three primary reasons:
Inconsistency with ICANN’s core values
It is important to consider the proposed renewal in light of two Core Values which are meant to guide the decisions and actions of ICANN.
Section 1.2(b)(iii) of the Bylaws contemplates ICANN’s responsibility to, “Where feasible and appropriate, depending on market mechanisms to promote and sustain a competitive environment in the DNS market;” and Section 1.2(b)(iv) envisages, “Introducing and promoting competition in the registration of domain names where practicable and beneficial to the public interest as identified through the bottom-up, multistakeholder policy development process;”.
The presumptive renewal of the .NET Registry agreement precludes an open tender, thereby significantly undermining competition in the DNS market. It ignores the public interest consideration, as the absence of competitive pressure on the contract also means the absence of pressure to lower user costs.
Historical accident
Verisign’s operation of .NET is a historical accident, one that does not justify its collection of .NET revenues in perpetuity. The Policies for Contractual Compliance of Existing Registries were approved in 2007 to include presumptive renewal. However, during the deliberations in that Policy Development Process, there was significant objection to the presumption of renewal of registry contracts, with constituencies and individuals pointing out that such renewal was blatantly anti-competitive, and allowed the presumption to prevail even in the case of material breaches.
The proposed agreement contemplates using a portion of Registry Level Transaction Fees to create a “special restricted fund for developing country Internet communities to enable further participation in the ICANN mission for these stakeholders.” This form of tokenism towards the global south will do little to achieve meaningful participation and diversity of civil society. .NET should instead be opened to a competitive bid and open tender, in order to encourage innovators from around the world to benefit from it.
Irregularity of contract
The argument that the proposed changes are meant to bring the contract in line with other gTLD registry agreements does not hold, because this contract is itself completely irregular: it was not entered into after the competitive process that other gTLD registry agreements are subject to, nor is it subject to the price sensitivity that other contracts are.
Cybersecurity Compilation: Indian Context
Ultimately, it attempts to collect all the cyber security initiatives that have been put out by Indian regulatory bodies and organizations. To this end, we identified the actors and institutions in cyber security and recorded their published guidelines, frameworks, ongoing projects and any policies released to strengthen cyber security. We have mostly followed a general framework in which, for each document found, we indicate the definition of cyber security (if stated), the objectives, recommendations/guidelines and scope. This format was sometimes difficult to follow for some types of initiatives in the documents; for example, a document of questions and answers to parliament could not be recorded in this fashion. As a result, the document is not entirely uniform in structure. This research compendium is a continuous work in progress, expanding along with the base of our knowledge and ongoing research.
Data Protection: Understanding the General Data Protection Regulation
It will come into force on 25th May 2018, and it is expected that data privacy will be strengthened under this Regulation. Substantive and procedural changes have been introduced, and to comply, industries and law enforcement agencies will have to adjust the ways in which they have operated thus far. Click here to read the report
Should an Inability to Precisely Define Privacy Render It Untenable as a Right?
The article was published in the Wire on August 2, 2017.
Ludwig Wittgenstein wrote in his book, Philosophical Investigations, that things which we expect to be connected by one essential common feature may instead be connected by a series of overlapping similarities, where no one feature is common to all. Instead of having one definition that works as a grand unification theory, concepts often draw from a common pool of characteristics. Drawing on the overlapping characteristics that exist between family members, Wittgenstein uses the phrase ‘family resemblances’ to refer to such concepts.
In his book, Understanding Privacy, Daniel Solove makes a case for privacy being a family resemblance concept. Responding to the discontent in conceptualising privacy, Solove attempted to ground privacy not in a tightly defined idea, but around a web of diverse yet connected ideas. Some of the diverse human experiences that we instinctively associate with privacy are bodily privacy, relationships and family, home and private spaces, sexual identity, personal communications, ability to make decisions without intrusions and sharing of personal data. While these are widely diverse concepts, intrusions upon or interferences with these experiences are all understood as infringements of our privacy.
Other scholars too have recognised this dynamic, evolving and difficult to pinpoint nature of privacy. Robert Post described privacy as a concept “engorged with various and distinct meanings.” Helen Nissenbaum advocates a dynamic idea of privacy to be understood in terms of contextual norms.
The ongoing arguments in the Supreme Court on the existence of a constitutional right to privacy can also be viewed in the context of the idea of privacy as a family resemblance concept. In their arguments, the counsels for the petitioners have tried to make a case for privacy as a multi-dimensional fundamental right. Senior advocate Gopal Subramanium argued before the court that privacy inheres in the concepts of liberty and dignity under the Constitution of India, and is presupposed by various other rights such as freedom of speech, good conscience, and freedom to practise religion. He further went on to say that there are four aspects to privacy – spatial, decisional, informational and the right to develop one's personality. Shyam Divan, also arguing for the petitioners, added that privacy includes the right to be left alone, freedom of thought, freedom to dissent, bodily integrity and informational self-determination.
When the chief justice brought up the need to define the extent of the right to privacy, the counsels raised concerns about the right being defined too specifically. This reluctance was borne out of the recognition that, by its very nature, the right to privacy is a cluster of rights, with multiple dimensions manifesting themselves in different ways depending on the context. Both advocates, Subramanium and Arvind Datar, argued that the court must not engage in an exercise to definitively catalogue all the different aspects of the right, foreclosing the future development of the law on this point. This reluctance was also a result of the fact that the court has isolated the question of the existence of the right to privacy from how it may apply in the case of the Aadhaar project. Usually, judges are able to ground legal principles in the relevant facts of the case while developing precedents. The referral to this bench is only on the limited question of the existence of a constitutional right to privacy. Therefore, any limits articulated by the court on the right will exist without the benefit of a context.
On the other hand, the Attorney General (AG) argued that this very aspect of privacy was a rationale for not declaring it a fundamental right. At various points during the arguments, he indicated that the ambiguous and vague nature of the concept of privacy made it unsuitable as a fundamental right. Similarly, Tushar Mehta, arguing for Unique Identification Authority of India, also sought to deny privacy’s existence as a fundamental right as it is too subjective and vague.
The above argument assumes that the inability to precisely define privacy renders it untenable as a right. The key question is whether this lack of a common denominator makes privacy too vague a right, liable to expansive misinterpretations. Conceptions that do not have fixed and sharp boundaries are not boundless. What it means is that the boundaries can often be fuzzy and in a state of constant evolution, but limits and boundaries always exist.
At one point during the hearings, Justice Rohinton Nariman wanted the counsels to work on the parameters of challenge for state action with respect to privacy. As mentioned earlier, in the absence of facts to work with, such an exercise is fraught with risks. However, the judges may still be able to articulate the manner in which such limits may be arrived at, without specifying them. Justice Nariman himself later agreed that the judicial examination must proceed on a case-by-case basis, taking into account not only the tests under Articles 14, 19 and 21, under which the petitioners have tried to locate privacy, but also any other concurrent rights which may be infringed.
The AG also argued that the infringement of privacy does not in itself amount to a violation of the rights under Article 21; rather, in some cases the transgressions on privacy may lead to an infringement of a person’s right to liberty, and only in such cases should the fundamental rights be invoked. Thus, the argument made was that there was no need to declare privacy a fundamental right, but only to acknowledge that limiting privacy may sometimes lead to violations of already existing rights. This argument may have been more cogent had he identified the specific dimensions of privacy which, according to him, do not qualify as fundamental rights. However, that might have meant conceding that other dimensions of privacy do, in fact, amount to fundamental rights.
It must be remembered that the problem of changing or multiple meanings is not limited to privacy. As the bench noted, drawing comparisons to the concepts of ‘liberty’ and ‘dignity’, these are constitutionally recognised values which equally suffer from a multitude of meanings based on context. The government’s position here is in line with critiques of privacy that Solove seeks to bust in his book. The idea of privacy evolves with time and people. And people, whether from a developed or developing polity, have an instinctive appreciation for it. The absence of a precise definition does not necessarily do great disservice to a concept, especially one that is fundamental to our freedoms.
High Level Comparison and Analysis of the Use and Regulation of DNA Based Technology Bill 2017
In July 2017, the Law Commission published a report on DNA profiling along with the “Draft Use and Regulation of DNA Based Technology Bill 2017”. India has been contemplating a draft DNA profiling bill since 2007. There have been two publicly available versions of the Bill, in 2012 and 2015, and one version in 2016. In 2013, the Department of Biotechnology formulated an Expert Committee to discuss different aspects of, and issues raised regarding, the Bill, towards finalizing the text. The Centre for Internet and Society was a member of the Expert Committee and, at its conclusion, issued a note of dissent to the Expert Committee for DNA Profiling.
This post provides a high level overview of the Use and Regulation of DNA Based Technology Bill 2017 and calls out positive changes from the 2015 Bill, remaining issues, and missing provisions. The post also calls out if, and where, CIS's recommendations to the Expert Committee have been incorporated.
If enacted, the 2017 Bill will establish national and regional DNA data banks that will maintain five different types of indices: a crime scene index, missing persons, offenders, suspects, and unknown deceased persons. The data banks will be led by a Director, responsible for communicating information with requesting entities, foreign states, and international organizations. Information relating to DNA profiles, DNA samples, and records maintained in a DNA laboratory can be made available in six instances: to law enforcement and investigating agencies, in judicial proceedings, for facilitating prosecution and adjudication of criminal cases, for taking the defence of an accused, for investigation of civil disputes, and in other cases which might be specified by regulations. The Bill also creates offences relating to unauthorized disclosure of information in the DNA data bank, obtaining information from DNA data banks without authorization, unlawful access to information in the DNA Data Bank, using a DNA sample or result without authorization, and destroying, altering, contaminating, or tampering with biological evidence.
Below are some key positive changes from the 2015 Bill, remaining issues, and safeguards missing from the 2017 Bill:
Positive Changes: The Bill contains a number of positive changes from the 2015 draft. Key ones include:
- Consent: Section 21 prohibits the taking of samples from arrested persons without consent, except in the case of a specified offence - a specified offence being any offence punishable with death or imprisonment for a term exceeding seven years. If consent is refused, a magistrate can order the taking of the sample; this can be in the case of any matter listed in the Schedule of the Act. Section 22 provides for consent from volunteers. It is important to note that despite being an improvement over the 2015 Bill, which did not address instances of collection with or without consent, this provision is still broad, as the list of offences under the Schedule is expansive and can be further expanded by the Central Government. Furthermore, the Magistrate can overrule a refusal of consent by the parent or guardian of a volunteer who is a minor, which does not provide adequate protection to children's rights.
- Deletion: Section 31 defines instances for deletion of suspects' profiles, undertrials' profiles, and all other profiles. Though a step in the right direction - the 2015 Bill only addressed retention and deletion of the offenders' index - this provision does not address the automatic removal of the profiles of innocent persons.
- Purpose limitation: Section 33 limits the purpose of profiles in the DNA Data Bank to that of facilitating identification. This is a positive step from the 2015 Bill - which enabled use of DNA profiles for the creation and maintenance of a population statistics data bank. Section 34 also limits the purposes for which information relating to DNA profiles, samples, and records can be made available.
- Destruction of samples: Section 20 defines instances for destruction of DNA samples. Destruction of samples was not addressed in the 2015 Bill, and is an important protection as it prevents samples from being re-analyzed.
- Comparison of profiles: Section 29 clarifies that if the individual is not an offender or a suspect, their information will not be compared with DNA profiles in the offenders’ or suspects index. This creates an important distinction between types of indices held in the data bank and the purpose for the same, i.e. missing persons are not treated as potential offenders. In the 2015 Bill, profiles entered in the offenders or crime scene index could be compared by the DNA Data Bank Manager against all profiles contained in the DNA Data Bank.
- Re-testing: Section 24 allows an accused person to request a re-examination of fresh bodily substances if it is believed the sample has been contaminated. The closest provision to this in the 2015 Bill was the creation of a post-conviction right to DNA profiling - which has now been deleted. It is important to note that fresh samples can easily be obtained from individuals, but if contamination happens at a crime scene, it is much more difficult to obtain a fresh sample.
- Limiting Indices and including a crime scene index: The 2017 Bill limits the number of indices to five - a crime scene index, missing persons, offenders, suspects, and unknown deceased persons. This is an improvement over the 2015 Bill, which provided for the maintenance of indices in the DNA Bank including a missing persons’ index, an unknown deceased persons’ index, a volunteers’ index, and such other DNA indices as may be specified by regulation.
Remaining Issues: There are some remaining issues in the 2017 Bill. Some of these include:
- Delegating and Expanding through Regulation: The Bill delegates a number of procedures to regulation - many of which should be in the text of the Bill itself. For example: the format for receiving and storing DNA profiles, and additional criteria for entry, retention, and deletion of DNA profiles. Furthermore, a number of provisions allow for expansion through regulation. For example, the sources from which DNA can be collected can be expanded as specified by regulations, and further purposes for making DNA profiles available can be defined by regulation. Important procedures such as privacy and security safeguards are also left to regulation.
- Broad Powers and Composition of the Board: The Bill designates twenty-one responsibilities to the Board. As pointed out in the previous point, many of these should be detailed in the text of the legislation.
While serving on the Expert Committee, CIS recommended that the functions of the DNA Profiling Board should be limited to licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate, and a few other limited functions necessary for a regulatory authority. This recommendation has not been incorporated.
Ideally, the Board should also include privacy experts, an expert in ethics, and representatives of civil society. To this end, the Board should comprise separate Committees to address these different functions: a Committee addressing regulatory issues pertaining to the functioning of Data Banks and Laboratories, and an Ethics Committee to provide independent scrutiny of ethical issues.
On a positive note, the Expert Committee agreed to reduce the size of the Board from 16 members (2012 Bill) to 11 members. This recommendation has been incorporated.
CIS also provided language regarding how the Board could consult with the public: The Board, in carrying out its functions and activities, shall be required to consult with all persons and groups of persons whose rights and related interests may be affected or impacted by any DNA collection, storage, or profiling activity. The Board shall, while considering any matter under its purview, co-opt or include any person, group of persons, or organisation, in its meetings and activities if it is satisfied that that person, group of persons, or organisation has a substantial interest in the matter and that it is necessary in the public interest to allow such participation. The Board shall, while consulting or co-opting persons, ensure that meetings, workshops, and events are conducted at different places in India to ensure equal regional participation and activities. This language has not been fully incorporated.
- Lack of Authorization Procedure: Though the Bill defines the instances in which DNA information can be made available, it fails to establish or refer to an authorization process for making information available, and the decision currently seems to rest with the DNA Bank Director.
- Expansive Schedule: The Bill creates a schedule containing a list of matters for DNA testing which includes whole acts and a range of civil disputes and matters that are broad and do not relate to criminal cases - most notably “issues relating to immigration or emigration and issues relating to establishment of individual identity.”
- Unclear Data Stored: Though the Bill clarifies the circumstances in which the identity of the individual will be associated with a profile, it allows “information of data based on DNA testing and records relating thereto” to be stored, yet it is unclear what information this would entail.
- Lack of procedures for chain of custody: Presently, the Bill defines quality assurance procedures for a sample that is already at the lab. There are no provisions defining a process for the examination of a crime scene and laying down standards for the chain of custody of a sample from the crime scene to a DNA laboratory.
Missing Safeguards:
There are some safeguards that, if added, would strengthen the Bill and ensure rights to the individual:
- Notification to the individual: There are no provisions that ensure that notification is given to an individual if his/her information is accessed or made available.
- Right to challenge: There are no provisions that give the individual the right to challenge the storage of their DNA.
- Established profiling standard: Though the Law Commission report refers to the 13 CODIS standard, the Bill does not mandate the use of the 13 CODIS profiling standard.
- Reporting standard: There are no standards for how matches or other information should be communicated from the DNA Director to the authority or receiving entity, including in instances of partial matches.
- Right to access and review: There are no provisions that allow an individual to review his/her information contained in the regional or the national database.
- Lack of costing: There is no cost estimate in the report or a requirement for one to be carried out.
- Study of the potential for false matches: Such a study must consider the size of the population and large family sizes, i.e. relatively large numbers of closely related people, and is particularly necessary for a population as large as India's.
Importantly, in the DNA Expert Committee, CIS requested that the Bill be brought in line with the nine national privacy principles defined in the Report of the Group of Experts on Privacy led by Justice AP Shah. These include the principles of notice, choice and consent, collection limitation, purpose limitation, access and correction, disclosure of information, security, openness, and accountability. These principles have not been fully incorporated.
Here’s why we need a lot more discussion on India’s new DNA Profiling Bill
The article was published in the Hindustan Times on August 7, 2017.
The first step towards a DNA profiling bill was taken in 2007 with the “Draft DNA Profiling Bill” by the Centre for DNA Fingerprinting and Diagnostics. Since then, there have been 2012, 2015, and 2016 versions of the Bill - the last not available to the public. In 2013, the Department of Biotechnology formulated an Expert Committee to deliberate on concerns raised about the Bill and finalise the text. The “Use and Regulation of DNA Based Technology Bill 2017” and the report by the Law Commission are a further evolution of the legislation and dialogue. The 2017 Bill contains a number of improvements over previous versions - yet outstanding concerns remain.
Positive changes in the Bill include provisions for consent, defined instances for deletion of profiles, limitations on the purpose of the use of data in the DNA Data Bank, defined instances for destruction of biological samples, and the ability for an individual to request a re-test of bodily substances if they believe contamination has occurred.
Despite these changes, the Bill still has an overly broad schedule defining the instances when DNA profiling can be used, and is missing a number of safeguards that would enable individual rights. These include a right to notification of storage of, and access to, information on the DNA databank; the right to appeal and challenge storage of DNA samples; and a right to access and review personal information stored in the DNA Data Bank.
It is concerning that the 2017 Bill has left the defining of privacy and security safeguards to regulation — including implementation and sufficiency of protection, appropriate use and dissemination of DNA information, accuracy, security and confidentiality of DNA information, timely removal and deletion of obsolete or inaccurate DNA information, and other steps as necessary. Furthermore, though the Law Commission cites the use of the 13 CODIS (Combined DNA Index System) profiling standard in its report as a means of protecting privacy, this standard has yet to find its way into the text of the Bill.
The implications of creating regional and national level DNA databanks need to be fully understood and publicly debated. DNA is not foolproof - false matches can take place for multiple reasons. Importantly, the usefulness of DNA based technology to a legal system, and its impact on individual rights, is dependent on and reflective of the social, legal, and political environment in which the technology is used. DNA based technology can be a powerful tool for law enforcement, and it is important that a robust process and structure is given to the journey of a DNA sample from the crime scene, to the laboratory for analysis, to the DNA Bank for storage and comparison; but this structure also needs to be fully cognizant of the rights of individuals and the potential for misuse of the technology.
As society continues to rapidly become more data centric, and that data increasingly becomes a direct extension of the person, it is critical that the legislation that is developed contains clear protections of rights. In addition to amendments to the text of the draft 2017 Bill, this includes enacting a comprehensive privacy legislation in India. It is worrying that, in the conclusion of its report, the Law Commission has referred to the question of whether privacy is an integral part of Article 21 of the Constitution as merely “a matter of academic debate.” Privacy is recognised as a fundamental right in many democratic contexts – including many of those reviewed by the Law Commission as examples of jurisdictions with DNA profiling laws.
Policy needs to evolve beyond protections that are limited to process-oriented legal privacy provisions, towards protections that are comprehensive — accounting for process and enabling the individual to control and know how her/his data is being used and by whom. Other countries have recognised this and are taking important steps to empower the individual. India needs to do the same for its citizens.
Infographic: The Impending Right to Privacy Judgment
The article was published in the Wire on August 17, 2017.
Over the last month, a nine-judge constitutional bench of the Supreme Court has heard arguments on the existence of a fundamental right to privacy in India. Media coverage of judicial hearings in the apex court is often rife with inaccuracies, thanks in no small measure to the court’s own restrictive policies, which, for instance, prevent video recordings. In this case, the arguments – which were heard over the course of three weeks – were widely reported in much greater detail and with fidelity, thanks largely to the live tweets by Gautam Bhatia and Prasanna S. (the entire collection of tweets is available here).
The entire set of written arguments made available by LiveLaw was another rich source for anyone following this matter in detail. The ruling by the bench will be of extreme importance not just for the immediate Aadhaar case, which has witnessed gross delays, but also for numerous other matters in the future to do with state intrusions, decisional autonomy and informational privacy.
The questions before this bench are twofold: whether the judgments in M.P. Sharma and Others vs Satish Chandra (decided by an eight-judge bench in 1954) and Kharak Singh vs State of UP and Others (decided by a six-judge bench in 1962) lead to the conclusion that there is no fundamental right to privacy, and whether the decisions in the later cases upholding a right to privacy were correct.
This infographic tries to unpack the hearings in the court into distinct issues, and the key arguments advanced by both sides on them. The arguments from both sides on a particular question have been presented side by side for better appreciation, even though they were not argued together.
Given the nature of the exercise, some of the arguments made in the infographic are bound to be a simplification of the actual issue. But it is hoped that this will provide a good overview of the issues argued.
Research and writing by Amber Sinha. Design by Pooja Saxena. Amber Sinha is a lawyer and works at the Centre for Internet and Society. Pooja Saxena is a typeface and graphic designer, specialising in Indic scripts.
Aadhaar: Privacy is not a unidimensional concept
The article was published in the Economic Times on July 23, 2017.
The ruling of this nine-judge bench will have a far-reaching impact on the extent and scope of rights available to us all. In a disappointing case of judicial evasion by the apex court, it has taken over 600 days since the reference order was passed on August 11, 2015, for this bench to be constituted. Over two days of arguments, the counsels for the petitioners have presented before the court why the right to privacy, despite not finding a mention in the Constitution of India, is a fundamental right essential to a person’s dignity and liberty, and must be read into not one but multiple articles of the Constitution. The government will make its arguments in the coming week.
One must wonder why we are debating the contours of the right to privacy, which 40 years of jurisprudence had lulled us into believing we already had. The answer to that can be found in a series of hearings in the Aadhaar case that began in 2012. Justice KS Puttaswamy, a former Karnataka High Court judge, filed a petition before the Supreme Court questioning the validity of the Aadhaar project due to its lack of legislative basis (the Aadhaar Act has since been passed, in 2016) and its transgressions on our fundamental rights. Over time, a number of other petitions also made their way to the apex court, challenging different aspects of the Aadhaar project. Since then, five different interim orders of the Supreme Court have stated that no person should suffer because they do not have an Aadhaar number. Aadhaar, according to the court, could not be made mandatory to avail of benefits and services under government schemes. Further, the court has limited the use of Aadhaar to specific schemes: LPG, PDS, MGNREGA, the National Social Assistance Programme, the Pradhan Mantri Jan Dhan Yojana and EPFO.
The real spanner in the works in the progress of this case was the stand taken by Mukul Rohatgi, then attorney general of India, who, in a hearing before the court in July 2015, stated that there is no constitutionally guaranteed right to privacy. His reliance was on two Supreme Court judgments in MP Sharma v Satish Chandra (1954) and Kharak Singh v State of Uttar Pradesh (1962): both cases, decided by eight- and six-judge benches respectively, denied the existence of a constitutional right to privacy. As the subsequent judgments which upheld the right to privacy were by smaller benches, Rohatgi claimed that MP Sharma and Kharak Singh still prevailed over them, until they were overruled by a larger bench.
The reference to a larger bench has since delayed the entire matter, even as a number of government schemes have made Aadhaar mandatory. This reading of privacy as a unidimensional concept by the courts is, with due respect, erroneous. Privacy, as a concept, includes within its scope spatial, familial, informational and decisional aspects. We all have a legitimate expectation of privacy in our private spaces, such as our homes, and in our personal relationships. Similarly, we must be able to exercise some control over how personal data, like our financial information, is disseminated. Most importantly, privacy gives us the space to make autonomous choices and decisions without external interference. All these dimensions of privacy must stand as distinct rights. In MP Sharma, the court rejected a certain aspect of the right of privacy by refusing to acknowledge a right against search and seizure. This in no way prevented the court, even in the form of a smaller bench, from ruling on other aspects of privacy, including those that are relevant to the Aadhaar case.
The limited referral to this bench means that the court will have to rule on the status of privacy and its possible limitations in isolation, without even going into the details of the Aadhaar case (based on the nature of protection that this bench accords to privacy, the petitioners and defendants in the Aadhaar case will have to argue afresh on whether the project does impinge on this most fundamental right). There are no facts of the case to ground the legal principles in, and defining the contours of a right in the abstract can be a difficult exercise. The court must be wary of how any limits it puts on the right may be used in future. Equally, it is important to articulate that any limitations on the right to privacy due to competing interests such as national security and public interest must be imposed only when necessary and must always be proportionate.
It will not be enough for the court to merely state that we have a constitutional right to privacy. They would be well advised to cut through the muddle of existing privacy jurisprudence, and unequivocally establish the various facets of the right. Without that, we may not be able to withstand the modern dangers of surveillance, denial of bodily integrity and self-determination through forcible collection of information. The nine judges, in their collective wisdom, must not only ensure that we have a right to privacy, but also clearly articulate a robust reading of this right capable of withstanding the growing interferences with our autonomy.
CIS Statement on Right to Privacy Judgment
The judgment traces back to the Aadhaar hearings, in which the then attorney general argued that there is no constitutionally guaranteed right to privacy. His reliance was on two Supreme Court judgments in MP Sharma v Satish Chandra (1954) and Kharak Singh v State of Uttar Pradesh (1962): both cases, decided by eight- and six-judge benches respectively, denied the existence of a constitutional right to privacy. As the subsequent judgments which upheld the right to privacy were by smaller benches, he claimed that MP Sharma and Kharak Singh still prevailed over them, until they were overruled by a larger bench. This landmark judgment was in response to a referral order to clear the confusion over the status of privacy as a right.
We, at the Centre for Internet and Society (CIS), welcome this judgment and applaud the depth and scope of the Supreme Court’s reasoning. CIS has been producing research on the different aspects of the right to privacy and its implications for the last seven years, and had the privilege of serving on the Justice AP Shah Committee and contributing to the Report of the Group of Experts on Privacy.[1] We are honoured that some of our research has also been cited by the judgment.[2] Such judicial recognition is evidence of the impact sound research can have on policymaking.
In the course of a 547-page judgment, the bench affirmed the fundamental nature of the right to privacy, reading it into the values of dignity and liberty. The judgment is instructive in its reference to scholarly works and jurisprudence not only from India but also from other legal systems such as the USA, South Africa, the EU and the UK, while recognising a broad right to privacy with various dimensions across spatial, informational and decisional spheres. We note with special appreciation that women’s bodily integrity and citizens’ sexual orientation are among the aspects of privacy clearly recognised in the judgment. For researchers studying privacy and its importance, this judgment is of great value as it provides clear reasoning to reject oft-quoted arguments used to deny privacy’s significance. The judgment is also cognizant of the implications of the digital age and emphasises the need for a robust data protection framework.
The right to privacy has been read into Article 21 (Right to life and liberty) and Part III (the Chapter on Fundamental Rights) of the Constitution. This means that any limitation on the right in the form of reasonable restrictions must not only satisfy the tests evolved under Article 21; where loss of privacy leads to infringement of other rights, such as the chilling effect of surveillance on free speech, the tests for constitutionality under those provisions must also be satisfied by the limiting action. This provides a broad protection to citizens' privacy which may not be easily restricted. We expect that this judgment will have far-reaching impacts, not just with respect to the immediate Aadhaar case, but also in a score of other matters such as the protection of sexual choice through the reading down of Section 377 of the Indian Penal Code, oversight of statutory search and seizure provisions such as Section 132 of the Income Tax Act, personal data collection and processing practices by both state and private actors, and mass surveillance programmes in the interest of national security.
As this judgment comes in response to a referral order, the judges were not dealing with any questions of fact to ground the legal principles in. Subsequent judgments which deal with privacy will apply these principles and further evolve the contours of this right on a case-by-case basis. For now, we welcome this judgment and look forward to its consistent application in the future.
[1]. http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf
[2]. CIS was quoted in the judgment in footnote 46, pages 33 and 34: http://supremecourtofindia.nic.in/pdf/LU/ALL%20WP(C)%20No.494%20of%202012%20Right%20to%20Privacy.pdf The quote is: "Illustratively, the Centre for Internet and Society has two interesting articles tracing the origin of privacy within Classical Hindu Law and Islamic Law. See Ashna Ashesh and Bhairav Acharya, “Locating Constructs of Privacy within Classical Hindu Law”, The Centre for Internet and Society, available at https://cis-india.org/internet-governance/blog/loading-constructs-of-privacy-within-classical-hindu-law. See also Vidushi Marda and Bhairav Acharya, “Identifying Aspects of Privacy in Islamic Law”, The Centre for Internet and Society, available at https://cis-india.org/internet-governance/blog/identifying-aspects-of-privacy-in-islamic-law." Further, research commissioned by CIS and cited in the judgment includes a reference on page 201, footnote 319: "Bhairav Acharya, “The Four Parts of Privacy in India”, Economic & Political Weekly (2015), Vol. 50, Issue 22, at page 32."
Rethinking National Privacy Principles: Evaluating Principles for India's Proposed Data Protection Law
Edited by Elonnai Hickok and Vipul Kharbanda
This analysis builds on the substantial work done in the formulation of the National Privacy Principles by the Committee of Experts led by Justice AP Shah. The brief evaluates the National Privacy Principles and the Committee's assertion that the right to privacy be considered a fundamental right under the Indian Constitution. The national privacy principles have been revisited in light of technological developments such as big data, the Internet of Things, algorithmic decision making and artificial intelligence, which play an increasingly large role in the collection and processing of individuals' personal data, its analysis, and the decisions taken on the basis of such analysis. The solutions and principles articulated in this report are intended as starting points for a meaningful and nuanced discussion on how we need to rethink the privacy principles that should inform the data protection law in India.
Click to read the full blog post
The Fundamental Right to Privacy: An Analysis
The Fundamental Right to Privacy - Part I: Sources
Much of the debate and discussion in the hearings before the constitutional bench was about where in the Constitution a right to privacy may be located. In this paper, we analyse the different provisions and tools of interpretation used by the bench to read a right to privacy into Part III of the Constitution.
Download: PDF
The Fundamental Right to Privacy - Part II: Structure
In the previous paper, we delved into the sources in the Constitution and the interpretive tools used to locate the right to privacy as a constitutional right. This paper follows it up with an analysis of the structure of the right to privacy as articulated by the bench. We will look at the various facets of privacy which form a part of the fundamental right, the basis for such dimensions and what their implications may be.
Download: PDF
The Fundamental Right to Privacy - Part III: Scope
While the previous papers dealt with the sources in the Constitution and the interpretive tools used by the bench to locate the right to privacy as a constitutional right, as well as the structure of the right with its various dimensions, this paper looks to the judgment for guidance on the principles that determine the scope of the right to privacy.
Download: PDF
Revisiting Per Se vs Rule of Reason in Light of the Intel Conditional Rebate Case
Click on the link below to read the full article:
Revisiting Per Se vs Rule of Reason in Light of the Intel Conditional Rebate Case
ICANN’s Problems with Accountability and the .WEB Controversy
Chronological Background of the .WEB Auction
In June 2012, ICANN launched a new phase for the creation and operation of Generic Top-Level Domains (gTLDs). After confirming the eligibility of seven applicants for the rights to the .WEB domain name, ICANN placed them in a string contention set (a group of applications with identical or similar applied-for gTLD strings).[1]
[Quick Note: ICANN procedure encourages the resolution of a contention set by voluntary settlement amongst the contending applicants (also referred to as a private auction), wherein individual participation fees of US $185,000 go to ICANN and the auction proceeds are distributed among the bidders. If a private auction fails, the provision for a last resort auction conducted by ICANN is invoked: here the total auction proceeds go to ICANN, along with the participation fees.[2] The sketch below illustrates the difference between the two money flows.]
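To make the two formats concrete, here is a minimal sketch (in Python) of the money flows described in the note. It is illustrative only: the fee figure comes from the note itself, while the equal split of proceeds among the losing bidders in a private auction is an assumption, since the actual split is negotiated privately among the applicants.

# Illustrative model of the two gTLD auction formats described in the note.
PARTICIPATION_FEE = 185_000  # USD per applicant; goes to ICANN in either format

def settle(winning_bid, bidders, winner, private):
    """Return a mapping of party -> amount received once the set is resolved."""
    payouts = {"ICANN": PARTICIPATION_FEE * len(bidders)}
    losers = [b for b in bidders if b != winner]
    if private:
        # Voluntary settlement: the winning bid is shared among the losing
        # bidders (an equal split is assumed here purely for illustration).
        for loser in losers:
            payouts[loser] = winning_bid / len(losers)
    else:
        # Last resort auction: the entire winning bid accrues to ICANN.
        payouts["ICANN"] += winning_bid
    return payouts

# .WEB went to a last resort auction, so the USD 135 million bid went to ICANN:
print(settle(135_000_000, ["NDC", "Ruby Glen", "Afilias"], "NDC", private=False))

On the note's description, then, a failed private settlement converts the winning bid from a transfer among the applicants into revenue for ICANN, an incentive problem that becomes relevant in the discussion of auction proceeds below.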
In June 2016, NuDotCo LLC (NDC), a bidder that had previously participated in nine private auctions without any objection, withdrew its consent to the voluntary settlement. Ruby Glen LLC, another bidder, contacted NDC to ask if it would reconsider its withdrawal, and was made aware by NDC's Chief Financial Officer of changes in NDC's Board membership, financial position and management, and of a potential change in ownership.[3] Concerned about the transparency of the auction process, Ruby Glen requested ICANN on June 22 to postpone the auction in order to investigate the discrepancies between NDC's official application and its representations to Ruby Glen.[4] The Vice President of ICANN's gTLD Operations and the independent ICANN Ombudsman led separate investigations, both of which were limited to a few e-mails seeking NDC's confirmation of the status quo. On the basis of NDC's denial of any material changes, ICANN announced that the auction would proceed as planned, as no grounds had been found for its postponement.[5]
On July 27, NDC's winning bid of USD 135 million beat the previous record by $90 million, doubling ICANN's total net proceeds from the past fifteen auctions it had conducted.[6] Soon after NDC's win, Verisign, Inc., the market giant that operates the .com and .net registries, issued a public statement that it had used NDC as a front for the auction, and that it had been involved in NDC's funding from the very beginning. Verisign agreed to transfer USD 130 million to NDC, allowing the latter to retain a $5 million stake in .WEB.[7]
Ruby Glen LLC filed for an injunction against the transfer of the .WEB rights to NDC, and sought expedited discovery[8] against ICANN and NDC in order to gather evidentiary support for the temporary restraining order.[9] Donuts Inc., the parent company of Ruby Glen, simultaneously filed for recovery of economic loss due to negligence, fraud and breach of bylaws, among other grounds, and Afilias, the second-highest bidder, demanded that the .WEB rights be handed over by ICANN.[10] Furthermore, at ICANN57, Afilias publicly brought up the issue before ICANN's Board, and Verisign followed with a rebuttal. However, ICANN's Board refused to comment on the issue at that point, as the matter was still in litigation.[11]
Issues Regarding ICANN’s Assurance of Accountability
The Post-Transition IANA promised enhanced transparency and accountability to the global multistakeholder community. The series of events surrounding the .WEB auction has stirred up issues relating to ICANN's lack of transparency and accountability. ICANN's arbitrary enforcement of policies that should have been mandatory, with regard to internal accountability mechanisms, fiduciary responsibilities and the promotion of competition, has violated Bylaws that obligate it to operate 'consistently, neutrally, objectively, and fairly, without singling out any particular party for discriminatory treatment'.[12]
Though the US court ruled in favour of ICANN, the discrepancies made visible by ICANN's differing emphasis on procedural and substantive compliance with its own rules and regulations have forced the community to acknowledge that corporate strategies, latent interests and financial advantages undermine ICANN's commitment to accountability. The approval of NDC's extraordinarily high bid with minimal investigation or hesitation, even after Verisign's takeover, points to pressing concerns that stand in the way of a convincing commitment to accountability, namely:
- The Lack of Substantive Fairness and Accountability at ICANN (A Superficial Investigation)
- ICANN’s Sketchy Tryst with Legal Conformity
- The Financial Accountability of ICANN’s Auction Proceeds
The Lack of Substantive Fairness and Accountability in its Screening Processes:
Ruby Glen’s claim that ICANN conducted a cursory investigation of NDC’s misleading and unethical behaviour brought to light the ease and arbitrariness with which applications are deemed valid and eligible.
- Disclosure of Significant Details Unique to Applicant Profiles: In the initial stage, applications for the gTLD auctions require disclosure of background information such as proof of legal establishment, financial statements, primary and secondary contacts to represent the company, officers, directors, partners, major shareholders, etc. At this stage, TAS User Registration IDs, which require the applicants' VAT/tax/business IDs, principal business address, phone, fax, etc., are created to build unique profiles for the different parties in an auction.[13] Any important change in an applicant's details would thus significantly alter this unique profile, leading to uncertainty regarding the parties involved and the validity of transactions undertaken. NDC's application clearly did not meet the requirements here, as its financial statements, secondary contact, board members and ownership all changed at some point before the auction took place (either prior to or after submission of the application).[14]
- Mandatory Declaration of Third Party Funding: Applications presupposing a future joint venture or any organisational unpredictability are not deemed eligible by ICANN, and if any third party is involved in the funding of the applicant, the latter is to provide evidence of such commitment to funding at the time of submission of its financial documents.[15] Verisign's public announcement that it was involved in NDC's funding from the very beginning (well before the auction), and in its management later, proves that NDC's failure to notify ICANN made its application ineligible, or irregular at the very least.[16]
- Vague Consequences of Failure to Notify ICANN of Changes: If material changes occur in the composition of the applicant's management, its ownership or its financial position, ICANN must be notified of the changes through the submission of updated documents. However, the applicant may be subjected to re-evaluation only where ICANN considers a change material, at its own discretion (there is no mention of what a material change might be). In the event of a failure to notify ICANN of changes that would render the previously submitted information false or misleading, ICANN may reject or deny the application concerned.[17] NDC's absolute and repeated denial of any changes, during the extremely brief e-mail 'investigation' conducted by ICANN and the Ombudsman, shows that at no point was NDC planning to reveal its intimacy with Verisign. No extended evaluation was conducted by ICANN at any point.[18] Note: The arbitrary power allowed here and the vague use of the term 'material' obstruct any real accountability on ICANN's part to ensure that checks are carried out to discourage dishonest behaviour at all stages (a schematic sketch of such a check follows at the end of this sub-section).
- Arbitrary Enforcement of Background Checks: In order to confirm the eligibility of all applicants, ICANN conducts background screening during its initial evaluation process to verify the information disclosed, at both the individual and entity levels.[19] Applicants may be asked to produce any and all documents or evidence to help ICANN complete this successfully, and any relevant information received from 'any source' may be taken into account. However, this screening is conducted against only two criteria: general business diligence and criminal history on the one hand, and any record of cybersquatting behaviour on the other.[20] In this case, ICANN's background screening was clearly not thorough, in light of Verisign's confirmed involvement from the beginning, and at no point was NDC asked to submit any further documents (apart from the exchange of e-mails between NDC, ICANN and its Ombudsman) to enable ICANN's inquiry into its business diligence.[21] Further, ICANN also said that it was not required to conduct background checks or a screening process, as the provisions only mention that ICANN is allowed to do so when it feels the need.[22] This ludicrous loophole hinders transparency efforts by giving ICANN the authority to ignore questionable details in applications it desires to deem eligible, based on its own strategic leanings, advantageous circumstances or any other beneficial interests.
ICANN's deliberate avoidance of discussing or investigating the 'allegations' against NDC (which were eventually proved true), together with a visible compromise in the fairness and equity of the application process, points to the conclusion it desired.
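To see how mechanical such a check could have been, here is a hypothetical sketch of the material-change comparison that the Guidebook implies but never operationalises. The field names, and the choice of which fields count as 'material', are assumptions made purely for illustration; as noted above, ICANN leaves the term undefined.

# Hypothetical sketch only: the fields treated as "material" are assumed
# for illustration; the Guidebook itself leaves the term undefined.
MATERIAL_FIELDS = {"ownership", "board_members", "financial_position",
                   "secondary_contact"}

def material_changes(submitted, current):
    """Return the fields whose values differ between the application as
    submitted and the applicant's present circumstances."""
    return {f for f in MATERIAL_FIELDS if submitted.get(f) != current.get(f)}

# An NDC-like scenario: several profile fields changed after submission.
submitted = {"ownership": "original members", "board_members": ("A", "B"),
             "financial_position": "as filed", "secondary_contact": "X"}
current = {"ownership": "third-party funded", "board_members": ("A",),
           "financial_position": "changed", "secondary_contact": "Y"}

changed = material_changes(submitted, current)
if changed:
    # Under the Guidebook, ICANN *may* re-evaluate or reject at this point;
    # nothing obliges it to do either.
    print("Re-evaluation warranted for:", sorted(changed))

The point of the sketch is that flagging such discrepancies is trivial to do mechanically; what the .WEB episode shows is that nothing in the Guidebook compels ICANN to act on them.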
ICANN’s Sketchy Tryst with Legal Conformity:
ICANN's lack of substantive compliance with California's laws and its own rules and regulations leaves us with the realisation that its efforts towards transparency, enforcement and compliance (even with the emphasis placed on them by the IANA Stewardship and Accountability Process) barely meet the procedural minimum.
- Rejection of Request for Postponement of Auction: ICANN's stated intent to 'initiate the Auction process once the composition of the set is stabilised' implies that there must be no pending accountability mechanisms with regard to any applicant.[23] Since ICANN itself determines the opening and closing of investigations or reviews concerning applicants, arbitrariness on ICANN's part in deciding the date on which mechanisms are deemed pending may affect an applicant's claim of procedural irregularity. In this case, ICANN had already scheduled the auction for July 27, 2016, before Ruby Glen sent in its request for postponement of the auction and an inquiry into NDC's eligibility on June 22, 2016.[24] Even though the accountability mechanisms had begun after the initiation of the auction process, ICANN confirmed the continuance of the process without any assurance about the stability of the contention set, as required by procedure. Ruby Glen's claim about this violation of the auction rules was dismissed by ICANN on the basis that there must be no pending accountability mechanisms at the time of scheduling of the auction.[25] This means that if any objection is raised or any dispute resolution or accountability mechanism is initiated with regard to an applicant at any point after the date of the auction is fixed, the auction process continues even though the contention set may not be stabilised. This line of defence is not in conformity with the purpose behind the wording of ICANN's auction procedure, as discussed above.
- Lack of Adequate Participation in the Discovery Planning Process: In order for Ruby Glen to gather evidentiary support and start the discovery process for the passing of the injunction, ICANN was required under Federal law to engage with it in a conference. However, due to a disagreement as to the extent of participation required from the parties involved, ICANN recorded only a single appearance in court, after which it refused to engage with Ruby Glen.[26] ICANN should have conducted a thorough investigation, based on both NDC's and Verisign's public statements, and engaged more cooperatively in the conference, to comply substantively with its internal procedure as well as its jurisdictional obligations. Under ICANN's Bylaws, it is to ensure that an applicant does not assign its rights or obligations in connection with the application to another party, as NDC did, in order to promote a competitive market and ensure certainty in transactions.[27] However, due to its lack of substantive compliance with due procedure, such bylaws have been rendered weak.
- Demand to Dismiss Ruby Glen's Complaint: ICANN demanded the dismissal of Ruby Glen's complaint on the basis that it was vague and unsubstantiated.[28] After the auction, Ruby Glen's allegations and suspicions about NDC's dishonest behaviour were publicly confirmed by Verisign, making the demand for dismissal ridiculous.
- Inapplicability of ICANN's Bylaws to its Contractual Relationships: ICANN maintained that its bylaws are not part of application documents or contracts with applicants (as it is a not-for-profit public benefit corporation), and that ICANN's liability with respect to a breach of its foundational documents extends only to officers, directors, members, etc.[29] In addition, it said that Ruby Glen had not pleaded any facts suggesting that a duty of care arose from ICANN's contractual relationship with Ruby Glen and Donuts Inc.[30] Its dismissal of, and considerable disregard for, fiduciary obligations like the duty of care and the duty of inquiry in contractual relationships prove the contravention of promised commitments and core values (integral to its entire accountability process), which are to 'apply in the broadest possible range of circumstances'.[31]
- ICANN's Legal Waiver and Public Policy: Ruby Glen had submitted that, under Section 1668 of the California Civil Code, a covenant not to sue was against public policy, and that the legal waiver all applicants were made to sign in the application was unenforceable.[32] This waiver releases ICANN from 'any claims arising out of, or related to, any action or failure to act', and the complaint claimed that such an agreement 'not to challenge ICANN in court, irrevocably waiving the right to sue on basis of any legal claim' was unconscionable.[33] However, ICANN defended the enforceability of the legal waiver, saying that only a covenant not to sue that is specifically designed to avoid responsibility for one's own fraud or willful injury is invalidated under the provisions of the California Civil Code.[34] A waiver incorporating the availability of accountability mechanisms 'within ICANN's bylaws to challenge any final decision of ICANN's with respect to an application' was argued to be completely valid under California's laws. It must be kept in mind that challenges to ICANN's final decisions can make headway only through its own accountability mechanisms (including the Reconsideration Requests Process, the Independent Review Panel and the Ombudsman), which are mostly conducted by, accountable to and applicable at the discretion of the Board.[35] This means that the only recourse for dissatisfied applicants is through processes managed by ICANN, leaving no scope for independence and impartiality in the review or inquiry concerned, as the .WEB case has shown.
- Note: ICANN has also previously argued that its waivers are not restricted by S. 1668 because the parties involved are sophisticated, without an element of oppression, and that these transactions do not involve the public interest, as ICANN does not provide necessary services such as health, transportation, etc.[36] This line of argument shows its continuing refusal to acknowledge responsibility for ensuring access to an essential good in a diverse community, justifying concerns about ICANN's commitment to accessibility and human rights.
Though ICANN is required to remain accountable to the stakeholders of the community through the mechanisms listed in its Bylaws, its repeated difficulty in ensuring that these mechanisms adhere to the purpose behind jurisdictional regulations confirms the hindrances to impartiality, independence and effectiveness.
The Financial Accountability of ICANN’s Auction Proceeds:
The use and distribution of the significant auction proceeds accruing to ICANN have been identified by the internet community as issues central to financial transparency, especially in a future of increasingly frequent contention sets.
- Private Inurement Prohibition and Legal Requirements of Tax-Exempted Organisations: Subject to California's state laws as well as federal laws, the tax exemptions and tax-deductible charitable donations available to not-for-profit public benefit corporations are dependent on ICANN's fulfilment of jurisdictional obligations, including avoiding contracts that may result in excessive economic benefit to a party involved or lead to any deviation from purely charitable and scientific purposes.[37] ICANN's Articles require that it 'shall pursue the charitable and public purposes of lessening the burdens of government and promoting the global public interest in the operational stability of the Internet'.[38] Due to this, ICANN's accumulation of around USD 60 million (the total net proceeds from over 14 contention sets) since 2014 has been viewed with unease, and the exponential increase in that figure after the .WEB controversy is impossible to ignore.[39] For an organisation dedicated to a bottom-up, multi-stakeholder policy development process, the use of a single, ambiguous footnote in ICANN's Guidebook to tackle the complications involving the significant funds that accrue from last resort auctions (without even mentioning the arbiters of their 'appropriate' use) is grossly insufficient.[40]
- Need for Careful and Inclusive Deliberation Over the Use of Auction Proceeds: At the end of the fiscal year 2016, ICANN's balance sheet showed a total of USD 399.6 million. However, the .WEB sale amount was not included in this figure, as the auction took place after the closing date of June 30, 2016.[41] At around seven times the average winning bid, a USD 135 million addition to ICANN's accounts shows the need for greater scrutiny of ICANN's process for the allocation and distribution of these auction proceeds.[42] While finding an 'appropriate purpose' for these funds, it is important that ICANN's legal nature under US jurisdiction, as well as its vision, mission and commitments, be adhered to, in order to help increase public confidence and financial transparency.
- The CCWG Charter on New gTLD Auction Proceeds: ICANN has always maintained that it recognised the concern of 'significant funds accruing as a result of several auctions' at the outset.[43] In March 2015, the GNSO raised issues relating to the distribution of auction proceeds at ICANN52, to address the growing concerns of the community.[44] A Charter was then drafted, proposing the formation of a Cross-Community Working Group (CCWG) on New gTLD Auction Proceeds to help ICANN's Board allocate these funds.[45] After being discussed in detail at ICANN56, the draft charter was forwarded to the various supporting organisations for comments.[46] The Charter received no objections from two organisations and was adopted by the ALAC, ASO, ccNSO and GNSO, following which members and co-chairs were identified from these organisations to constitute the CCWG.[47] It was decided that while ICANN's Board will have final responsibility for the disbursement of the proceeds, the CCWG will be responsible for submitting proposals regarding the mechanism for the allocation of funds, keeping ICANN's fiduciary and legal obligations in mind.[48] While creating proposals, the CCWG must recommend how to avoid possible conflicts of interest, maintain ICANN's tax-exempt status, and ensure diversity and inclusivity in the entire process.[49] It is important to note that the CCWG cannot make recommendations 'regarding which organisations are to be funded or not', but is merely to submit a proposal for the process by which allocation is undertaken.[50] ICANN's Guidebook mentions possible uses for the proceeds, such as 'grants to support new gTLD applications or registry operators from communities', the creation of a fund for 'specific projects for the benefit of the Internet community' and the 'establishment of a security fund to expand use of secure protocols', among others, to be decided by the Board.[51]
- A Slow Process and the Need for More Official Updates: The lack of sufficient communication and updates about any allocation, or the process behind it, in light of ICANN's current total net auction proceeds of USD 233,455,563, speaks of an urgent need for a decision by the Board (based on a recommendation by the CCWG) regarding a timeframe for the allocation of these proceeds.[52] However, the entire process has been very slow, with the first CCWG meeting on auction proceeds scheduled for 26 January 2017 and the lists of members and observers made public only recently.[53] Notably, even parties interested in applying for the same funds at a later stage are allowed to participate in meetings, as long as they include such information in a Statement of Interest and Declaration of Intention, in keeping with the CCWG's efforts towards transparency and accountability.[54]
The worrying consequences of ICANN's lack of financial as well as legal accountability (especially in light of its controversies) remind us of the need for constant reassessment of its commitment to substantive transparency, enforcement and compliance with its rules and regulations. Its current obsessive courtship with procedural regularity alone must not be mistaken for the greater commitment to accountability promised by the post-transition IANA.
[1] DECLARATION OF CHRISTINE WILLETT IN SUPPORT OF ICANN’S OPPOSITION TO PLAINTIFF’S EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER, 2. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-declaration-willett-25jul16-en.pdf)
[2] 4.3, gTLD Applicant Guidebook ICANN, 4-19. (https://newgtlds.icann.org/en/applicants/agb)
[3] NOTICE OF AND EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER; MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT THEREOF, 15. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-ex-parte-application-tro-memo-points-authorities-22jul16-en.pdf)
[4] NOTICE OF AND EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER; MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT THEREOF, 15. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-ex-parte-application-tro-memo-points-authorities-22jul16-en.pdf)
[5] DECLARATION OF CHRISTINE WILLETT IN SUPPORT OF ICANN’S OPPOSITION TO PLAINTIFF’S EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER, 4-7. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-declaration-willett-25jul16-en.pdf)
[6] PLAINTIFF RUBY GLEN, LLC’S NOTICE OF MOTION AND MOTION FOR LEAVE TO TAKE THIRD PARTY DISCOVERY OR, IN THE ALTERNATIVE, MOTION FOR THE COURT TO ISSUE A SCHEDULING ORDER, 3.
[7] (https://www.verisign.com/en_US/internet-technology-news/verisign-press-releases/articles/index.xhtml?artLink=aHR0cDovL3ZlcmlzaWduLm5ld3NocS5idXNpbmVzc3dpcmUuY29tL3ByZXNzLXJlbGVhc2UvdmVyaXNpZ24tc3RhdGVtZW50LXJlZ2FyZGluZy13ZWItYXVjdGlvbi1yZXN1bHRz)
[8] An expedited discovery request can provide the required evidentiary support needed to meet the Plaintiff’s burden to obtain a preliminary injunction or temporary restraining order. (http://apps.americanbar.org/litigation/committees/businesstorts/articles/winter2014-0227-using-expedited-discovery-with-preliminary-injunction-motions.html)
[9] NOTICE OF AND EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER; MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT THEREOF, 2. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-ex-parte-application-tro-memo-points-authorities-22jul16-en.pdf)
[10] (http://domainincite.com/20789-donuts-files-10-million-lawsuit-to-stop-web-auction); (https://www.thedomains.com/2016/08/15/afilias-asks-icann-to-disqualify-nu-dot-cos-135-million-winning-bid-for-web/)
[11] (http://www.domainmondo.com/2016/11/news-review-icann57-hyderabad-india.html)
[12] Art III, Bylaws of Public Technical Identifiers, ICANN. (https://pti.icann.org/bylaws)
[13] 1.4.1.1, gTLD Applicant Guidebook ICANN, 1-39. (https://newgtlds.icann.org/en/applicants/agb)
[14] NOTICE OF AND EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER; MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT THEREOF, 15. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-ex-parte-application-tro-memo-points-authorities-22jul16-en.pdf)
[15] 1.2.1; 1.2.2, gTLD Applicant Guidebook ICANN, 1-21. (https://newgtlds.icann.org/en/applicants/agb)
[16] (https://www.verisign.com/en_US/internet-technology-news/verisign-press-releases/articles/index.xhtml?artLink=aHR0cDovL3ZlcmlzaWduLm5ld3NocS5idXNpbmVzc3dpcmUuY29tL3ByZXNzLXJlbGVhc2UvdmVyaXNpZ24tc3RhdGVtZW50LXJlZ2FyZGluZy13ZWItYXVjdGlvbi1yZXN1bHRz)
[17] 1.2.7, gTLD Applicant Guidebook ICANN, 1-30. (https://newgtlds.icann.org/en/applicants/agb)
[18] DECLARATION OF CHRISTINE WILLETT IN SUPPORT OF ICANN’S OPPOSITION TO PLAINTIFF’S EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER, 4. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-declaration-willett-25jul16-en.pdf)
[19] 1.1.2.5, gTLD Applicant Guidebook ICANN, 1-8. (https://newgtlds.icann.org/en/applicants/agb)
[20] 1.2.1, gTLD Applicant Guidebook ICANN, 1-21. (https://newgtlds.icann.org/en/applicants/agb)
[21] DECLARATION OF CHRISTINE WILLETT IN SUPPORT OF ICANN’S OPPOSITION TO PLAINTIFF’S EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER, 7. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-declaration-willett-25jul16-en.pdf)
[22] 6.8; 6.11, gTLD Applicant Guidebook ICANN, 6-5 (https://newgtlds.icann.org/en/applicants/agb);
DEFENDANT INTERNET CORPORATION FOR ASSIGNED NAMES AND NUMBERS’ MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT OF MOTION TO DISMISS FIRST AMENDED COMPLAINT, 10. (http://domainnamewire.com/wp-content/icann-donuts-motion.pdf)
[23] 1.1.2.10, gTLD Applicant Guidebook ICANN. (https://newgtlds.icann.org/en/applicants/agb)
[24] NOTICE OF AND EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER; MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT THEREOF, 15. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-ex-parte-application-tro-memo-points-authorities-22jul16-en.pdf)
[25] DEFENDANT INTERNET CORPORATION FOR ASSIGNED NAMES AND NUMBERS’ MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT OF MOTION TO DISMISS FIRST AMENDED COMPLAINT, 8. (http://domainnamewire.com/wp-content/icann-donuts-motion.pdf)
[26] 26(f); 65, Federal Rules of Civil Procedure (https://www.federalrulesofcivilprocedure.org/frcp/title-viii-provisional-and-final-remedies/rule-65-injunctions-and-restraining-orders/); (https://www.federalrulesofcivilprocedure.org/frcp/title-v-disclosures-and-discovery/rule-26-duty-to-disclose-general-provisions-governing-discovery/)
[27] 6.10, gTLD Applicant Guidebook ICANN, 6-6. (https://newgtlds.icann.org/en/applicants/agb); (https://www.icann.org/resources/reviews/specific-reviews/cct)
[28] 12(b)(6), Federal Rules of Civil Procedure; DEFENDANT INTERNET CORPORATION FOR ASSIGNED NAMES AND NUMBERS’ MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT OF MOTION TO DISMISS FIRST AMENDED COMPLAINT, 6. (http://domainnamewire.com/wp-content/icann-donuts-motion.pdf)
[29] DEFENDANT INTERNET CORPORATION FOR ASSIGNED NAMES AND NUMBERS’ MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT OF MOTION TO DISMISS FIRST AMENDED COMPLAINT, 8. (http://domainnamewire.com/wp-content/icann-donuts-motion.pdf)
[30] PLAINTIFF RUBY GLEN, LLC’S OPPOSITION TO DEFENDANT INTERNET CORPORATION FOR ASSIGNED NAMES AND NUMBERS’ MOTION TO DISMISS FIRST AMENDED COMPLAINT; MEMORANDUM OF POINTS AND AUTHORITIES, 12.
[31] (https://archive.icann.org/en/accountability/frameworks-principles/legal-corporate.htm); Art. 1(c), Bylaws for ICANN. (https://www.icann.org/resources/pages/governance/bylaws-en)
[32] (http://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=CIV&sectionNum=1668); NOTICE OF AND EX PARTE APPLICATION FOR TEMPORARY RESTRAINING ORDER; MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT THEREOF, 24. (https://www.icann.org/en/system/files/files/litigation-ruby-glen-ex-parte-application-tro-memo-points-authorities-22jul16-en.pdf)
[33] 6.6, gTLD Applicant Guidebook ICANN, 6-4. (https://newgtlds.icann.org/en/applicants/agb)
[34] DEFENDANT INTERNET CORPORATION FOR ASSIGNED NAMES AND NUMBERS’ MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT OF MOTION TO DISMISS FIRST AMENDED COMPLAINT, 18. (http://domainnamewire.com/wp-content/icann-donuts-motion.pdf)
[35] (https://www.icann.org/resources/pages/mechanisms-2014-03-20-en)
[36] AMENDED REPLY MEMORANDUM IN SUPPORT OF ICANN’S MOTION TO DISMISS FIRST AMENDED COMPLAINT, 4. (https://www.icann.org/en/system/files/files/litigation-dca-reply-memo-support-icann-motion-dismiss-first-amended-complaint-14apr16-en.pdf)
[37] 501(c)(3), Internal Revenue Code, USA. (https://www.irs.gov/charities-non-profits/charitable-organizations/exemption-requirements-section-501-c-3-organizations)
[38] Art. II, Public Technical Identifiers, Articles of Incorporation, ICANN. (https://pti.icann.org/articles-of-incorporation)
[39] (https://community.icann.org/display/alacpolicydev/At-Large+New+gTLD+Auction+Proceeds+Discussion+Paper+Workspace)
[40] (https://www.icann.org/policy); 4.3, gTLD Applicant Guidebook ICANN, 4-19. (https://newgtlds.icann.org/en/applicants/agb)
[41] 5, Internet Corporation for Assigned Names and Numbers, Financial Statements As of and for the Years Ended June 30, 2016 and 2015. (https://www.icann.org/en/system/files/files/financial-report-fye-30jun16-en.pdf)
[42] (http://www.theregister.co.uk/2016/07/28/someone_paid_135m_for_dot_web)
[43] (https://community.icann.org/display/CWGONGAP/Cross-Community+Working+Group+on+new+gTLD+Auction+Proceeds+Home)
[44] (https://www.icann.org/public-comments/new-gtld-auction-proceeds-2015-09-08-en)
[45] (https://www.icann.org/news/announcement-2-2016-12-13-en)
[46] (https://www.icann.org/news/announcement-2-2016-12-13-en)
[47] (https://www.icann.org/news/announcement-2-2016-12-13-en)
[48] (https://ccnso.icann.org/workinggroups/ccwg-charter-07nov16-en.pdf); (https://www.icann.org/news/announcement-2-2016-12-13-en)
[49] (https://www.icann.org/public-comments/new-gtld-auction-proceeds-2015-09-08-en)
[50] (https://community.icann.org/display/CWGONGAP/CCWG+Charter)
[51] 4.3, gTLD Applicant Guidebook ICANN, 4-19. (https://newgtlds.icann.org/en/applicants/agb)
[52] (https://newgtlds.icann.org/en/applicants/auctions/proceeds)
[53] (https://community.icann.org/pages/viewpage.action?pageId=63150102)
[54] (https://www.icann.org/news/announcement-2-2016-12-13-en)
CIS’ Efforts Towards Greater Financial Disclosure by ICANN
With the $135 million sale of .WEB,[1] the much-protested renewal of the .net agreement[2] and the continued annual increase in domain name registrations,[3] among other things, it is no surprise that transparency and accountability concerns persist within the ICANN Community. CIS, as part of its efforts to examine the functioning of ICANN's accountability mechanisms, has filed many DIDP requests till date in a bid for greater transparency about the organisation's sources of revenue.
1. Efforts towards disclosure of revenue break-up by ICANN
- 2014
- 2015
- 2017
2. The need for granularity regarding historical revenues
-----
1. Efforts towards disclosure of revenue break-up by ICANN
- 2014
In 2014, CIS' Sunil Abraham demanded greater financial transparency from ICANN at both the Asia Pacific IGF and the ICANN Open Forum at the IGF. Later that year, CIS was provided by ICANN India Head Mr. Samiran Gupta with a list of ICANN's sources of revenue for the financial year 2014, including payments from registries, registrars and sponsors, among others.[4] This was a big step for CIS and the Internet community, as no details of granular income had ever before been publicly divulged by ICANN on request.
However, as no details of historical revenue had been provided, CIS filed a DIDP request in December 2014, seeking financial disclosure of revenues for the years 1999 to 2014, in a detailed manner - similar to the 2014 report that had been provided.[5] It sought a list of individuals and entities who had contributed to ICANN’s revenues over the mentioned time period.
In its response, ICANN stated that it possessed no documents in the format that CIS had requested; that is, it had no reports breaking down domain name income and revenue received from each legal entity and individual.[6] It stated that as the data for the years preceding 2012 were on a different system, compiling reports from the raw data for those years would be time-consuming and overly burdensome. ICANN denied the request citing this specific provision for non-disclosure of information under the DIDP.[7]
- 2015
In July 2015, CIS filed a request for disclosure of raw data regarding granular income for the years 1999 to 2014.[8] ICANN again said that it would be a huge burden ‘to access and review all the raw data for the years 1999 to 2014 in order to identify the raw data applicable to the request’.[9] However, it mentioned its commitment to preparing detailed reports on a go-forward basis - all of which would be uploaded on its Financials page.[10]
- 2017
To follow up on ICANN’s commitment to granularity, CIS sought a detailed report on historical data for income and revenue contributions from domain names for FY 2015 and FY 2016 in June 2017.[11] In its reply, ICANN stated that the Revenue Detail by Source reports for the last two years would be out by end July and that the report for FY 2012 would be out by end September.[12]
2. The need for granularity regarding historical revenues
In 2014, CIS asked for disclosure of a list of ICANN's sources of revenue and of detailed granular income for the years 1999 to 2014. ICANN published the first but cited difficulty in preparing reports for the second. In 2015, CIS again sought detailed reports of historical granular revenue for the same period, and ICANN again denied disclosure, claiming that it was burdensome to handle the raw data for those years. However, as ICANN agreed to publish detailed reports for future years, CIS recently asked for publication of reports for FYs 2012, 2015 and 2016. Reports for these three years were uploaded according to the timeline provided by ICANN.
CIS appreciates ICANN's cooperation with its requests and is grateful for its efforts to make the reports for FYs 2012 to 2016 available (and to continue doing so). However, it is important that detailed information on historical revenue and income from domain names for the years 1999 to 2014 also be made publicly available. It is also crucial that consistent accounting and disclosure practices are adopted and made known to the Community, in order to avoid the omission of statements such as Revenue Detail by Source and Lobbying Disclosures, among many others, from the annual reports, as has evidently happened for the years preceding 2012. This is necessary to maintain financial transparency and accountability, as an organisation's sources of revenue can inform the dependent Community about why it functions the way it does.
It will also allow more informed discussions about problems that the Community has faced in the past and continues to struggle with. For example, while examining problems such as ineffective market competition or biased screening processes for TLD applicants, among others, this data can be useful in assessing the long-term interests, motives and influences of different parties involved.
[1] https://www.icann.org/news/announcement-2-2016-07-28-en
[2] Report of Public Comment Proceeding on the .net Renewal. https://www.icann.org/en/system/files/files/report-comments-net-renewal-13jun17-en.pdf
[3] https://www.icann.org/resources/pages/cct-metrics-domain-name-registration-2016-06-27-en
[4] https://cis-india.org/internet-governance/blog/cis-receives-information-on-icanns-revenues-from-domain-names-fy-2014
[5] DIDP Request no - 20141222-1, 22 December 2014. https://cis-india.org/internet-governance/blog/didp-request-2
[6] https://www.icann.org/en/system/files/files/cis-response-21jan15-en.pdf
[7] Defined Conditions for Non-Disclosure - Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual.
https://www.icann.org/resources/pages/didp-2012-02-25-en
[8] DIDP Request no - 20150722-2, 22 July 2015. https://cis-india.org/internet-governance/blog/didp-request-12-revenues
[9] https://www.icann.org/en/system/files/files/didp-response-20150722-2-21aug15-en.pdf
[10] https://www.icann.org/en/system/files/files/didp-response-20150722-2-21aug15-en.pdf; https://www.icann.org/resources/pages/governance/financials-en
[11] DIDP Request No. 20170613-1, 14 June 2017.
[12] https://www.icann.org/en/system/files/files/didp-20170613-1-marda-obo-cis-response-13jul17-en.pdf
Why Presumption of Renewal is Unsuitable for the Current Registry Market Structure
With the recent renewal of the .net legacy Top-Level Domain (TLD), the question of the appropriate method of renewal is worth reconsidering. Presumption of renewal for registry agreements means that the agreement carries a reasonable expectancy of renewal at the end of its contractual term. According to the current base registry agreement, the agreement shall be renewed for successive 10-year periods upon expiry of the initial (and each successive) term, unless the operator commits a fundamental and material breach of its covenants or breaches its payment obligations to ICANN. A minimal sketch of this rule follows.
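The sketch below simply encodes the renewal rule as paraphrased above; the two disqualifying conditions mirror the base agreement's wording, while the function itself, its names and its example values are assumptions made for illustration.

# Illustrative encoding of the presumptive-renewal rule described above.
RENEWAL_TERM_YEARS = 10

def renews(fundamental_material_breach, payment_breach):
    """Presumption of renewal: the agreement rolls over by default,
    and only the two named breaches can defeat it."""
    return not (fundamental_material_breach or payment_breach)

print(renews(False, False))  # True  -> renewed for another 10-year term
print(renews(True, False))   # False -> renewal can be denied

The design of the rule is what matters here: because the default is renewal rather than re-competition, an incumbent operator effectively retains the TLD indefinitely unless it seriously breaches the agreement.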
A Comparison of Legal and Regulatory Approaches to Cyber Security in India and the United Kingdom
This report compares laws and regulations in the United Kingdom and India to see the similarities and disjunctions in cyber security policy between them. The first part of this comparison will outline the methodology used to compare the two jurisdictions. Next, the key points of convergence and divergence are identified and the similarities and differences are assessed, to see what they imply about cyber space and cyber security in these jurisdictions. Finally, the report will lay out recommendations and learnings from policy in both jurisdictions.
Read the full report here
Breach Notifications: A Step towards Cyber Security for Consumers and Citizens
Electronic data processing has afforded societies many opportunities for improvement that would not otherwise have been possible. Low barriers to market entry for new innovators have produced a flood of applications and automations with the potential to improve citizens' and consumers' lives, as well as government operations. But while the increasing prevalence of electronic hardware and programmable software across society and industry, combined with the intricate value chains of international communications networks, device and equipment markets and software markets, has created a large number of opportunities for economic, social and public activity, it has also brought with it a number of specific problems pertaining to consumer rights.
Counter Comments on TRAI's Consultation Paper on Privacy, Security and Ownership of Data in Telecom Sector
The submission is divided into three main parts. The first part, 'Preliminary', introduces the document. The second part, 'About CIS', gives an overview of the organization. The third part contains the 'Counter Comments' on the Consultation Paper, taking into account the submissions made by other stakeholders.
Download the full submission here
GDPR and India: A Comparative Analysis
The post is written by Aditi Chaturvedi and edited by Amber Sinha
High administrative fines for non-compliance with GDPR provisions are a driving force behind these concerns, as they can lead to loss of business for countries such as India.
To a large extent, the future of business will depend on how well India responds to the regulatory changes unfolding globally. India will have to assess her preparedness and make convincing changes to retain her status as a dependable processing destination. This document gives a brief overview of the data protection provisions of the Information Technology Act, 2000, followed by a comparative analysis of the key provisions of the GDPR and of the Information Technology Act and the Rules notified under it.
Breeding misinformation in virtual space
The phenomenon of fake news has received significant scholarly and media attention over the last few years. In March, Sir Tim Berners-Lee, inventor of the World Wide Web, called for a crackdown on fake news, stating in an open letter that “misinformation, or fake news, which is surprising, shocking, or designed to appeal to our biases, can spread like wildfire.”
Gartner, which annually predicts what the next year in technology will look like, highlighted ‘increased fake news’ as one of its predictions.
The report states that by 2022, the “majority of individuals in mature economies will consume more false information than true information”. Due to its wide popularity and reach, social media has come to play a central role in the fake news debate.
Researchers have suggested that rumours penetrate deeper within a social network than outside it, indicating the susceptibility of this medium. Social networks such as Facebook and communities on messaging services such as WhatsApp groups provide the perfect environment for spreading rumours. Information received via friends tends to be trusted, and online networks allow individuals to transmit information to many friends at once.
In order to understand the recent phenomenon of fake news, it is important to recognise that the problem of misinformation and propaganda has existed for a long time. Historical examples of fake news go back centuries: prior to his accession as Roman Emperor, Octavian ran a disinformation campaign against Marcus Antonius to turn the Roman populace against him.
The advent of the printing press in the 15th century led to widespread publication; however, there were no standards of verification or journalistic ethics. Andrew Pettegree writes in The Invention of News that news reporting in the 16th and 17th centuries was full of portents about “comets, celestial apparitions, freaks of nature and natural disasters.”
In India, the immediate cause of the 1857 War of Independence was the rumour that the bones of cows and pigs were mixed with flour and used to grease the cartridges issued to the sepoys.
Leading up to the Second World War, radio emerged as a strong medium for the dissemination of disinformation, used by the Nazis and other Axis powers. More recently, the milk miracle of the mid-1990s, consisting of stories of idols of Ganesha drinking milk, was a popular fake news phenomenon. In 2008, rumours about the popular snack Kurkure being made out of plastic became so widespread that PepsiCo, its parent company, had to publicly rebut them.
A quick survey of the different kinds of misinformation being circulated in India, conducted by us at the Centre for Internet and Society for a forthcoming report, suggested four different kinds of fake news.
The first is a case of manufactured primary content. This includes instances where the entire premise on which an argument is based is patently false. In August 2017, a leading TV channel reported that electricity had been cut to the Jama Masjid in New Delhi for non-payment of bills. This was based on a false report carried by a news portal.
The second kind of fake news involves the manipulation or editing of primary content so as to misrepresent it as something else. This form of fake news is often seen with multimedia content such as images, audio and video. These two forms of fake news tend to originate outside traditional media such as newspapers and television channels, and can often be traced back to social media and WhatsApp forwards.
However, we see such unverified stories being picked up by traditional media. Further, there are instances where genuine content such as text and pictures is shared with fallacious contexts and descriptions. Earlier this year, several dailies pointed out that an image shared by the ministry of home affairs, purportedly of the floodlit India-Pakistan border, was actually an image of the Spain-Morocco border. In this case, the image was not doctored, but the accompanying information was false.
Third, more complicated cases of misinformation involve facts that are neither false nor manipulated, but are quoted out of context when reported. Most examples of misinformation spread by mainstream media, which has more evolved systems of fact-checking, verification and editorial control, tend to fall under this category.
Finally, there are instances of a lack of diligence in fully understanding issues before reporting on them. Such misrepresentations are often encountered in reporting on fields that require specialised knowledge, such as science and technology, law and finance. These forms of misinformation, while not suggestive of mala fide intent, can still prove quite dangerous in shaping erroneous opinions.
While the widespread dissemination of fake news contributes greatly to its effectiveness, it also has a lot to do with the manner in which it is designed to pander to our cognitive biases. Directionally motivated reasoning prompts people confronted with political information to process it with an intention to reach a certain pre-decided conclusion, and not with the intention to assess it in a dispassionate manner. This further results in greater susceptibility to confirmation bias, disconfirmation bias and prior attitude effect.
Fake news is also linked to the idea of “naïve realism,” the belief people have that their perception of reality is the only accurate view, and those in disagreement are necessarily uninformed, irrational, or biased. This also explains why so much fake news simply does not engage with alternative points of view.
A well-informed citizenry and institutions that provide good information are fundamental to a functional democracy. The use of the digital medium for the fast, unhindered and unchecked spread of information presents fertile ground for those seeking to spread misinformation. How we respond to this issue will be vital for democratic societies in our immediate future. Fake news presents a complex regulatory challenge that requires the participation of different stakeholders: content disseminators, platforms, norm guardians (including institutional fact-checkers, trade organisations and “name-and-shame” watchdogs), regulators and consumers.
AI and Healthcare in India: Looking Forward
Edited by Roshni Ranganathan
The Roundtable consisted of participants from across the AI and healthcare spectrum, from medical practitioners and medical startups to think tanks. It discussed various questions regarding AI and healthcare, with a special focus on India.
The Roundtable discussion began with the results of the primary research conducted by CIS on AI and healthcare. CIS, in its research, identified three main uses of AI in healthcare: supporting diagnosis, early identification and imaging diagnosis. The benefits identified were faster diagnosis, personalised treatment and the bridging of the manpower gap. Questions regarding medical ethics, privacy, regulatory certainty, social acceptance and trust were identified as the barriers to the use of AI in healthcare. The cases chosen for study were IBM Watson (used by Manipal Hospitals in India), DeepMind (used by the NHS in the UK), Google Brain (used by Aravind Healthcare in India) and SigTuple (an AI-based pathologist assistant). CIS wished to explore the ethical side of this topic, the question of public interest and the need to protect patients from harm. The session was then opened for discussion on the following issues.
Artificial Intelligence - Literature Review
Edited by Amber Sinha and Udbhav Tiwari; Research Assistance by Sidharth Ray
With origins dating back to the 1950s, Artificial Intelligence (AI) is not necessarily new. However, a growing number of real-world applications has reignited interest in AI over the last few years.
The rapid and dynamic pace of AI's development has made it difficult to predict its future path, and is enabling it to alter our world in ways we have yet to comprehend. As a result, law and policy have stayed one step behind the development of the technology.
Understanding and analyzing existing literature on AI is a necessary precursor to subsequently recommending policy on the matter. By examining academic articles, policy papers, news articles, and position papers from across the globe, this literature review aims to provide an overview of AI from multiple perspectives.
The structure taken by the literature review is as follows:
- Overview of historical development
- Definitional and compositional analysis
- Ethical & Social, Legal, Economic and Political impact and sector-specific solutions
- The regulatory way forward
This literature review is a first step in understanding the existing paradigms and debates around AI before narrowing the focus to more specific applications and subsequently, policy-recommendations.