
Can the Matters Dealt with in the Aadhaar Act be the Objects of a Money Bill?

by Pooja Saxena — last modified Apr 24, 2016 02:15 PM
In this infographic, we highlight the matters dealt with in the Aadhaar Act 2016, recently tabled in and passed by the Lok Sabha as a money bill, and consider if these can be objects of a money bill. The infographic is designed by Pooja Saxena, based on information compiled by Sumandro Chattapadhyay and Amber Sinha.

 

Download the infographic: PDF and JPG.

 

License: It is shared under Creative Commons Attribution 4.0 International License.

 

Can the matters dealt with in the Aadhaar Act be the objects of a money bill?

 

The Aadhaar Act is Not a Money Bill

by Amber Sinha — last modified Apr 25, 2016 10:51 AM
While the authority of the Lok Sabha Speaker is final and binding, Jairam Ramesh’s writ petition may allow the Supreme Court to question an incorrect application of substantive principles. This article by Amber Sinha was published by The Wire on April 24, 2016.

 

Originally published by The Wire on April 24, 2016.


Since its introduction as a money bill in the Lok Sabha in the first week of March [1], the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 has been embroiled in controversy. The Lok Sabha rejected the five recommendations of the Rajya Sabha and adopted the bill on March 16, and only presidential assent was required for it to become valid law. However, former Union Minister Jairam Ramesh filed a writ petition contesting the decision to treat the Aadhaar Bill as a money bill. The petition is due to be heard before the Supreme Court on April 25, and should the court decide to entertain the petition, it could have far-reaching implications for the Aadhaar project and the manner in which money bills are passed by Parliament.

There are three broad categories of bills (all proposed legislation is known as a 'bill' until it is passed by Parliament) that Parliament can pass. The first kind, Constitution Amendment Bills, are those that seek to amend a provision of the Constitution of India. The second are financial bills, which contain provisions on matters of taxation and expenditure. Money bills are a subset of financial bills that contain provisions related only to taxation, financial obligations of the government, expenditure from or receipts into the Consolidated Fund of India, and matters incidental to these. The third category, ordinary bills, includes all other bills. The enactment process differs for each. Money bills are peculiar in that they can only be introduced in the Lok Sabha, where they can be passed by a simple majority. The bill is then transmitted to the Rajya Sabha, whose powers are restricted to making recommendations and returning the bill to the Lok Sabha, which is under no obligation to accept them. The decision to introduce the Aadhaar Bill as a money bill has been widely seen as an attempt to circumvent the Rajya Sabha, where the ruling party is in a minority.

Article 110(1) of the Constitution defines a money bill as one containing provisions only regarding the enumerated matters or any matters incidental to them. These are: a) the imposition, regulation and abolition of any tax; b) borrowing or other financial obligations of the Government of India; c) custody of, withdrawal from or payment into the Consolidated Fund of India (CFI) or the Contingency Fund of India; d) appropriation of money out of the CFI; e) expenditure charged on the CFI; or f) receipt, custody or audit of money into the CFI or the public account of India. Article 110 is modelled on Section 1(2) of the (UK) Parliament Act, 1911, which also defines money bills as those dealing only with certain enumerated matters. The use of the word "only" was brought up by Ghanshyam Singh Gupta during the Constituent Assembly debates. He pointed out that the word "only" limits the scope of money bills to those legislations which do not deal with other matters. His amendment to delete the word "only" was rejected, clearly establishing the intent of the framers of the Constitution to keep the ambit of money bills extremely narrow.

While the Aadhaar Bill does make references to benefits, subsidies and services funded by the Consolidated Fund of India (CFI), even a cursory reading of the bill reveals its main objectives to be creating a right to obtain a unique identification number and providing a statutory apparatus to regulate the entire process. The mere fact of establishing the Aadhaar number as the identification mechanism for benefits and subsidies funded by the CFI does not give it the character of a money bill. The bill merely speaks of facilitating access to unspecified subsidies and benefits, rather than their creation and provision being the primary object of the legislation. Erskine May's seminal textbook, 'Parliamentary Practice', is instructive in this respect and makes it clear that a legislation which simply makes a charge on the Consolidated Fund does not become a money bill if its character is otherwise not that of one.

PDT Achary, former secretary general of the Lok Sabha, has expressed concern about the use of money bills as a means to circumvent the Rajya Sabha. He has written here [2] and here [3] on what constitutes a money bill and how attempts to pass off financial bills like the Aadhaar Bill as money bills could erode the supervisory role the Rajya Sabha is supposed to play. This is especially true of a legislation like the Aadhaar Bill, which has far-reaching implications for individual privacy: it governs the identification system conceptualised to provide a unique and lifelong identity to residents of India dealing with both the analog and digital machinery of the state and, by virtue of Section 57, with private entities as well. Already over 1 billion people have been enrolled under this identification scheme, and the project has been the subject of much debate and a petition before the Supreme Court. The project has been portrayed both as the last hope for a welfare state and as surveillance infrastructure. Regardless of which end of the spectrum one leans towards, it is undeniable that the law governing the Aadhaar project deserved a proper debate in Parliament. Even strong proponents of the project must accept that the decision to pass it off as a money bill undermines the importance of democratic processes, and is a travesty of the Constitution and a blatant abrogation of the constitutional duties of the Speaker.

The petition by Jairam Ramesh will hinge largely on the powers of the judiciary to question the decision of the Speaker of the Lok Sabha. Article 110(3) is very clear in pronouncing the authority of the Speaker as final and binding. Additionally, Article 122 prohibits the courts from questioning the validity of any proceedings in Parliament on the ground of any alleged irregularity of procedure. The powers of privilege that parliamentarians enjoy are integral to the principle of separation of powers. However, the courts may be able to draw a fine distinction between inquiring into procedural irregularity, which is prohibited by the Constitution, and questioning an incorrect application of substantive principles, which, I would argue, is the case with the Speaker's decision.

References

[1] See: http://thewire.in/2016/03/07/arun-jaitley-introduces-money-bill-on-aadhar-in-lok-sabha-24115/.

[2] See: http://indianexpress.com/article/opinion/columns/show-me-the-money-4/.

[3] See: http://www.thehindu.com/opinion/lead/circumventing-the-rajya-sabha/article7531467.ece.

 

Cyber Security of Smart Grids in India

by Elonnai Hickok and Vanya Rakesh — last modified Apr 28, 2016 03:34 PM
An integral component of the Indian government's ambitious flagship programme, Digital India, which paves the way for a digital data avalanche in the country, is a well-designed digital infrastructure ensuring high connectivity and integration of services; the potential areas include smart cities, smart homes, smart energy and smart grids, to list a few. Likewise, the 100 Smart Cities Mission envisions changing the face of urbanisation in India, managing the exponential growth of population in cities by creating smart cities with ICT-driven solutions along with big data analytics. Smart grid technologies are key to both these schemes.

The article by Elonnai Hickok and Vanya Rakesh was published by Dataquest on April 25, 2016.


A smart grid is a promising power delivery infrastructure, integrated with communication and information technologies, that enables the monitoring, prediction and management of energy usage. Establishing smart grids is highly important for the Indian economy, as present grid losses are among the highest in the world at up to 50%, costing India up to 1.5% of its GDP. India operates one of the largest synchronous grids in the world, covering an area of over 3 million sq km, with 260 GW of capacity and over 200 million customers, and India's estimated demand is expected to increase fourfold by the year 2032.

In 2013, the Ministry of Power (MoP), in consultation with the India Smart Grid Forum and the India Smart Grid Task Force, released a smart grid vision and roadmap for India, a key policy document aligned with MoP's overarching objective of "Access, Availability and Affordability of Power for All". It also lays out plans for a framework to address cyber security concerns in smart grids. To achieve the goals envisaged in the roadmap, the Government of India established the National Smart Grid Mission in 2015 for the planning, monitoring and implementation of policies and programmes related to smart grid activities.

A number of smart grid projects have been introduced and are currently underway. In Kerala, KEPCO established a smart meter and intelligent power transmission and distribution system in 2011; its smart grid operations focus on peak reduction, load standardisation, reduction in transmission and distribution losses, response to new and renewable energy, and reduction in blackout time. Gujarat received India's first modernised electrical grid in 2014, set up to study consumers' electricity-usage behaviour and propose a tariff structure based on usage and load on the power utility, by installing new meters embedded with SIM cards to monitor the data. The Bangalore Electricity Supply Company Ltd. (BESCOM) Smart Grid Pilot Project envisaged the integration of renewable and distributed energy resources into the grid, which is vital to meet the country's growing electricity demand, curb power losses, and enhance access to quality power.

Cybersecurity challenges

At the same time, the introduction of a smart grid brings with it certain security risks and concerns, particularly for a nation's cyber security. Increased interconnection and integration may render grids vulnerable to cyber threats, putting stored data and computers at great risk. With sufficient cyber security measures, policies and frameworks in place, a smart grid can be made more efficient, reliable and secure; failure to address these problems will hinder the modernisation of the existing power system. Smart grids, comprising numerous communication, intelligence, monitoring and electrical elements employed in the power grid, have a greater exposure to cyber-attacks that can potentially disrupt power supply in a city.

Cyber security and data privacy are among the key challenges for smart grids in India, as building digital electricity infrastructure entails the challenges of communication security and data management. Digital networks and systems are highly prone to malicious attacks from hackers, which can lead to misuse of consumers' data, making cyber security the key issue to be addressed. Vulnerabilities allow an attacker to break into a system, compromise user privacy, acquire unauthorised access to control software, and modify load conditions to destabilise the grid. Attackers who compromise a smart meter can immediately alter their energy costs or change generated energy meter readings to monetise them, with the help of remote PCs. Inserting false information could also mislead the electric utility into making incorrect decisions about local usage and capacity.
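The false-data-injection risk described above is often countered, at minimum, with plausibility checks on reported readings. The sketch below is a toy illustration of such a check; the threshold and data are invented for this example, and real utilities rely on far more sophisticated statistical or machine-learning detection:

```python
def flag_suspect_readings(readings: list[float], max_jump: float = 5.0) -> list[int]:
    """Flag indices where consumption changes implausibly between intervals.
    A crude plausibility check: any jump larger than max_jump kWh is suspect."""
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > max_jump]

# kWh per interval; the 9.8 models an injected (falsified) value.
usage = [1.2, 1.3, 1.1, 9.8, 1.2]
print(flag_suspect_readings(usage))  # -> [3, 4]: the jump up and back down
```

A flagged reading would typically be held back for manual review rather than fed directly into load-dispatch decisions.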

Initiatives in India

As cybersecurity is critical for Digital India, and the Smart City concept note requires a smart grid to be resilient to cyber attacks, a National Cyber Coordination Centre is being established by the Indian government. The National Cyber Safety and Security Standards initiative has also been started, with a vision of safeguarding the nation from current threats in cyberspace: it undertakes research to understand the nature of cyber threats and cyber crimes, and facilitates a common platform where experts can provide effective solutions to the complex and alarming problems in the cyber security domain. Innovative strategies and compliance procedures are being developed to address the increasing complexity of the global cyber threats faced by countries at large.

The National Cyber Security Policy 2013 was released by the Department of Electronics and Information Technology (DeitY) as an umbrella framework providing guidance for actions related to the security of cyberspace. The Working Group on Information Technology established under the Planning Commission has also published a 12-year plan on IT development in India with a roadmap for cyber security, stating six key priority and focus areas: an enabling legal framework; security policy, compliance and assurance; security R&D; security incident early warning and response; security awareness, skill development and training; and collaboration.

In the case of Bangalore, to ensure smooth implementation of BESCOM's vision, the company realised the need to put a cyber-security system in place to protect the smart grid installations in the city. To this end, BESCOM has come out with a separate IT security policy and a dedicated, trained IT cadre to safeguard its data and servers, becoming one of the few discoms in India to take such measures to protect its servers and data network from cyber crimes and threats.

Way forward

An electric system like the smart grid has enormous and far-reaching economic and social benefits. However, increased interconnection and integration tend to introduce cyber-vulnerabilities into the grid. Given how cyber threats and attacks evolve over time, there are many challenges to implementing cyber security in the Indian smart grid. Considering the importance of secure smart grid networks for flagship projects in India, the existing regulatory framework does not seem to adequately take the cyber security implications into consideration.

In light of this, the government must aim to develop and adopt a high-level cybersecurity policy to withstand cyber-attacks. India must also focus on skills development in this domain and build a capable workforce to achieve the targets set by the government. The country must look to develop an overall intelligence framework that brings together industry, governments and individuals with specific capabilities for this purpose.

The National Cyber Security Policy 2013, which aims to protect public and private infrastructure from cyber attacks, along with all kinds of information such as the personal, banking and financial information of web users, is yet to be properly implemented by the government. In the Indian power sector, cyber security regulations or mandates are absent: neither the National Electricity Policy (NEP) nor the Electricity Act 2003 and its 2007 amendment makes any reference to cyber security concerns. These key legislations must be amended to take into account the growing challenges posed by the increased use of ICT in the power sector.

As the concept of smart grids is still evolving in India, professional intervention from various domains has pushed for the adoption and development of standard processes and products. Many international standard-setting organisations, such as IEC, IEEE, NIST and CENELEC, are engaged in smart grid standardisation activities, and in India the Bureau of Indian Standards (BIS) has been rolling out standards targeting various technologies. BIS must therefore develop standards that take into account the security challenges in cyberspace as well.

Apart from policy and regulatory measures, the systems on which smart grids are built and networked must be made architecturally strong and secure. One area requiring due attention is securing Supervisory Control and Data Acquisition (SCADA) systems, which operate with coded signals to control remote equipment and are based entirely on computer systems and networks. Numerous systems also employ Public Key Infrastructure (PKI) to secure smart grids and address security challenges by enabling the identification, verification, validation and authentication of connected meters for network access. This can be leveraged to secure data integrity, revenue streams and service continuity. The key areas vulnerable to cyber attacks on information transmission are network information, data integrity and the privacy of information. Information transmission networks must be well designed, as network unavailability may result in the loss of real-time monitoring of critical smart grid infrastructure and in power system disasters.
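As an illustration, the PKI-style checks described above (identifying and validating a connected meter before granting network access) might be sketched as follows. This is a schematic only: the field names, trust anchor and revocation list are hypothetical, and a real deployment would verify X.509 certificate signature chains rather than plain fields:

```python
from datetime import datetime, timezone

# Hypothetical, simplified stand-ins for a utility's PKI state.
TRUSTED_ISSUERS = {"UtilityRootCA"}   # assumed trust anchor
REVOKED_SERIALS = {"MTR-0042"}        # assumed revocation list (CRL)

def authenticate_meter(cert: dict, now: datetime) -> bool:
    """Run the basic PKI-style checks before granting network access:
    trusted issuer, validity window, and revocation status."""
    if cert["issuer"] not in TRUSTED_ISSUERS:
        return False                  # unknown certificate authority
    if not (cert["not_before"] <= now <= cert["not_after"]):
        return False                  # expired or not yet valid
    if cert["serial"] in REVOKED_SERIALS:
        return False                  # meter credential was revoked
    return True                       # identification and validation passed

if __name__ == "__main__":
    now = datetime(2016, 5, 1, tzinfo=timezone.utc)
    good = {"issuer": "UtilityRootCA", "serial": "MTR-0001",
            "not_before": datetime(2015, 1, 1, tzinfo=timezone.utc),
            "not_after": datetime(2020, 1, 1, tzinfo=timezone.utc)}
    revoked = dict(good, serial="MTR-0042")
    print(authenticate_meter(good, now))     # True: all checks pass
    print(authenticate_meter(revoked, now))  # False: serial is revoked
```

In an actual smart grid head-end, these checks sit on top of cryptographic signature verification, which is what makes the identity claims in the certificate trustworthy in the first place.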

Addressing these fast-growing challenges and the cyber security needs of the country by adopting suitable regulatory, policy and architectural steps would help achieve the objectives of Digital India and Smart Cities, enabling "Access, Availability and Affordability for All".

Privacy Issues with DRM

by Jalaj Pandey — last modified May 03, 2016 02:41 AM
This post was written by Jalaj Pandey, an intern at CIS. It elaborates upon the various privacy issues with Digital Rights Management, discussing the ways in which content producers use DRM in a manner that infringes the privacy of end users.

Nehaa Chaudhari provided inputs and edited the blog post. Click to download the file.


The ubiquity of the internet in today's world has made content and information sharing an easy task. A media file can be shared and made public with hardly any technical obstacles. Issues like hacking, unauthorised copying and publication, and unlicensed usage have become concerns for content producers, who have employed Digital Rights Management (hereafter DRM) measures to address some of them.

Several instances of online privacy intrusion by content producers have been recorded. In such a scenario, balancing the rights of content producers and end users becomes important, and it is imperative to find common ground to safeguard the interests of both parties. In the recent past, DRM has received a lot of flak because of the privacy issues raised by users.

In its most rudimentary form, privacy can be explained as any information about an individual which he or she does not want made public, judged from the perspective of an ordinary, reasonable person. The UN Universal Declaration of Human Rights, 1948, recognises privacy as a fundamental right of every human. The functioning of DRM is based on restricting the usage or distribution of content. Since this restriction is only possible after the end user is formally identified, content producers end up collecting information about users. For example, DRM for a music file might work in a manner where the file can only be accessed from the computer on which the user first accesses and registers it: the DRM records the IP address of the system and makes the file function only at that IP address. In this way the producer ends up collecting information about the end user. Different DRM models collect user information in different ways: one collects IP addresses, while another tracks user information via downloads, browsing activity, subscription services, and so on. A usage log is generated, which becomes a valuable asset for assessing and predicting users' preferences.
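The device-binding behaviour described in this paragraph can be sketched as a toy example. This is not any vendor's actual scheme; the fingerprint inputs and hashing are assumptions made purely for illustration, but they show why such a check requires the vendor to collect identifying data about the user's machine:

```python
import hashlib

def device_fingerprint(ip_address: str, machine_id: str) -> str:
    """Derive a stable identifier from device attributes.
    Real DRM systems combine many more signals (hardware IDs, OS data)."""
    return hashlib.sha256(f"{ip_address}|{machine_id}".encode()).hexdigest()

def register_license(ip_address: str, machine_id: str) -> dict:
    """On first playback, the content is bound to this device's fingerprint.
    Note: the vendor now holds identifying data about the user's device."""
    return {"bound_to": device_fingerprint(ip_address, machine_id)}

def may_play(license_record: dict, ip_address: str, machine_id: str) -> bool:
    """Playback is allowed only on the device the license was bound to."""
    return license_record["bound_to"] == device_fingerprint(ip_address, machine_id)

lic = register_license("203.0.113.7", "machine-A")
print(may_play(lic, "203.0.113.7", "machine-A"))   # same device: True
print(may_play(lic, "198.51.100.9", "machine-B"))  # different device: False
```

The privacy consequence is visible in the sketch itself: to enforce the restriction, `register_license` must capture and retain device-identifying attributes.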

Two contentions have been raised regarding the privacy issues of DRM:

a) what accountability attaches to this process, and

b) whether it puts content producers in a position where they can control users.

The information collected is under the control of content producers, who mostly store it in the form of databases. BEUC (the European Consumer Organisation) has claimed that DRM systems technologically enable content providers to monitor private consumption of content, create reports of consumption, and profile users.

The information is at the disposal of the content producers. An assessment of DRM applications under Canadian privacy law showed that firms did not even recognise customers' privacy issues as a priority. In fact, the firms failed to provide the information that was stored in their databases. This gives an idea of the lack of transparency that exists in the collection of information about users. The question of whether users are aware of what information is being collected, and of the extent to which they are tracked online, remains unanswered. The CEN/ISSS (European Committee for Standardization / Information Society Standardisation System) has pointed out that DRM has a large potential to generate and transmit personal information about users, and it has been characterised by unprecedented levels of monitoring by various content producers.

Further, the principled argument here concerns the collection of information without any consent from the user. If any information is collected or stored by producers, it should only be after taking the user's consent. Surveillance and compelled disclosure of information about intellectual consumption threaten rights to personal integrity.

DRM takes away the anonymity of consumption. Since producers can practically monitor a user's content usage, this has led to wide-scale price discrimination: producers monitor and assess the preferences of users and subsequently raise the prices of that particular class of products. A report of FIPR (the Foundation for Information Policy Research) found that Microsoft had been trying to implement DRM systems in its products using a similar approach to gain a monopoly position, as in its strategy of browser implementation.

The Sony BMG copy protection rootkit scandal of 2005 brought much criticism to DRM. It was found that Sony BMG had introduced illegal and harmful copy protection measures on its CDs. The rootkit element of the software was used to hide virtually all traces of the copy protection software's presence on a PC, so that an ordinary computer user would have no way to find it. Beyond the DRM itself, the software also opened the user's system to a number of malware infections and created vulnerabilities in the system. Sony was eventually made to compensate consumers for their costs. The question of whether the databases in the hands of companies can be used in an arbitrary manner was intensely discussed after this episode.

It is essential that an effective framework is brought into effect to cater to the privacy interests of users. Privacy is a basic human right, and the onus is on the State to protect and safeguard it. The State must not compromise by supporting mechanisms that promote the welfare of content producers over users. The balance between users and producers becomes all the more important in a developing country like ours, where a lack of awareness and knowledge, coupled with increasing internet usage, can lead to the exploitation of many. States must see through these problems and collectively find an all-encompassing solution.

References



K. G. Coffman and A. M. Odlyzko, Growth of the Internet, AT&T Labs - Research, July 6, 2001, available at, ( www.dtc.umn.edu/~odlyzko//doc/oft.internet.growth.pdf) (hereinafter Growth).

The Daily Source, The Growing Impact of the Internet, April 4, 2016, available at (https://www.dailysource.org/about/impact).

Corynne McSherry, Adobe Spyware Reveals (Again) The Price Of DRM: Your Privacy And Security, Electronic Frontier Foundation, October 17, 2014, available at (https://www.eff.org/deeplinks/2014/10/adobe-spyware-reveals-again-price-drm-your-privacy-and-security).

Digital Rights Management: A Failure in the Developed World, a Danger to the Developing World, Electronic Frontier Foundation, March 23, 2005, available at (https://www.eff.org/wp/digital-rights-management-failure-developed-world-danger-developing-world).

R. Subramanya and Byung K. Yi, Digital Rights Management, available at (https://www.academia.edu/8054608/Digital_Rights_Management) (hereinafter Digital Rights Management).

Global Internet Liberty Campaign, Privacy and Human Rights: An International Survey of Privacy Laws and Practice, available at (http://gilc.org/privacy/survey/intro.html).

Ann Cavoukian, Privacy and Digital Rights Management (DRM): An Oxymoron, Information and Privacy Commissioner, Ontario, available at (https://www.ipc.on.ca/images/Resources/up-1drm.pdf) (hereinafter Oxymoron).

Varian, H.R. (1985) 'Price discrimination and social welfare', American Economic Review, Vol. 75, available at (http://www.economics-ejournal.org/economics/journalarticles/2007-1/references/Varian1985).

Privacy and Digital Rights Management, A Position Paper for the W3C Workshop on Digital Rights Management, January 2001, available at (www.w3.org/2000/12/drm-ws/pp/hp-poorvi.html).

Growth, supra note 1.

Digital Rights Management, supra note 5.

Thierry Rayna, Privacy or piracy, why choose? Two solutions to the issues of digital rights management and the protection of personal information, Intellectual Property Management, Vol. X, No. Y, available at (www.inderscienceonline.com/doi/abs/10.1504/IJIPM.2008.021138).

Oxymoron, supra note 7.

BEUC, Consumentenbond and CLCV at DRM Working Group 1 (2002), available at (https://privacy.org.nz/assets/Files/4558510.pdf).

Natali Helberger, Kristóf Kerényi and Bettina Krings, Digital Rights Management and Consumer Acceptability: A Multi-Disciplinary Discussion of Consumer Concerns and Expectations, available at (citeseerx.ist.psu.edu/showciting?cid=733532).

Knud Bohle, INDICARE, Research into Unfriendly DRM: A Review, December 2004, available at (citeseerx.ist.psu.edu/showciting?cid=733532) (hereinafter Indicare).

European Committee for Standardization/Information Society Standardisation System (CEN/ISSS), DRM Report, 2003.

Indicare, supra note 16.

News Release, "Forrester Technographics Finds Online Consumers Fearful of Privacy Violations" (October 27, 1999), available at (www.forrester.com/ER/Press/Release/0,1769,177,FF.html).

Julie E. Cohen, DRM and Privacy, Georgetown Law Faculty Publications, January 2010, available at (https://www.academia.edu/2164013/DRM_and_Privacy).

Thierry Rayna, Privacy or piracy, why choose? Two solutions to the issues of digital rights management and the protection of personal information, Intellectual Property Management, available at (www.inderscienceonline.com/doi/abs/10.1504/IJIPM.2008.021138) (hereinafter Privacy or piracy).

Moe, W. and Fader, P. (2004) 'Dynamic conversion behavior at e-commerce sites', Management Science, Vol. 50, available at (https://www.researchgate.net/publication/227447618_Dynamic_Conversion_Behavior_at_E-Commerce_Sites).

Privacy or piracy, supra note 21.

Sismeiro, C. and Bucklin, R. (2004) 'Modeling purchase behavior at an e-commerce web site: a task completion approach', Journal of Marketing Research, available at (citeseerx.ist.psu.edu/showciting?cid=906878).

Ross Anderson, Foundation for Information Policy Research, Consultation Response to DRM (2004), available at (www.fipr.org/APIG_DRM_submission.pdf).

Mark Russinovich, Sony, Rootkits and Digital Rights Management Gone Too Far, October 31, 2005, available at (https://blogs.technet.microsoft.com/markrussinovich/2005/10/31).

Sony BMG Litigation Info, Electronic Frontier Foundation, available at, (https://www.eff.org/cases/sony-bmg-litigation-info).


Identity of the Aadhaar Act: Supreme Court and the Money Bill Question

by Vanya Rakesh and Sumandro Chattapadhyay — last modified May 09, 2016 11:52 AM
A writ petition was filed by former Union minister Jairam Ramesh on April 6 challenging the constitutionality and legality of the treatment of this Act as a money bill. The Supreme Court heard the matter on April 25 and invited the Union government to present its view. It is our view that the Supreme Court can not only review the Lok Sabha speaker's decision, but should also ask the government to draft the Aadhaar Bill again, this time with greater parliamentary and public deliberation. Vanya Rakesh and Sumandro Chattapadhyay wrote this article for The Wire.

 

Published by and cross-posted from The Wire.


The Aadhaar Act 2016, passed in the Lok Sabha on March 16, 2016, faced opposition ever since it was tabled in parliament. In particular, the move to introduce it as a money bill has been vehemently challenged on grounds of this being an attempt to bypass the Rajya Sabha completely. A writ petition has been filed by former Union minister Jairam Ramesh on April 6 challenging the constitutionality and legality of the treatment of this Act as a money bill. The Supreme Court heard the matter on April 25 and invited the Union government to present its view.

It is our view that the Supreme Court can not only review the Lok Sabha speaker’s decision, but should also ask the government to draft the Aadhaar Bill again, this time with greater parliamentary and public deliberation.

The money bill question

M.R. Madhavan has argued that the Aadhaar Act contains matters other than "only" those incidental to expenditure from the consolidated fund: it establishes a biometrics-based unique identification number for beneficiaries of government services and benefits, but also allows the number to be used for other purposes beyond service delivery. While Pratap Bhanu Mehta calls this a subversion of "the spirit of the constitution", P.D.T. Achary, former secretary general of the Lok Sabha, has expressed concern about attempts to pass off financial bills like Aadhaar as money bills as a means to circumvent and erode the supervisory role of the Rajya Sabha. Arvind Datar has further emphasised that when the primary purpose of a bill is not governed by Article 110(1), certifying it as a money bill is an unconstitutional act.

Article 110(1) of the Constitution identifies a bill as a money bill if it contains “only” provisions dealing with the following matters, or those incidental to them:

  1. imposition and regulation of any tax,
  2. financial obligations undertaken by the Indian Government,
  3. payment into or withdrawal from the Consolidated Fund of India (CFI) or the Contingency Fund of India,
  4. appropriation of money and expenditure charged on the CFI or its receipts, and
  5. custody, issue or audit of money of the CFI or the public account of India.

However, the Act's link with the Consolidated Fund of India is rather tenuous, since it depends on the Union or state governments declaring a certain subsidy to be available upon verification of the Aadhaar number. The objectives and validity of the Act would not actually change if the Aadhaar number were no longer directly connected to the delivery of services. The use of the word "if" in Section 7 explicitly leaves scope for a situation where the government does not declare Aadhaar verification as necessary for accessing a subsidy. In such a scenario, the Act would still be valid, but without any formal connection to any charges on the Consolidated Fund of India.

A case of procedural irregularity?

The Constitution of India borrows the idea of giving the speaker the authority to certify a bill as a money bill from British law, but operationalises it differently. In the UK, though the speaker’s certificate on a money bill is conclusive for all purposes under section 3 of the Parliament Act 1911, the speaker is required to consult two senior members, usually one from either side of the house, drawn from the panel of senior MPs who chair general committees. In India, the speaker makes the decision alone.

Although Article 110(3) of the Indian constitution states that the decision of the speaker of the Lok Sabha shall be final if a question arises as to whether a bill is a money bill or not, this does not restrict the Supreme Court from entertaining and hearing a petition contesting the speaker’s decision. As the Aadhaar Act was introduced in the Lok Sabha as a money bill even though it does not meet the necessary criteria for such a classification, this treatment of the bill may be considered an instance of procedural irregularity.

There is ample jurisprudence on what happens when the Supreme Court’s power of judicial review comes up against Article 122 – which states that the validity of any proceedings in parliament shall not be called into question on the ground of any alleged irregularity of procedure. In the crucial judgment of Raja Ram Pal vs Hon’ble Speaker, Lok Sabha and Others (2007), the court evaluated the scope of judicial review and observed that although parliament is supreme, unlike in Britain, proceedings found to suffer from substantive illegality or unconstitutionality, as opposed to mere irregularity, cannot be held protected from judicial scrutiny by Article 122. Deciding upon the scope for judicial intervention in respect of the exercise of power by the speaker, in Kihoto Hollohan vs Zachillhu and Ors. (1992), the Supreme Court held that though the speaker of the house holds a pivotal position in a parliamentary democracy, the decision of the speaker (while adjudicating on a disputed disqualification) is subject to judicial review that may look into the correctness of the decision.

Several past decisions of the Supreme Court discuss how the tests of legality and constitutionality help decide whether parliamentary proceedings are immune from judicial review or not. In Ramdas Athawale vs Union of India (2010), the case of Keshav Singh vs Speaker, Legislative Assembly (1964) was referred to, in which the judges had unequivocally upheld the judiciary’s power to scrutinise the actions of the speaker and the houses. It was observed that if the parliamentary procedure is illegal and unconstitutional, it would be open to scrutiny in a court of law and could be a ground for interference by courts under Article 32, though the immunity from judicial interference under this article is confined to matters of irregularity of procedure. These observations were reiterated in Mohd. Saeed Siddiqui vs State of Uttar Pradesh (2014) and Yogendra Kumar Jaiswal vs State of Bihar (2016).

Thus, the decision of the Lok Sabha speaker to certify a bill as a money bill is definitely not immune from judicial review. Additionally, the Supreme Court has the power to issue directions, orders or writs for the enforcement of rights under Article 32 of the constitution, thereby allowing the judiciary to decide upon the manner in which the Aadhaar Act was introduced in parliament.

National implications demand public deliberation

As the provisions of the Aadhaar Act have far reaching implications for the fundamental and constitutional rights of Indian citizens, the Supreme Court should look into the matter of its identification and treatment as a money bill and whether such decisions lead to the thwarting of legislative and procedural justice.

The Supreme Court may also take this opportunity to reflect on the very decision making process for classification of bills in general. As Smarika Kumar argues, experience with the Aadhaar Act reveals a structural concern regarding this classification process, which may have substantial implications in terms of undermining public and parliamentary deliberative processes. This “trend,” as Arvind Datar notes, of limiting legislative discussions and decisions of national importance within the space of the Lok Sabha must be swiftly curtailed.

Apart from deciding upon the legality of the nature of the bill, it is vital that the apex court ask the government to categorically respond to the concerns red-flagged by the Standing Committee on Finance, which had taken great exception to the continued collection of data and issuance of Aadhaar numbers in its report, and to the recommendations passed in the Rajya Sabha recently. Further, the repeated violation of the Supreme Court’s interim orders – that the Aadhaar number cannot be made mandatory for availing benefits and services – in contexts ranging from marriages to the guaranteed work programme should also be addressed and responses sought from the Union government.

Evidently, the substantial implications of the Aadhaar Act for national security and fundamental rights of citizens, primarily privacy and data security, make it imperative to conduct a duly balanced public deliberation process, both within and outside the houses of parliament, before enacting such a legislation.

Criminal defamation remains and so does the debate

by Japreet Grewal — last modified May 23, 2016 06:05 AM

The judgment on the plea to decriminalise defamation is out, and despite its verbosity and rich vocabulary, it is an embarrassment to our recent judicial milestones in constitutional challenges. In Subramanian Swamy vs. Union of India, a two-judge bench headed by Justice Dipak Misra upheld the constitutionality of Sections 499 and 500 of the Indian Penal Code, 1860 (IPC) and Section 199 of the Code of Criminal Procedure, 1973 (CrPC), which criminalise defamation.

The judgment has not satisfactorily answered several pertinent questions. Various significant issues relating to the existing regime of defamation have been touched upon in the judgment but the bench has skipped the part where it is required to analyse and give its own reasoning for upholding or reading down the law. This post points out what should have been looked at.

A. Whether defamation is a public or a private wrong?  What is the State’s interest in protecting the reputation of an individual against other private individuals? Is criminal penalty for defamatory statements an appropriate, adequate or disproportionate remedy for loss of reputation?


At the core of the debate to decriminalise defamation lies the question, whether defamation is a public or a private wrong. The question was raised in the Subramanian Swamy case and the court held that defamation is a public wrong. Our problem with the court’s decision lies in its failure to provide a sound and comprehensive analysis of the issue. In order to understand whether defamation is a public or a private wrong, it is necessary that we look at what reputation means, what happens when reputation is harmed and whose interests are affected by such harm.

Reputation is not defined in law; however, the Supreme Court has held that reputation is a right to enjoy the good opinion of others and the good name, credit, honour or character which is derived from such favourable public opinion. The definition reflects several elements that constitute reputation, which, when harmed, have different bearings on the reputation of an individual. The academic Robert C. Post, in his paper The Social Foundations of Defamation Law: Reputation and the Constitution, says that reputation can be understood as a form of intangible property akin to goodwill, or as dignity (the respect, including self-respect, that arises from observance of the rules of society). While reputation seen as property can be estimated in money and thus adequately compensated through a civil action for damages, loss of dignity is not a materially quantifiable loss, and thus monetary compensation appears irrelevant. The purpose of defamation law could be either to ensure that reputation is not wrongfully deprived of its proper market value or to preserve the respect/acceptance of society. Explanation 4 to Section 499 of the IPC accommodates both such situations and provides that reputation is harmed if an imputation directly or indirectly, in the estimation of others, lowers the moral or intellectual character of that person, or lowers the character of that person in respect of his caste or of his calling, or lowers the credit of that person, or causes it to be believed that the body of that person is in a loathsome state, or in a state generally considered as disgraceful.

Post adds that an individual’s reputation is a product of his interaction with the society by following the norms of conduct (which he calls rules of civility) created by the society, thus the society has an interest in enforcing its rules of civility through defamation law by policing breaches of these rules. Criminal defamation acknowledges that loss of reputation is a wrong to the societal interests; however these interests have not been deliberated upon by the courts in India.

The Subramanian Swamy case was an occasion where it was imperative that the court take up this exercise and explain what interest the society has in protecting the reputation of an individual for defamation to be classified as a public wrong. The court stated, “the law relating to defamation protects the reputation of each individual in the perception of the public at large. It matters to an individual in the eyes of the society. There is a link and connect between individual rights and the society; and this connection gives rise to community interest at large. Therefore, when harm is caused to an individual, the society as a whole is affected and the danger is perceived.” From this reasoning it can be inferred that the society has an interest in all private wrongs. Where would that inference land us? This reasoning is ambiguous and inadequate.

On the other hand, the existence of criminal penalties for purely private wrongs such as copyright infringement and dishonour of cheques urges us to ask whether there is a problem with the rigid distinction between public and private wrongs. Should we be asking the question differently?

The judgment has provided extremely inadequate answers to this question and has left matters ambiguous.

B. Can the right to reputation under Article 21 be enforced against another individual’s freedom of expression and are safeguards already built in law so as not to unreasonably restrict and stifle free expression in this regard?

Defamation finds a place in the list of constitutionally permitted restrictions on the freedom of speech under Article 19(2). Defamation law protects the right to reputation of an individual; free expression is, for this reason, subject to the right to reputation. The court has repeatedly observed that the right to reputation is a part of the right to life under Article 21 of the Constitution. The question of the enforceability of the right to reputation under Article 21 against the freedom of expression under Article 19(1)(a) arose in the instant case; it was contended that a fundamental right is enforceable against the State but cannot be invoked to serve the private interest of an individual. Thus, it was argued, the right to reputation as manifested in defamation, being a wrong committed against a private person by another person, is unconnected with the State and falls outside the scope of Article 19(2). It is pertinent to note that Article 21 (which includes the right to reputation) is enforceable not only against the State but also against private individuals. What is relevant here is an understanding of the horizontal enforceability of fundamental rights (certain fundamental rights can be enforced against private individuals and non-state actors). This would help explain the dilemma in enforcing the right to reputation of an individual against the free speech of another individual. It is vaguely mentioned in the judgment (see para 88) but has not been deliberated upon.

What follows from the discussion of enforceability of right to reputation, is the discussion on how reasonably it restricts speech. The Supreme Court has previously held that while determining reasonableness, the underlying purpose of the restrictions imposed, the extent and urgency of the evil sought to be remedied thereby, the disproportion of the imposition, the prevailing conditions at the time, should all enter into the judicial verdict. We briefly analyse the critical aspects of the regime of criminal defamation on these parameters.

Underlying purpose

At the heart of defamation law is the need to find the most suitable remedy for the loss of reputation of an individual. How does one restore the reputation of an individual in society, and is criminal penalty an appropriate remedy?

Extent of restriction

The extent to which defamation law restricts free speech can be analysed by looking at various aspects, such as what kind of speech is considered defamatory, what procedure is followed to bring action against the alleged wrongdoer, and the scope for abuse of the law. Explanation 1 to Section 499 of the IPC provides that a statement or imputation is defamatory if it is not made in the public good; it is not sufficient to prove that such a statement or imputation is in fact true. The idea of public good is at best vague, without any means to evaluate it. Further, Section 199 of the CrPC allows multiple complaints to be filed in different jurisdictions for a single offending publication. Besides, the use of terms like “some person aggrieved” leaves room for parties other than the person in respect of whom defamatory material is published to bring action, and the provision also allows public servants the privilege of two sets of procedures for prosecution (in official capacity and in private capacity) without satisfactory reasoning for such discrimination. These provisions have the potential to be used to file frivolous complaints and could be a handy tool for the harassment of journalists and activists, among others.

Proportionality

Does the publication or imputation of defamatory material warrant payment of a fine and imprisonment? Earlier in the post, we brought up the question of the relevance of such measures to the act of defamation. Assuming that they are relevant, do we think they are harsh or commensurate with the wrongful act? It is necessary to look at the process of prosecution before we determine the proportionality of the restriction. Criminal law assumes that the accused is innocent until proven guilty; therefore, until the judiciary determines that the act of defamation was committed, how does the process help the accused in maintaining the status quo? It is also pertinent to look at the threshold for civil defamation. Under the civil wrong of defamation, truth works as a complete defence, while under criminal defamation, a statement, despite being true, could invite penalty if it is not published in the public good. Thus a lower threshold for criminal liability would upset the balance of proportionality. These aspects are critical to determining the reasonableness of criminal defamation, and it is unfortunate that a judgment that runs into hundreds of pages has not evaluated them.

Conclusion

The convoluted debate on criminal defamation remains intact post the pronouncement of this judgment. Questions of the competing interests of society and individuals, or of individuals inter se, the ambiguous rationale behind the imposition of liability, and the arbitrariness of the procedure for prosecution have not been examined. Further, the difficulty of compartmentalising free speech, the right to reputation and the right to privacy remains unaddressed.


Comments on Draft Electronic Health Records Standards

by Amber Sinha — last modified Dec 15, 2016 08:45 AM
The Centre for Internet & Society submitted its comments on the Draft Electronic Health Records Standards to the Ministry of Health and Family Welfare.

 

To,
Ministry of Health and Family Welfare,
Room 307 D,
Nirman Bhavan,
New Delhi 110108

Subject: Comments on the Electronic Health Record (EHR) Standards of India

The Electronic Health Record (EHR) Standards (hereinafter “EHR Standards”) were publicly circulated on March 18, 2016 seeking comments and views from stakeholders and the general public. Having reviewed the EHR Standards and referred to other robust standards dealing with the same subject matter, we wish to submit the following comments on the EHR Standards.

Standards and Interoperability

The EHR Standards state that the "primary aim of interoperability standards is to ensure syntactic (structural) and semantic (inherent meaning) interoperability of data amongst systems at all times" [1]. It is mentioned that the set of standards outlined in the document represents an incremental approach to adopting standards, and that they need to be flexible and modifiable to adapt to the demographic and resource diversity of India.

Comments:

  1. The EHR Standards make a reference to syntactic and semantic interoperability without really defining these terms or stipulating clear steps for how they may be achieved. It is suggested that these terms be clearly defined. Syntactic interoperability can be defined as ensuring the preservation of the clinical purpose of the data during transmission among healthcare systems. Similarly, semantic interoperability can be defined as enabling multiple systems to interpret the information that has been exchanged in a similar way, through pre-defined shared meanings of concepts [2].
  2. Inadequate human resource capacity remains a critical challenge to the adoption of e-health standards. The WHO and ITU eHealth Strategy Toolkit [3] recommends the development of an effective health ICT workforce, capable of designing, building, operating and supporting e-health services. This workforce could participate in standards development, as well as in the localization of international standards to fit a country's specific needs. The EHR Standards should also include mechanisms and solutions to address these issues.

Ownership of Data

The physical or electronic records generated by the healthcare provider are held in trust by the provider on behalf of the patient [4]. It is stated that the contained data, being sensitive personal data or personal information of the patient as per the Information Technology Act, 2000, are owned by the patient, while the medium for storage or transmission of such data is owned by the healthcare provider.

Comments:

  1. Currently, the EHR Standards state that the contained data, which are the sensitive personal data of the patient, are owned by the patient. While medical records and history are included within the scope of sensitive personal data under the Information Technology Act, 2000, the definition of "Personal Health Information" under the EHR Standards is more expansive. Therefore, it is recommended that all Personal Health Information be deemed to be owned by the patient.
  2. Currently, the EHR Standards do not clearly specify the bodies and individuals who would be subject to the requirements under this document. A definition similar to that of "covered entities" under the US Health Insurance Portability and Accountability Act (HIPAA) could be used [5].

Privileges of Patient

Currently, the privileges of the patient include the rights to inspect and view their medical records. Further, the patient can request a healthcare organization that stores/maintains their medical records to withhold specific information that they do not want disclosed to other organizations or individuals. Also, patients can demand information from a healthcare provider on the details of disclosures performed on the patient's medical records [6].

Comments:

  1. Currently, the EHR Standards only refer to "medical records" as being available for inspection and review by patients. This should be expanded to also include information about enrollment, payment, claims adjudication, and case or medical management record systems maintained by or for a health plan, as well as other records that are used to make decisions about individuals by healthcare providers or other bodies [7].
  2. The EHR Standards do not currently stipulate that, upon request by a patient, healthcare providers must exercise timeliness in providing the information. A time limit, such as 30 calendar days, within which the healthcare provider must process the request should be clearly stated.
  3. The right of patients to request information from a healthcare provider on the details of disclosures should include within its scope the right to receive the date of the disclosure; the name and address of the entity or person who received the information; a brief description of the medical information disclosed; and a brief summary of the purpose of the disclosure [8].
  4. A right to seek amendment of one's medical records should also be provided to patients in cases where the information is incomplete.

Patient Identifying Information

Under the Standards, personal identifiers include the following: name; address (all geographic subdivisions smaller than street address, and PIN code); all elements (except years) of dates related to an individual (including date of birth, date of death, etc.); telephone, cell (mobile) phone and/or fax numbers; email address; bank account and/or credit card number; medical record number; health plan beneficiary number; certificate/license number; any vehicle or any other device identifier or serial number; PAN number; passport number; AADHAAR card; Voter ID card; fingerprints/biometrics; voice recordings that are non-clinical in nature; photographic images that can possibly identify the person individually; and any other unique identifying number, characteristic, or code [9].

Comments:

The above-mentioned list is not as adequate and exhaustive as the definition and scope of Protected Health Information under the HIPAA [10]. The following identifiers must be included within the scope of Patient Identifying Information: device identifiers and serial numbers, Web Universal Resource Locators (URLs), and Internet Protocol (IP) address numbers.

Disclosure of Protected/Sensitive Information

The EHR Standards state that disclosure of protected/sensitive information for use in treatment, payments and other healthcare operations must only be done after obtaining the general consent of the patient. On the other hand, disclosures for non-routine and most non-healthcare purposes must be made only after obtaining the specific consent of the patient. Only for certain specified national priority activities, such as notifiable/communicable diseases, is it stated that "the health information may be disclosed to appropriate authority as mandated by law without the patient's prior authorization."

Comments:

  1. The terms "specific consent" and "general consent" need to be clearly defined.
  2. In cases of disclosures for non-routine and most non-healthcare purposes, a written authorisation should be mandatory. It should be clearly specified that a healthcare provider may not condition treatment, payment, enrollment, or benefits eligibility on an individual granting an authorization.
  3. There is confusion due to the use of numerous terms such as "health information", "protected health information", "sensitive personal data", "personal information" and "protected/sensitive information" in the EHR Standards for the same purpose. Some of these above terms are defined while the others are not. In order to remove the ambiguity caused due to this, it is recommended that the term "protected health information" is used throughout the document.
  4. All bodies dealing with medical data should be required to abide by the principle of "data minimisation" in use and disclosure. They must take reasonable efforts to use, disclose, and request only the minimum amount of protected health information needed to accomplish the intended purpose of the use, disclosure, or request.
  5. For internal uses, healthcare providers and other entities must develop and implement policies and procedures that restrict access and uses of protected health information based on the specific roles of the members of their workforce.


Amber Sinha,
Centre for Internet and Society,
No. 194, 2nd 'C' Cross,
Domlur, 2nd Stage,
Bengaluru, 560071


 

[1] Page 7 of the EHR Standards.

[2] Funmi Adebesin, Rosemary Foster, Paula Kotze, Darelle van Greunen, "A review of interoperability standards in e-Health and imperatives for their adoption in Africa", Research Article - SACJ No. 50, July 2013; L. E. Whitman and H. Panetto. "The missing link: Culture and language barriers to interoperability", Annual Reviews in Control, vol. 30, no. 2, 2006.

[3] WHO and ITU. "National eHealth Strategy Toolkit", available at http://goo.gl/uxMvE.

[4] Page 19 of the EHR Standards.

[5] A Covered Entity includes a healthcare provider (doctors, clinics, psychologists, dentists, chiropractors, nursing homes, pharmacies), a health plan (insurance companies, HMOs, company health plans, government programs that pay for health care) and a healthcare clearinghouse.

[6] Page 20 of the EHR Standards.

[7] Individuals' Right under HIPAA to Access their Health Information 45 CFR § 164.524, available at http://www.hhs.gov/hipaa/for-professionals/privacy/guidance/access/ .

[8] Patient Rights Under HIPAA Accounting of Disclosures of Health Information, available at http://uthscsa.edu/hipaa/patientrights/accountingofdisclosures.pdf.

[9] Page 21 of the EHR Standards.

Submission by the Centre for Internet and Society on Draft New ICANN By-laws

by Vidushi Marda — last modified May 31, 2016 02:49 AM
The Centre for Internet & Society sent its comments on the Draft New ICANN Bylaws. The submission was prepared by Pranesh Prakash, Vidushi Marda, Udbhav Tiwari and Swati Muthukumar. Special thanks to Sunil Abraham for his input and feedback.

We at the Centre for Internet and Society are grateful for the opportunity to comment on the draft new ICANN by-laws. Before we comment on specific aspects of the Draft by-laws, we would like to make a few general observations:

Broadly, there are significant differences between the final form of the by-laws and that recommended by the participants in the IANA transition process through the ICG and the CCWG. The by-laws have been shown to be unnecessarily complicated, lopsided, and skewed towards U.S.-based businesses in their past form, and this continues to be reflected in the current form of the draft by-laws.

The draft by-laws are overwrought, but some of that is not the fault of the by-laws, but of the CCWG process itself.  Instead of producing a broad constitutional document for ICANN, the by-laws read like the worst of governmental regulations that go into unnecessary minutiae and create more problems than they solve. Things that ought not to be part of fundamental by-laws — such as the incorporating jurisdiction of PTI, on which no substantive agreement emerged in the ICG — have been included as such.

Simplicity has been treated as a sin, and the resulting complexity has made participation in this endeavour an even more difficult proposition for those who do not choose to join the dozens of calls held every month. On specific substantive issues, we have the following comments:

Jurisdiction of ICANN’s Principal Office

Maintaining by-law Article XVIII, which states that ICANN has its principal office in Los Angeles, California, USA, these draft by-laws assume that ICANN's jurisdiction will not change post-transition, even though the jurisdiction of ICANN and its subsidiary bodies is one of the key aspects of post-transition discussion to be carried out in Work Stream 2 (WS2). Despite repeated calls to establish ICANN as an international, community-based organisation (like the International Red Cross or the International Monetary Fund), the question of ICANN's future jurisdiction was deferred to WS2 of the CCWG-Accountability process. All of the newly proposed by-laws have been drafted on the certainty of ICANN's jurisdiction remaining in California. Examples of this include the various references to the California Civil Code in the by-laws, and repeated references in the fundamental by-laws of ICANN to entities and structures (such as public benefit corporations) that can only be found in California.

This would render any discussion in WS2 regarding jurisdiction redundant, since its conclusions could not be implemented without upending the decisions relating to accountability structures made in WS1 and embedded in the by-laws.

CIS suggests that a provision be expressly inserted in the by-laws to allow changes to the by-laws in WS2 insofar as matters relating to jurisdiction and other WS2 issues are concerned, to make clear the shared understanding that WS2 decisions on jurisdiction are not meant to be redundant.

Jurisdiction of the Post-Transition IANA Authority (PTI)

The structure of the by-laws and the nature of the PTI in Article 16 make its Californian jurisdiction integral to the organisation as a whole, controlling all its operations, rights and obligations. This is so despite the issue not having been included in the CWG report (except in footnote 59 of the CWG report, and as a requirement proposed by ICANN's lawyers, to be negotiated with PTI's lawyers, in Annex S of the CWG report). The U.S. government's requirement that the IANA Functions Operator be a U.S.-based body has historically been a cause for concern amongst civil society and governments. Keeping this requirement in the form of a fundamental by-law is antithetical to the very idea of internationalizing ICANN, and is not something that can be addressed in Work Stream 2.

CIS expressed its disagreement with the inclusion of the U.S.-jurisdiction requirement in Annex S in its comments to the ICG. Nothing in the main text of the CWG or ICG recommendations actually necessitates Californian jurisdiction for the PTI. Thus, the draft by-laws clearly include this as a fundamental by-law despite its not having achieved any form of documented consensus in any prior process. Its status as a fundamental by-law would make shifting the PTI's registered and principal office almost impossible once the by-laws are passed.

No reasoning or discussion has been provided to justify the structure, location and legal nature of the PTI. The fact that the revenue structure, by-laws and other details have not even been hinted at in the current document indicates that the true rights and obligations of PTI have been left to the sole discretion of ICANN, while simultaneously granting it fundamental by-law protection. This is deeply problematic not only because it delegates excessive responsibility for a key ICANN function without due oversight, but also because it leads to a situation where the community is agreeing to be bound to a body whose fundamental details have not even been created yet, and yet which is protected as a fundamental by-law.

CIS would therefore suggest that the PTI-related clauses in the by-laws be solely those on which existing global Internet community consensus can be shown, and the PTI's jurisdiction is not something on which such consensus can be shown to exist. The by-laws should therefore be rewritten to make them agnostic to PTI's jurisdiction. Further, CIS suggests that the law firm appointed for PTI be non-American, since capable law firms exist outside the U.S., including in Brazil, France, and India.

We would also like to note that we have previously proposed that PTI’s registered office and ICANN’s registered office be in different jurisdictions to increase jurisdictional resilience against governmental and court-based actions.

Grandfathering Agreements Clause

A fair amount of discussion has taken place on the CCWG mailing list about Section 1.1(d)(ii), which concerns the inclusion of certain agreements within the scope of protection granted to ICANN from its Mission and Objective statement goals. CIS largely agrees with the positions taken by the IAB and the CCWG in their comments demanding the removal of parts B, C, D, E and F of Section 1.1(d)(ii), as all of these are agreements that were not included in the scope of the CCWG Proposal, and several of them (such as the PTI agreement) have not even been created yet. This creates practical and legal issues for ICANN as well as the community, as it restricts possible accountability and transparency measures that may be taken in the future.
CIS therefore joins the IAB and the CCWG in requesting that these grandfathering provisions be removed.

Inspection Rights

Section 22.7 severely limits the transparency of ICANN’s functioning, and we believe it should be amended.

(a) It limits Inspection Requests to Decisional Participants and does not allow any other interested party to make a request for inspection. While the argument has been made that Californian law requires inspection rights for decisional participants, neither the law nor the CCWG's recommendations require restricting inspection rights to decisional participants. CIS suggests allowing any member of the public to make a request for examination, but requiring non-decisional participants to declare the public interest behind their requests, so that requests are not made in undue numbers to impair the operations of the organisation.

(b) The definition of 'permitted scope' is unclear but extremely limited: it does not allow one to question any 'small or isolated aspect' of ICANN's functioning, yet there is no explicit definition of what constitutes matters relevant to the operation of ICANN as a whole, leaving a loophole open to exploitation. CIS suggests removing this statement and allowing only the limitations listed in Section 22.7(b) on Inspection Requests.

(c) No hard deadline is provided for the information to be made available to the querying party, allowing inordinate delays on ICANN's part that are open to abuse. CIS suggests removing the clause 'or as soon as reasonably practicable thereafter' from this section.

(d) The section insists that the material be used only for restricted purposes. CIS suggests that, as a step towards ICANN's transparency, the information be allowed to be used for any reason the person demanding inspection deems necessary. There is no clear reason to restrict the use of non-confidential material to EC proceedings, and this requirement should be removed.

Work Stream 2 Topics

Section 27.2, which covers necessary topics for WS2, currently omits key aspects such as PTI documents and jurisdictional issues. We suggest that these be included, and that a clause be inserted indicating that the list of topics is indicative, so that the CCWG can expand the scope of items to be worked on in WS2 and revise work completed in WS1 (such as these by-laws) to meet WS2 needs.

FOI-HR

Section 27.3(a) requires the FOI-HR to be approved by "(ii) each of the CCWG-Accountability's chartering organizations", which is inconsistent with the CCWG proposal that forms the basis for these by-laws: Annex 6 of the proposal contains no requirement of formal approval from every Chartering Organisation.

CIS strongly advocates a change in the by-law text to align it with the intent of the CCWG Accountability report, and to reflect that the process of developing the FOI-HR shall follow the same procedure as Work Stream 1.

Contracts with ICANN

Section 27.5 currently states that “Notwithstanding the adoption or effectiveness of the New by-laws, all agreements, including employment and consulting agreements, entered by ICANN shall continue in effect according to their terms.”

As the section currently stands, agreements that contravene the by-laws could be intentionally entered into before ICANN's Mission statement in the said by-laws takes effect, insulating them from its operation. The clause may be updated as follows to avoid this:

“Notwithstanding the adoption or effectiveness of the New by-laws, all agreements, including employment and consulting agreements, entered by ICANN shall continue in effect according to their terms, provided that they are in accordance with ICANN’s Mission Statement.”

Criminal Defamation and the Supreme Court’s Loss of Reputation

by Bhairav Acharya last modified Jun 03, 2016 03:05 AM
The Supreme Court’s refusal, in Subramanian Swamy v. Union of India, to strike down the anachronistic colonial offence of criminal defamation is wrong. Criminalising defamation serves no legitimate public purpose; the vehicle of criminalisation – sections 499 and 500 of the Indian Penal Code, 1860 (IPC) – is unconstitutional; and the court’s reasoning is woolly at best.

The article was published in the Wire on May 14, 2016.


Politics and censorship

Two kinds of defamation actions have emerged to capture popular attention. First, political interests have adopted defamation law to settle scores and engage in performative posturing for their constituents. And, second, powerful entities such as large corporations have exploited weaknesses in defamation law to threaten, harass, and intimidate journalists and critics.

The former phenomenon is not new. Colonial India saw an explosion of litigation as traditional legal structures were swept away and native disputes successfully migrated to the colonial courts. These included politically-motivated defamation actions that had little to do with protecting reputations. In fact, defamation litigation has long become an extension of politics, in many cases a new front for political manoeuvring.

The latter type of defamation action is far more sinister. Powerful elites, both individuals and corporations, have cynically misused the law of defamation to silence criticism and chill the free press. By filing excessive and often unfounded complaints that are dispersed across the country, which threaten journalists with imprisonment, powerful elites frighten journalists into submission and vindictively hound those who refuse to back down. Such actions are called Strategic Lawsuits against Public Participation (SLAPPs) which Rajeev Dhavan warns have created a new system of censorship.

Petitions and politicians

Defamation originates from the concept of scandalum magnatum – the slander of great men – which protected the reputations of aristocrats. The crime was linked to sedition, so insulting a lord was akin to treason. In today’s neo-feudal India, political leaders are contemporary aristocrats. Investigating them can invite devastating consequences, even death. Most of the time, they retaliate through defamation law. Since the criminal justice system is most compromised at its base, where the police and magistrates directly interact with people, the misuse of criminal defamation law hurts ordinary citizens.

This is different from politicians prosecuting each other, since they rarely, if ever, suffer punishment. Of all the petitions before the Supreme Court concerning the decriminalisation of defamation, the three that received the most news coverage were those of Subramanian Swamy, Rahul Gandhi, and Arvind Kejriwal. All three are politicians, and their petitions were made in response to defamation complaints filed by rival politicians. On the other hand, there are numerous cases which politicians have filed against private members of civil society to silence them. When presented with these concerns, the Supreme Court simply failed to seriously engage with them.

The architecture of defamation

Defamation has many species, a convoluted history, and complex defences. Defamation can be committed by the spoken word, which is slander, or the written word, which is libel. The historical distinction between these two modes of defamation is based on the permanence of written words. Before the invention of the printing press, the law was chiefly concerned with slander. But as written ideas proliferated through mass publication technologies, libel came to be viewed as more malevolent and the law visited serious punishments on writers and publishers.

Such a distinction presumes a literate readership. In largely illiterate societies, the spoken word was more potent. This is why films and radio have long attracted censorship and state control in India. Before mass publishing forked defamation into libel and slander, there existed only the historical crime of libel. Historical libel had four species: seditious libel, blasphemous libel, obscene libel, and defamatory libel.

Seditious libel, which has been repealed in Britain, prospers in India as the offence of sedition which is criminalised by section 124A of the IPC. Blasphemous libel, repealed in Britain, fares well in India as the offence of blasphemy under section 295A of the IPC. Obscene libel, as the offence of obscenity, is criminalised by section 294 of the IPC. And defamatory libel, repealed in Britain, which is the offence of criminal defamation that the Subramanian Swamy case upheld, continues to exist under section 499 of the IPC.

Confusing harms

Of the many errors that litter the Supreme Court’s May 13, 2016 judgment in the Subramanian Swamy case, perhaps the most egregious is the failure to recognise the harm that criminal defamation poses to a healthy civil society in a free democracy. At the crux of this mistake is the Supreme Court’s failure to distinguish between private injury and social harm. Two people may, in their private capacities, litigate a civil suit to recover damages if one feels the other has injured her reputation. This private action of defamation was not in issue before the court.

On the other hand, by criminalising defamation, the state protects the reputations of individuals while expending public resources to do so. Why should it? This goes to the concept of crime. When an action is serious enough to harm society, it is criminalised. Rape strikes at the root of public safety, human dignity, equality, and peace, so it is a crime. A breach of contract only injures the party who was expecting the performance of contractual duties; it does not harm society, so it is not a crime. Similarly, a loss of reputation, which is by itself difficult to quantify, does no harm to society, and so it should not be a crime.

Truth and the public good

It may be argued, and the Supreme Court hints, that at its fundament, society is premised on the need for truth; so lies should be penalised. This is where defamation law wanders into moral policing. In Indian and European philosophies, truth is consecrated as a moral good. The Supreme Court quotes from the Bhagavad Gita on the virtue of truth. But while quotes like these are undoubtedly meaningful, they have no utility in a constitutional challenge. In reality, society is composed of truth, lies, untruths, half-truths, rumour, satire, and a lot more. In fact, the more shades of opinion there are, the livelier that society is. So lies should not invite criminal liability.

If we concede the moral debate and arrive at a consensus that the law must privilege truth over lies, then truth alone should be a complete defence to defamation. If the law criminalises untruth, then it must sanctify truth. That means when tried for the crime of defamation, a journalist must be acquitted if her writing is true. But the law and the Supreme Court require more. In addition to proving the truth, the journalist must prove that her writing serves the public good. So speaking truth is illegal if it does not serve the public good.

In fact, truth has only recently been recognised as a defence to defamation, albeit not a complete defence. This belies the social foundations of criminal defamation law. The purpose of the offence is not to uphold truth, it is to protect the reputations of the powerful. But what is reputation? The Supreme Court spends 25 pages trying to answer this question with no success. Instead, the court declares that reputation is protected by the right to life guaranteed by Article 21 of the Indian Constitution but it offers no sound reasoning to support this claim. The court also fails to explain why the private civil action of defamation is insufficient to protect reputation.

The constitution and constitutionalism

There are two core constitutional questions posed by the Subramanian Swamy case. They are:

  • Does the crime of defamation fall within one of the nine grounds listed in Article 19(2) of the constitution; and
  • Are sections 499 and 500 of the IPC which criminalise and punish defamation reasonable restrictions on the right to free speech?

Article 19(2) contains nine grounds in the interests of which a law may reasonably restrict the right to free speech. Defamation is one of the nine grounds, but the provision is silent as to which type of defamation, civil or criminal, it considers. However, B.R. Ambedkar’s comments in the Constituent Assembly arguably indicate that criminal defamation was intended to be a ground to restrict free speech.

The answer to the second question lies in measuring the reasonableness of the restriction criminal defamation places on free speech. If the restriction is proportionate to the social harm caused by defamation, then it is reasonable. However, restating an earlier point, criminalising defamation serves no legitimate public purpose because society is unconcerned with the reputations of a few individuals. Even if society is concerned with private reputations, the private civil action of defamation is more than sufficient to protect private interests. Further, the danger that current criminal defamation law poses to India’s free speech environment is considerable. Dhavan says: “Defamation cases [are] a weapon by which the rich and powerful silence their critics and censor a democracy.”

The Subramanian Swamy case highlights several worrying trends in India’s constitutional jurisprudence. The judgment is delivered by one judge speaking for a bench of two. Such critically significant constitutional challenges cannot be left to the whims of two unelected and unaccountable men. Moreover, from its position as the guarantor of individual freedoms, the Supreme Court appears to be in retreat. This will have far-reaching and negative consequences for India’s citizenry. If the court fails to enhance individual freedoms, what is its constitutional role? The judiciary would do well to stay away from policy mundanities and focus on promoting India’s democratic project, lest it injure its own reputation.

Women's Safety? There is an App for That

by Rohini Lakshané last modified Jan 10, 2017 02:48 AM
“After locking ourselves in a room for more than 6 days, this is what we came out [sic] with. Join us in helping make WOMEN feel SAFE,” read a gloating press release about a smartphone app for women to notify their near ones that they were in distress. It was one among many such PRs frequently landing in my mailbox after the rape and murder of a young student on board a private bus in Delhi in 2012.

The article by Rohini Lakshané was published in Gender IT.org on May 19, 2016. This was also mirrored by Feminism in India on January 9, 2017.


The incident had spurred protests across the country and made international headlines. Along with all this came a slew of new “women’s safety” apps. Existing ones, many of which had fizzled out, were conveniently relaunched. My own experience of user-testing such apps in India back then was that they were unreliable at best and dangerously counterproductive at worst. Some of them were endorsed by governments and celebrities and ended up being glorified despite their flaws, their technical and systemic handicaps never acknowledged at all.

There are myriad mobile phone apps meant to be deployed for personal safety, but their basic functioning is more or less the same: the user activates the app (by pressing a button, shaking the device or a similar cue), which sends a distress message containing the user's location to pre-defined contacts. Some apps include additional artefacts such as a short audio or video recording of the situation. Some others augment this mechanism by alerting the police and other agencies best placed to respond to the emergency. For example, the Companion app for students living on campus notifies the university along with the police. The SOS buttons in taxi-hailing apps such as Uber let the user's contacts follow the cab's GPS trail, and notify them and the cab company's "incident response team" of emergencies. Apps such as Kitestring treat the lack of a response from the user within a time-window as the trigger for a distress message. All their technical wizardry perhaps makes it easy to lose sight of the fact that technology is not a saviour but a tool or an enabler, and that technology alone cannot be the panacea for a problem that is deeply complex and, in reality, rooted in society and governance.
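The common flow described above, a trigger that fans a location-bearing distress message out to pre-defined contacts, plus a Kitestring-style dead-man timer, can be sketched in a few lines. This is a purely illustrative model, not any real app's code; the class, method names, contacts and time-window are invented for the sketch.

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class SafetyApp:
    """Minimal model of the generic 'panic button' flow (illustrative only)."""
    contacts: list                       # pre-defined emergency contacts
    checkin_window: float = 3600.0       # seconds; Kitestring-style timer
    last_checkin: float = field(default_factory=time)
    outbox: list = field(default_factory=list)  # stands in for an SMS gateway

    def trigger(self, location):
        # User pressed the button / shook the device: alert every contact.
        msg = f"DISTRESS: user needs help near {location}"
        for contact in self.contacts:
            self.outbox.append((contact, msg))

    def check_in(self):
        # User confirms they are safe; reset the dead-man timer.
        self.last_checkin = time()

    def tick(self, location, now=None):
        # No check-in within the window counts as distress (Kitestring model).
        now = time() if now is None else now
        if now - self.last_checkin > self.checkin_window:
            self.trigger(location)

# Placeholder contacts; a real app would also record audio/video or loop in
# police and other responders, as the examples in the text do.
app = SafetyApp(contacts=["+91-98xxxxxx01", "+91-98xxxxxx02"])
app.trigger("28.6139N, 77.2090E")
assert len(app.outbox) == 2
```

Even this toy version makes the article's point visible: everything hinges on the phone being able to determine its location and deliver a message, which is exactly where cheap hardware and patchy networks fail.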

The Indian government announced last month that every phone sold in the country from January 2017 should be equipped with a panic button that sends distress flares to the police and a trusted set of contacts. Nearly half the phones sold in India cost USD 100 or less. Prices are kept so low by sacrificing features and the quality of the hardware; there are a lot of phones with substandard GPS modules, poor touchscreens, slow processors, bad cameras, tiny memory, and dismal battery life. They run on different versions of different operating systems, some of them outdated. All of these factors would determine if someone is able to use the app at all and how quickly they and their phone would be able to respond to an emergency. Additionally, mobile phone signals become thin or shaky in areas with a high number of users and buildings located cheek-by-jowl. Even when the mobile hardware is good and the mobile signal usable, GPS accuracy can be spotty and constant location tracking would hog battery. These issues would affect the efficacy of any app. Besides, there is too much uncertainty for an app developer to factor in. (Two years ago, I learnt about an app called Pukar, then operational in collaboration with police departments in four cities in India. Pukar solved the problem of potential inaccuracy of the GPS location by getting the user’s contacts to tell the police where the person in distress might be.) Designing a one-size-fits-all safety app is almost impossible. The app that rings a loud alarm when triggered may save someone’s life or spoil the chances of someone who is trying to get help while hiding. Different people may be vulnerable to different kinds of distress situations and an app can at best be optimised for some target user groups.

An app that does not work in tandem with existing machinery for law enforcement and public safety is a bad idea.

In the end, the "technical" problems may actually be problems of economic disparity. Making it mandatory for people to own phones equipped with certain hardware, or requiring them to upgrade to more reliable devices, would drive the phones out of the financial reach of many. Indian manufacturers have expressed concerns that the proposed panic button would raise costs for them as well as for the end buyers. Popularising a downloadable app and teaching its target users how to install and operate it correctly needs a marketing blitzkrieg, which is something only the state or well-funded developers can afford. The New Delhi police department runs a dedicated control room for reports arriving from its safety app, Himmat (the word for courage in many Indian languages). It is an expensive affair.

An app that does not work in tandem with existing machinery for law enforcement and public safety is a bad idea. It puts the onus of “keeping women safe” on members of their social circles or on intermediaries and private parties such as cab companies, while absolving law enforcement agencies of their failing to provide security. It opens doors to victim blaming in case someone is unable to use the app at the right time in the right way, or if the app fails.

Conversely, an app that does loop in the police raises concerns about surveillance and the protection of data available to the police, which is especially problematic in places such as India where there is no law for privacy or data protection. Alwar, one of the cities where Pukar was implemented, is densely populated, with a large geographical area and a high crime rate. Police departments in such places tend to be overworked and understaffed. Without significant policing reforms, it is questionable whether they will be able to respond in time. A sting operation done by two media outlets on 30 senior officials of the New Delhi police department in 2012 showed the cops blaming victims of sexual violence with gay abandon. "If girls don't stay within their boundaries, if they don't wear appropriate clothes, then naturally there is attraction. This attraction makes men aggressive, prompting them to just do it [sexual assault]," reads one of their nuggets. "It's never easy for the victim [to complain to the police]. Everyone is scared of humiliation. Everyone's wary of media and society. In reality, the ones who complain are only those who have turned rape into a business," goes another. An app that lets known people monitor someone's location also poses the risk of abuse, coercion and surveillance by intimate partners or members of the family.

Unfortunately, there is no app for reforming a morass in law enforcement or dismantling patriarchy.

Facebook: A Platform with Little Less Sharing of Personal Information

by Nishant Shah last modified Jun 05, 2016 02:38 AM
As Facebook becomes less personal, what happens to digital friendship?

The article was published in the Indian Express on May 8, 2016.


Facebook is worried. Even though usage is growing, something strange is happening on the social network. For the first time since it started its journey as a website to rate datable people on college campuses, to becoming the global reference point that defined friendship in the connected age, people are sharing less personal information on Facebook. For a social media network that positions itself largely as a space where our everyday, banal doings become newsworthy articulations, this is surprising news. But it is true. On Facebook, the traffic is high, but most of it is now the sharing of external information. People are sharing links to news, to listicles, to videos, to blog entries, to pictures and to information that they find interesting, but they are writing less and less about what it is that they are doing and feeling.

Ironically, this coincides with the latest change in Facebook’s “response” options, where the ubiquitous “Like” button can now expand to other emojis where you can also be appropriately angry, sad, surprised, or happy about the shared content. Even as Facebook is trying to get its users to qualify how they feel and give emotional value to their likes, people seem to be sharing even less of their private lives on Facebook.

One of the key ways of understanding this drop in people sharing their personal information is through the concept of “context collapse”. It has been a concern since the first instances of disembodied digital communication. In our everyday life, we make sense of information based on the different contexts that surround us. The person who authors the information, the setting within which that information reaches us, the emotional state that we are in when encountering the information, our sense of where we are when processing it, and the preparedness we have for receiving this information are all crucial parameters by which we make sense of the meaning of the information and also our response to it.

In the case of Facebook, the context within which information and transactions have made sense is "friendship". The site's USP was that you could bring in a variety of information, but you were always sharing it with friends. You could have a large audience, but this audience is formed of people you know, people you trust, people you add to your friend groups; there is a sense of intimacy, privacy, and casualness that marks the flow of information. You are able to talk, in an equal breath, about what you had for breakfast, your crush on a celebrity, your random acts of charity, and your strong political rant, one after the other, without needing to think about what you are posting and how others will receive it.

However, Facebook is not really a friendship platform. It is a company interested in selling our interactions and data to advertisers who can target us with content and information based on the patterns of our behaviour. To serve its advertisers better, Facebook started privileging "verified" information, ensuring news and content producers higher attention and more eyeballs. This was further strengthened by its continued integration with third-party vendors, who could push and pull information into the social world of Facebook, and is seen as one of the biggest reasons for this drop. Any newsfeed in the last few months has had equal amounts of professional and amateur content, leading to a context collapse, where you no longer feel like your Facebook feed is a private and intimate conversation with friends.

Similarly, Facebook’s expansive integration of its products, where WhatsApp chats, Instagram updates, and Tumblr posts can all collapse into one, produced a confusing space where the personal information that you were once happy to share with your friends is suddenly being shared alongside news and information. Also, digital behaviour works on mirroring, and we often shape our updates to match what we see on our timelines. If we increasingly see external content rather than personal status updates, we too start sharing more third-party news and links, producing a domino effect of everybody shying away from extremely personal or intimate moments.

Facebook, for the millennials, has been the context within which friendship got structured. Its own transitions have now collapsed that context, leading people to think of it as a content aggregator. It is going to be interesting to see what happens to our digital friendships and networks if Facebook is no longer the space where they are housed.

Online Censorship on the Rise: Why I Prefer to Save Things Offline

by Nishant Shah last modified Jun 05, 2016 03:26 AM
As governments use their power to erase what they do not approve of from the web, cloud storage will not be enough.

The article was published in the Indian Express on April 17, 2016.


It took me some time to trust the cloud. Growing up with digital technologies that were neither resilient nor reliable — a floppy drive could go kaput without you having done anything, a CD once scratched could not be recovered, hard drives malfunctioned and it was a given that once every few months your PC would crash and need a re-install — I have always been paranoid about making backups and storing information. Once I kicked into my professional years, I developed a foolproof, albeit paranoid, system, where I backed up my machines to a common hard drive, made a mirror image of that hard drive, and for absolutely crucial documents, I would put them on to a separate DVD which would have the emergency documents. It was around 2006, when I discovered the cloud.

It began with Google’s unlimited email accounts where you could mail information to yourself and then it would stay there for a digital eternity. I noticed that the size of my digital storage began decreasing. I no longer download videos I find on the web. I don’t save information on a device and I have come to think of the web as one large cloud, relying on the fact that if something is online once, it will always be available to me.

However, over the last couple of months, I have started noticing something different in my usage patterns. These days, when I do come across interesting information, instead of merely indexing it, I find myself making an offline copy of that information. Tweets enter a Storify folder. YouTube videos get downloaded. I make PDF copies of blogs and take screenshots of digital media updates. I have been wondering why I am suddenly so invested in archiving the web when, theoretically, it is always there.

When I voiced this to a group of young students, I was surprised to hear that I wasn’t alone. The web is becoming a space that is crowded with take-downs, deletions, removals, and retractions which leave no archival memory. The students quickly pointed out that these take-downs are not just personal redactions. In fact, what we personally choose to remove has very little chance of actually disappearing from the web. Instead, these are things that are removed by governments, private companies and intermediaries who are increasingly held liable for the content of the information that they make available.

Turkey recently demanded that German authorities remove a satirical German video titled Erdowie, Erdowo, Erdogan mocking its President. In response, Germany reminded Turkish diplomats of that lovely little thing called freedom of speech, and in the meantime, Extra 3, the group that had released the video on YouTube, added English subtitles to the video. Just for perks. I hope you gave a brownie point to Germany, even as you scrambled to see the video.

On the home front, though, things are not as celebratory. The minister of state for information and broadcasting, Rajyavardhan Rathore, and the head of the BJP’s information and technology cell, Arvind Gupta, have called for action against journalist Raghav Chopra who tweeted a photoshopped image of PM Narendra Modi bending down to touch the feet of a man dressed in Saudi Arabia’s national dress, to make a political comment about the PM’s recent visit to SA.

The two politicos, who have not had much to say about the doctored videos that were used to convict innocent students in JNU or the photoshopping that the government’s Press Information Bureau had indulged in to give us that iconic image of the prime minister doing an aerial survey of #ChennaiFloods, have taken umbrage against an image because it seems (obviously) false, and are demanding its takedown.

My proclivity for saving things offline is perhaps fuelled by this web of partisan censorship and the atmosphere of precarious hostility that governments seem to be supporting. Increasingly, we have seen, in India and around the globe, a rush of political power that exercises its clout to remove information, images and stories that they do not approve of.

Instinctively, I am reacting to the fact that intellectual questioning and cultural critique are being removed from the web at the behest of these vested powers, and that the cloud, light and airy as it sounds, is prone to some incredible acts of censorship and removal. I have found myself facing so many removal notices and take-down errors when trying to revisit bookmarked sites that I am beginning to feel the only way to keep my information safe might be to archive the whole web on a personal server.

A Large Byte of Your Life

by Nishant Shah last modified Jun 05, 2016 03:35 AM
With the digital, memory becomes equated with storage. We commit to storage to free ourselves from remembering.

The article was published in Indian Express on April 3, 2016.


This is the story of a broken Kindle. A friend sent a message to a WhatsApp group I belong to, saying that she was mourning the loss of her second-generation Kindle, which she had bought in 2012 and which had since been her regular companion. It is not the story of hardware malfunction or a device just giving up. Instead, it is a story of how quickly we forget the old technologies which were once new. The friend, on her Easter holiday, was visiting her sister, who has a six-year-old daughter.

This young one, a true digital native, living her life surrounded by smart screens, tablets, phones, and laptops, instinctively loves all digital devices and plays with them. In her wanderings through her aunt’s things, she came across the old Kindle — unsmart, without a touch interface, studded with keys, not connected to any WiFi, and rendered in greyscale. It was an unfamiliar device. But with all the assurance of somebody who can deal with digital devices, she took it in her hands to play with it.

Much to her dismay, none of the regular modes of operation worked. The old Kindle did not have a touch-screen-operated lock. It wasn’t responding to scroll, swipe and pinch. It had no voice command functions. As she continued to cajole it to come to life, it only stared at her, a lock on the digital interface, refusing to budge to the learned demands and commands of the new user. After about 20 minutes of trying to wake the Kindle up, she became frustrated with it and banged it harshly on the table, where it cracked, the screen blanked out and that was the end of the story.

Or rather, it is the beginning of one. As my friend registered the loss of her clunky, clumsy, heavy, non-intuitive Kindle, and messages of grief poured in, with condolences that the new ones are so much better and assurances that at least all her books are safe on the Amazon cloud, I see in this tale the quest for newness that the digital always offers.

If it has escaped your attention, the digital is always new. Our phones get discarded every few seasons, even as phone companies release new models every few months. Our operating systems are constantly sending us notifications that they need to be updated. Our apps operate in stealth mode, continually adding updates that fix bugs and add features. Most of us wouldn’t know what to do if we were faced with a computer that doesn’t “heal”, “backup” or “restore” itself. If our lives were to be transferred back to dumb phones, or if we had to deal with devices that do not strive to learn and read us, it might lead to some severe anxiety.

The newness that the digital offers is also found in our socially mediated lives. Our digital memories are short-lived — relationships rise and fall in the span of days as location-based dating apps offer an infinite range of options to choose your customised partner; celebrities are made and unmade overnight as clicks lead to viral growth and then disappear to be replaced by the next new thing; communities find droves of subscribers, only to become a den of lurkers where nothing happens; must-have apps find themselves discarded as trends shift and new must-haves crop up overnight. Breathless, bountiful and boundless, the digital keeps us constantly running, just to be in the same place, always the same and yet, always new.

We would be hard pressed to remember that magical moment when we first discovered a digital object. For millennials, the digital is such a natural part of their native learning environments that they do not even register the first encounter or the subsequent shifts as they navigate across the connected world. Increasingly, we tune ourselves to the temporality and the acceleration of the digital, tailoring our memories to what is important, what is now, and what is immediately of use, excluding everything else and dropping it into digital storage, assured in our godlike capacities to archive everything.

This affordance of short digital memories is enabled partly by the fact that we are subject to information overload, but partly also by the fact that our machines can now remember, more accurately and more robustly than the paltry human, prone to error and forgetfulness. With the digital, memory becomes equated with storage, and the more we commit to storage, the more we free ourselves from the task of remembering.

The broken Kindle is a testimony not only to the ways in which we discard old devices but also to our older forms of individual and collective memory — quickly doing away with information that is not of the now, that is not urgent, and that does not have immediate use value. My friend’s Kindle got replaced in two days. All her books were re-loaded and she was set to go. However, as she told me in a chat, she is not going to throw away her old broken Kindle. Because she wants to remember it — remember the joy of reading her favourite books on it. She is scared that if she throws it away, she might forget.

The Digital is Political

by Nishant Shah last modified Jun 05, 2016 03:58 AM
To speak of technology is to speak of human life and living.

The article was published in the Indian Express on March 20, 2016.


“You are supposed to write about the internet, why do you keep talking about all this politics?” I was taken aback when I was faced with this question. It is true – since the year has begun, I have talked about digital education and the ways in which it needs to account for unexpected and underserved communities, about net neutrality and why the Indian government needs to build a stronger, safer, and a more inclusive digital ecosystem. I have written about freedom of speech and expression and how this is going to be the year when we stand together to save the internet from vested interests that seek to convert it from a public commons into a private commodity.

In my head, all these questions — of inclusion, of access, of presence, of rights — are questions of human life and living, but they are also those that are being hugely restructured by the internet and digital technologies. When faced with the query, I was reminded of a deep-seated division that has been at the heart of digital cultures.

Way back in the ’90s, when the internet was still a space of science fiction and the World Wide Web was in its nascent stages, there was a distinction made between Virtual Reality (VR) and Real Life (RL). The presumption in the construction of these categories was that the digital is only an escape, the technological is merely a prosthesis, and the internet is just a thing that a few geeks engaged with in their free time. However, the last three decades have made this distinction between VR and RL redundant.

We live in digital times. The digital is not just something we use strategically and specifically to do a few tasks. Our very perception of who we are, how we connect to the world around us, and the ways in which we define our domains of life, labour, and language are hugely structured by digital technologies. The digital is ubiquitous and hence, like air, invisible. We live within digital systems, we live with intimate gadgets, we interact through digital media, and even though we might not all be equally digital natives, there is no denying the fact that the very presence and imagination of the digital has dramatically restructured our lives. The digital, far from being a tool, is a condition and context that defines the shapes and boundaries of our understanding of the self, the society, and the structures of governance.

The pervasive nature of the digital technologies and internet can be found at multiple levels. For instance, we do not think about going online anymore, because most of our devices are connected 24×7 to the digital web. Even when we are not online, sunk in a bad network connection, or protecting our precious data usage, we know that our avatars and digital identities are online and talking without us.

So established is this phenomenon that we even have a name for the anxiety it creates: FOMO — the Fear Of Missing Out. Similarly, the digital can be located at the level of human understanding. We are used to thinking of ourselves as digital systems. We talk about our primary identity as one marked by information overload. We often complain, when faced with too many demands on our time and space, that we don’t have enough bandwidth to deal with new problems, and we are not referring to digital connectivity.

The digital also has space at the level of policy and governance. If you, like the many millions of Indians, have registered for an Aadhaar card, you have already been marked by a digital identity whether or not you have broadband access. When our government launches Digital India campaigns, it is not merely about an economic model of growth, but it is suggesting that the digital is going to be at the foundations of the new India that we want to build for the future.

If the digital is so central to our fundamental understanding of the self, the society, and the state, then surely it is time to stop thinking that these technologies have nothing to do with politics? There remains a forced imagination of technologies as devices, as tools, as prostheses which do not have any other role than the performing of a function. However, this is a fallacy, because not only do technologies shape our sense of who we are, but they also prescribe new templates and models of who we are going to be. In the process, these technologies take political action, create social structures, mobilise cultural possibilities, and often, because they are technologies that are still elite and available to the privileged few in the country, they enable decisions which are not always fair, open, and just.

Hence, technological decisions cannot be read merely as technical decisions; they are human decisions. To speak of technology is to speak of human life and living. To write about technology is to write about politics, because a separation between the two is not only futile but downright dangerous.

CIS's Comments on the Draft Geospatial Information Regulation Bill, 2016

by Pranesh Prakash last modified Jun 05, 2016 03:06 PM
The Centre for Internet and Society is alarmed by the Draft Geospatial Information Regulation Bill, 2016, and has recommended that the proposed law be withdrawn in its entirety. It offered the following detailed comments as its submission.

Comments on the Draft Geospatial Information Regulation Bill, 2016

by the Centre for Internet and Society

1. Preliminary

1.1. This submission presents comments and recommendations by the Centre for Internet and Society (“CIS”) on the draft Geospatial Information Regulation Bill, 2016 (“the draft bill” / “the proposed bill” / “the bill”).

2. Centre for Internet and Society

2.1. The Centre for Internet and Society is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from the perspectives of policy and academic research. The areas of focus include accessibility for persons with disabilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, digital privacy, and cyber-security. The academic research at CIS seeks to understand the reconfiguration of social processes and structures through the internet and digital media technologies, and vice versa.

2.2. This submission is consistent with CIS’ commitment to safeguarding the public interest, and particularly with representing the interests of ordinary citizens and consumers. The comments in this submission aim to further the principles of people’s right to information regarding their own country, openness-by-default in governmental activities, freedom of speech and expression, and the various forms of public good that can emerge from greater availability of open (geospatial) data created by both public and private agencies, and the innovations made possible as a result.

3. Comments

3.1. General Remarks

3.1.1. While CIS welcomes the intentions of the government to prevent the use of geospatial information to undermine national security, the proposed bill completely fails to do so: it infringes upon Constitutional rights, harms innovation, undermines the national initiatives of Digital India and Startup India, is completely impractical and unworkable, and will lead to a range of substantial harms if the government actually seeks to enforce it.

3.1.2. There are already laws in place that prevent the use of geospatial information to undermine national security. For instance, the Official Secrets Act, 1923 (“OSA”) already contains provisions — sections 3(2)(a), (b), and (c) — all of which would prevent a person from creating maps that undermine national security and would penalise their doing so. Section 5 of the OSA contains multiple provisions that penalise the possession and communication of maps that undermine “national security.” The penalties under the OSA range from imprisonment of up to 3 years all the way to imprisonment up to 14 years. Given this, there is absolutely no need to create yet another law to deal with maps that undermine “national security.” Indeed, it is the government’s stated policy to reduce the number of laws in India, whereas the proposed bill introduces a redundant new law that adds multiple layers of bureaucracy.

3.1.3. The National Mapping Policy, 2005, already puts in place restrictions on wrongful depictions of India’s international boundaries, and as we explain below in section 3.4 of this document, even the National Mapping Policy is over-broad. Even if the government wishes to provide statutory backing to the policy, it should be a very different law that is far more limited in scope, and restricts itself to criminalising those who misrepresent India’s international boundary with an intention to mislead people into thinking that that is the official boundary of India as recognised by the Survey of India. CIS would support a law of such limited scope and mandate, provided it has an appropriate penalty.

3.1.4. There would be much utility in a law that creates a duty on the Survey of India to make available, in the form of an open standard, an official electronic version of the maps that it creates, and expressly allows and encourages citizens and startups to reuse such official maps; however, the Ministry of Home Affairs would not be the nodal ministry for such a law.

3.1.5. We recommend that the proposed law be scrapped in its entirety.

3.1.6. We additionally provide an alternative manner of reducing the harms caused by this bill, in our comments below. By no means should these further comments be seen as a repudiation of our above position, since we do not feel the proposed bill, even with the inclusion of all of our recommendations, would truly further its stated aims. All our below recommendations would do is to reduce the bill’s harmful, and often unintended, consequences.

3.2. Definition of “Geospatial Information” is over-broad and all-encompassing

3.2.1. The second part of the definition of “geospatial information” refers to all “graphic or digital data depicting natural or man-made physical features, phenomenon or boundaries of the earth or any information related thereto” that are “referenced to a co-ordinate system and having attributes.” (Section 2(1)(e)) As per the definition, this will include all geo-referenced information and data produced by everyday users as an integral part of various everyday uses of digital technologies. This will also include geo-referenced tweets and messages, the location of public and private vehicles shared in real time with agencies tracking their location (from public transport authorities to insurance agencies, etc.), location data of mobile phones collected and used by telecommunication service providers, the location of mobile phones shared by the user with various kinds of service providers (from taxi companies to delivery agencies), etc.

3.2.2. We recommend that instead of regulating all kinds of geospatial information, and giving rise to a range of possible harms, the draft bill be revised to specifically address “sensitive geospatial information,” defined as geospatial information related to the “Prohibited Places” as defined in the Official Secrets Act 1923 (section 2(8)) which will allow the bill to effectively respond to its key stated concerns of ensuring “security, sovereignty and integrity of India.” Since the National Map Policy defines “Vulnerable Points” and “Vulnerable Areas” (para 3(b)) as the two main types of geospatial units associated with “Prohibited Places”, these terms should also be referred to in the revised version of the draft bill.

3.3. Unreasonable regulation of the acquisition and end-use of geospatial information

3.3.1. Section 3 of the draft bill states that “[s]ave as otherwise provided in this Act, rules or regulations made thereunder, or with the general or special permission of the Security Vetting Authority, no person shall acquire geospatial imagery or data including value addition” and “[e]very person who has already acquired any geospatial imagery or data ... including value addition prior to coming of this Act into effect, shall within one year from the commencement of this Act, make an application alongwith requisite fees to the Security Vetting Authority.” This effectively makes it illegal to acquire and maintain ownership of geospatial information that has not been subjected to security vetting.

3.3.2. This draft bill does not apply just to geospatial information that may undermine national security but covers all manner of geospatial information. Modern geospatial technologies, embedded in everyday digital devices and intimately connected to various electronic products and services, from cars to mobile phones, result in the creation and acquisition of various kinds of geo-referenced information, ranging from geo-referenced photographs to locations shared with friends. Even ordinary users who are unknowingly looking at maps that contain sensitive geospatial information, which are illegal under the Official Secrets Act, commit an illegal act under the draft bill, because each user temporarily acquires such sensitive geospatial information on her/his digital device as part of the very act of browsing the map concerned. This clearly cannot be the intention of the bill. Thus we recommend deletion of the word “acquire.”

3.3.3. Further, the insertion of the phrase “including value addition” in both Sections 3(1) and 3(2) appears to suggest that all users who have created derivative products using geospatial information that includes sensitive data (that is, data related to Prohibited Places) may be held liable under this draft bill, even if these users have not themselves collected or created such sensitive geospatial information, which was part of the original geospatial information published by the source map agency. This too cannot be the intention of the bill. Thus, we recommend deletion of the phrase “including value addition.”

3.3.4. The definition of “Security Vetting of Geospatial Information” itself mentions that the process will include “screening of the credentials of the end-users and end-use applications, with the sole objective of protecting national security, sovereignty, safety and integrity.” (Section 2(1)(o)) This appears to indicate that all end-users of all electronic and analog services and products using geospatial information will have to be individually vetted before such services and products are used, which would cover a large proportion of the Indian population. This imposes an enormous and impractical burden on the Indian digital economy in particular, and the entire national economy in general, without improving national security. This too cannot be the intention of the draft bill. Thus, we recommend deletion of this phrase, to ensure that end-users are not covered by the law.

3.3.5. Given these specific characteristics of how modern geospatial technologies work, and how they provide a basis for various kinds of everyday use of electronic products and services, we would like to submit that the regulatory focus should be on large-scale and/or commercial dissemination, publication, or distribution of geospatial information, and not on the acts of acquiring, possessing, sharing, and using geospatial information. Further, the regulation in general should be aimed at the party owning the geospatial information in question, and not at the parties involved in its dissemination (say, Internet Service Providers) or in its generation or use (say, end-users).

3.4. Removal of journalistic, political, artistic, creative, and speculative depictions of India from the scope of Section 6

3.4.1. Section 6 of the draft bill states that “[n]o person shall depict, disseminate, publish or distribute any wrong or false topographic information of India including international boundaries through internet platforms or online services or in any electronic or physical form.” Section 15 imposes a penalty for such wrong depiction of maps of India.

3.4.2. Depictions of India that do not purport to accurately represent the international boundaries as recognised by the Indian government should not be penalised. For instance, a map published in a newspaper article about India’s border disputes, showing the incorrect claims that the Chinese government has made over Indian territory, would be penalised as “wrong or false topographic information of India”, since there is a clear intention to depict the boundary as claimed by China. Criminalising such journalism cannot be the legitimate intent of such a provision.

3.4.3. There are numerous instances of wilful depiction of inaccurate and inauthentic maps of India, with altered international borders, for political ends. For instance, there are often depictions of India which show territories within present-day Pakistan, Bangladesh, Bhutan, Nepal and Sri Lanka as part of an “Akhand Bharat.” Depictions of this sort should not be penalised either; penalising them would contradict the freedom of expression guaranteed under Article 19(1)(a) without being a reasonable restriction under Article 19(2).

3.4.4. Even depictions of India for purposes of speculative fiction would be penalised under this proposed bill unless they depict the official borders. This is clearly undesirable and would not be allowed as a reasonable restriction under Article 19(2).

3.4.5. Even geography students in schools and colleges who mis-draw the official map of India would be liable to penalties under the draft bill. This, plainly, cannot be the intention of the drafters of this bill. The creator of a rough and inaccurate tourist map of an Indian city could also be identified as committing a criminal act under the proposed bill, as she would be depicting “… wrong or false topographic information of India …”

3.4.6. In brief: merely depicting, disseminating, publishing or distributing any “wrong or false topographic information of India” should not be penalised. Unless a person publishes and widely circulates an incorrect map of India while claiming that it represents the official international boundaries of India, such depiction should not be penalised.

3.4.7. CIS recommends that the bill should instead state: “No person shall depict, disseminate, publish, or distribute any topographic information purporting to accurately depict the international boundaries of India as recognised by the Survey of India unless he is authorised to do so by the Surveyor General of India; provided that usage by any person of the international boundaries as is electronically and in print made available by the Survey of India shall be deemed to be usage that is authorised by the Surveyor General of India.”

3.5. Absence of Publicly Available and Openly Reusable Standardised National Boundary of India

3.5.1. Given the lack of reusable versions of maps of India, including of India’s official boundary as recognised by the Survey of India, it is impossible for people to accurately depict the boundary of India. We recommend that the bill require the Survey of India to publish all “Open Series Maps,” as defined in the National Mapping Policy, 2005, including maps depicting the official international and subnational political and administrative boundaries of India, using open geospatial standards and under an open licence that allows such geospatial data to be used by citizens and all companies.

3.6. Remove Requirement for Prior License for Acquisition, Dissemination, Publication, or Distribution of Geospatial Information

3.6.1. Section 9 of the draft bill refers to “any person who wants to acquire, disseminate, publish, or distribute any geospatial information of India” (emphasis added), which can be interpreted as the need for a prior license before any person decides to acquire (including creation, collection, generation, and buying) geospatial information. This creates at least two problems:

  • modern digital geospatial technologies have enabled everyday digital devices (like smartphones) to instantaneously acquire, disseminate, publish, and distribute geospatial information all the time, whenever the person holding that device is looking at online digital maps, say Google Maps, or sharing their location with friends, online platforms and services, and service providers (both local and foreign); and

  • the requirement of a prior license involves payment of a “requisite fees” to the Security Vetting Authority, which may act as an arbitrary (since the fee might be based upon the volume of geospatial information to be acquired, which one may not be able to fully determine before acquiring it) and effective barrier to the acquisition, dissemination, publication, or distribution of geospatial information even when it does not violate the concerns of “security, sovereignty, and integrity” in any manner. This requirement also impedes competition in the market, because new entrants to the geospatial industry may not have enough upfront capital to procure licenses.

3.6.2. Further, the requirement of a prior license for acquiring geospatial information does not seem to be a crucial component of the security vetting process, since the geospatial information, once acquired by the agency concerned, is in any case directed to be shared with the Security Vetting Authority for the necessary expunging of sensitive or incorrect information.

3.6.3. We recommend revision of this section so that no prior license and/or permission is required for the collection, acquisition, distribution, and/or use of geospatial information; instead, a framework may be established for monitoring published geospatial information to ensure that geospatial information pertaining to “Prohibited Places,” as defined under the Official Secrets Act, is not made available to the general public by any person or entity under Indian jurisdiction, including, for instance, Indian subsidiaries and branches of foreign corporations. Such a framework must address not the end-users of such geospatial information, but its publishers.

3.7. Unenforceable jurisdictional scope

3.7.1. Section 5 of the draft bill states “[s]ave as otherwise provided in any international convention, treaty or agreement of which India is signatory or as provided in this Act, rules or regulations made thereunder, or with the general or special permission of the Security Vetting Authority, no person shall, in any manner, make use of, disseminate, publish or distribute any geospatial information of India, outside India, without prior permission from the Security Vetting Authority.”

3.7.2. In compliance with this section, domestic and foreign companies and platforms will be required to obtain permission from the Security Vetting Authority of India prior to publishing, distributing, etc., geospatial information. Similarly, in its preliminary chapter, the draft bill brings any person who commits an offence beyond India within the scope of the bill. The bill thus proposes extraterritorial applicability of its provisions, yet the extent and method of enforcing them in other jurisdictions remain unclear.

3.8. Negative implications for rights of citizens

3.8.1. There are a number of sections in the draft bill which have negative implications for the rights of all users and potentially impinge on the constitutional rights of Indian citizens. These include:

a. Section 18(2) which empowers the Enforcement Authority to conduct a search without a judicial search order;

b. Section 17(3) which empowers the Enforcement Authority to conduct undefined surveillance and monitoring to enforce the Act;

c. Chapter (V) which penalises individuals with fines of Rs. 1-100 crore and/or seven years in prison for an offence under the Act;

d. Section 22 which allows the government to take ownership of a person’s land if a financial penalty has not been paid;

e. Section 30(1) which holds, in the case of an offence committed by a company, every person in charge of and responsible for the conduct of the business of the company guilty and liable.

3.9. Overly broad powers and responsibilities of the Apex Committee and Enforcement Authority, and lack of adequate oversight

3.9.1. Section 7(2) states that “[t]he Apex Committee shall do all such acts and deeds that may be necessary or otherwise desirable to achieve the objectives of the Act, including the following functions:...” The wording in this section is broad and open ended, and allows for the responsibilities of the Apex Committee to be expanded without clear oversight of such expansion.

3.9.2. Similarly, Section 17 establishes an “Enforcement Authority” for the purpose of carrying out surveillance and monitoring for enforcement of the draft bill. The Authority has been given a number of powers, including the power of inquiry, the power to adjudicate, and the power to give directions. These powers have direct implications for the rights of individuals, yet the Authority is not subject to oversight or accountability requirements.

3.9.3. We recommend that the powers and responsibilities of the Apex Committee and Enforcement Authority be narrowly defined in the draft bill itself, limited by the principle of necessity, and subject to independent oversight and accountability requirements.

3.10. Remove the Security Vetting Authority’s power of delegation

3.10.1. Section 8(3) allows the Security Vetting Authority to delegate such powers and functions as it may deem necessary, except the power to grant a licence, to any constituent member of the Authority, subordinate committee, or officer. In practice, this will allow security vetting to be done by another institution and risks the potential involvement of private agencies and/or quasi-governmental bodies.

3.10.2. We recommend that the power of delegation should not be granted to the Security Vetting Authority.

3.11. Negative implications for innovation and India’s digital economy

3.11.1. Section 3 of the draft bill states “[s]ave as otherwise provided in this Act, rules or regulations made thereunder, or with the general or special permission of the Security Vetting Authority, no person shall acquire geospatial imagery or data including value addition of any part of India either through any space or aerial platforms such as satellite, aircrafts, airships, balloons, unmanned aerial vehicles or terrestrial vehicles, or any other means whatsoever”. This effectively ensures that each and every user of geospatial data, products, services, and solutions — since all of these either include or are derivatives of geospatial information — would require prior permission from the Security Vetting Authority. This will substantially affect the existing and emerging digital economy in particular, and the entire economy in general.

3.11.2. Further, section 9 of the draft bill mandates that any person applying to have geospatial information vetted must pay a fee. As the provisions of the bill mandate that users approach the Security Vetting Authority for a licence to use geospatial information, this will impose an immense burden on all users of digital devices in and outside of India. CIS submits that the fee for security vetting should be removed.

3.12. Disproportionate penalty for acquisition of geospatial information

3.12.1. Section 12 states: “[p]enalty for illegal acquisition of geospatial information of India.- Whoever acquires any geospatial information of India in contravention of section 3, shall be punished with a fine ranging from Rupees one crore to Rupees one hundred crore and/or imprisonment for a period upto seven years.” Seven years in prison is disproportionate to the offence of acquiring geospatial information without vetting by the authority concerned. This is particularly true given the broad and all-encompassing definition of “geospatial information” in the draft bill, and the fact that the bill applies to individuals and companies both within and outside of India.

3.13. Improper and inconsistent usage of terms in the draft bill

3.13.1. Section 4 of the draft bill regulates the visualization, publication, dissemination and distribution of geospatial information of India, while section 5 regulates the use, dissemination, publication, and distribution of geospatial information outside of India. The term “visualization” remains undefined, and the act of visualization is regulated only in section 4. Section 6 of the draft bill uses the term “depict”, which is likewise undefined. We submit that in this context the terms are interchangeable, and that the draft bill should either define them expressly to avoid ambiguity in interpretation, or consistently use only one of them throughout.

3.13.2. Section 11(3) of the draft bill requires licensees to “[d]isplay the insignia of the clearance of the Security Vetting Authority on the security-vetted geospatial information by appropriate means such as water-marking or licence as relevant, while disseminating or distributing of such geospatial information.” We observe that geospatial information includes, inter alia, graphical representations and location coordinates. While the former may be represented visually on an “as is” basis after the completion of vetting, the latter may be used to perform other complex functions at the “back-end” (i.e., the vendor-facing side) of various technologies. Water-marking and/or displaying an insignia would therefore place an undue burden on the licensee, depending on the kind of platform, service, or individual involved.
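The asymmetry between imagery and bare coordinate data can be sketched in a few lines of Python. This is a toy illustration only; the raster, insignia pattern, and field names are hypothetical and are not drawn from the draft bill:

```python
# A toy 8x8 grayscale "image" as nested lists (no imaging library needed).
W, H = 8, 8
image = [[128 for _ in range(W)] for _ in range(H)]

# Hypothetical corner-mark pixel pattern standing in for an insignia.
INSIGNIA = [(0, 0), (0, 1), (1, 0)]

def stamp_insignia(img):
    """Overlay a visible mark in the corner of a raster image.

    Trivial for imagery rendered "as is", but meaningless for bare
    coordinate data consumed at the back-end of a service.
    """
    marked = [row[:] for row in img]  # copy so the original is untouched
    for y, x in INSIGNIA:
        marked[y][x] = 255
    return marked

marked = stamp_insignia(image)

# Location coordinates, by contrast, have no pixels to stamp:
coords = {"lat": 28.6139, "lon": 77.2090}  # nothing here to watermark
```

The sketch makes the submission's point concrete: an insignia attaches naturally to a visual artefact, but a coordinate pair that is only ever processed programmatically offers no surface on which to display one.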

3.14. Lack of reference to technical implementation guidance

3.14.1. The regulation, harmonisation, and standardisation of the collection, generation, dissemination, etc., of geospatial information is a complex process that goes beyond security vetting and will require extensive technical implementation guidance from the government. At a minimum, this could include quality assurance considerations and standard operating procedures, yet the draft bill makes no reference to the need for technical standards or guidance.

Comments prepared by Sumandro Chattapadhyay, Adya Garg, Pranesh Prakash, Anubha Sinha, and Elonnai Hickok. Submitted by the Centre for Internet and Society, on June 3, 2016.

Smart City Policies and Standards: Overview of Projects, Data Policies, and Standards across Five International Smart Cities

by Kiran A. B., Elonnai Hickok and Vanya Rakesh — last modified Jun 11, 2016 01:29 PM
This blog post reviews five smart cities across the globe, namely Singapore, Dubai, New York City, London and Seoul, and the data policies and standards they have adopted. The research also seeks to point out the similarities, differences and best practices in the development of smart cities across jurisdictions.

 

Download the brief: PDF.


Introduction

Smart City as a concept is evolutionary in nature, and key elements such as Information and Communication Technology (ICT), digitization of services, the Internet of Things (IoT), open data, big data, social innovation, and knowledge are intrinsic to defining a Smart City [1].

A Smart City, as a “system of systems”, can potentially generate vast amounts of data, especially as cities install more sensors, gain access to data from sources such as mobile devices, and government and other agencies make more data accessible. Consequently, Big Data techniques and concepts are highly relevant to the future of Smart Cities. As Kenneth Cukier, Senior Editor of Digital Products at The Economist, has noted, Big Data techniques can be used to enhance a number of processes essential to cities: for example, big data can be used to spot business trends, determine the quality of research, prevent diseases, track legal citations, combat crime, and determine real-time roadway traffic conditions [2]. In this sense, data is the lifeblood of a Smart City, and its availability, use, cost, quality, analysis, associated business models and governance are all areas of interest for a range of actors within a smart city [3].

This blog reviews five Smart Cities, namely Singapore, Dubai, New York City, London and Seoul. In doing so, the research seeks to point out the similarities, differences and best practices in the development of smart cities across jurisdictions. To achieve this, the research reviews:

  • The definition of a Smart City in the given context or project (if any).
  • Existing policies/regulations around data, or the lack thereof.
  • Each city's adherence to international standards, along with an update on the current status of its Smart City programme.

 

Singapore

Introduction

The Smart Nation programme in Singapore was launched on 24th November, 2014. The programme is being driven by the Infocomm Development Authority of Singapore (IDA), through which Singapore seeks to harness ICT, networks and data to support improved livelihoods, stronger communities and the creation of new opportunities for its residents [4]. According to the IDA, a Smart Nation is a city where “people and businesses are empowered through increased access to data, more participatory through the contribution of innovative ideas and solutions, and a more anticipatory government that utilises technology to better serve citizens’ needs” [5]. The Smart Nation programme is driven by a designated office in the Prime Minister’s Office [6]. As a core component of the programme, the Smart Nation Platform has been developed as its supporting technical architecture. This Platform enables greater pervasive connectivity, better situational awareness through data collection, and efficient sharing of and access to collected sensor data, allowing public bodies to use such data to develop policy and practical interventions [7]. Such access would allow for anticipatory governance, a goal of the Smart Nation programme, as noted by Dr. Yaacob Ibrahim, Minister for Communications and Information: “Insights gained from this data would enable us to better anticipate citizens’ needs and help in better delivery of services” [8].

Status of the Project

The Smart Nation Programme is an ongoing initiative, built on the earlier Intelligent Nation 2015 (iN2015) masterplan. The plan involves putting in place the infrastructure, policies, ecosystem and capabilities needed to enable a Smart Nation, by adopting a people-centric approach [9]. Co-creation solutions adopted by the Government include:
  • Development of mobile apps to facilitate communication between the public and the providers of public services.
  • Organization of hackathons by government agencies or corporations, in collaboration with schools and industry partners, to ideate and develop solutions to real-world challenges.
  • Adoption of smart mobility measures to create a more seamless transport experience and provide greater access to real-time transport information, so that citizens can better plan their journeys.
  • Introduction of smart technologies to housing estates [10].

Policies and Regulations

The Smart Nation plan derives its legitimacy from the Constitution of Singapore, which makes the Prime Minister responsible for the ‘Smart Nation’ blueprint through the statutory ‘Smart Nation’ Programme Office [11]. Singapore has a comprehensive data protection law, the Personal Data Protection Act 2012, which sets out rules governing the collection, use, disclosure and care of personal data. The Personal Data Protection Commission of Singapore has committed to work closely with the private sector, and to support the Smart Nation vision on data privacy and the cyber security ecosystem [12] [13].

Towards achieving the Smart Nation vision, the government has also promoted the use of open data. In 2015, the Department of Statistics made a vast amount of data (across multiple themes such as transport, infocomm and population) freely available to the public, in order to encourage innovation and facilitate the Smart Nation [14]. Prior to this initiative, the government had adopted the Open Data Policy in 2011, opening public data for analysis, research and application development [15]. The concept of Virtual Singapore, part of the Smart Nation initiative, has been developed to adopt and simulate solutions on a virtual platform using big data analytics [16].

Adoption of International Standards

The Smart Nation initiative follows the standards laid under the purview of the Singapore Standards Council (SSC). It specifies three types of Internet of Things (IoT) Standards – sensor network standards (TR38 - for public areas & TR40 - for homes), IoT foundational standards (common set of guidelines for IoT requirements and architecture, information and service interoperability, security and data integrity) and domain-specific standards (healthcare, mobility, urban living, etc.) [17].

Singapore is part of ISO/IEC JTC 1/WG7 (Sensor Networks) and ISO/IEC JTC 1/WG10 (Internet of Things) [18]. Singapore's IT standards abide by the international standards defined by the ISO, ITU, etc. Singapore is a member of many international standards forums (see the Singapore International Standards Committee), including JTC1/WG9 (Big Data), JTC1/WG10 (Internet of Things) and JTC1/WG11 (Smart Cities).

 

Dubai, United Arab Emirates

Introduction

The Dubai Smart City strategy was launched in 2015, as part of the Dubai Plan 2021 vision [19]. Dubai Plan 2021 describes the future of Dubai evolving through holistic and complementary perspectives, starting with the people and the society, and places the government as the custodian of the city's development. Within the Plan, the smart city theme envisions a fully connected and integrated infrastructure platform that enables easy mobility for all residents and tourists, and provides easy access to all economic centres and social services, in line with the world's best cities [20]. Central to the smart city platform are data and data analytics, particularly cross-functional data and big data techniques that give a complete view of the city [21]. As envisioned, the Dubai Data portal would provide a gateway empowering relevant stakeholders to understand the nuances of the city and pursue the questions that will yield the greatest impact from the city's data [22]. The platform will build on current data and existing services, initiatives, and networks to identify opportunities for a smart city [23]. The Smart City Plan also includes a framework for aligning the districts of Dubai with the Smart City vision and dimensions [24].

The Smart Dubai roadmap 2015 provides a consolidated report on planned smart city services and the status and stage of their implementation, e.g. Smart Grid, Mobile Payment, Smart Water, health applications, public Wi-Fi, municipal services, e-traffic solutions, etc. [25].

Status of the Project

The Smart Dubai strategy is envisioned to be completed by 2020 and is currently ongoing. The first phase of the Smart Dubai masterplan is expected to end by 2016. Between 2017 and 2019, the plan aims to deliver new initiatives and services. The second phase of the masterplan is expected to be completed by 2020 [26].

Policies and Regulations

The Smart City Plan is being driven by the Dubai Smart City Office, established under Law No. (29) of 2015 on the establishment of the Dubai Smart City Office; Law No. (30) of 2015 on the establishment of the Dubai Smart City Establishment; Decree No. (37) of 2015 on the formation of the Board of the Dubai Smart City Office; and Decree No. (38) of 2015 appointing a Director General for the Office. The Office will develop overall policies and strategic plans, supervise the smart transformation process, and approve joint initiatives, projects and services [27]. In addition, the Dubai Open Data Law was issued to complete the legislative framework for transforming Dubai into a Smart City [28]. This law will enable the sharing of non-confidential data between public entities and other stakeholders.

Adoption of International Standards

In 2015, the Smart Dubai Executive Committee entered into an agreement with the International Telecommunication Union (ITU) to adopt the performance indicators developed by the ITU Focus Group on Smart Sustainable Cities and to evaluate their feasibility [29]. The Focus Group is working towards identifying global best practices for the development of smart cities [30].

 

New York City, United States of America

Introduction

The ‘One New York Plan’ (OneNYC), announced in 2015, is a comprehensive plan for a sustainable and resilient city. It includes the adoption of digital technology and considers the importance of the role of data in transforming every aspect of the economy, communications, politics, and individual and family life [31]. Furthermore, through its publication 'Building a Smart + Equitable City', the Mayor's Office of Technology and Innovation (MOTI) describes efforts to leverage new technologies to build a smart city.

Accordingly, the plan seeks to improve lives by establishing principles and strategic frameworks to guide connected-device and Internet of Things (IoT) implementation, with MOTI serving as the coordinating entity for new technology and IoT deployments across all City agencies; by collaborating with academia and the private sector on innovative pilot projects; and by partnering with municipal governments and organizations around the world to share best practices and leverage the impact of technological advancements [32].

Status of the Project

OneNYC represents a unified vision for a sustainable, resilient, and equitable city developed with cross-cutting interagency collaboration, public engagement, and consultation with leading experts in their respective fields. The Mayor’s Office of Sustainability oversees the development of OneNYC and now shares responsibility with the Mayor’s Office of Recovery and Resiliency for ensuring its implementation [33].

Policies and Regulations

As per Local Law 11 of 2012, each City entity must identify and ultimately publish all of its digital public data for citywide aggregation and publication by 2018. In adherence to this law, the NYC Open Data Plan requires the data to be updated annually [34].

The LinkNYC initiative, one of the key projects to make New York a ‘smart’ city, aims to connect everyone through a city-wide Wi-Fi network. It will retrofit payphones as kiosks providing high-speed Wi-Fi hotspots and charging stations for increased connectivity [35]. Data privacy in the initiative is addressed through a customer-first privacy policy, which treats users' privacy as a priority and commits not to sell any personal information or share it with third parties for their own use. LinkNYC will use anonymized, aggregate data to make the system more efficient and to develop insights that improve the Link experience [36].
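LinkNYC's actual data pipeline is not public; the following is only a minimal sketch, in Python, of the general technique implied by "anonymized, aggregate data" (salted hashing of identifiers followed by aggregation). The salt, device identifiers, and kiosk names are all invented for illustration:

```python
import hashlib
from collections import Counter

SALT = b"rotate-this-secret"  # hypothetical salt, rotated periodically

def pseudonymise(device_id: str) -> str:
    """Replace a raw device identifier with a truncated salted hash."""
    return hashlib.sha256(SALT + device_id.encode()).hexdigest()[:12]

# Hypothetical raw session log: (device_id, kiosk_id)
sessions = [
    ("aa:bb:cc:01", "kiosk-23"),
    ("aa:bb:cc:02", "kiosk-23"),
    ("aa:bb:cc:01", "kiosk-57"),
]

# Anonymised view: raw identifiers never leave this step.
anonymised = [(pseudonymise(d), k) for d, k in sessions]

# Aggregate view: sessions per kiosk, with no identifiers at all.
per_kiosk = Counter(k for _, k in anonymised)  # kiosk-23: 2, kiosk-57: 1
```

The design point is that analytics useful to the operator (load per kiosk) survive the aggregation step, while the raw identifiers that could single out a user do not.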

Adoption of International Standards

The ANSI Network on Smart and Sustainable Cities (ANSSC) is a forum for information sharing and coordination on voluntary standards, conformity assessment and related activities for smart and sustainable cities in the US [37]. The US is a signatory of the ISO/ITU defined standards on smart cities [38].

 

London, United Kingdom

Introduction

The Smart London Plan was unveiled in 2013 by the Mayor of London. The plan is being driven by the Greater London Authority, with the advice of the Smart London Board. The Smart London Plan envisions ‘using the creative power of new technologies to serve London and improve Londoners’ lives’ [39]. ‘Smart London’ is about harnessing new technology and data so that businesses, Londoners and visitors experience the city in a better way, without bureaucratic hassle and congestion. Smart London seeks to improve the city as a whole, focusing on the macro functions that result from the interplay between city subsystems: from local labour markets to financial markets, and from local government to education, healthcare, transportation and utilities. According to the strategy documents, a smarter London recognises and employs data as a service, and will leverage data to enable informed decision-making and the design of new activities.

Status of the Project

This project is currently ongoing. Since its formation in March 2013, the Smart London Board has been advising the Greater London Authority. The Plan sits within the overarching framework of the Mayor's Vision 2020 [40].

Policies and Regulations

The Smart London Plan incorporates the existing open data platform, the ‘London DataStore’. The rules and guidelines for this platform are defined by the Greater London Authority, and include working with public and private sector organisations to create, maintain and utilise the platform; enabling common data standards; identifying and prioritising the data needed to address London's growth challenges; and establishing a Smart London Borough Partnership to encourage boroughs to free up London's local-level data. Privacy is also protected, and data is used transparently, to ensure that its use is managed in the best interests of the public rather than of private enterprise. The Smart London Plan aims to build on this existing datastore to identify and publish data that addresses specific growth challenges, with an emphasis on working with companies and communities to create, maintain, and use this data [41].

The Open Data White Paper, issued by the Office of Paymaster General, seeks to build a transparent society by releasing public data through open data platforms and leveraging the potential of emerging technologies [42]. The Greater London Authority processes personal data in accordance with the Data Protection Act 1998 [43].

Adoption of International Standards

The British Standards Institution (BSI) has established Smart City standards and is associated with the ISO Advisory Group on smart city standards. The UK subscribes to the BSI standards for smart cities and has adopted them [44]. These standards and publications help address the various issues involved in a city becoming a smart city.

Further, the Smart London Plan incorporates open data standards in accordance with London DataStore [45]. Various government reports – Smart Cities background paper, Open Data White Paper, etc., have suggested the use of standards related to Internet of Things (IoT), open data standards, etc [46].

 

Seoul, Korea

Introduction

Smart Seoul 2015 was announced in June 2011 by the Seoul Metropolitan Government, and envisions integrating IT services into every field, including administration, welfare, industry and daily living. Through this, the Seoul Metropolitan Government planned to create a Seoul that uses smart technologies by 2015 [47]. Towards this, the Seoul Metropolitan Government plans to make use of Big Data in policy development and, through scientific analytics, to provide customised administrative services and reduce wasteful spending. The government is also utilising Big Data to analyse trends emerging from existing services [48]. Examples of big data projects the government has undertaken include the Taxi Matchmaking Project, which analyses data on taxi stands and passengers, and the Owl Bus [49], which maps bus routes.

Status of the Project

Building on Smart Seoul 2015, the Seoul Metropolitan Government plans to pursue the 'Global Digital Seoul 2020 – New Connections, Different Experiences' vision over the next five years. In this multi-objective plan, it aims to establish a 'Big Data campus' to foster win-win cooperation among the public and private sectors, industry and universities [50].

Policies and Regulations

Smart Seoul 2015 aims to create a ‘Seoul Data Mart’, an open platform that makes public information available for data processing [51]. Furthermore, Seoul has opened the Seoul Open Data Plaza [52], an online channel that shares all of Seoul's public data with citizens, such as real-time bus operation schedules, subway schedules, non-smoking areas, locations of public Wi-Fi services, shoeshine shops, and facilities for disabled people. The information registered in the Seoul Open Data Plaza is provided in an open API format.
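The exact response schema of the Seoul Open Data Plaza is not reproduced here; as a sketch of what consuming such an open API might look like, the snippet below parses a hypothetical JSON payload (invented field names, in the general shape of typical open-data responses) for night-bus routes:

```python
import json

# Hypothetical JSON payload resembling an open-data API response.
# The actual Seoul Open Data Plaza schema may differ.
payload = """
{
  "busRouteInfo": {
    "list_total_count": 2,
    "row": [
      {"route": "N26", "first_bus": "23:50", "last_bus": "03:50"},
      {"route": "N61", "first_bus": "23:40", "last_bus": "04:10"}
    ]
  }
}
"""

data = json.loads(payload)
rows = data["busRouteInfo"]["row"]

# Example consumption: index routes by name for quick lookup.
schedule = {r["route"]: (r["first_bus"], r["last_bus"]) for r in rows}
print(schedule["N26"])  # ('23:50', '03:50')
```

Publishing data in a machine-readable format like this is what lets third-party developers build services (journey planners, accessibility maps, and so on) on top of the city's data without any special access arrangement.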

South Korea has a comprehensive data privacy law, the Personal Information Protection Act, 2011. The law sets out data protection rules and principles, including obligations on the data controller and the consent of data subjects, rights to access personal data or object to its collection, and security requirements. It also covers cookies and spam, data processing by third parties, and the international transfer of data [53].

International Standards

Korea has adopted smart city standards in the development of its smart cities [54]. In particular, Korea has adopted ISO/TC 268, which focuses on sustainable development in communities, and participates in one working group developing city indicators and another developing metrics for smart community infrastructures [55].

 

Conclusion

The smart city projects studied are at different levels of implementation and have both similarities and differences. Below is an analysis of some of the key similarities and differences between smart city projects, a comparison of these points to India’s 100 Smart City Mission, and a summary of best practices around the development of smart city frameworks.

Nodal Agency

All the cities studied have nodal agencies driving their smart city initiatives, and many have policies in place backing these initiatives. For example, while the Smart Nation programme in Singapore is driven by the Infocomm Development Authority, London's smart city project is governed by the Greater London Authority. The Smart Seoul project is governed by the Seoul Metropolitan Government, and in New York the Mayor's Office of Technology and Innovation serves as the coordinating entity for new technology and IoT deployments across all City agencies. In India, the nodal agency driving the 100 Smart Cities Mission is the Ministry of Urban Development. Implementation of the Mission at the city level will be done by a Special Purpose Vehicle (SPV), a limited company that will plan, appraise, approve, release funds for, implement, manage, operate, monitor and evaluate the Smart City development projects.

Policies

Many of the cities have open data policies and data protection policies that pertain to their Smart City initiatives. In Dubai, the Dubai Open Data Law has been issued to complete the legislative framework for transforming Dubai into a Smart City, and the Smart City Establishment will develop policies for the project. New York has an Open Data Plan in place, and LinkNYC will use anonymized, aggregate data to address users' data privacy. In London, the Smart London Plan incorporates the existing open data platform, the ‘London DataStore’, the rules for which are defined by the Greater London Authority, which also ensures privacy and the transparent use of data by processing personal data in accordance with the Data Protection Act 1998. In Seoul, a ‘Seoul Data Mart’ will be established to make public information available for data processing, and the Seoul Open Data Plaza is an existing online channel for sharing all of Seoul's public data with citizens; South Korea also has a comprehensive data privacy law in place. In Singapore, the Personal Data Protection Commission has committed to supporting the Smart Nation vision on data privacy and the cyber security ecosystem, and the government has also promoted the use of open data. It can be said that these cities, with clearly laid-out policies to support and guide their projects, have well-planned ecosystems for the regulation and governance of systems, technologies and cities. All the cities have incorporated open data into their smart city plans, and many have developed guidelines for its use. All share similar goals of enhancing the lives of citizens and developing anticipatory governance; however, there appears to be little discussion of the need to amend existing law, or enact new law, around privacy and data protection in light of the data collection that smart cities entail.
In India, no enabling legislation or policy has been formulated by the Government, apart from the “Mission Statement and Guidelines”, which provides details about the project and its vision but neither defines a ‘smart city’ nor identifies the applicable laws and policies. No information is publicly available regarding the deployment of open data, the use of specific technologies like cloud and big data, or the relevant policies and applicable laws. Unlike India, all the cities studied recognise the importance of big data techniques in enabling smart city visions, technologies and policies. On the lines of these cities, India must address the need for an open data framework for the 100 Smart Cities Mission, to enable the sharing of non-confidential data between public entities and other stakeholders. This requires the Government to coordinate the open data architecture of the cities with the existing open data framework in India, such as the National Data Sharing and Accessibility Policy, 2012. The use of technologies such as IoT and Big Data entails access to open data, bringing another policy area into the Mission's ambit; the identification and development of open standards for IoT must also be looked at. Further, as data in smart cities will be generated, collected, used, and shared by both the public and private sectors, it is essential that India's existing data protection standards and regime be amended to extend data regulation beyond bodies corporate and to oversee the collection and use of data by the Government and its agencies.

Standards

In Singapore, the Smart Nation initiative follows the standards laid down under the purview of the Singapore Standards Council (SSC), and Singapore's IT standards abide by the international standards defined by the ISO, ITU, etc. The country is also a member of many international standards forums (see the Singapore International Standards Committee), including JTC1/WG9 (Big Data), JTC1/WG10 (Internet of Things) and JTC1/WG11 (Smart Cities). In Dubai, the Smart Dubai Executive Committee has entered into an agreement with the International Telecommunication Union (ITU) to adopt the performance indicators developed by the ITU Focus Group on Smart Sustainable Cities and evaluate their feasibility. In the US, the ANSI Network on Smart and Sustainable Cities (ANSSC) is a forum on smart and sustainable cities, and the US is a signatory to the ISO/ITU-defined standards on smart cities. In the UK, the British Standards Institution (BSI) has established Smart City standards and is associated with the ISO Advisory Group on smart city standards; the UK subscribes to the BSI standards for smart cities, and the Smart London Plan incorporates open data standards in accordance with the London DataStore. Korea has adopted ISO/TC 268, which focuses on sustainable development in communities, and also has one working group developing city indicators and another developing metrics for smart community infrastructures. In India, by contrast, the Bureau of Indian Standards (BIS) has undertaken the task of formulating standardised guidelines for central and state authorities in the planning, design and construction of smart cities, by setting up a technical committee under the Civil Engineering Department of the Bureau. Adoption of these standards by implementing agencies would, however, be voluntary, and they are intended to complement internationally available documents in this area.
Also, the Global Cities Institute (GCI) undertook a mission in 2015 to align with the Bureau of Indian Standards on the development of smart city standards, and to forge relationships with Indian cities in light of ISO 37120. It can be said that India has not yet adopted international standards, but is in the process of developing national standards and adopting key international ones. Unlike the other cities, which are adopting national, ISO, or ITU standards, Indian cities are yet to adopt standards for the regulation of their future smart cities.

Notes for India

India is in the nascent stages of developing smart cities across the country. Drawing from the practices adopted by cities across the world, smart cities in India should adopt strong regulatory and governance frameworks covering technical standards, open data, data security and data protection. These policies will be essential to ensuring the sustainability and efficiency of smart cities while safeguarding individual rights. Some of these policies are already in place, such as India's Open Data Policy and the data protection standards under section 43A of the ITA. It will be important to see how these policies are adapted and applied to the context of smart cities.

 

References

[1] Smart Cities and Transparent Evolution, http://www.posterheroes.org/Posterheroes3/_mat/PH3_eng.pdf.

[2] "Data, Data Everywhere." The Economist, February 25, 2010. Accessed March 17, 2016, http://www.economist.com/node/15557443.

[3] "Smart Cities." ISO. 2015. Accessed March 17, 2016, http://www.iso.org/iso/smart_cities_report-jtc1.pdf.

[4] Transcript of Prime Minister Lee Hsien Loong's speech at Smart Nation launch on 24 November, http://www.pmo.gov.sg/mediacentre/transcript-prime-minister-lee-hsien-loongs-speech-smart-nation-launch-24-november.

[5] Smart Nation Vision, https://www.ida.gov.sg/Tech-Scene-News/Smart-Nation-Vision.

[6] Smart Nation, http://www.pmo.gov.sg/smartnation.

[7] Smart Nation Platform, https://www.ida.gov.sg/~/media/Files/About%20Us/Newsroom/Media%20Releases/2014/0617_smartnation/AnnexA_sn.pdf.

[8] Transcript of Prime Minister Lee Hsien Loong's speech at Smart Nation launch on 24 November, https://www.ida.gov.sg/blog/insg/featured/singapore-lays-groundwork-to-be-worlds-first-smart-nation/.

[9] Prime Ministers’ Office Singapore-Smart Nation, http://www.pmo.gov.sg/smartnation.

[10] Prime Ministers’ Office Singapore-Smart Nation, http://www.pmo.gov.sg/smartnation.

[11] Constitution of the Republic of Singapore (Responsibility of the Prime Minister) Notification 2015, http://statutes.agc.gov.sg/aol/search/display/view.w3p;page=0;query=Status%3Acurinforce%20Type%3Aact,sl%20Content%3A%22smart%22;rec=4;resUrl=http%3A%2F%2Fstatutes.agc.gov.sg%2Faol%2Fsearch%2Fsummary%2Fresults.w3p%3Bquery%3DStatus%253Acurinforce%2520Type%253Aact,sl%2520Content%253A%2522smart%2522;whole=yes.

[12] Personal Data Protection Singapore-Annual Report 2014-15, https://www.pdpc.gov.sg/docs/default-source/Reports/pdpc-ar-fy14---online.pdf.

[13] Balancing Innovation and Personal Data Protection, https://www.ida.gov.sg/Tech-Scene-News/Tech-News/Digital-Government/2015/9/Balancing-innovation-and-personal-data-protection.

[14] Department of Statistics Singapore- Free Access to More Data on the SingStat Website from 1 March 2015, http://www.singstat.gov.sg/docs/default-source/default-document-library/news/press_releases/press27022015.pdf.

[15] Singapore Marks 50th Birthday With Open Data Contest, https://blog.hootsuite.com/singapore-open-data/.

[16] Virtual Singapore - a 3D city model platform for knowledge sharing and community collaboration, http://www.sla.gov.sg/News/tabid/142/articleid/572/category/Press%20Releases/parentId/97/year/2014/Default.aspx.

[17] Internet of Things (IoT) Standards Outline to Support Smart Nation Initiative Unveiled, http://www.spring.gov.sg/NewsEvents/PR/Pages/Internet-of-Things-(IoT)-Standards-Outline-to-Support-Smart-Nation-Initiative-Unveiled-20150812.aspx.

[18] Information Technology Standards Committee, https://www.itsc.org.sg/technical-committees/internet-of-things-technical-committee-iottc and https://www.ida.gov.sg/~/media/Files/Infocomm%20Landscape/iN2015/Reports/realisingthevisionin2015.pdf.

[19] Government of Dubai-2021 Dubai Plan-Purpose, http://www.dubaiplan2021.ae/the-purpose/.

[20] Government of Dubai-2021 Dubai Plan, http://www.dubaiplan2021.ae/dubai-plan-2021/.

[21] Smart Dubai, http://www.smartdubai.ae/foundation_layers.php.

[22] The Internet of Things: Connections for People’s happiness, http://www.smartdubai.ae/story021002.php.

[23] Smart Dubai - Current State, http://www.smartdubai.ae/current_state.php.

[24] Smart Dubai - District Guidelines, http://smartdubai.ae/districtguidelines/Smart_Dubai_District_Guidelines_Public_Brief.pdf.

[25] See: http://roadmap.smartdubai.ae/search-services-public.php and http://roadmap.smartdubai.ae/search-initiatives-public.php.

[26] Smart Dubai-Smart District Guidelines, http://smartdubai.ae/districtguidelines/Smart_Dubai_District_Guidelines_Public_Brief.pdf.

[27] Dubai Ruler issues new laws to further enhance the organisational structure and legal framework of Dubai Smart City, https://www.wam.ae/en/news/emirates/1395288828473.html.

[28] See: http://slc.dubai.gov.ae/en/AboutDepartment/News/Lists/NewsCentre/DispForm.aspx?ID=147&ContentTypeId=0x01001D47EB13C23E544893300E8367A23439 and http://www.smartdubai.ae/dubai_data.php.

[29] Dubai first city to trial ITU key performance indicators for smart sustainable cities, http://www.itu.int/net/pressoffice/press_releases/2015/12.aspx#.VtaYtlt97IU.

[30] Smart Dubai Benchmark Report 2015 Executive Summary, http://smartdubai.ae/bmr2015/methodology-public.php.

[31] Building a Smart + Equitable City, http://www1.nyc.gov/assets/forward/documents/NYC-Smart-Equitable-City-Final.pdf

[32] Building a Smart + Equitable City, http://www1.nyc.gov/site/forward/innovations/smartnyc.page.

[33] One New York: The Plan for a Strong and Just City, http://www1.nyc.gov/html/onenyc/about.html

[34] Open Data for All, http://www1.nyc.gov/assets/home/downloads/pdf/reports/2015/NYC-Open-Data-Plan-2015.pdf.

[35] 7 public projects that are turning New York into a “smart city”, http://www.builtinnyc.com/2015/11/24/7-projects-are-turning-new-york-futuristic-technology-hub.

[36] LinkNYC, https://www.link.nyc/faq.html#privacy.

[37] ANSI Network on Smart and Sustainable Cities, http://www.ansi.org/standards_activities/standards_boards_panels/anssc/overview.aspx?menuid=3.

[38] IoT-Enabled Smart City Framework, http://publicaa.ansi.org/sites/apdl/Documents/News%20and%20Publications/Links%20Within%20Stories/IoT-EnabledSmartCityFrameworkWP20160213.pdf.

[39] Smart London (UK) Plan: Digital Technologies, London and Londoners, http://munkschool.utoronto.ca/ipl/files/2015/03/KleinmanM_Smart-London-UK-v5_30AP2015.pdf.

[40] Smart London Plan, http://www.london.gov.uk/sites/default/files/smart_london_plan.pdf.

[41] Smart London Plan, http://www.london.gov.uk/sites/default/files/smart_london_plan.pdf.

[42] Open Data White Paper, https://data.gov.uk/sites/default/files/Open_data_White_Paper.pdf.

[43] London Datastore-Privacy, http://data.london.gov.uk/about/privacy/.

[44] Future Cities Standards Centre in London, https://eu-smartcities.eu/commitment/5937.

[45] Smart London Plan, http://www.london.gov.uk/sites/default/files/smart_london_plan.pdf.

[46] Smart Cities background paper, October 2013, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/246019/bis-13-1209-smart-cities-background-paper-digital.pdf.

[47] Presentation of 2015 Blueprint of Seoul as ‘State-of-the-art Smart City’, http://english.seoul.go.kr/presentation-of-2015-blueprint-of-seoul-as-%E2%80%98state-of-the-art-smart-city%E2%80%99/.

[48] “Policy Where There is Demand,” Seoul Utilizes Big Data, http://english.seoul.go.kr/policy-demand-seoul-utilizes-big-data/

[49] Seoul’s “Owl Bus” Based on Big Data Technology, http://www.citiesalliance.org/sites/citiesalliance.org/files/Seoul-Owl-Bus-11052014.pdf

[50] Seoul Launches “Global Digital Seoul 2020”, http://english.seoul.go.kr/seoul-launches-global-digital-seoul-2020/

[51] Smart Seoul 2015, http://english.seoul.go.kr/wp-content/uploads/2014/02/SMART_SEOUL_2015_41.pdf

[52] Disclosing public data through the Seoul Open Data Plaza, http://english.seoul.go.kr/policy-information/key-policies/informatization/seoul-open-data-plaza/

[53] Data protection in South Korea: overview, http://uk.practicallaw.com/2-579-7926.

[54] Smart Cities Seoul: a case study, https://www.itu.int/dms_pub/itu-t/oth/23/01/T23010000190001PDFE.pdf.

[55] Smart Cities-ISO, http://www.iso.org/iso/livelinkgetfile-isocs?nodeid=16193764.


List of Blocked 'Escort Service' Websites

by Pranesh Prakash last modified Jun 15, 2016 08:33 AM
Here is the full list of URLs that Indian ISPs were asked to block on Monday, June 13, 2016.

On April 20, 2016, DNA carried a report on a PIL seeking action against advertisements for prostitution in newspapers and on websites. That report noted that the Mumbai Police had obtained an order from a magistrate's court to block 174 objectionable websites, and had sent a list to the "Group Coordinator (Cyber Laws)" within the Department of Electronics and IT. On June 13, 2016, some news agencies carried reports about the Ministry of Communications and IT having ordered ISPs to block 240 websites.

As far as we know, the Mumbai Police has not proceeded against any of the people who run these websites, whose phone numbers are available, and whose names and addresses are also available in many cases through WHOIS queries on the domain names.

Unfortunately, the government does not make publicly available the list of websites it has ordered ISPs to block. Given that knowledge of what is censored by the government is crucial in a democracy, we are publishing the entire list of blocked websites.

Those of these websites that use TLS (i.e., those with 'https' URLs) still appear to be available on multiple Indian ISPs, and the others can be accessed by using a proxy or VPN from outside India, or by using Tor.

Notes:

  • The list circulated to ISPs has two sub-lists, numbered 1-174 (but containing 175 entries, owing to a numbering mistake) and 1-64, for a total of 239 URLs.
  • Four URLs are repeated in the list ("www.salini.in/navi-mumbai-independent-escort-service.php", "exmumbai.in", "www.mansimathur.in/pinkyagarwal", and "www.mumbaifunclubs.com").
  • For one website, both the domain name and a specific web page within it are listed ("www.mumbaiwali.in" and "www.mumbaiwali.in/navi-mumbai-escort-service.php").
  • One URL is incomplete (No. 214: "www.independentescortservicemumbai.com/mumbai%20escort%20servi..")
  • There are thus 235 unique URLs, targeting 234 websites and web pages.
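The de-duplication arithmetic above is simple to reproduce mechanically. Below is a minimal Python sketch, using illustrative placeholder URLs rather than the actual circulated list, that counts unique entries after trivially normalising case and trailing slashes (which is how an entry such as "exmumbai.in/" collapses into "exmumbai.in"):

```python
# Sketch: de-duplicating a circulated URL list, as described in the
# notes above. The URLs here are illustrative placeholders, not the
# actual blocked list.

def unique_urls(urls):
    """Return the unique URLs, preserving first-seen order."""
    seen = set()
    out = []
    for u in urls:
        # Normalise trivially: lower-case and strip a trailing slash,
        # so "example.in/" and "example.in" count as one URL.
        key = u.lower().rstrip("/")
        if key not in seen:
            seen.add(key)
            out.append(u)
    return out

circulated = [
    "www.example-a.in",
    "www.example-b.in/page.php",
    "www.example-a.in",   # repeated entry
    "www.example-c.in",
    "www.example-c.in/",  # same URL with a trailing slash
]

deduped = unique_urls(circulated)
print(len(circulated), len(deduped))  # 5 3
```

A fuller treatment would also have to decide whether a "www." prefix or a listed page within an already-listed domain counts as a separate target, which is why the notes distinguish 235 unique URLs from 234 websites and web pages.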




Full List of Blocked URLs

  1. www.sterlingbioscience.com
  2. rawpoint.biz
  3. www.onemillionbabes.com
  4. www.mumbaihotcollection.in
  5. simranoberoi.in
  6. rubinakapoor.biz
  7. talita.biz
  8. www.mumbaiescortsagency.net
  9. www.mumbaifunclubs.com
  10. www.alishajain.co.in
  11. www.ankitatalwar.co.in
  12. https://www.jennyarora.ind.in
  13. www.riya-kapoor.com
  14. shneha.in
  15. missinimi.in
  16. www.mumbaiglamour.in
  17. kalyn.in
  18. www.saumyagiri.co.in/city/mumbai/
  19. bookerotic.com
  20. www.divyamalik.in
  21. www.suhanisharma.co.in
  22. www.ruhi.biz
  23. umbaiqueens.in
  24. www.aliyaghosh.com
  25. priyasen.in
  26. www.highprofilemumbaiescorts.co.in
  27. charmingmumbai.com
  28. www.poojamehata.in
  29. kiiran.in/
  30. mansikher.in
  31. www.newmumbaiescorts.in
  32. www.mumbaifunclubs.com
  33. www.punarbas.in
  34. www.discreetbabes.in
  35. www.alisharoy.in
  36. www.arpitarai.in
  37. www.nidhipatel.in
  38. navimumbailescort.com
  39. www.zoyaescorts.com
  40. www.juhioberoi.in
  41. shoniya.in
  42. panchibora.in
  43. rehu.in
  44. www.nehaanand.com
  45. www.aditiray.co.in
  46. www.rakhibajaj.in
  47. www.alianoidaescorts.in
  48. www.sobiya.in
  49. www.alishaparul.in
  50. mumbai-escorts.leathercurrency.com
  51. ankita-ahuja.in
  52. www.yamika.in
  53. mumbailescort.co
  54. www.ranjika.in
  55. www.aditiray.com
  56. www.alinamumbailescort.in
  57. www.sonikaa.com/services/
  58. riyamodel.in
  59. mumbai-escorts.info
  60. soonam.in
  61. www.sejalthakkar.com
  62. www.yomika-tandon.in
  63. www.asika.in
  64. www.siyasharma.org/
  65. www.rubikamathur.in
  66. www.mumbaiescortslady.com
  67. www.sexyshe.in
  68. www.indepandentescorts.com
  69. www.saanvichopra.co.in
  70. www.goswamipatel.in
  71. ojaloberoi.in
  72. www.naincy.in
  73. www.sonyamehra.com
  74. www.pinkgrapes.in
  75. anjalitomar.in/
  76. www.nishakohli.com/
  77. sagentia.co.in
  78. mumbai.vivastreet.co.in/escort+mumbai
  79. www.deseescortgirls.in
  80. guides.wonobo.com/mumbai/mumbai-escorts-service/.4299
  81. jasmineescorts.com
  82. www.shalinisethi.com
  83. www.highclassmumbailescort.com
  84. www.vipescortsinmumbai.com
  85. www.mumbaiescorts69.co.in
  86. monikabas.co.in
  87. www.riyasehgal.com
  88. onlycelebrity.in
  89. www.greatmumbaiescorts.com/escort-service-mumbai.html
  90. www.aishamumbailescort.com
  91. www.jennydsouzaescort.com
  92. www.desifun.in
  93. www.siyaescort.co.in
  94. masti—escort.in
  95. www.sofya.in
  96. www.mumbaiwali.in/navi-mumbai-escort-service.php
  97. www.mumbaiwali.in
  98. www.calldaina.com
  99. www.mumbaiescortsservice.co.in
  100. www.escortsgirlsinmumbai.com
  101. www.passionmumbai.escorts.com
  102. www.nehakapoor.in
  103. meerakapoor.com
  104. www.dianamumbaiescorts.net .in
  105. www.allmumbailescort.in
  106. www.rakhiarora.in
  107. www.ritikasingh.com
  108. www.rekhapatil.com
  109. www.mumbaidolls.com
  110. www.piapandey.com
  111. www.mumbaicuteescorts.in
  112. www.mumbaiescortssevice.com
  113. www.onlycelebrity.com
  114. www.meetescortservice.com
  115. onlyoneescorts.com
  116. simirai.org
  117. www.riyamumbaiescorts.in
  118. www.neharana.in
  119. www.tanyaroy.com
  120. www.mumbaihiprofilegirls.in
  121. www.sexyescortsmumbai.in
  122. www.sexymumbai.escorts.com
  123. www.four-seasons—escort.in
  124. www.mumbaiescortsgirl.com
  125. www.vdreamescorts.com
  126. www.passionatemumbaiescorts.in
  127. www.payalmalhotra.in
  128. www.shrutisinha.com
  129. www.juliemumbaiescorts.com
  130. www.indiasexservices.com/mumbai.html
  131. www.mumbai-escorts.co.in
  132. www.aliyamumbaiescorts.net.in
  133. shivaniarora.co.in/escort–service-mumbai.html
  134. www.pinkisingh.com
  135. soyam.in
  136. www.arpitaray.com
  137. www.localescorts.in
  138. www.jennifermumbaiescorts.com
  139. www.yanaroy.com
  140. escorts18.in/mumbai—escorts.html
  141. www.tinamumbaiescorts.com
  142. www.mumbaijannatescorts.com
  143. www.deepikaroy.com
  144. www.nancy.co.in
  145. www.pearlpatel.in
  146. 30minsmumbaiescorts.in
  147. www.datinghopes.com
  148. https://www.riyaroy.com/services.html
  149. www.sonalikajain.com
  150. www.zainakapoor.co.in
  151. kavyajain.in
  152. www.kinnu.co.in
  153. exmumbai.in/
  154. www.mansimathur.in/pinkyagarwal
  155. exmumbai.in
  156. www.mansimathur.in/pinkyagarwal
  157. www.devikabatra.in
  158. katlin.in
  159. riyaverma.in
  160. escortsinindia.co/
  161. www.snehamumbaiescorts.in
  162. shimi.in
  163. www.mumbaiescortsforu.com/about
  164. www.chetnagaur.co.in/chetna-gaur.html
  165. www.escortspoint.in
  166. www.rupalikakkar.in
  167. www.hemangisinha.co.in
  168. 1escorts.in/location/mumbai.html
  169. www.salini.in/navi-mumbai-independent—escort-service.php
  170. www.salini.in/navi-mumbai-independent-escort-service.php
  171. www.mumbaibella.in
  172. mohitescortservicesmumbai.com
  173. www.anchu.in
  174. www.aliyaroy.co.in
  175. jaanu.co.in/mumbai-escorts-service-call-girls.html
  176. www.andyverma.com
  177. dreams-come-true.biz
  178. feel–better.biz
  179. jellyroll.biz
  180. dreamgirlmumbai.com
  181. role-play.biz
  182. mansi—mathur.com
  183. www.zarinmumbaiescorts.com
  184. mymumbai.escortss.com
  185. www.goldentouchescorts.com
  186. www.mumbaipassion.biz
  187. ishitamalhotra.com
  188. happy-ending.biz
  189. juicylips.biz
  190. www.escortsmumbai.name
  191. www.kirstygbasai.net
  192. www.hiremumbaiescorts.com
  193. www.meeraescorts.com/mumbai-escorts.php
  194. 3–5–7star.biz
  195. www.pranjaltiwari.com
  196. www.richagupta.biz
  197. way2heaven.biz
  198. piya.co/
  199. pinkflowers.info
  200. www.beautifulmumbaiescorts.com
  201. www.bestescortsinmumbai.com/charges-html
  202. www.mumbaiescorts.me
  203. www.tanikatondon.com
  204. www.escortsinmumbai.biz
  205. www.escortgirlmumbai.com
  206. www.mumbaicallgrils.com
  207. www.quickescort4u.com
  208. www.mayamalhotra.com
  209. www.legal-escort.com
  210. escortsbaba.com/mumbai-escorts.html
  211. rupa.biz
  212. www.mumbaiescorts.agency/erotic-service-mumbai.html
  213. www.escortscelebrity.com
  214. www.independentescortservicemumbai.com/mumbai%20escort%20servi..
  215. garimachopra.com
  216. kajalgupta.biz
  217. lipkiss.site
  218. aanu.in
  219. bombayescort.in
  220. hotkiran.co.in
  221. khushikapoor.in
  222. joyapatel.in
  223. rici.in
  224. aaditi.in
  225. andheriescorts.org.in
  226. www.jiyapatel.in
  227. spicymumbai.in
  228. rimpyarora.in
  229. lovemaking.co.in
  230. riyadubey.co.in
  231. escortservicesmumbai.in
  232. mumbaiescorts.co.in
  233. midnightprincess.in/
  234. vashiescorts.co.in/
  235. angee.in/
  236. www.rozakhan.in/
  237. www.mumbaiescortsvilla.in/
  238. kylie.co.in/
  239. escortservicemumbai.co.in

Jurisdiction: The Taboo Topic at ICANN

by Pranesh Prakash last modified Jun 29, 2016 07:51 AM
The "IANA Transition" that is currently underway is a sham since it doesn't address the most important question: that of jurisdiction. This article explores why the issue of jurisdiction is the most important question, and why it remains unaddressed.

In March 2014, the US government announced that they were going to end the contract they have with ICANN to run the Internet Assigned Numbers Authority (IANA), and hand over control to the “global multistakeholder community”. They insisted that the plan for transition had to come through a multistakeholder process and have stakeholders “across the global Internet community”.

Why is the U.S. government removing the NTIA contract?

The main reason for the U.S. government's action is that it gets rid of a political thorn in its side: keeping the contract allows the U.S. to be called out as having a special role in Internet governance (via the Affirmation of Commitments between the U.S. Department of Commerce and ICANN, the IANA contract, and the cooperative agreement with Verisign), and as engaging in unilateralism with regard to the operation of the root servers of the Internet naming system, even while it repeatedly declares support for a multistakeholder model of Internet governance.

This contradiction is what they are hoping to address. Doing away with the NTIA contract will also increase — ever so marginally — ICANN’s global legitimacy: this is something that world governments, civil society organizations, and some American academics have been asking for almost since ICANN’s inception in 1998. For instance, here are some demands made in a declaration by the Civil Society Internet Governance Caucus at WSIS, in 2005:

“ICANN will negotiate an appropriate host country agreement to replace its California Incorporation, being careful to retain those aspects of its California Incorporation that enhance its accountability to the global Internet user community. "ICANN's decisions, and any host country agreement, must be required to comply with public policy requirements negotiated through international treaties in regard to, inter alia, human rights treaties, privacy rights, gender agreements and trade rules. … "It is also expected that the multi-stakeholder community will observe and comment on the progress made in this process through the proposed [Internet Governance] Forum."

In short: the objective of the transition is political, not technical. In an ideal world, we should aim at reducing U.S. state control over the core of the Internet's domain name system.1

It is our contention that U.S. state control over the core of the Internet's domain name system is not being removed by the transition that is currently underway.

Why is the Transition Happening Now?

Despite the U.S. government having given commitments in the past that the IANA transition would be finished by "September 30, 2000" (the White Paper on Management of Internet Names and Addresses states: "The U.S. Government would prefer that this transition be complete before the year 2000. To the extent that the new corporation is established and operationally stable, September 30, 2000 is intended to be, and remains, an 'outside' date.") and later by the "fall of 2006",2 those turned out to be empty promises. This time, however, the transition seems to be going through, unless the U.S. Congress manages to halt it.

However, in order to answer the question of "why now?" fully, one has to look a bit at the past.

In 1998, through the White Paper on Management of Internet Names and Addresses, the U.S. government asserted its control over the root, and asserted (some would say arrogated to itself) the power to put out contracts for both the IANA functions and the 'A' root (i.e., the Root Zone Maintainer function that Network Solutions Inc. then performed, and continues to perform to date in its current avatar as Verisign). The IANA functions contract, a periodically renewable contract, was awarded to ICANN, a California-based non-profit corporation that was set up exclusively for this purpose, but which evolved around the existing IANA (to placate the Internet Society).

Meanwhile, of course, there were criticisms of ICANN from multiple foreign governments and civil society organizations. Further, despite ICANN being a California-based non-profit on contract with the government, there was pushback domestically within the U.S. from constituencies that felt that more direct U.S. control of the DNS was important.

As Goldsmith and Wu summarize:

"Milton Mueller and others have shown that ICANN’s spirit of “self-regulation” was an appealing label for a process that could be more accurately described as the U.S. government brokering a behind-the-scenes deal that best suited its policy preferences ... the United States wanted to ensure the stability of the Internet, to fend off the regulatory efforts of foreign governments and international organizations, and to maintain ultimate control. The easiest way to do that was to maintain formal control while turning over day-to-day control of the root to ICANN and the Internet Society, which had close ties to the regulation-shy American technology industry." [footnotes omitted]

And that brings us to the first reason that the NTIA announced the transition in 2014, rather than earlier.

ICANN Adjudged Mature Enough

The NTIA now sees ICANN as being mature enough: the final transition was announced 16 years after ICANN's creation, and complaints about ICANN and its legitimacy had largely died down in the international arena in the interim. Nowadays, governments across the world send their representatives to ICANN, thus legitimizing it. States have largely been satisfied by participating in the Governmental Advisory Committee (GAC), which, as its name suggests, only has advisory powers. Further, unlike in the early days, there is no serious push for states to assume control of ICANN. Of course, governments grumble about the ICANN Board not following their advice, but no government, as far as I am aware, has walked out or refused to participate.

L'affaire Snowden

Many within the United States, and some outside it, believe that the United States not only plays an exceptional role in the running of the Internet (by dint of historical development and the dominance of American companies), but that it ought to have an exceptional role because it is the best country to exercise 'oversight' over 'the Internet'. This belief comes from clueless commentators, from dinosaurs of the Internet era such as American IP lawyers and American 'homeland' security hawks, from Jones Day (who are ICANN's lawyers), and from other jingoists and the policymakers who are controlled by these narrow-minded interests.

The Snowden revelations were, in that way, a godsend for the NTIA, as they allowed it a fig-leaf of international criticism with which to counter these domestic critics and carry on with a transition it had been seeking to put into motion for a while. The Snowden revelations led Dilma Rousseff, President of Brazil, to state in September 2013, at the 68th U.N. General Assembly, that Brazil would "present proposals for the establishment of a civilian multilateral framework for the governance and use of the Internet", and, as Diego Canabarro points out, this catalysed the U.S. government and the technical community into taking action.

Given this context, a few months after the Snowden revelations, the so-called I* organizations met — seemingly with the blessing of the U.S. government3 — in Montevideo, and put out a 'Statement on the Future of Internet Governance' that sought to link the Snowden revelations on pervasive surveillance with the need to urgently transition the IANA stewardship role away from the U.S. government. Of course, the signatories to that statement knew full well, as did most of its readers, that there is no linkage between the Snowden revelations about pervasive surveillance and the operations of the DNS root, but still they, and others, linked them together. Specifically, the I* organizations called for "accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing."

One could posit the existence of two other contributing factors as well.

Given political realities in the United States, a transition of this sort is probably best done before an ultra-jingoistic President steps into office.

Lastly, the ten-yearly review of the World Summit on the Information Society (WSIS) was underway. At the original WSIS (as seen from the civil society declaration quoted above), US control over the root was a major issue of contention, and it was then that the 2006 date for the globalization of ICANN was emphasized by the US government.

Why Jurisdiction is Important

Jurisdiction has a great many aspects. Inter alia, these are:

  • Legal sanctions applicable to changes in the root zone (for instance, what happens if a country under US sanctions requests a change to the root zone file?)
  • Law applicable to resolution of contractual disputes with registries, registrars, etc.
  • Law applicable to labour disputes.
  • Law applicable to competition / antitrust law that applies to ICANN policies and regulations.
  • Law applicable to disputes regarding ICANN decisions, such as allocation of gTLDs, or non-renewal of a contract.
  • Law applicable to consumer protection concerns.
  • Law applicable to financial transparency of the organization.
  • Law applicable to the corporate constitution of the organization, including membership rights.
  • Law applicable to data protection-related policies & regulations.
  • Law applicable to trademark and other speech-related policies & regulations.
  • Law applicable to legal sanctions imposed by a country against another.

Some of these, but not all, depend on where bodies like ICANN [the policy-making body], the IANA functions operator [the proposed "Post-Transition IANA"], and the root zone maintainer are incorporated or maintain their primary office, while others depend on the location of the office [for instance, Turkish labour law applies for the ICANN office in Istanbul], while yet others depend on what's decided by ICANN in contracts (for instance, the resolution of contractual disputes with ICANN, filing of suits with regard to disputes over new generic TLDs, etc.).

However, an issue like sanctions, for instance, depends on where ICANN/PTI/RZM are incorporated and maintain their primary office.

As Milton Mueller notes, the current IANA contract "requires ICANN to be incorporated in, maintain a physical address in, and perform the IANA functions in the U.S. This makes IANA subject to U.S. law and provides America with greater political influence over ICANN."

He further notes that:

While it is common to assert that the U.S. has never abused its authority and has always taken the role of a neutral steward, this is not quite true. During the controversy over the .xxx domain, the Bush administration caved in to domestic political pressure and threatened to block entry of the domain into the root if ICANN approved it (Declaration of the Independent Review Panel, 2010). It took five years, an independent review challenge and the threat of litigation from a businessman willing to spend millions to get the .xxx domain into the root.

Thus it is clear that even if the NTIA's role in the IANA contract goes away, jurisdiction remains an important issue.

U.S. Doublespeak on Jurisdiction

In March 2014, the NTIA finally announced that it would hand over the reins to “the global multistakeholder community”. It laid down two procedural conditions: that any transition proposal be developed by stakeholders across the global Internet community, and that it have broad community consensus. It also proposed five substantive conditions that any proposal must meet:

  • Support and enhance the multistakeholder model;
  • Maintain the security, stability, and resiliency of the Internet DNS;
  • Meet the needs and expectations of the global customers and partners of the IANA services;
  • Maintain the openness of the Internet; and
  • Not replace the NTIA role with a solution that is government-led or led by an inter-governmental organization.

In that announcement there is no explicit restriction on the jurisdiction of ICANN (whether it relates to its incorporation, the resolution of contractual disputes, resolution of labour disputes, antitrust/competition law, tort law, consumer protection law, privacy law, or speech law, and more, all of which impact ICANN and many, but not all, of which are predicated on the jurisdiction of ICANN’s incorporation), the jurisdiction(s) of the IANA Functions Operator(s) (i.e., which executive, court, or legislature’s orders it would need to obey), and the jurisdiction of the Root Zone Maintainer (i.e., which executive, court, or legislature’s orders it would need to obey).

However, Mr. Larry Strickling, the head of the NTIA, made it clear in his testimony before the U.S. House Subcommittee on Communications and Technology that:

“Frankly, if [shifting ICANN or IANA jurisdiction] were being proposed, I don't think that such a proposal would satisfy our criteria, specifically the one that requires that security and stability be maintained.”

Possibly, that argument made sense in 1998, due to the significant concentration of DNS expertise in the United States. However, in 2015, that argument is hardly convincing, and is frankly laughable.4

Targeting that remark, at ICANN 54 in Dublin, we asked Mr. Strickling:

"So as we understand it, the technical stability of the DNS doesn't necessarily depend on ICANN's jurisdiction being in the United States. So I wanted to ask would the US Congress support a multistakeholder and continuing in the event that it's shifting jurisdiction."

Mr. Strickling's response was:

"No. I think Congress has made it very clear and at every hearing they have extracted from Fadi a commitment that ICANN will remain incorporated in the United States. Now the jurisdictional question though, as I understand it having been raised from some other countries, is not so much jurisdiction in terms of where ICANN is located. It's much more jurisdiction over the resolution of disputes.

"And that I think is an open issue, and that's an appropriate one to be discussed. And it's one I think where ICANN has made some movement over time anyway.

"So I think you have to ... when people use the word jurisdiction, we need to be very precise about over what issues because where disputes are resolved and under what law they're resolved, those are separate questions from where the corporation may have a physical headquarters."

As we have shown above, jurisdiction is not only about the jurisdiction of "resolution of disputes", but also, as Mueller reminds us, about the requirement that ICANN (and now, the PTI) be "incorporated in, maintain a physical address in, and perform the IANA functions in the U.S. This makes IANA subject to U.S. law and provides America with greater political influence over ICANN."

In essence, the U.S. government has said that it would veto the transition if the jurisdiction of ICANN's or PTI's incorporation were to move out of the U.S., and it can prevent such a move from happening even after the transition, since as things stand ICANN and PTI will still come within the U.S. Congress's jurisdiction.

Why Has the ICG Failed to Consider Jurisdiction?

Will the ICG proposal or the proposed new ICANN by-laws reduce existing U.S. control? No, they won't. (In fact, the proposed new ICANN by-laws make this problem even worse.) The proposal by the names community ("the CWG proposal") still has a requirement (in Annex S) that the Post-Transition IANA (PTI) be incorporated in the United States, and a similar suggestion is hidden away in a footnote. Further, the proposed by-laws for ICANN include the requirement that PTI be a California corporation. There was no discussion specifically on this issue, nor any documented community agreement on the specific issue of the jurisdiction of PTI's incorporation.

Why wasn't there greater discussion and consideration of this issue? For two reasons. First, many argued that the transition would be vetoed by the U.S. government and the U.S. Congress if ICANN and PTI were not to remain in the U.S. Second, the ICANN-formed ICG viewed the US government’s actions very narrowly, as though the government were acting in isolation, ignoring the rich dialogue and debate about the transition that has gone on since the incorporation of ICANN itself.

While it would be no one’s case that political considerations should be given greater weightage than technical considerations such as security, stability, and resilience of the domain name system, it is shocking that political considerations have been completely absent in the discussions in the number and protocol parameters communities, and have been extremely limited in the discussions in the names community. This is even more shocking considering that the main reason for this transition is, as has been argued above, political.

It can also be argued that certain IANA functions, such as the Root Zone Management function, have considerable political implications. It is imperative that the political nature of these functions is duly acknowledged and dealt with, in accordance with the wishes of the global community. In the current process, the political aspects of the IANA functions have been completely overlooked and sidelined. It is important to note that this transition was not necessitated by any technical considerations; it is primarily motivated by political and legal ones. However, the questions that the ICG asked the customer communities to consider were solely technical. The communities could have chosen to look beyond those questions, but they did not. For instance, while the IANA customer community proposals reflected on existing jurisdictional arrangements, they did not reflect on how the jurisdictional arrangements should be post-transition, even though this is one of the questions at the heart of the entire transition. There were no discussions and decisions as to the jurisdiction of the Post-Transition IANA: the Accountability CCWG's lawyers, Sidley Austin, recommended that the PTI ought to be a California non-profit corporation, and this finds mention in a footnote, without ever having been debated by the "global multistakeholder community", and subsequently in the proposed new by-laws for ICANN.

Why the By-Laws Make Things Worse & Why "Work Stream 2" Can't Address Most Jurisdiction Issues

The by-laws could simply have stayed silent on the matter of what law PTI would be incorporated under; instead, they make the requirement that PTI be a California non-profit public benefit corporation part of the fundamental by-laws, which are close to impossible to amend.

While "Work Stream 2" (the post-transition work related to improving ICANN's accountability) has jurisdiction as a topic of consideration, the scope of that must necessarily discount any consideration of shifting the jurisdiction of incorporation of ICANN, since all of the work done as part of CCWG Accountability's "Work Stream 1", which are now reflected in the proposed new by-laws, assume Californian jurisdiction (including the legal model of the "Empowered Community"). Is ICANN prepared to re-do all the work done in WS1 in WS2 as well? If the answer is yes, then the issue of jurisdiction can actually be addressed in WS2. If the answer is no ­— and realistically it is — then, the issue of jurisdiction can only be very partially addressed in WS2.

Keeping this in mind, we recommended specific changes in the by-laws, all of which were rejected by CCWG's lawyers.

The Transition Plan Fails the NETmundial Statement

The NETmundial Multistakeholder Document, which was an outcome of the NETmundial process, states:

In the follow up to the recent and welcomed announcement of US Government with regard to its intent to transition the stewardship of IANA functions, the discussion about mechanisms for guaranteeing the transparency and accountability of those functions after the US Government role ends, has to take place through an open process with the participation of all stakeholders extending beyond the ICANN community

[...]

It is expected that the process of globalization of ICANN speeds up leading to a truly international and global organization serving the public interest with clearly implementable and verifiable accountability and transparency mechanisms that satisfy requirements from both internal stakeholders and the global community.

The active representation from all stakeholders in the ICANN structure from all regions is a key issue in the process of a successful globalization.

As our past analysis has shown, the IANA transition process and the discussions on the mailing lists that shaped it were neither global nor multistakeholder. The DNS industry represented in ICANN is largely US-based. 3 in 5 registrars are from the United States of America, whereas less than 1% of ICANN-registered registrars are from Africa. Two-thirds of the Business Constituency in ICANN is from the USA. While ICANN-the-corporation has sought to become more global, the ICANN community has remained insular, and this will not change until the commercial interests involved in ICANN can become more diverse, reflecting the diversity of users of the Internet, and a TLD like .COM can be owned by a non-American corporation and the PTI can be a non-American entity.

What We Need: Jurisdictional Resilience

It is no one's case that the United States is less fit than any other country as a base for ICANN, PTI, or the Root Zone Maintainer, or even as the headquarters for 9 of the world's 12 root server operators (Verisign runs both the A and J root servers). However, just as having a multiplicity of root servers is important for ensuring the technical resilience of the DNS (as the uptake of Anycast by root server operators shows), it is equally important to insulate core DNS functioning from the political pressures of the country or countries where core DNS infrastructure is legally situated, and to ensure diversity in terms of legal jurisdiction.

Towards this end, we at CIS have pushed for the concept of "jurisdictional resilience", encompassing three crucial points:

  • Legal immunity for core technical operators of Internet functions (as opposed to policymaking venues) from legal sanctions or orders from the state in which they are legally situated;
  • Division of core Internet operators among multiple jurisdictions; and
  • Jurisdictional division of policymaking functions from technical implementation functions.

Of these, the most important is the limited legal immunity (akin to a greatly limited form of the immunity that UN organizations get from the laws of their host countries). This kind of immunity could be provided through a variety of means: a host-country agreement, a law passed by the legislature, a U.N. General Assembly resolution, a U.N.-backed treaty, and other such instruments. We are currently investigating which of these would be the best option.

And apart from limited legal immunity, distribution of jurisdictional control is also valuable. As we noted in our submission to the ICG in September 2015:

Following the above precepts would, for instance, mean that the entity that performs the role of the Root Zone Maintainer should not be situated in the same legal jurisdiction as the entity that functions as the policymaking venue. This would in turn mean that either the Root Zone Maintainer function be taken up by Netnod (Sweden-headquartered) or the WIDE Project (Japan-headquartered) [or RIPE-NCC, headquartered in the Netherlands], or that if the IANA Functions Operator(s) is to be merged with the RZM, then the IFO be relocated to a jurisdiction other than those of ISOC and ICANN. This, as has been stated earlier, has been a demand of the Civil Society Internet Governance Caucus. Further, it would also mean that root zone server operators be spread across multiple jurisdictions (which the creation of mirror servers in multiple jurisdictions will not address).

However, the issue of jurisdiction seems to be dead-on-arrival, having been killed by the United States government.

Unfortunately, despite the primary motivation for demands for the IANA transition being those of removing the power the U.S. government exercises over the core of the Internet's operations in the form of the DNS, what has ended up happening through the IANA transition is that these powers have not only not been removed, but in some ways they have been entrenched further! While earlier, the U.S. had to specify that the IANA functions operator had to be located in the U.S., now ICANN's by-laws themselves will state that the post-transition IANA will be a California corporation. Notably, while the Montevideo Declaration speaks of "globalization" of ICANN and of the IANA functions, as does the NETmundial statement, the NTIA announcement on their acceptance of the transition proposals speaks of "privatization" of ICANN, and not "globalization".

All in all, the "independence" that IANA is gaining from the U.S. is akin to the "independence" that Brazil gained from Portugal in 1822. Dom Pedro of Brazil was then ruling Brazil as the Prince Regent since his father Dom João VI, the King of United Kingdom of Portugal, Brazil and the Algarves had returned to Portugal. In 1822, Brazil declared independence from Portugal (which was formally recognized through a treaty in 1825). Even after this "independence", Dom Pedro continued to rule Portugal just as he had before indepedence, and Dom João VI was provided the title of "Emperor of Brazil", aside from being King of the United Kingdom of Portugal and the Algarves. The "indepedence" didn't make a whit of a difference to the self-sufficiency of Brazil: Portugal continued to be its largest trading partner. The "independence" didn't change anything for the nearly 1 million slaves in Brazil, or to the lot of the indigenous peoples of Brazil, none of whom were recognized as "free". It had very little consequence not just in terms of ground conditions of day-to-day living, but even in political terms.

Such is the case with the IANA transition: U.S. power over the core functioning of the Domain Name System does not stand diminished after the transition, and it can arguably be said to have become even more entrenched. Meet the new boss: same as the old boss.


  1. It is an allied but logically distinct issue that U.S. businesses — registries and registrars — dominate the global DNS industry, and as a result hold the reins at ICANN.

  2. As Goldsmith & Wu note in their book Who Controls the Internet: "Back in 1998 the U.S. Department of Commerce promised to relinquish root authority by the fall of 2006, but in June 2005, the United States reversed course. “The United States Government intends to preserve the security and stability of the Internet’s Domain Name and Addressing System (DNS),” announced Michael D. Gallagher, a Department of Commerce official. “The United States” he announced, will “maintain its historic role in authorizing changes or modifications to the authoritative root zone file.”

  3. Mr. Fadi Chehadé revealed, in an interaction with Indian participants at ICANN 54, that he had had a meeting "at the White House" about the U.S. plans for the transition of the IANA contract before he spoke about it during his visit to India in October 2013, which places his White House visit around the time of the Montevideo Statement.

  4. As an example, NSD, software that is used on multiple root servers, is funded by a Dutch foundation and a Dutch corporation, and written mostly by European coders.

CIS Submission to TRAI Consultation on Free Data

by Pranesh Prakash — last modified Jul 01, 2016 04:04 PM
The Telecom Regulatory Authority of India (TRAI) held a consultation on Free Data, for which CIS sent in the following comments.

 

The Telecom Regulatory Authority of India (TRAI) asked for public comments on free data. Below are the comments that CIS submitted to the four questions that it posed.

 

Question 1

Is there a need to have TSP agnostic platform to provide free data or suitable reimbursement to users, without violating the principles of Differential Pricing for Data laid down in TRAI Regulation? Please suggest the most suitable model to achieve the objective.

Is There a Need for Free Data?

No, there is no need for free data, just as there is no need for telephony or Internet. However, making provisions for free data would increase the amount of innovation in the Internet and telecom sector, and there is a good probability that it would lead to faster adoption of the Internet, and thus be beneficial in terms of commerce, freedom of expression, freedom of association, and many other ways.

Thus the question that a telecom regulator should ask is not whether there is a need for TSP agnostic platforms, but whether such platforms are harmful for competition, for consumers, and for innovation. The telecom regulator ought not undertake regulation unless there is evidence to show that harm has been caused or that harm is likely to be caused. In short, TRAI should not follow the precautionary principle, since the telecom and Internet sectors are greatly divergent from environmental protection: the burden of proof for showing that something ought to be prohibited ought to be on those calling for prohibition.

Goal: Regulating Gatekeeping

TRAI wouldn’t need to regulate price discrimination or Net neutrality if ISPs were not “gatekeepers” for last-mile access. “Gatekeeping” occurs when a single entity establishes itself as an exclusive route to reach a large number of people and businesses or, in network terms, nodes. It is not possible for Internet services to reach their end customers without passing through ISPs (generally telecom networks). The situation is very different in the middle-mile and for backhaul. Even though anti-competitive terms may exist in the middle-mile, especially given the opacity of terms in “transit agreements”, a packet is usually able to travel through multiple routes if one route is too expensive (even if that is not the shortest network path, and is thus inefficient in a way). However, this multiplicity of routes is generally not possible in the last mile [1]. This leaves last-mile telecom operators (ISPs) in a position to unfairly discriminate between different Internet services or destinations or applications, while harming consumer choice.

However, the aim of regulation by TRAI cannot be to prevent gatekeeping, since that is not possible as long as there are a limited number of ISPs. For instance, even by the very act of charging money for access to the Internet, ISPs are guilty of “gatekeeping” since they are controlling who can and cannot access an Internet service that way. Instead, the aim of regulation by TRAI should be to “regulate gatekeepers to ensure they do not use their gatekeeping power to unjustly discriminate between similarly situated persons, content or traffic”, as we proposed in our submission to TRAI (on OTTs) last year.

Models for Free Data

There are multiple models possible for free data, none of which TRAI should prohibit unless the model would enable TSPs to abuse their gatekeeping powers.

Government Incentives For Non-Differentiated Free Data

The government may opt to require all ISPs to provide free Internet to all at a minimum QoS in exchange for exemption from paying part of their USO contributions, or the government may pay ISPs for such access using their USO contributions.

TRAI should recommend to DoT that it set up a committee to study the feasibility of this model.

ISP subsidies

ISP subsidies of Internet access make economic sense for the ISP only when the following ‘Goldilocks’ condition is met: the experience with the subsidised service is ‘good enough’ for consumers to want to continue using such services, but ‘bad enough’ for a large number of them to want to move to unsubsidised, paid access. Two ways of structuring such subsidies are:

  1. Providing free Internet to all at a low speed.
    1. This naturally discriminates against services and applications such as video streaming, but does not technically bar access to them.
  2. Providing free access to the Internet with other restrictions on quality that aren’t discriminatory with respect to content, services, or applications.

Rewards model

A TSP-agnostic rewards platform will only come within the scope of TRAI regulation if the platform has some form of agreement with the TSPs, even if collectively. If the rewards platform doesn’t have any agreement with any TSP, then TRAI does not have the power to regulate it. However, if the rewards platform has an agreement with any TSP, it is unclear whether it would be allowed under the Differential Data Tariff Regulation, since clause 3(2), read with paragraph 30 of the Explanatory Memorandum, might disallow such an agreement.

Assuming for the sake of argument that platforms with such agreements are not disallowed, such platforms can engage in either post-purchase credits or pre-purchase credits, or both. In other words, it could be a situation where a person has to purchase a data pack, engage in some activity relating to the platform (answer surveys, use particular apps, etc.) and thereupon get credit of some form transferred to one’s SIM, or it could be a situation where even without purchasing a data pack, a consumer can earn credits and thereupon use those credits towards data.

The former kind of rewards platform is not as useful when it comes to encouraging people to use the Internet, since only those who already see worth in using the Internet (and can afford it) will purchase a data pack in the first place. The second form, on the other hand, is quite useful, and could be encouraged. However, this second model is not as easily workable, economically, for fixed-line connections, since there is a higher initial investment involved.

Recharge API

A recharge API could be fashioned in one of two ways: (1) via the operating system on the phone, allowing a TSP or third parties (whether OTTs or other intermediaries) to transfer credit, bought wholesale, to the SIM card on the phone; or (2) all TSPs providing a recharge API for the use of third parties. Only the second model is likely to result in a “toll-free” experience, since in the first model, as in the case of a rewards platform that requires up-front purchase of data packs, an investment has to be made first before that amount is recouped. This is likely to hamper the utility of such a model.

Further, in the first case, TRAI would probably not have the powers to regulate such transactions, as there would be no need for any involvement by the TSP. If anti-competitive agreements or abuse of dominant position seems to be taking place, it would be up to the Competition Commission of India to investigate.

However, the second model would have to be overseen by TRAI to ensure that the recharge APIs don’t impose additional costs on OTTs, or unduly harm competition and innovation. For instance, there ought to be an open specification for such an API, which all the TSPs should use in order to reduce the costs on OTTs. Further, there should be no exclusivity, and no preferential treatment provided for the TSPs sister concerns or partners.
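To make the second model concrete, here is a minimal sketch, in Python, of what a common, open recharge API might look like. No such specification exists: the class, the method names, and the balance-tracking scheme below are all hypothetical, and a real API would additionally need authentication, auditing, and settlement of wholesale payments between OTTs and TSPs.

```python
from dataclasses import dataclass, field

@dataclass
class RechargeAPI:
    """Hypothetical open recharge API that every TSP would expose identically,
    so that OTTs face the same interface and terms across providers."""
    balances_mb: dict = field(default_factory=dict)  # subscriber -> sponsored data (MB)

    def credit(self, ott: str, subscriber: str, megabytes: int) -> int:
        """An OTT sponsors `megabytes` of data for a subscriber; returns the
        subscriber's new sponsored-data balance. The same call is available
        to every OTT, satisfying the non-exclusivity requirement."""
        if megabytes <= 0:
            raise ValueError("credit must be positive")
        self.balances_mb[subscriber] = self.balances_mb.get(subscriber, 0) + megabytes
        return self.balances_mb[subscriber]

    def balance(self, subscriber: str) -> int:
        """Sponsored-data balance for a subscriber, in MB."""
        return self.balances_mb.get(subscriber, 0)

# Usage: two different OTTs sponsor data for the same subscriber
# through the same non-discriminatory interface.
api = RechargeAPI()
api.credit("news-app", "subscriber-1", 100)
api.credit("edu-app", "subscriber-1", 50)
print(api.balance("subscriber-1"))  # 150
```

Because every TSP would implement the same open specification, the OTT's integration cost stays low, and no TSP can offer preferential terms to a sister concern through a private interface.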

“0.example” sites

Other forms of free data, for instance TSPs choosing not to charge for low-bandwidth traffic, should be allowed, as long as the practice is not discriminatory and does not impose increased barriers to entry for OTTs. For instance, if a website self-certifies that it is low-bandwidth and optimized for Internet-enabled feature phones, and uses 0.example.tld to signal this (just as wap.* was used for WAP sites and m.* is used for mobile-optimized versions of many sites), then there is no reason why TSPs should be prohibited from not charging for the data consumed by such websites, as long as the TSP does so uniformly, without discrimination. In such cases, the TSP is not harming competition, not harming consumers, and not abusing its gatekeeping powers.
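Since the "0." prefix is only a convention proposed here, not an existing standard, the check that a TSP's billing system would need could be as simple as the following sketch (both the function and the convention it checks are assumptions of this proposal):

```python
def is_zero_rated(hostname: str) -> bool:
    """Return True if the site self-certifies as low-bandwidth via the
    proposed '0.' hostname prefix, by analogy with the old 'wap.' and
    current 'm.' prefixes. The convention is a proposal, not a standard."""
    labels = hostname.lower().split(".")
    return len(labels) > 1 and labels[0] == "0"

print(is_zero_rated("0.example.in"))    # True
print(is_zero_rated("www.example.in"))  # False
```

Applied uniformly to every self-certifying site, such a check keeps the scheme non-discriminatory in the sense described above.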

OTT-agnostic free data

If a TSP decides not to charge for specific forms of traffic (for example, video, or locally-peered traffic) regardless of the Internet service from which that traffic emanates, and as long as it does so with the end customer’s consent, then there is no question of the TSP harming competition, harming consumers, or abusing its gatekeeping powers. There is no reason such schemes should be prohibited by TRAI unless they distort markets and harm innovation.

Unified marketplace

One other way to achieve what is proposed in the “recharge API” model is to create a highly-regulated market in which the gatekeeping powers of the ISP are diminished, and the ISP’s ability to leverage its exclusive access to its customers is curtailed. A comparison may be drawn here to the rules that are often set by standard-setting bodies where patents are involved: given that these patents are essential inputs, access to them must be allowed through fair, reasonable, and non-discriminatory licences. Access to the Internet and to common carriers like telecom networks, being even more important (since alternatives exist to particular standards, but not to the Internet itself), must be placed on an even higher pedestal and thus subjected to even stricter regulation to ensure fair competition.

A marketplace of this sort would impose some regulatory burdens on TRAI and place burdens on innovations by the ISPs, but a regulated marketplace harms ISP innovation less than not allowing a market at all.

At a minimum, such a marketplace must ensure non-exclusivity, non-discrimination, and transparency. Thus, at a minimum, a telecom provider cannot discriminate between any OTTs who want similar access to zero-rating. Further, a telecom provider cannot prevent any OTT from zero-rating with any other telecom provider. To ensure that telecom providers are actually following this stipulation, transparency is needed, as a minimum.

Transparency can take one of two forms: transparency to the regulator alone, or transparency to the public. Transparency to the regulator alone would enable OTTs and ISPs to keep the terms of their commercial transactions secret from their competitors, but enable the regulator, upon request, to ensure that this doesn’t lead to anti-competitive practices. This model would increase the burden on the regulator, but would be more palatable to OTTs and ISPs, and more comparable to the wholesale data market, where the terms of such agreements are strictly-guarded commercial secrets. On the other hand, requiring transparency to the public would reduce the burden on the regulator, though at the cost of the secrecy of commercial terms, and is far preferable.

Beyond transparency, a regulation could take the form of insisting on standard rates and terms for all OTT players, with differential usage tiers if need be, to ensure that access is truly non-discriminatory. This is how the market is structured on the retail side.

Since there are transaction costs in individually approaching each telecom provider for such zero-rating, the market would greatly benefit from a single marketplace where OTTs can come and enter into agreements with multiple telecom providers.

Even in this model, telecom networks will be charging not only on the basis of the number of customers they have, but also on the basis of their exclusive routing to those customers. Further, even under the standard-rates-based single-market model, a particular zero-rated site may be accessible for free from one network but not across all networks: unlike the situation with a toll-free number, where no such distinction exists.

To resolve this, the regulator may propose that if an OTT wishes to engage in paid zero-rating, it will need to do so across all networks, since if it doesn’t there is risk of providing an unfair advantage to one network over another and increasing the gatekeeper effect rather than decreasing it.

Question 2

Whether such platforms need to be regulated by the TRAI or market be allowed to develop these platforms?

In many cases, TRAI would have no powers over such platforms, so the question of TRAI regulating them does not arise. In all other cases, TRAI can allow the market to develop such platforms, and then see if any of them violates the Discriminatory Data Tariffs Regulation. For the government-incentivised schemes proposed above, TRAI should take proactive measures to get their feasibility evaluated.

Question 3

Whether free data or suitable reimbursement to users should be limited to mobile data users only or could it be extended through technical means to subscribers of fixed line broadband or leased line?

Spectrum is naturally a scarce resource, though technological advances (as dictated by Cooper’s Law) and more efficient management of spectrum make it less so. However, we have seen that fixed-line broadband has more or less stagnated for many years, while mobile access has increased, so the market-distortionary power of fixed-line providers is far less than that of mobile providers. At the same time, competition is far lower in fixed-line Internet access services than in mobile Internet access, and switching costs in fixed-line services are also far higher than in mobile services. Given these countervailing differences, the regulation of price discrimination might justifiably differ between the two.

All in all, for this particular issue, it is unclear why different rules should apply to mobile users and fixed line users.

Question 4

Any other issue related to the matter of Consultation.

None.


  1. In India’s mobile telecom sector, according to a Nielsen study, an estimated 15% of mobile users are multi-SIM users, meaning the “gatekeeping” effect is significantly reduced in both directions: Internet services can reach them via multiple ISPs, and conversely they can reach Internet services via multiple ISPs. See Nielsen, ‘Telecom Transitions: Tracking the Multi-SIM Phenomena in India’, http://www.nielsen.com/in/en/insights/reports/2015/telecom-transitions-tracking-the-multi-sim-phenomena-in-india.html

Big Data Governance Frameworks for 'Data Revolution for Sustainable Development'

by Meera Manoj — last modified Jul 05, 2016 01:13 PM
A key component of the process to achieve the Sustainable Development Goals is the call for a global 'data revolution' to better understand, monitor, and implement development interventions. Recently, there have been several international proposals to use big data, along with reconfigured national statistical systems, to operationalise this 'data revolution for sustainable development.' This analysis by Meera Manoj highlights the different models of collection, management, sharing, and governance of global development data that are being discussed.

 

1. What are the Sustainable Development Goals?

2. The Need for a Data Revolution

3. Big Data: Characteristics and Use for Development

3.1. Characteristics of Big Data

3.2. Using Big Data for Development

4. Sustainable Development and Data Rights

5. Governance Frameworks Proposed

5.1. UN Sustainable Development Solutions Network

5.2. The UN DATA Revolution Group

5.3. Organization for Economic Co-Operation and Development

5.4. The Global Partnership for Sustainable Development of Data

5.5. The World Economic Forum (WEF)

5.6. Dr. Julia Lane - A Quadruple Data Helix

5.7. Data Pop Alliance

6. Conclusion

7. Endnotes

8. Author Profile


Speaking on Big Data, Dan Ariely commented that, "Everyone talks about it, nobody really knows how to do it, and everyone thinks everyone else is doing it, so everyone claims they are doing it" [1]. This offers a useful insight into the lack of adequate discourse on the kind of governance and accountability frameworks that are needed to facilitate the developmental, sustainable, and responsible uses of big data.

In light of the recent international proposals to use big data to track the Sustainable Development Goals, this paper highlights the different models of management, sharing, and governance of data that are being discussed, and, concurrently, how they conceptualise the various rights around big data and how those rights are to be protected.

 

1. What are the Sustainable Development Goals?

The Sustainable Development Goals, otherwise known as the Global Goals, build on the Millennium Development Goals (MDGs). Adopted on 1 January 2016, these 17 universally applicable goals of the 2030 Agenda for Sustainable Development seek to end all forms of poverty, fight inequalities, tackle climate change, and address a range of social needs like education, health, social protection, and job opportunities over the next 15 years [2].

 

Sustainable Development Goals
Source: UN Data Revolution Group, A World that Counts, 2014, p.12.

 

2. The Need for a Data Revolution

An overwhelming cause of concern regarding the MDGs, the precursor to the SDGs, is the unavailability of data to monitor their progress. For instance, the figure below indicates that there is no five-year period in which the availability of MDG-related data is more than 70% of what is required. Entire groups of people and key issues remain invisible [3]. Lack of data is not only a problem for global statisticians, but also for people whose needs and demands remain invisible for want of quantitative representation. For instance, when incidences of gender-related crimes go unrecorded, this could lead to the misconception that the MDG of gender equality has been achieved.

UN Stats - Percentage of MDG data currently available for developing countries by nature of source.
Source: UN, Sustainable Development Goals.

As the new goals (SDGs) cover a wider range of issues it is clear that a far higher level of detail is required. To this effect the High-Level Panel of Eminent Persons on the post-2015 agenda has called for a "data revolution for sustainable development" [4].

The world is experiencing a Data Revolution and a "data deluge." One estimate has it that 90% of the data in the world has been created in the last 2 years. As Eric Schmidt of Google famously said in 2010, "There were 5 exabytes of information created between the dawn of civilization through 2003, but that much information is now created every 2 days" [5].

In its report A World that Counts, the UN Data Revolution Group defines the data revolution as an explosion in the volume of data, the speed with which data are produced, the number of producers of data, the dissemination of data, and the range of things on which there is data, coming from new technologies such as mobile phones and the “internet of things”, and from other sources, such as qualitative data, citizen-generated data and perceptions data [6].

This data revolution in the context of sustainable development has been defined by the UN Secretary General’s Independent Expert Advisory Group (IEAG) as follows:

[T]he integration of data coming from new technologies with traditional data in order to produce relevant high‐quality information with more details and at higher frequencies to foster and monitor sustainable development. This revolution also entails the increase in accessibility to data through much more openness and transparency, and ultimately more empowered people for better policies, better decisions and greater participation and accountability, leading to better outcomes for the people and the planet [7].

The majority of such “data coming from new technologies” is what can be called big data. It is data being generated in real time, at high velocity and volume, in a variety of forms and formats, and on an increasing range of phenomena that are being mediated by digital technologies – from governance to human communication. Further, a good part of such big data is not about the content of the phenomenon concerned but about its process – for example, a Call Detail Record is generated for each mobile phone call a person makes, and it contains data about the process of the call (time, location, duration, recipient, etc.) but not about the content of the call. Big data about various governmental and human processes is becoming a crucial instrument for documenting and monitoring them.
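To illustrate this process-versus-content distinction, a Call Detail Record can be sketched as a small data structure. The field names and values below are purely illustrative (operators' actual CDR schemas differ): the point is that nothing in the record captures what was said on the call.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    """Illustrative CDR: metadata about the process of a call, not its content."""
    caller: str             # calling party (illustrative identifier below)
    recipient: str          # called party
    start_time: datetime    # when the call began
    duration_seconds: int   # how long it lasted
    cell_tower_id: str      # coarse location of the caller

cdr = CallDetailRecord(
    caller="subscriber-A",
    recipient="subscriber-B",
    start_time=datetime(2016, 7, 1, 9, 30),
    duration_seconds=182,
    cell_tower_id="TOWER-0042",
)
print(cdr.duration_seconds)  # 182
```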

 

3. Big Data: Characteristics and Use for Development

3.1. Characteristics of Big Data

The simplest definition of big data is that it is a dataset of more than 1 petabyte. The US Bureau of Labor Statistics terms it non-sampled data, characterized by the creation of databases from electronic sources whose primary purpose is something other than statistical inference [8].

The characteristics which broadly distinguish Big Data are sometimes called the “3 V’s”: more volume, more variety and higher rates of velocity [9]. Big data sources generally share some or all of these features [10]:

  • Digitally generated,
  • Passively produced,
  • Automatically collected,
  • Geographically or temporally trackable, and
  • Continuously analysed.

Increasingly, big data is recognised as creating "new possibilities for international development" [11]. It can provide faster, cheaper and more granular data, and help meet growing and changing demands. It has been claimed, for example, that "Google knows or is in a position to know more about France than INSEE" [12], the country's well-resourced national statistical agency. To illustrate, Global Pulse gives the example of a hypothetical household facing soaring commodity prices, particularly food and fuel [13]. The household has the options of:

  • Getting part of their food at a nearby World Food Programme distribution centre,
  • Reducing mobile usage,
  • Temporarily taking their children out of school,
  • Calling a health hotline when children show signs of malnutrition related diseases, and
  • Venting about their frustration on social media.

Such a systemic shock of food insecurity will prompt thousands of households to react in roughly similar ways. These collective behavioural changes may show up in different digital data sources:

  • WFP might record that it serves twice as many meals a day,
  • The local mobile operator may see reduced usage,
  • UNICEF data may indicate that school attendance has dropped,
  • Health hotlines might see increased volumes of calls reporting malnutrition, and
  • Tweets mentioning the difficulty to “afford food” might begin to rise.

Thus the power of real-time digital data to predict paths for development is immense. Amassing such a large volume of data, tracking practically every aspect of social behaviour, can revolutionize the fields of official statistics and policy making.

Two points are worth noting: 1) these data sources are not available for real-time comparison by default, so one task before using big data in developmental work is to make data from different sources accessible across agencies and comparable; and 2) finding repeating patterns within large datasets sourced from varied origins allows not only monitoring but also (statistical) prediction of future possibilities and their implications for development action.
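
The cross-source pattern detection described above can be sketched with invented numbers: normalise each weekly series against its own quiet baseline, then flag weeks where several independent sources deviate together (all figures and thresholds below are hypothetical):

```python
from statistics import mean, stdev

# Hypothetical weekly counts from three unrelated sources around a
# food-price shock starting in week 3; all values are invented.
wfp_meals     = [100, 102,  99, 180, 195, 190]   # meals served at a WFP centre
mobile_topups = [500, 505, 498, 410, 395, 400]   # airtime purchases
food_tweets   = [ 20,  22,  19,  60,  75,  70]   # tweets about food prices

def baseline_z(series, baseline_weeks=3):
    """Deviation of each week from a quiet baseline period, in std devs."""
    base = series[:baseline_weeks]
    m, s = mean(base), stdev(base)
    return [(x - m) / s for x in series]

# A week counts as a shock if at least two independent sources deviate
# strongly together: a cross-source pattern is stronger evidence than any
# single feed on its own.
deviations = [baseline_z(s) for s in (wfp_meals, mobile_topups, food_tweets)]
shock_weeks = [w for w in range(6)
               if sum(abs(d[w]) > 3.0 for d in deviations) >= 2]
print(shock_weeks)  # → [3, 4, 5]
```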

3.2. Using Big Data for Development

There are several international organizations attempting to use such data.

Global Pulse, a United Nations initiative launched by the Secretary-General in 2009, seeks to leverage innovations in digital data and rapid data collection and analysis to help decision-makers gain a real-time understanding of how crises impact vulnerable populations. To this end, Global Pulse is establishing an integrated global network of Pulse Labs, anchored in Pulse Lab New York, to pilot the approach at the country level [14].

The Global Working Group on Big Data for Official Statistics, created in May 2014 pursuant to a decision of the UN Statistical Commission, maintains an inventory of ongoing activities and examples of the use of big data, addresses concerns related to methodology, human resources, quality and confidentiality, and develops guidelines on classifying various types of big data sources [15].

There have been applications at national and individual levels as well. For instance, in 2013, various sources reported that the CIA had admitted to the “full monitoring of Facebook, Twitter, and other social networks” to identify links between events and the sequences or paths leading to national security threats, with the ultimate aim of forecasting future activities and events [16].

In the field of conflict prevention, emerging applications map and analyse unstructured data generated by the politically active Internet use of academics, activists, civil society organizations and general citizens. In the case of Iran's post-election crisis beginning in 2009, it was possible to detect web-based usage of terms reflecting a general shift from awareness towards mobilization, and eventually action, within the population [17].

The "Big Data, Small Credit" report proposes that financial inclusion can be promoted by allowing consumers with mobile phones to formally access credit [18].

At a national level, the biggest challenge for most big data projects is the limited or restricted access that government agencies have to potential big data sets owned by the private sector [19]. The overall consensus is that big data used to track the SDGs must complement, not replace, traditional data sources [20]. This is because big data may not always cover the entire population, or include a sufficiently diverse sample of it. Moreover, most big data projects measure development indicators through correlations, which, unlike official data, may not always hold. For instance, big data might predict lowered household income from reduced mobile spending, whereas traditional data collects income statistics directly.

In a survey by the Global Working Group on Big Data for Official Statistics [21], it was found that only a few countries have developed a long-term vision for the use of big data, while many are formulating a big data strategy. Most countries have not yet defined business processes for integrating big data sources and results into their work and do not have a defined structure for managing big data projects.

Thus there exists a need to identify a governance framework for big data for sustainable development, not only at national level, but also at the international level.

 

4. Sustainable Development and Data Rights

Any discussion on governance frameworks would be incomplete without defining the kind of data rights they must seek to protect.

In the famous parable of the six blind men and the elephant, each concludes that the elephant is like a wall, snake, spear, tree, fan or rope, depending on where he touches it. Similarly, the Internet experiences of individual users (what they touch) lead to drastically different views (what they conclude) on what should constitute data rights.

The IEAG in its report has identified the following set of data related rights, but has not defined any actual framework or process for ensuring them (yet) [22]:

  • Right to be counted,
  • Right to an identity,
  • Right to privacy and to ownership of personal data,
  • Right to due process (for example when data is used as evidence in proceedings, or in administrative decisions),
  • Freedom of expression,
  • Right to participation,
  • Right to non-discrimination and equality, and
  • Principles of consent.

Personal data is broadly defined as "any information relating to an identified or identifiable individual" [23]. Often, primary data producers (the users of the services and devices that generate the data) are unaware of infringements of their individual privacy [24].

A survey by the Global Working Group on Big Data for Official Statistics found that only a few countries have a specific privacy framework for big data, while most apply the privacy framework for traditional statistics to big data as well [25].

Conventionally, safeguards against the re-use of big data have involved the “anonymization” or “de-identification” of data to conceal individual identities. Global Pulse, for instance, has put forth the concept of Data Philanthropy, whereby "corporations take the initiative to anonymize (strip out all personal information) their data sets and provide this data to social innovators to mine the data for insights, patterns and trends in real-time or near real-time" [26]. There is, however, a debate on whether data can actually be anonymized effectively; several researchers argue that, given current techniques, it cannot [27]. For instance, when the New York City government released a supposedly de-identified data set of taxi trips, researchers re-identified the drivers in under two hours of work, determining which driver drove every single trip in the dataset; it would even be easy to calculate drivers' gross income, or infer where they live [28].
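
A minimal sketch of why such re-identification works: the taxi release reportedly replaced medallion numbers with unsalted MD5 hashes, and because medallions follow a constrained pattern (simplified here to digit-letter-digit-digit, an assumption for illustration), the entire keyspace can be hashed and inverted in seconds:

```python
import hashlib
from itertools import product
from string import ascii_uppercase, digits

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Precompute the hash of every possible identifier in the assumed
# digit-letter-digit-digit keyspace: only 10 * 26 * 10 * 10 = 26,000 values.
rainbow = {md5_hex(f"{d1}{a}{d2}{d3}"): f"{d1}{a}{d2}{d3}"
           for d1, a, d2, d3 in product(digits, ascii_uppercase, digits, digits)}

# A "de-identified" value from the hypothetical release is inverted instantly.
leaked_hash = md5_hex("5X55")  # stands in for a hash seen in the data
print(rainbow[leaked_hash])    # the medallion number is recovered: 5X55
```

The weakness is not the hash function but the tiny identifier space; any deterministic transform of a small, structured keyspace can be inverted this way.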

Even the OECD opines that the current model of limiting identifiability of individuals is unsustainable. It recommends moving towards one where the focus is on transparency around how data is being used, rather than preventing specific types of use, stating that - "research funding agencies and data protection authorities should collaborate to develop an internationally recognized framework code of conduct covering the use of new forms of personal data, particularly those generated via network communication. This framework, built on best practice procedures for consent from data subjects, data sharing and re-use, anonymization methods, etc., could be adapted as necessary for specific national circumstances" [29].

Thus, there is a push for the argument that the historical approaches to protecting privacy and confidentiality (namely, informed consent and anonymity) no longer hold [30]. Some have even suggested using big data itself to keep track of user permissions for each piece of data, acting as a legal contract [31].
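
The per-datum permission idea can be sketched as a machine-readable consent record checked before every use; the field names and policy vocabulary below are invented for illustration:

```python
from dataclasses import dataclass, field

# A hypothetical sketch: each datum carries its own consent terms, and any
# processing step must check them first. Purpose strings are illustrative.
@dataclass
class Datum:
    value: str
    subject: str                          # whose data this is
    permitted_uses: set = field(default_factory=set)

def may_use(datum: Datum, purpose: str) -> bool:
    """A use is allowed only if the subject consented to that purpose."""
    return purpose in datum.permitted_uses

d = Datum("trip 2016-06-08 09:30", subject="user-17",
          permitted_uses={"aggregate-statistics"})
print(may_use(d, "aggregate-statistics"))  # → True
print(may_use(d, "targeted-advertising"))  # → False
```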

There is an overall consensus that any legal or regulatory mechanisms set up to mobilise the 'data revolution for sustainable development' should protect the data rights of the people [32], without any clear agreement on what these rights may be.

 

5. Governance Frameworks Proposed

A largely unanswered question, in light of the emerging consensus on using big data to monitor the SDGs, is within what governance frameworks these data collection and analysis methods will operate. Such frameworks must specify the methods of collection and the key actors involved in data analysis, management, storage and coordination, and delineate the role of NGOs and CSOs, if any, within these systems. Key global organizations and researchers have suggested the following models.

5.1. UN Sustainable Development Solutions Network

In 2012, the UN Secretary-General launched the UN Sustainable Development Solutions Network (SDSN) to mobilize global scientific and technological expertise to promote practical problem solving for sustainable development, including the design and implementation of the Sustainable Development Goals (SDGs) [33]. It has proposed the following.

Collection

The Inter-Agency and Expert Group on Sustainable Development Goal Indicators (IAEGSDG) and the United Nations Statistical Commission are to establish roadmaps for strengthening specific data collection tools that enable the monitoring of SDG indicators.

Analysis

Based on discussions with a large number of statistical offices, including Eurostat, BPS Indonesia, the OECD, the Philippines, the UK, and many others, a maximum of 100 global indicators is recommended, so that NSOs can report and communicate the underlying data effectively and in a harmonized manner. This conclusion was strongly endorsed during the 46th UN Statistical Commission and the Expert Group Meeting on SDG indicators [34].

Specialist indicators developed by thematic communities must be used for data analysis, as they include input and process metrics that usefully complement official indicators, which tend to be more outcome-focused. For example, the UN Inter-Agency Group on Child Mortality Estimation has developed a specialist hub responsible for analysing, checking, and improving mortality estimation. This is a leading source of child mortality information for both governments and non-governmental actors [35].

Research arms of private companies such as Microsoft Research, IBM Research, SAS, and the R&D arms of telecom companies could directly partner with official statistical systems to share sophisticated analysis techniques [36].

Management

Four levels of monitoring (national, regional, global, and thematic) should be "organized in an integrated architecture" [37].

Countries must decide individually whether official data should be complemented with non-official indicators from big data, which can add richness to the monitoring of the SDGs.

Where possible, regional monitoring should build on existing regional mechanisms, such as the Regional Economic Commissions, the Africa Peer Review Mechanism, or the Asia-Pacific Forum on Sustainable Development [38].

To coordinate thematic monitoring under the SDGs, each thematic initiative may have one or more lead specialist agencies or “custodians”, as in the IAEG-MDG monitoring processes. Lead agencies would be responsible for convening multi-stakeholder groups, compiling detailed thematic reports, and encouraging ongoing dialogue on innovation. These thematic groups can become testing grounds for launching a data revolution for the SDGs, trialling new measurements and metrics that in time can feed into the global monitoring process with annual reports [39].

UN Sustainable Development Solutions Network - Schematic illustration, with explanation, of the indicators for national, regional, global, and thematic monitoring.
Source: UN Sustainable Development Solutions Network, Indicators and a Monitoring Framework for the Sustainable Development Goals: Launching a Data Revolution for the SDGs, 2015, p.3.

Role of NSOs

Monitoring the SDG agenda will require substantive improvements in national statistical capacity. Assessments of existing capacity to fulfil SDG monitoring expectations must be undertaken, and the resulting needs integrated into National Strategies for the Development of Statistics (NSDSs) [40].

Coordination

A Global Partnership for Sustainable Development Data must be established and a World Forum on Sustainable Development Data be convened in 2016 to create mechanisms for ongoing collaboration and innovation.

A high-level, powerful group of businesses and states must convene the various data and transparency sustainable development initiatives under one umbrella.

To ensure comparability, Global Monitoring Indicators must be harmonized across countries by one lead technical or specialist agency which will additionally coordinate data standards and collection and provide technical support.

The following table indicates the suggested Lead Agencies for individual SDGs [41].

Sustainable Development Goal: Lead Agencies
1. No Poverty: World Bank, UNDP, UNSD, UNICEF, ILO, FAO, UN-Habitat, UNISDR, WHO, CRED, UNFPA, and UN Population Division
2. No Hunger: FAO, WHO, UNICEF, and the International Fertilizer Industry Association (IFA)
3. Good Health: WHO, UN Population Division, UNICEF, World Bank, GAVI, UNAIDS, and UN-Habitat
4. Quality Education: UNESCO, UNICEF, and World Bank
5. Gender Equality: UNICEF, UN Women, WHO, UNSD, ILO, UN Population Division, and UNFPA
6. Clean Water and Sanitation: WHO/UNICEF Joint Monitoring Programme (JMP), FAO, UN Water, and UNEP
7. Renewable Energy: Sustainable Energy for All, IEA, WHO, World Bank, and UNFCCC
8. Good Jobs and Economic Growth: IMF, World Bank, UNSD, and ILO
9. Innovation and Infrastructure: World Bank, OECD, UNIDO, UNFCCC, UNESCO, and ITU
10. Reduced Inequalities: UNSD, World Bank, and OECD
11. Sustainable Cities and Communities: UN-Habitat, Global City Indicators Facility, WHO, CRED, UNISDR, FAO, and UNEP
12. Responsible Consumption: EITI, UNCTAD, UN Global Compact, FAO, UNEP Ozone Secretariat, WBCSD, GRI, and IIRC
13. Climate Action: OECD DAC, UNFCCC, and IEA
14. Life below Water: UNEP-WCMC, IUCN, and FMC
15. Life on Land: FAO, UNEP, IUCN, and UNEP-WCMC
16. Peace and Justice: UNODC, WHO, UNOCHA, UNHCR, IOM, OCHA, OECD, UN Global Compact, EITI, UNCTAD, UNICEF, UNESCO, and Transparency International
17. Partnership for the Goals: BIS, IASB, IFRS, IMF, WIPO, WTO, UNSD, OECD, World Bank, OECD DAC, and SDSN

5.2. The UN DATA Revolution Group

Constituted by UN Secretary-General Ban Ki-moon in August 2014, the group is an Independent Expert Advisory Group tasked with making concrete recommendations on bringing about a 'data revolution for sustainable development' [42]. In its report, A World that Counts, it makes the following recommendations [43].

Collection

Clear standards on data collection methods must be developed based on the UN Fundamental Principles of Official Statistics. Periodic audits must be conducted by professional and independent third parties to ensure data quality.

Governments, civil society, academia and the philanthropic sector must work together to strengthen statistical literacy, so that all people have the capacity to contribute to and evaluate the quality of data.

Social entrepreneurs, the private sector, academia, the media, civil society and other individuals and institutions must be engaged globally, with incentives (prizes, data challenges) to encourage data sharing.

Analysis

A SDGs Analysis and Visualisation Platform is to be set up for fostering private-public partnerships and community-led peer-production efforts for data analysis.

A dashboard on “the state of the world” will engage the UN, think-tanks, academics and NGOs in analysing and auditing data.

Academics and scientists are to analyse data to provide long-term perspectives, knowledge and data resources at all levels.

The “Global Forum of SDG-Data Users” will ensure feedback loops between data producers, processors and users to improve the usefulness of data and information produced.

A “SDGs data lab”, mobilizing key public, private and civil society data providers, academics and other stakeholders working with the Sustainable Development Solutions Network, is to be established to support the development of a first wave of SDG indicators.

Storage

A “world statistics cloud” will store data and metadata produced by different institutions but according to common standards, rules and specifications.

Role of NSOs

Civil society organisations must share data and processing methods with private and public counterparts on the basis of agreements. They must hold governments and companies accountable using evidence on the impact of their actions, provide feedback to data producers, develop data literacy and help communities and individuals generate and use data.

NSOs are the central players of the data revolution, and their autonomy must be strengthened to maintain data quality. They must abandon expensive and cumbersome production processes, and incorporate new data sources, such as big data, that are human- and machine-readable, compatible with geospatial information systems, and available quickly enough for the data cycle to match the decision cycle. Collaborations with the private sector can boost technical and financial investments.

Coordination

Key stakeholders must create a “Global Consensus on Data”, to adopt principles concerning legal, technical, privacy, geospatial and statistical standards. Best practices related to public data such as the Open Government Partnership (OGP) and the G8 Open Data Charter are recommended foundations for such principles.

A UN-led “Global Partnership for Sustainable Development Data” is proposed, to coordinate and broker key global public-private partnerships for data sharing [44].

A “World Forum on Sustainable Development Data” and a “Network of Data Innovation Networks” will serve as converging points for the data ecosystem to share ideas and experiences for improvement, innovation and technology transfer.

5.3. Organization for Economic Co-Operation and Development (OECD)

The Organisation for Economic Co-operation and Development (OECD) is an inter-governmental organization that seeks to promote policies that will improve the economic and social well-being of people globally. It has made the following proposals [45].

Collection

Data is to be collected from national statistical agencies, national and international researchers, and international organisations.

Role of NSOs

By leveraging the expertise of telecommunications companies and software developers, for instance, national statistical systems could potentially reduce costs and improve the availability of data to monitor development goals [46].

Coordination

National Data Forums for Social Science Data must be created to develop social science data and to improve coordination between social scientists, data producers (national statistical agencies, government departments, large private sector businesses, and sources under academic direction), and data curators.

Social science research communities must contribute to national plans of action after a needs assessment [47]. Research funding agencies must collaborate at the international level on a common system for referencing datasets in research publications [48].

5.4. The Global Partnership for Sustainable Development of Data

The partnership is a global network of governments, NGOs, and businesses working to strengthen inclusivity, trust, and innovation in the way data is used to address the world's sustainable development efforts [49].

Analysis

There must be a common framework for information processing. At a minimum, a simple lexicon must tag each datum, specifying:

  • What: i.e. the type of information contained in the data,
  • Who: the observer or reporter,
  • How: the channel through which the data was acquired,
  • How much: whether the data is quantitative or qualitative, and
  • Where and when: the spatio-temporal granularity of the data.
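
A minimal sketch of this lexicon as a metadata envelope around a single datum (all field values below are invented for illustration):

```python
# One datum wrapped in the what / who / how / how-much / where-when lexicon.
datum = {
    "value": "school attendance down 12%",
    "what": "education / attendance",                    # type of information
    "who": "UNICEF field office",                        # observer or reporter
    "how": "administrative records",                     # acquisition channel
    "how_much": "quantitative",                          # quantitative or qualitative
    "where_when": {"region": "district-7",               # spatio-temporal
                   "granularity": "weekly"},             # granularity
}

# With a shared lexicon, data from different agencies can be filtered uniformly,
# e.g. keeping only quantitative, weekly-granularity observations.
is_usable = (datum["how_much"] == "quantitative"
             and datum["where_when"]["granularity"] == "weekly")
print(is_usable)  # → True
```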

Analysis of data involves filtering relevant information, summarising keywords and categorising them into indicators. This intensive mining of socioeconomic data, known as “reality mining”, can be done by: (1) continuous analysis of real-time streaming data; (2) digestion of semi-structured and unstructured data to determine perceptions, needs and wants; and (3) real-time correlation of streaming data with slowly accessible historical data repositories.

Use of big data for developmental goals can draw upon all three techniques to varying degrees, depending on the availability of data and the specific needs.

Role of NSOs

NSOs have a pivotal part to play in the data revolution. Countries and organizations believe that big data cannot replace traditional official statistical data, as it is based more on perception than facts. As the quip often attributed to Winston Churchill goes, "Do not trust any statistics that you did not fake yourself."

For instance, a study found that Google Flu Trends, designed to detect influenza epidemics, predicted nonspecific flu-like respiratory illness well, but not actual flu; the mismatch was due to popular misconceptions about influenza symptoms. This has important policy implications: doctors relying on Google Flu Trends might overstock flu vaccines, or be overly inclined to diagnose ordinary respiratory illnesses as influenza [50].
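
The pitfall can be sketched with invented monthly series: a search-volume proxy correlates strongly with all respiratory complaints but only weakly with laboratory-confirmed flu, which peaks later and more sharply:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented monthly counts, for illustration only.
searches        = [10, 30, 80, 90, 40, 15]   # flu-related search volume
all_respiratory = [12, 28, 75, 88, 45, 18]   # all flu-like complaints
confirmed_flu   = [ 1,  2, 10, 60, 70,  8]   # lab-confirmed influenza

print(round(pearson(searches, all_respiratory), 2))  # high: fits the wrong target
print(round(pearson(searches, confirmed_flu), 2))    # noticeably lower
```

A high correlation with *some* health signal says nothing about whether it is the signal policy-makers actually need.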

However, big data, if understood correctly, can indicate where further targeted investigation is necessary and enable immediate responses that favourably change outcomes.

5.5. The World Economic Forum (WEF)

The WEF is an International Organization for Public-Private Cooperation. It engages the foremost political, business and other leaders of society to shape global, regional and industry agendas [51]. In the report titled Big Data, Big Impact: New Possibilities for International Development, it makes the following recommendations [52].

Collection

Data production and development actors include individuals, the public sector and the private sector. Each produces different kinds of data with unique requirements. The private sector maintains vast troves of transactional data, much of which is "data exhaust", or data created as a by-product of other transactions. The public sector maintains enormous datasets in the form of census data, health indicators, and tax and expenditure information. The following figure highlights the kinds of data each sector collects, the incentives each has to share it, and the requirements for maintaining such data.

World Economic Forum - Diagram on Data Commons.
Source: World Economic Forum, Big Data, Big Impact: New Possibilities for International Development, 2012, p.4.

Business models must be created to provide appropriate incentives for private-sector actors to share data. Such models already exist in the Internet environment: companies in search and social networking profit from products offered at no charge to end users, because the usage data these products generate is valuable to other ecosystem actors. Similar models could be created for garnering big data for the SDGs. The following flowchart illustrates how the different sectors must work together to incentivise data collection and sharing.

World Economic Forum - Diagram on Global Coordination.
Source: World Economic Forum, Big Data, Big Impact: New Possibilities for International Development, 2012, p.7.

5.6. Dr. Julia Lane - A Quadruple Data Helix

Dr. Julia Lane is a Professor in the Wagner School of Public Policy at New York University, as well as a Provostial Fellow in Innovation Analytics and a Professor at the Center for Urban Science and Progress [53]. She has done extensive research on the uses of big data. In her paper titled "Big Data for Public Policy: A Quadruple Data Helix," she makes the following suggestions [54].

Collection

In the future there will exist a quadruple data helix model for data collection, with four strands: state and city agencies, universities, private data providers, and federal agencies.

A new set of institutions, city/university data facilities, must be established. These institutions should form the backbone of the quadruple helix, with direct connections to the private sector and to the federal statistical agencies.

Analysis

There is a need for graduate training for non-traditional students, who need to understand how to use data science tools as part of their regular employment. They must be able to identify and capture the appropriate data, understand how data science models and tools can be applied, and determine how the associated errors and limitations can be identified from a social science perspective.

Universities can act as a trusted independent third party to process, store, analyse, and disseminate data.

Management

The new infrastructure must ensure that data from disparate sources are collected, managed and used in a manner informed by end users. There are many technical challenges: disparate data sets must be ingested, their provenance determined, and their metadata documented. Researchers must be able to query data sets to learn what data are available and how they can be used. And if data sets are to be joined, they must be joined in a scientific manner, with workflows traced and managed so that the research can be replicated.

Coordination

The role of state and city agencies is to address immediate policy issues rather than to build long-term data infrastructures, as their mandate is to work with city data rather than the full spectrum of available data.

5.7. Data-Pop Alliance

Data-Pop Alliance is a global coalition on Big Data and development created by the Harvard Humanitarian Initiative, MIT Media Lab, and Overseas Development Institute that brings together researchers, experts, practitioners, and activists to promote a people-centred big data revolution through collaborative research, capacity building, and community engagement [55]. It makes the following suggestions.

Collection

The idea of shared responsibility between the public and private sectors is a proposed operational principle for creating a deliberative space. Mechanisms and legal frameworks must be devised for private companies to share their big data under formalized and stable arrangements, instead of being compelled by ad hoc requests from researchers and policymakers.

The media, too, could avoid publishing statistical data collected through unexplained methodologies by employing "statistical editors", and disseminate only verified information.

Role of NSOs

For official statistics, engaging with big data is not a technical consideration but a political obligation. In a two-tier system of official and non-official statistics, the public and investors tend to distrust official figures. For instance, the results of the 2010 census in the UK have been disputed on the basis of sewage data.

It is imperative for NSOs to retain, or regain, their primary role as the legitimate custodian of knowledge and creator of a deliberative public space to democratically drive human development [56].

 

6. Conclusion

The big data frameworks above provide useful insights on monitoring mechanisms, though some questions remain unanswered in each model. The key actors proposed include NSOs, city and state agencies, private companies, social scientists, private individuals and international research agencies. Data analysis can proceed through public-private collaborations, data philanthropy, and the use of indicators developed by thematic communities.

Collection

There appears to be consensus across the models that collection must be effected through public-private partnerships, with appropriate incentives.

Analysis

While several methods of analysis have been proposed by the Global Partnership, it is unclear who will conduct the analysis. The UNSDSN suggests that it be conducted by academics and scientists, while Julia Lane argues for public-private partnerships, which appear more feasible and transparent.

Role of NSOs

All frameworks agree on the pivotal role of NSOs, acknowledging them as the key players and coordinators at the national level. They must be strengthened financially, technologically and politically. Most frameworks seek to empower national agencies, which will coordinate collaborations with the private sector through incentives while protecting personal data.

Coordination

Several international fora have been proposed to enable coordination, alongside the consensus that NSOs will coordinate at the national level. A Global Partnership for Sustainable Development Data, a Global Consensus on Data and a World Forum on Sustainable Development Data have all been suggested. UN bodies appear to assign more responsibility to actors within the UN framework, with the UNSDSN giving an extensive list of lead agencies (UNDP, UN Women, WHO, etc.), while the WEF emphasises the private sector, the Data-Pop Alliance the NSOs, and Prof. Lane state and city agencies.

At the international level, countries can opt to join the international organizations being set up for the purpose. It remains to be seen whether all countries can achieve such coordination without infringing on data rights while answerable to no single international organization. The burden appears to fall on civil society, and on market forces within the private sector, to regulate this process. For instance, if a private sector company starts providing large un-anonymized data sets for government use, the privacy concerns of civil society may lead users to opt for a competitor's more privacy-friendly products, producing regulation through market forces. However, these forces may have disparate strengths in different contexts and countries, depending on market practices and information asymmetries, resulting in the lack of a uniform accountability mechanism.

 

7. Endnotes

[1] Dan Ariely, Facebook, January 06, 2013, https://www.facebook.com/dan.ariely/posts/904383595868.

[2] United Nations Organizations, 'Sustainable Development Goals' (United Nations Sustainable Development, 26 September 2015), http://www.un.org/sustainabledevelopment/sustainable-development-goals/, accessed 6 June 2016.

[3] Data Revolution Group, 'A World that Counts: Mobilising the Data Revolution for Sustainable Development' (November 2014), http://www.undatarevolution.org/wp-content/uploads/2014/12/A-World-That-Counts2.pdf, accessed 8 June 2016.

[4] High Level Panel on the Post-2015 Development Agenda, 'A New Global Partnership: Eradicate Poverty and Transform Economies through Sustainable Development' (Post2015hlp.org, July 2012), http://www.post2015hlp.org/, accessed 8 June 2016.

[5] Gary King, 'Ensuring the Data-Rich Future of the Social Sciences' [2011] 3(2) Science, http://gking.harvard.edu/files/datarich.pdf, accessed 8 June 2016.

[6] See [3].

[7] Ibid.

[8] Michael Horrigan, 'Big Data: A Perspective from the BLS' (amstat.org, 1 January 2013) http://magazine.amstat.org/blog/2013/01/01/sci-policy-jan2013/, accessed 4 June 2016.

[9] UN Global Pulse, 'Big Data for Development: Challenges & Opportunities' (6 May 2012) http://www.unglobalpulse.org/sites/default/files/BigDataforDevelopment-UNGlobalPulseJune2012.pdf, accessed 5 June 2016.

[10] Emmanuel Letouzé and Johannes Jütting, 'Official Statistics, Big Data and Human Development: Towards a New Conceptual and Operational Approach' (2014) 12(3), Data-Pop Alliance White papers Series, https://www.odi.org/sites/odi.org.uk/files/odi-assets/events-documents/5161.pdf, accessed 4 June 2016.

[11] See [9].

[12] See [10].

[13] See [9].

[14] UN Global Pulse, 'About: United Nations Global Pulse' (2016) http://www.unglobalpulse.org/about-new, accessed 7 June 2016.

[15] UN Stats, 'Global Working Group' (2014) http://unstats.un.org/unsd/bigdata/, accessed 8 June 2016.

[16] New York City Press Release, ‘Mayor Bloomberg, Police Commissioner Kelly and Microsoft Unveil New, State-of-the-Art Law Enforcement Technology that Aggregates and Analyzes Existing Public Safety Data in Real Time to Provide a Comprehensive View of Potential Threats and Criminal Activity’ (New York City, 8 August 2012), http://www1.nyc.gov/office-of-the-mayor/news/291-12/mayor-bloomberg-police-commissioner-kelly-microsoft-new-state-of-the-art-law, accessed 2 July 2016.

[17] Francesco Mancini, 'New Technology and the Prevention of Violence and Conflict' (Reliefwebint, April 2013), http://reliefweb.int/sites/reliefweb.int/files/resources/ipi-e-pub-nw-technology-conflict-prevention-advance.pdf, accessed 2 July 2016.

[18] Arjuna Costa, Anamitra Deb, and Michael Kubzansky, 'Big Data, Small Credit: The Digital Revolution and Its Impact on Emerging Market Consumers,' (Omidyar, 3 March 2013) https://www.omidyar.com/sites/default/files/file_archive/insights/Big%20Data,%20Small%20Credit%20Report%202015/BDSC_Digital%20Final_RV.pdf, accessed 2 July 2016.

[19] United Nations Economic and Social Council, 'Report of the Global Working Group on Big Data for Official Statistics' (UN Stats, 3 March 2015), http://unstats.un.org/unsd/statcom/doc15/2015-4-BigData-E.pdf, accessed 8 June 2016.

[20] Ibid.

[21] Ibid.

[22] See [3].

[23] OECD, 'OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data' (23 September 1980), http://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm, accessed 29 May 2016.

[24] Amir Efrati, ''Like' Button Follows Web Users' (WSJ, 18 May 2011) http://www.wsj.com/articles/SB10001424052748704281504576329441432995616, accessed 23 May 2016.

[25] See [15].

[26] Robert Kirkpatrick, 'Data Philanthropy: Public and Private Sector Data Sharing for Global Resilience' (UN Global Pulse, 16 September 2011), http://www.unglobalpulse.org/blog/data-philanthropy-public-private-sector-data-sharing-global-resilience, accessed 4 June 2016.

[27] Ibid.

[28] Arvind Narayanan, 'No silver bullet: De-identification still doesn't work' (1 April 2016), http://randomwalker.info/publications/no-silver-bullet-de-identification.pdf, accessed 3 July 2016.

[29] OECD Global Science Forum, 'New Data for Understanding the Human Condition: International Perspectives,' (February 2013) http://www.oecd.org/sti/sci-tech/new-data-for-understanding-the-human-condition.pdf, accessed 2 June 2016.

[30] S. Barocas, 'The Limits of Anonymity and Consent in the Big Data Age,' in Privacy, Big Data, and the public good: Frameworks for Engagement (Cambridge University Press, 2014).

[31] A. Pentland, 'Institutional Controls: The New Deal on Data,'  in Privacy, Big Data, and the public good: Frameworks for Engagement (Cambridge University Press, 2014).

[32] See [3].

[33] UN Sustainable Development Solutions Network, 'About Us: Vision and Organization' (2012) http://unsdsn.org/about-us/vision-and-organization/, accessed 2 June 2016.

[34] UN Sustainable Development Solutions Network, 'Indicators and a Monitoring Framework for the Sustainable Development Goals: Launching a data revolution for the SDGs' (12 June 2015) http://unsdsn.org/wp-content/uploads/2015/05/150612-FINAL-SDSN-Indicator-Report1.pdf, accessed 4 June 2016.

[35] UNICEF, 'CME Info - Child Mortality Estimates' (2014) http://www.childmortality.org/, accessed 1 June 2016.

[36] See [10].

[37] UNESCO, 'Technical report by the Bureau of the United Nations Statistical Commission (UNSC) on the process of the development of an indicator framework for the goals and targets of the post-2015 development agenda' (6 March 2015) http://www.uis.unesco.org/ScienceTechnology/Documents/unsc-post-2015-draft-indicators.pdf, accessed 3 June 2016.

[38] UN, 'The Road to Dignity by 2030: Ending Poverty, Transforming All Lives and Protecting the Planet ' (4 December 2014) http://www.un.org/disabilities/documents/reports/SG_Synthesis_Report_Road_to_Dignity_by_2030.pdf, accessed 7 June 2016.

[39] Ibid.

[40] UN Sustainable Development Solutions Network, 'Data for Development: An Action Plan to Finance the Data Revolution for Sustainable Development' (10 July 2015) http://unsdsn.org/wp-content/uploads/2015/04/Data-For-Development-An-Action-Plan-July-2015.pdf, accessed 3 June 2016.

[41] See [34].

[42] UN Data Revolution Group, 'About the Independent Expert Advisory Group' (6 November 2014) http://www.undatarevolution.org/about-ieag/, accessed 4 June 2016.

[43] See [3].

[44] The Partnership has already been established, and it is developing a further framework.

[45] Organisation for Economic Co-Operation and Development, 'The Organisation for Economic Co-operation and Development (OECD): About' (2016) http://www.oecd.org/about/, accessed 2 June 2016.

[46] Organisation for Economic Co-Operation and Development, 'Strengthening National Statistical Systems to Monitor Global Goals' (2015) http://www.oecd.org/dac/POST-2015%20P21.pdf, accessed 1 June 2016.

[47] Ibid.

[48] OECD Global Science Forum, 'New Data for Understanding the Human Condition: International Perspectives' (February 2013) http://www.oecd.org/sti/sci-tech/new-data-for-understanding-the-human-condition.pdf, accessed 2 June 2016.

[49] The Global Partnership On Sustainable Development Data, 'Who We Are: The Data Ecosystem and the Global Partnership' (2016) http://www.data4sdgs.org/who-we-are/, accessed 5 June 2016.

[50] World Economic Forum, 'Big Data, Big Impact: New Possibilities for International Development' (22 January 2012) http://www3.weforum.org/docs/WEF_TC_MFS_BigDataBigImpact_Briefing_2012.pdf, accessed 8 June 2016.

[51] World Economic Forum, 'Our Mission: The World Economic Forum' (12 January 2016) https://www.weforum.org/about/world-economic-forum/, accessed 7 June 2016.

[52] See [50].

[53] Julia Lane, Homepage, http://www.julialane.org/.

[54] Julia Lane, 'Big Data for Public Policy: The Quadruple Helix' (2016) 8(1) Journal of Policy Analysis and Management, DOI: 10.1002/pam.21921, accessed 1 June 2016.

[55] Data-Pop Alliance, 'Data-Pop Alliance: Our Mission' (May 2014) http://datapopalliance.org/, accessed 1 June 2016.

[56] See [10].

 

8. Author Profile

Meera Manoj is a law student at the Gujarat National Law University, Gandhinagar and has completed her first year. She is passionate about civil rights, feminism, economics in law and anything involving paneer. She aspires to travel the world and build up a vast library, with unparalleled sections on International Law and Archie comics.

 

Cross Border Cooperation on Criminal Matters - A Perspective from India

by Elonnai Hickok and Vipul Kharbanda — last modified Jul 11, 2016 06:45 AM
In today’s increasingly interconnected world where information and data can be moved to and from different parts of the world in a matter of seconds, more and more transactions are taking place online.

 

With internet transactions, especially when they do not involve a centralized or governmental agency, traditional physical borders between nation states become increasingly irrelevant. This is equally true for both legal as well as illegal transactions. It is perhaps due to this that there has been an increase in the number of transnational crimes, especially cyber crimes, in the recent past.

It has been widely accepted that cooperation and sharing of information on a regular and sustained basis between nation states is a very important tool in tackling incidents of international cyber crime. For example, the Report of the Group of Experts on Developments in the Field of Information and Telecommunications in the Context of International Security established by the Secretary General of the United Nations, explicitly prescribes the following norms:

  • (a) ….. States should cooperate in developing and applying measures to increase stability and security in the use of ICTs and to prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security;
  • (d) States should consider how best to cooperate to exchange information, assist each other, prosecute terrorist and criminal use of ICTs and implement other cooperative measures to address such threats. States may need to consider whether new measures need to be developed in this respect;
  • (h) States should respond to appropriate requests for assistance by another State whose critical infrastructure is subject to malicious ICT acts. States should also respond to appropriate requests to mitigate malicious ICT activity aimed at the critical infrastructure of another State emanating from their territory, taking into account due regard for sovereignty;
  • (j) States should encourage responsible reporting of ICT vulnerabilities and share associated information on available remedies to such vulnerabilities to limit and possibly eliminate potential threats to ICTs and ICT-dependent infrastructure;

In a similar vein, on June 7, 2016, the Prime Minister’s Office released a fact sheet on the framework for the US-India Cyber Relationship. The fact sheet, which is to be followed by a signed Framework within 60 days, articulated a number of principles to frame and guide the U.S.-India cyber relationship. The following principles of the framework focus on cross border sharing of information:

  • A commitment to promote cooperation between and among the private sector and government authorities on cybercrime and cybersecurity
  • A recognition of the importance of bilateral and international cooperation for combating cyber threats and promoting cybersecurity;
  • A commitment to promote closer cooperation among law enforcement agencies to combat cybercrime between the two countries;
  • Sharing information on a real time or near real time basis, when practical and consistent with existing bilateral arrangements, about cybersecurity threats, attacks and activities, and establishing appropriate mechanisms to improve such information sharing;
  • Developing joint mechanisms for practical cooperation to mitigate cyber threats to the security of ICT infrastructure and information contained         therein consistent with their respective obligations under domestic and international law;

Processes for Cross Border Sharing

The process by which the Indian government can access data stored with a U.S. company depends most importantly on whether the data is metadata (such as location data or subscriber information) or content data (such as the content of emails). If the data is metadata, the Indian government can approach the company directly for access, at which point it is at the company’s discretion to share the data. For example, with respect to requests for user data, Google states:

“Respect for the privacy and security of data you store with Google underpins our approach to producing data in response to legal requests. When we receive such a request, our team reviews the request to make sure it satisfies legal requirements and Google's policies. Generally speaking, for us to produce any data, the request must be made in writing, signed by an authorized official of the requesting agency and issued under an appropriate law. If we believe a request is overly broad, we'll seek to narrow it.”

Due to provisions in the Electronic Communications Privacy Act, if the data is content, then the Indian government must use an international instrument, such as an MLAT request or a Letter of Rogatory, to access the information. In the case of an MLAT request or Letter of Rogatory, the U.S. government receives a court order, issues a search warrant to the company for the data, and shares the data back with India. For the Indian request to be approved it must, at a minimum, (1) meet the terms of the relevant treaty and (2) comply with U.S. law, including Fourth and Fifth Amendment rights and probable cause where applicable.

Legal Provisions to operationalise requests in criminal matters in India

In terms of Indian law, section 105 of the Code of Criminal Procedure (Cr.P.C.) provides for reciprocal arrangements to be made by the Central Government with foreign governments with regard to the service of summons, warrants and other judicial processes. In the case of countries with which India has an operational MLAT, the process envisaged in the MLAT, coupled with the provisions of section 105 Cr.P.C., is to be followed, while in the case of other countries the Ministry makes a request to the concerned foreign government through the mission or embassy on the basis of an assurance of reciprocity. The difference between the two categories of countries is that a country with an MLAT has an obligation to consider serving the documents, whereas non-MLAT countries are under no such obligation. Summons issued by foreign courts and authorities and received in the Ministry of Home Affairs are served by the State Police through CBI-Interpol.

Although the process for service of summons, warrants and other judicial processes is dealt with in section 105 of the Cr.P.C., some MLATs, such as the one with the United Kingdom, provide for even greater assistance, such as attachment and forfeiture of property as well as warrants for arrest. Since the Cr.P.C. as originally drafted did not contain such provisions, Chapter VIIA was inserted into it. The Statement of Objects and Reasons of the Amendment Act that introduced the said chapter states:

“The Government of India has signed an agreement with the Government of United Kingdom of Great Britain and Northern Ireland for extending assistance in the investigation and prosecution of crime and the tracing, restraint and confiscation of the proceeds of crime (including crimes involving currency -transfer) and terrorist funds, with a view to check the terrorist activities in India and the United Kingdom. For giving full effect to this agreement, it is proposed to amend the Code of Criminal Procedure, 1973 to provide for-

(a) the transfer of persons between the contracting States including persons in custody for the purpose of assisting in investigation or giving evidence in proceedings;

(b) attachment and forfeiture of properties obtained or derived from the commission of an offence that may have been or has been committed in the other country; and

(c) enforcement of attachment and forfeiture orders issued by a Court in the other country.”

Conflict between Treaty provisions and Indian Law

One question which sometimes arises for discussion is what happens when a request is made by a foreign state under an MLAT for information which is not legally enforceable even by Indian authorities. Usually the treaties themselves are very clear on this point and have a provision which precludes a state from acting upon a request if the same is not enforceable under its domestic law. Even so, in the hypothetical scenario that such a provision does not exist in a treaty, the law in India is pretty clear, that in case of a conflict between the provisions of the treaty and Indian law, the provisions of Indian law shall prevail. This was held by the Supreme Court in the case of Bhavesh Jayanti Lakhani v. State of Maharashtra and others, wherein the Court said:

“The Act as also the treaties entered into by and between India and foreign countries are admittedly subject to our municipal law. Enforcement of a treaty is in the hands of the Executive. But such enforcement must conform to the domestic law of the country. Whenever, it is well known, a conflict arises between a treaty and the domestic law or a municipal law, the latter shall prevail.”

MLATs in India

India currently has MLATs with 39 countries. The nodal ministry for concluding Mutual Legal Assistance Treaties in Criminal Matters is the Ministry of Home Affairs, which is responsible for facilitating measures of mutual assistance in the investigation, prosecution and prevention of crime; service of summons and other judicial documents; execution of warrants and other judicial commissions; and tracing, restraint, forfeiture or confiscation of the proceeds and instruments of crime.

It must be noted here that, unlike in countries like the United States, Indian law does not require parliamentary approval for treaties to become operational, and MLATs therefore enter into force in accordance with the requirements stipulated in their own provisions.[1] This means that some MLATs may enter into force as soon as they are signed, if their terms so provide, while others may require ratification. Ratification is usually done through an Instrument of Ratification signed by the President of India, and is considered complete only after the Instruments of Ratification are exchanged between the signatory states.

As an additional note, MLATs between countries are not all the same. For example, the US/EU MLAT is different from the US/India MLAT. Significant differences in scope include:

  • Types of offenses: The US/EU MLAT requires assistance for offenses that are recognized in both the EU and the US, serious offenses punishable under the laws of both states, or offenses punishable with 4-2 years in prison. In contrast, the US/India MLAT requires assistance without regard to whether the conduct under investigation would constitute an offence under the laws of the requested state.
  • Forms of cooperation: Forms of cooperation found in the US/EU MLAT but not in the US/India MLAT include joint investigative teams, expedited means of communication, and a specific provision for the identification of banking information.
  • Use of obtained evidence: The US/EU MLAT places clear limitations on the use of personal and other data, whereas the US/India MLAT merely allows the requested state to place limitations on use and confidentiality if it wishes to.
  • Review and application: The US/EU treaty applies to offenses committed both before and after it enters into force, and requires a review five years after it comes into force. The US/India treaty incorporates neither of these aspects.

Letters Rogatory in India

The process of sending Letters Rogatory is enabled via section 166A of the Cr.P.C., which allows an investigating officer to apply to a court to issue a letter of request. The court can issue such a letter to the Central Government, which will then send it to the courts in the U.S. In 2007, the Ministry of Home Affairs issued comprehensive guidelines regarding letters rogatory, extradition requests, and contact with foreign police, citing issues in the consistent and accurate use of such tools and processes. Examples of Letters Rogatory issued by India to the US include those in the Chase Manhattan Bank, David Headley and Louis Berger matters, and in the Sheena Bora murder case.

Challenges in Cross Border Sharing of Information

Cyber crimes have the unique feature of making geographical boundaries irrelevant, thus creating challenges of jurisdiction and applicable legal standards. For example, a person sitting in the US may commit a crime against a victim in India without leaving the U.S., raising the question of which law applies to the individual. Or a person sitting in India may commit a crime against a victim in India, but the evidence may be stored with a U.S. company located outside India, again raising the question of which law applies to the data. As pointed out by the Centre for Law and Democracy, jurisdiction can be claimed from a number of vantage points, including the location of the data, the citizenship of the individual, the location of the individual, the place of the crime, or the place of incorporation of the company holding the data.

The challenge of jurisdiction is further exacerbated by the capabilities of today's technologies. For example, if the data of an Indian citizen contains information about a U.S. citizen, the information about the Indian citizen may receive different levels of legal protection than that of the U.S. citizen while stored in the U.S. Indian courts can also try to claim jurisdiction over Indian data or persons that are subject to U.S. law. For example, in 2012 Indian courts attempted to issue summons on U.S. ICT companies. Also in 2012, a District Court in Vishakhapatnam issued an order restraining Google Inc. from complying with a subpoena issued by the Superior Court of California that ordered Google to share the password of the Gmail account of an Indian citizen residing in Vishakhapatnam. Although this was only an interim order given by a District Court, and thus has no precedential value, the case demonstrates the need to have in place systems and mechanisms to ensure that law enforcement and judicial authorities do not trip over each other while dealing with cross-jurisdictional issues in the realm of cyber crimes and cyber disputes.

Challenges in cross border sharing of information do not rest only with the question of jurisdiction. All bilateral and multilateral processes for cross border sharing of information and cooperation have their own complexities. These include bureaucracy, mistakes in issuing and/or processing requests, lack of competency, and differing legal requirements. For example, experts have noted that many foreign government requests for access to communications are rejected because they fail to meet the requirement that a U.S. court find probable cause before issuing an order to a company for disclosure of communications. In the case of MLATs and Letters of Rogatory, these complexities have resulted in slow processing times, rejection of requests due to errors in submission, or rejection of requests for legal reasons. For example, according to the President’s Review Group on Intelligence and Communications Technologies, it takes approximately 10 months for the US to process and respond to an MLAT request from a foreign government. In the age of the internet, where situations may require real time access to data, such delays are frustrating and can severely hamper an investigation.

Solutions to the Challenges

Law enforcement agencies, governments, academia, and civil society across the world have realized the complexity of this situation, and have been trying to find effective solutions.

A recent proposal is the agreement being negotiated between the US and the UK, which, simply put, would allow the UK government and intelligence agencies to access content data directly from ICT companies in the US when neither the content nor the crime pertains to a US citizen. There is talk that this agreement would be extended to other countries on the condition that they meet certain standards and requirements. At the same time, the U.S. has recently reached the Privacy Shield Agreement with the EU. The agreement establishes more stringent standards and protocols for sharing data between the US and the EU, and includes requirements for transparency of U.S. government access to EU citizens' data. In 2016 the U.S. enacted the Judicial Redress Act. The Act authorizes the Department of Justice to approve countries whose citizens may bring civil action under the Privacy Act of 1974 against specified US agencies. Countries will be assessed on their privacy protections, on whether data can be freely passed between the US and the applying country, and on whether the Department of Justice has certified that the country's data transfer policies do not impede the national security interests of the U.S.

Parallel to these policy developments, civil society and academics are also discussing alternative frameworks. Some of these include:

  • Daskal-Woods: Proposed by Jennifer Daskal, Assistant Professor of Law at American University Washington College of Law, and Andrew Woods, Assistant Professor of Law at the University of Kentucky, the framework seeks to address repercussions arising from roadblocks in the cross border sharing of data, including data localization requests, heavy handed encryption policies, and governmental demands for built-in back doors. The framework could be extended to countries meeting a set of established criteria, including basic human rights requirements. Importantly, the framework proposes replacing the requirement of probable cause with that of a strong factual basis that a crime has been or will be committed. This would likely be a welcome change for the Indian government, as it could make it easier for requests to be approved; for Indian citizens, though, it might be less welcome, as they would lose the privacy protection that the probable cause standard affords them against requests from the Indian government and other foreign governments. India does not require probable cause to be demonstrated, and access to the content of communications can be authorized by the Joint Secretary to the Ministry of Home Affairs on the grounds laid out in section 5 of the Telegraph Act and section 69 of the IT Act.

  • Strawman: Proposed by the Centre for Democracy and Technology, this framework would subject MLAT requests in which the location of the crime, the citizenship and location of the victim, the perpetrator, and the data subject are all in the same country primarily to the requesting country’s domestic law. The framework would bring both content and non-content data under its scope, and would be extended to countries that meet baseline human rights standards.
  • Peter Swire and Justin Hemmings: Swire and Hemmings have proposed a number of process reforms to help streamline the MLAT process, including increasing the resources of the Office of International Affairs, reducing the number of steps in the MLAT request process, and streamlining the provision of requested records back to the requesting country. They have also proposed a system that would prioritize or streamline requests from ‘pre-approved’ countries, and suggested exploring joint criminal investigations between law enforcement officials from different countries as an alternative means of obtaining access to information.

Conclusion

Due to the pervasive nature of the internet and technology in our everyday lives, there is a greater danger of criminal activities being conducted from great distances, often from across international borders. One of the most important means of tackling this increase in cross border criminal activity is to increase international cooperation and information sharing through more efficient processes. As negotiations take place between the U.S. and the U.K. and between the U.S. and the EU on finding alternatives to enhance cross border sharing of information, it is encouraging that the US-India cyber framework touches on cross border sharing of information and references the Expert Group; but India needs to participate in the larger global debate happening around cross border sharing of information, and needs to focus on improving its internal chain of custody for sending requests and on strengthening privacy practices domestically.

Questions that India should be asking with respect to the developments in the US and EU include:

  • By what criteria will non-EU countries be evaluated for participation in such partnerships? If India does not immediately meet the set criteria, is there the will to make the needed amendments to domestic policy and practice? What are the pros and cons of India participating in such partnerships? This is an important question for India to reflect upon, as it is not clear that current practices and policy would meet the set human rights standards. For example, one issue could be that India provides for the death penalty for heinous crimes and terror attacks.
  • Once evaluated, will agreements and processes between the US and various countries be standard? That is, will the EU-US agreement be the same as an India-US agreement? Will the process for engagement be the same for the EU-US as for India-US? If not, what will determine the differences?

As the Centre for Internet and Society continues to research into MLATs and cross border sharing of information, some questions we are seeking to answer include:

  • For what differing purposes does India send Letters Rogatory and/or MLAT requests?
  • What is the process followed for issuing and receiving Letters Rogatory and MLAT requests? The CBI has issued comprehensive guidelines for issuing these requests, but are they followed? What are the challenges in the implementation of these processes?
  • How much money is spent on sending/processing Letters of Rogatory/MLATs?
  • Is one instrument preferred over another i.e, Letters Rogatory vs. MLATs, and why?
  • How many requests originating from India to the US have been rejected and why?
  • What level of privacy protection is afforded to the data transferred? Who has access to such data?

[1] The issue of parliamentary approval was raised in the case of the Indus Waters Treaty, 1960 on the ground that it involved a huge financial commitment and its ratification without parliamentary approval amounted to an encroachment upon the financial powers of the Parliament. Deciding on this point, the Speaker of the House said: “Wherever the Government enters into a treaty-Parliament may or may not agree-primary right under the Constitution is with the Government to enter into a treaty..................We cannot now take away powers which have been vested in the Government under the Constitution….. In accordance with previous practice, it is not obligatory on the Government to place treaties before this House for ratification unless, as constituent parts of those treaties, the respective Governments have agreed to place them before Parliament and obtain their ratification.” Decisions from the Chair, Lok Sabha, available at http://www.parliamentofindia.nic.in/ls/decision/decp84.htm

 

Trans Pacific Partnership and Digital 2 Dozen: Implications for Data Protection and Digital Privacy

by Shubhangi Heda — last modified Jul 12, 2016 07:56 AM
In this essay, Shubhangi Heda explores the concerns related to data protection and digital privacy under the Trans Pacific Partnership (TPP) agreement signed recently between the United States of America and eleven countries located around the Pacific Ocean region, across South America, Australia, and Asia. TPP is a free trade agreement (FTA) that emphasises, among other things, the need for liberalising the global digital economy. The essay also analyses the critical document titled ‘Digital 2 Dozen’ (D2D), which compiles the key action items within TPP addressing liberalisation of the digital economy, and sets up the relevant goals for the member nations.

 

1. Introduction

2. Analysis of TPP and D2D

2.1. Trans Pacific Partnership (TPP)

2.2. Digital 2 Dozen (D2D)

3. Major Criticisms of the Digital Agenda of TPP

3.1. Data Protection

3.2. Digital Privacy

4. Implications of TPP for RCEP

5. Implications of TPP in the Context of EU Safe Harbour Judgement

6. Implications of TPP for India after US-India Cyber Relationship Agreement

7. Conclusion

8. Endnotes

9. Author Profile


1. Introduction

This essay explores the concerns related to data protection and digital privacy under the Trans Pacific Partnership (TPP) agreement, recently signed by the United States of America and eleven other countries located around the Pacific Ocean region, across South America, Australia, and Asia [1]. TPP is a free trade agreement (FTA) that emphasises, among other things, the need to liberalise the global digital economy. The essay also analyses the critical document titled ‘Digital 2 Dozen’ (D2D), which compiles the key action items within TPP addressing liberalisation of the digital economy, and sets out the relevant goals for the member nations. TPP requires the member countries to facilitate unhindered digital data flow across nations, for commercial and governmental purposes, which evidently has major implications for national and regional data protection and privacy regimes. These implications must also be seen in the context of the recent judgement by the EU Court of Justice against the validity of the EU-USA data transfer agreement of 2000. Further, the essay discusses the potential impacts that TPP/D2D might have on India, in the context of the ongoing USA-India Cyber Relationship dialogue. If privacy concerns are not raised now, TPP might act as a model framework for future FTAs that fail to encompass a proper data protection and digital privacy regime.

2. Analysis of TPP and D2D

2.1. Trans Pacific Partnership (TPP)

The Trans Pacific Partnership (TPP) is a large multi-partner free trade agreement amongst twelve Asia-Pacific countries, closely led by the geo-political and economic strategies of the USA. Negotiations began in 2008, when the USA joined the Pacific Four (P-4) talks, and were concluded in 2015, when the text was released. Ministers from the member countries signed the agreement on February 4, 2016 [2]. The main aim of TPP is to liberalise trade and investment beyond what is provided for within the WTO. It is also considered a strategic move by the US to counter the trade linkages being established in the Asian region. TPP largely covers market access, and rules on various related issues such as intellectual property rights, labour laws, and environmental standards [3].

Between 1992 and 2012, the number of bilateral trade agreements signed in Asia rose from 25 to 103; the resulting tangle of overlapping FTAs is called the ‘noodle bowl effect’. TPP is seen as a framework that will replace these FTAs, although various bilateral arrangements have been signed alongside TPP itself, and the USA has stated that TPP will not affect the already existing NAFTA [4]. While TPP has been concluded, another free trade agreement, the Transatlantic Trade and Investment Partnership (TTIP), is being negotiated between the USA and the EU. Both TPP and TTIP are considered to serve a similar objective, namely dealing with new and modern trade issues. Both agreements are US-led, and since the negotiations for TPP are now finalised, it may have a significant impact on TTIP [5].

TPP is one of the first agreements that deals specifically with the digital economy and applies across borders. Its main aims are to promote free flow of data across borders without data localisation, and to remove national clouts and regional internets. It also includes provisions to combat theft of trade secrets, enables transparent regulatory processes with inputs from various stakeholders, and aims to provide access to tools and procedures for the conduct of e-commerce [6].

Some of the major criticisms of TPP concern the following issues [7]:

  • environment: the agreement does not address the issue of climate change, and the language used is very weak;
  • labour rights: the provisions mandate parties to adhere to ILO standards but do not provide an effective enforcement framework and might not bring the desired change;
  • investment: the chapter is seen as controversial because of its investor-state dispute settlement clause, which allows foreign investors to sue governments over policies that might cause them harm;
  • e-commerce and telecommunications: these chapters raise major privacy concerns;
  • intellectual property: the chapter includes controversial rules regarding pharmaceutical companies and data exclusivity, apart from the privacy concerns.

2.2. Digital 2 Dozen (D2D)

D2D is a set of rules and aims specifically drafted to be followed in trade agreements relating to the open internet and the digital economy. The more specific aims of TPP, as provided within the ‘Digital 2 Dozen’ and directed at more liberalised trade in digital goods and services, are [8]:

  • promoting free and open internet,
  • prohibiting digital custom duties,
  • securing basic non-discrimination principles,
  • enabling cross-border data flows,
  • preventing localization barriers,
  • barring forced technology transfers,
  • advancing innovative authentication methods,
  • delivering enforceable consumer protections,
  • safeguarding network competition,
  • fostering innovative encryption products, and
  • building an adaptable framework.

The strategic goal of the US in introducing D2D as the goals of TPP has been to set a trend for all trade agreements within the Asian region. If TPP is a success, similar goals and policy frameworks are expected to be followed in other trade agreements as well. For example, the USA-India partnership enshrines similar aims, as does the USA-Korea partnership. Hence, while India is not part of TPP, the USA is nonetheless trying to draw India into a partnership similar to TPP. The language proposed by the USA in TPP negotiations has always been supportive of cross-border data flows, on the claim that companies have mechanisms to keep privacy in check and that privacy would not be undermined; however, countries like New Zealand and Australia, which have strong national privacy protection laws, have raised concerns that will be discussed in further sections [9]. Beyond privacy rights, the Digital 2 Dozen initiative also affects other digital rights [10]:

  • excessive copyright terms: TPP proposes to extend the term of copyright to hundred years, which deprives access to knowledge;
  • ISP obligations: in line with the US motive of giving more power to private entities, ISPs are allowed to check for copyright infringement, and TPP does not put any privacy restriction in this regard, putting freedom of expression and privacy at risk;
  • introduction of new fair use rules;
  • a ban on circumvention of digital locks or DRMs;
  • no compulsory limitations for persons with disabilities, and a lack of fair use for journalistic purposes;
  • no effective provision for net neutrality, even though net neutrality is a major issue in many developing nations in Asia;
  • a prohibition on open source mandates, which puts up a barrier for countries that want to release software as open source as a policy decision.

3. Major Issues Related to Data Protection and Privacy in the TPP

3.1. Data Protection

One of the major concerns raised against TPP regards the data protection provisions integrated within the E-Commerce chapter of the agreement. Articles 14.11 and 14.13 deal with flows of consumer data. Article 14.11 requires member states to allow transfer of data across borders, and Article 14.13 prohibits member states from requiring companies to host data on local servers. Concerns were raised in a few member states; for instance, the Australian Privacy Foundation raised concerns over Article 14.11, which requires transfers to be allowed in the context of the business activities of service suppliers. It claimed that the exception to this provision is very narrow, and that the repercussion for not following it is that investor-state dispute settlement proceedings can be initiated, which is not sufficient to protect privacy. It also highlighted that the narrow exception provided under Article 14.13, which relates to the prohibition on data localisation requirements, might have an adverse effect on the implementation of national privacy laws within Australia [11].

Another provision of major concern is Article 14.13, which prohibits data localisation requirements. It will raise problems for countries like Indonesia and China, which will have to change their local laws to implement the provision [12]. There has already been a major concern of this kind with regard to the USA-EU Safe Harbour Agreement, which was invalidated by the ECJ’s ruling on data protection; the ruling struck down an arrangement that gave enterprises voluntary responsibility for enforcing privacy, though the USA and EU are in the process of renegotiating the agreement. The underlying concern was that in the EU data protection is a fundamental right, while in the USA data protection is more consumer-centric. When similar concerns were raised in TPP negotiations, they were rebutted, as the USA claimed that an FTA does not concern itself with data protection [13].

In 2012, Australia proposed alternative language for TPP which allowed countries to place restrictions on data flows as long as they were not a barrier to trade. The US responded to the concerns raised by Australia through a side letter assuring Australia that the two countries have a mutual understanding in relation to privacy and that the US will ensure the privacy of data with regard to Australia. While Australia’s concern was acknowledged, other countries which raised similar issues were not given any assurances [14]. The US instead proposed an ad-hoc strategy that gave private companies the power to form privacy policy, with implementation through state machinery [15].

3.2. Digital Privacy

Article 14.8 in the E-Commerce chapter of the agreement states that countries can form a legal framework for the protection of rights, but the kind of ‘legal framework’ is not defined. Nowhere does it state that privacy protection or data protection laws are expressly exempted; rather, it states that any such policy implemented by member states will be put under review against TPP standards. The standards which TPP proposes to follow are based on the underlying idea that any such policy should not hinder free trade in any way. This test will be applied by tribunals which are experts in trade and investment, not in data protection or human rights [16]. While Article 14.8 provides for the protection of the private information of consumers, the footnote to the provision renders it ineffective: it states that member countries can adopt a legal framework for the protection of data through self-regulation by industry, and does not place any comprehensive data protection obligation upon the member states [17]. Similarly, Article 13.4 of the telecommunications chapter of TPP states that countries can apply regulations regarding the confidentiality of messages as long as these are not “a means of arbitrary or unjustifiable discrimination or a disguised restriction on trade in services” [18].

Another chapter which raises major concerns about privacy rights is intellectual property. It affects privacy through the provisions on technological protection measures (TPMs) and the provisions regulating ISPs’ liability. Regarding TPMs, TPP follows the DMCA model, whereby the exception to the anti-circumvention provision is very narrow and does not apply to the anti-trafficking provision. The exception allows a user to circumvent a TPM if it affects the user’s privacy in any way, although this does not extend to the anti-trafficking of TPMs. The provision regarding ISPs’ liability states that there should be cooperation between ISPs and rights holders, and it does not prohibit ISPs from monitoring their users. TPP also proposes notice-and-takedown and identification of the infringer by the ISP, but this provision is not in consonance with the laws of member states such as Peru, which does not have any copyright law on ISPs. Many countries have tried to introduce proper privacy laws along with the implementation of ISP liability, but that is not done within TPP [19]. TPP as a whole aims to give greater power to private regulators without providing a minimum standard for the protection of privacy.

Although TPP is not a data protection agreement, it consequently deals with various aspects of data protection, and is hence a prospective model for privacy and data protection practices in future trade agreements. If positive obligations are included within free trade agreements, they will have an advancing impact on the data protection regime.

4. Implications of TPP for RCEP

While TPP has such lacunae, similar provisions are proposed in RCEP, to which India is a party. This will have serious implications, as many of the participating countries have inadequate national data protection laws, and with the introduction of such an FTA the exploitation of privacy rights may become rampant [20]. To avoid this, the EU directive on data protection should be taken into consideration in the negotiations of such FTAs. The RCEP negotiations are still going on, and in India many companies like Flipkart and Snapdeal have started preparing for the changing norms. The government claims that it is going to accept the best practices in the region, which indicates that it is going to adopt the same policies as TPP. People from industry have raised concerns that, while there are national laws, it is difficult to check third-party involvement within a business, and it is becoming increasingly difficult to keep consumer data confidential [21].

5. Implications of TPP in the Context of EU Safe Harbour Judgement

Mr. Maximillian Schrems, an Austrian national residing in Austria, has been a user of the Facebook social network since 2008. Any person residing in the EU who wishes to use Facebook is required to conclude, at the time of registration, a contract with Facebook Ireland (a subsidiary of Facebook Inc., which itself is established in the United States). Some or all of the personal data of Facebook Ireland’s users residing in the EU is transferred to servers belonging to Facebook Inc. located in the United States, where it undergoes processing. On 25 June 2013, Mr. Schrems made a complaint to the Irish Data Protection Commissioner, in essence asking the latter to exercise his statutory powers by prohibiting Facebook Ireland from transferring his personal data to the United States; this led to the case of Maximillian Schrems v Data Protection Commissioner [22]. He contended in his complaint that the law and practice in force in that country did not ensure adequate protection of the personal data held in its territory against the surveillance activities engaged in there by the public authorities. Mr. Schrems referred in this regard to the revelations made by Edward Snowden concerning the activities of the United States intelligence services, in particular those of the NSA (paras 26-28). The Court ruled that a Commission decision finding that a third country ensures an adequate level of protection does not prevent a supervisory authority of a Member State, within the meaning of Article 28 of Directive 95/46/EC as amended, from examining the claim of a person concerning the protection of his rights and freedoms in regard to the processing of personal data relating to him which has been transferred from a Member State to that third country, when that person contends that the law and practices in force in the third country do not ensure an adequate level of protection. The ruling implies that personal data cannot be transferred to a third country which does not provide an adequate level of protection.

The EU Safe Harbour judgment and the EU directive on privacy provide rules on privacy that contrast with TPP. While TPP gives power to private entities to formulate rules regarding privacy, the recent ECJ judgment invalidated the grant of such power to private entities under the EU-US Safe Harbour Agreement. In the context of the same judgment, Hamburg’s Commissioner for Data Privacy and Freedom of Information announced an investigation into the data transfers taking place through Facebook and Google to the US. Hence, in the light of the judgment, member states within the EU are not allowed to permit such cross-border data flows, whereas one of the main goals of TPP is to maintain free flow of data across borders [23]. The EU in this regard has also set forth a proposal to introduce the General Data Protection Regulation (GDPR). Although the US and EU are trying to renegotiate the agreement, the privacy concerns raised cannot be ignored. Following the same model as was invalidated by the ECJ judgment lets the US exploit the privacy of member states under TPP. Similar concerns to those raised in the judgment also arise for India, which is following the same model within the US-India Cyber Relationship Agreement and in the RCEP negotiations.

6. Implications of TPP in the Context of the USA-India Cyber Relationship

While India is not part of TPP, TPP might have an effect on the US-India Cyber Relationship Agreement. In August 2015 the India-US cyber dialogue was re-initiated to address common concerns related to cybersecurity and to develop better partnerships between the public and private sectors for the betterment of the digital economy [24]. One of the key aims of this agreement is free flow of information between the two nations, which suffers from the same problem: it will put the privacy of citizens at risk. India also does not have any bilateral treaty which ensures cyber data protection; in such a scenario the only solution is data localisation, but this agreement will put data at risk [25]. Hence, just as concerns about privacy and data protection need to be raised in the TPP and RCEP negotiations, as mentioned in the earlier section on the implications of TPP for RCEP, the USA-India Cyber Relationship faces the same implications, even though its stated aim is to ensure cybersecurity. After the Muzaffarnagar riots, the upheaval in the North-Eastern states, and the Gujarat riots, India has realised it is important to ensure compliance from social media companies, and it sees the USA-India Cyber Relationship as an opportunity to achieve this goal. The Google Transparency Report states that India made around three thousand requests to Google for user data [26], which indicates the country’s interest in having a common understanding with the major social media companies (almost all of which are located in the USA) about requesting and sharing user activity data. While this concern is being addressed through the agreement, it is difficult to ignore the clause on free flow of information; if the meaning of the term is extended and adopted from TPP itself, it will put the digital privacy of Indian citizens at risk [27].

7. Conclusion

Even though the TPP negotiations are completed, ratification of the agreement is still underway. TPP is seen as a one-of-a-kind trade agreement because it is the first time that countries across the globe have come together to address the concerns of modern trade. Yet it fails to address some of the key concerns related to privacy and data protection, which are becoming increasingly important. Data protection and privacy issues cannot be seen in isolation and need to be merged into modern-day trade agreements. The D2D component introduced by the USA is a strategic move to gain trade dominance in Asia and to compete with China’s growth. TPP has privacy and data protection lacunae within its e-commerce, telecommunications, and intellectual property chapters, which might have serious implications for the RCEP negotiations and the USA-India Cyber Relationship Dialogue. A similar concern regarding data protection has already been addressed by the ECJ judgment invalidating the USA-EU Safe Harbour Agreement, yet the same ad-hoc strategy has been incorporated within TPP. Since TPP might be considered a best-practice model for future FTAs in the Asian region, it is important to raise and address these privacy concerns now.

8. Endnotes

[1] The signatory countries include Australia, Canada, Japan, Malaysia, Mexico, Peru, United States of America, Vietnam, Chile, Brunei, Singapore, New Zealand. "The Trans-Pacific Partnership," http://www.ustr.gov/tpp (last visited Jul 7, 2016).

[2] "The Origins and Evolution of the Trans-Pacific Partnership (TPP)," Global Research, http://www.globalresearch.ca/the-origins-and-evolution-of-the-trans-pacific-partnership-tpp/5357495 (last visited Jul 7, 2016).

[3] Fergusson, Ian F., Mark A. McMinimy & Brock R. Williams, "The Trans-Pacific Partnership (TPP): In Brief," (2015), http://digitalcommons.ilr.cornell.edu/key_workplace/1477/ (last visited Jul 1, 2016).

[4] Gajdos, Lukas, The Trans-Pacific Partnership and its impact on EU trade, Policy Department, Directorate-General for External Policies, Policy Briefing (2013), http://www.europarl.europa.eu/RegData/etudes/briefing_note/join/2013/491479/EXPO-INTA_SP(2013)491479_EN.pdf.

[5] Twining, Daniel, Hans Kundnani & Peter Sparding, Trans-Pacific Partnership: geopolitical implications for EU-US relations, Policy Department, Directorate-General for External Policies, June 24 (2016), http://www.europarl.europa.eu/RegData/etudes/STUD/2016/535008/EXPO_STU(2016)535008_EN.pdf.

[6] USTR, "Remarks by Deputy U.S. Trade Representative Robert Holleyman to the New Democrat Network," https://ustr.gov/about-us/policy-offices/press-office/speechestranscripts/2015/may/remarks-deputy-us-trade (last visited Jul 4, 2016).

[7] Murphy, Katharine, "Trans-Pacific Partnership: four key issues to watch out for," The Guardian, November 6, 2015, https://www.theguardian.com/business/2015/nov/06/trans-pacific-partnership-four-key-issues-to-watch-out-for (last visited Jul 7, 2016).

[8] USTR, "The Digital 2 Dozen" (2016), https://ustr.gov/sites/default/files/Digital-2-Dozen-Final.pdf (last visited Jul 1, 2016).

[9] Fergusson, Ian F., Mark A. McMinimy & Brock R. Williams, "The Trans-Pacific Partnership (TPP) negotiations and issues for congress," (2015), http://digitalcommons.ilr.cornell.edu/key_workplace/1412/ (last visited Jul 8, 2016).

[10] "How the TPP Will Affect You and Your Digital Rights," Electronic Frontier Foundation (2015), https://www.eff.org/deeplinks/2015/12/how-tpp-will-affect-you-and-your-digital-rights (last visited Jul 7, 2016).

[11] Australian Privacy Foundation (APF), Trans Pacific Partnership Agreement (2016), https://www.privacy.org.au/Papers/Parlt-TPP-160310.pdf.

[12] Greenleaf, Graham, "The TPP & Other Free Trade Agreements: Faustian Bargains for Privacy?," SSRN (2016), http://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2732386 (last visited Jul 1, 2016).

[13] "GED-Project: Transatlantic Data Flows and Data Protection," GED Blog (2015), https://ged-project.de/topics/competitiveness/transatlantic-data-flows-and-data-protection-the-state-of-the-debate/ (last visited Jul 1, 2016).

[14] Geist, Michael, "The Trouble with the TPP, Day 14: No U.S. Assurances for Canada on Privacy," (2016), http://www.michaelgeist.ca/2016/01/the-trouble-with-the-tpp-day-14-no-u-s-assurances-for-canada-on-privacy/ (last visited Jul 4, 2016).

[15] Aaronson, Susan Ariel, "What does TPP mean for the Open Internet?" From Policy Brief on Trade Agreements and Internet Governance Prepared for the Global Commission on Internet Governance (2015), https://www.gwu.edu/~iiep/events/DigitalTrade2016/TPPPolicyBrief.pdf (last visited Jul 5, 2016).

[16] Lomas, Natasha, "TPP Trade Agreement Slammed For Eroding Online Rights," TechCrunch, http://social.techcrunch.com/2015/11/05/tpp-vs-privacy/ (last visited Jun 30, 2016).

[17] "Q&A: The Trans-Pacific Partnership," Human Rights Watch (2016), https://www.hrw.org/news/2016/01/12/qa-trans-pacific-partnership (last visited Jul 1, 2016).

[18] "TPP Full Text Released," People Over Politics (2015), http://peopleoverpolitics.org/2015/11/07/tpp-just-as-bad-as-you-thought/ (last visited Jul 7, 2016).

[19] "Right to Privacy in Trans-Pacific Partnership (TPP ) Negotiations," Knowledge Ecology International, http://keionline.org/node/1164 (last visited Jul 1, 2016).

[20] Asian Trade Centre, "E-Commerce and Digital Trade Proposals for RCEP (2016)," http://static1.squarespace.com/static/5393d501e4b0643446abd228/t/575a654c86db438e86009fa1/1465541967821/RCEP+E-commerce+June+2016.pdf (last visited Jul 1, 2016).

[21] "E-commerce companies like Flipkart, Snapdeal to beef up data security to meet RCEP norms," The Economic Times, http://economictimes.indiatimes.com//articleshow/49068419.cms (last visited Jul 1, 2016).

[22] ECLI:EU:C:2015:650 (C -362/14)

[23] King et al., "Privacy law, cross-border data flows, and the Trans Pacific Partnership Agreement: what counsel need to know," Lexology, http://www.lexology.com/library/detail.aspx?g=b5c0b400-8161-4439-a4b7-131552ad5209 (last visited Jul 4, 2016).

[24] "U.S.-India Business Council Applauds Resumption of Cybersecurity Dialogue," U.S.-India Business Council (2015), http://www.usibc.com/press-release/us-india-business-council-applauds-resumption-cybersecurity-dialogue (last visited Jul 5, 2016).

[25] Sukumar, Arun Mohan, "India Is Coming up Against the Limits of Its Strategic Partnership With the United States," The Wire (2016), http://thewire.in/40403/india-is-coming-up-against-the-limits-of-its-strategic-partnership-with-the-united-states/ (last visited Jul 4, 2016).

[26] Countries – Google Transparency Report, https://www.google.com/transparencyreport/userdatarequests/countries/ (last visited Jul 8, 2016).

[27] Sukumar, Arun Mohan, "A case for the Net’s Ctrl+Alt+Del," The Hindu, September 5, 2015, http://www.thehindu.com/opinion/op-ed/a-case-for-the-nets-ctrlaltdel/article7616355.ece (last visited Jul 5, 2016).

9. Author Profile

Shubhangi Heda is a student at Jindal Global Law School, O.P. Jindal Global University, and has completed her fourth year. She gives due importance to popular culture in her life, loves to read fiction, and likes to watch TV shows, her favourite being 'White Collar'.

 

No, India did NOT oppose the United Nations move to “make internet access a human right”

by Pranesh Prakash and Japreet Grewal — last modified Jul 13, 2016 04:09 PM
Last Friday, the United Nations Human Rights Council (UNHRC) passed a resolution titled “The promotion, protection and enjoyment of human rights on the Internet.”

The article by Pranesh Prakash and Japreet Grewal was published in Factordaily on July 13, 2016.


Several media outlets, including The Verge, India Today, and BuzzFeed, reported that the resolution was ‘opposed’ by China, Russia, Saudi Arabia, South Africa and India. The Verge, for instance, reported that these countries “specifically opposed” a clause of the resolution that “condemns unequivocally measures to intentionally prevent or disrupt access to or dissemination of information online and calls for all countries to refrain from such measures”.  This is pure bunkum.  Some media organisations have also been reporting that the UNHRC resolution “declares that access to the Internet is a human right”. This too is fiction.

What’s the truth?  The UNHRC resolution covers wide ground, including the reaffirmation of two previous resolutions, which stated that the same rights that people have offline must also be protected online.  As ARTICLE19, an international free speech NGO, notes: “The draft resolution goes further than its predecessors, including by stressing the importance of an accessible and open Internet to the achievement of the Sustainable Development Goals, as well as in calling for accountability for extrajudicial killings, arbitrary detentions and other violations against people for expressing themselves online.”  Importantly, the resolution “unequivocally condemns” internet shutdowns, such as the one that happened in Kashmir just last week after security forces killed guerrilla Burhan Wani.

This resolution was, in fact, adopted without any opposition. So why the brouhaha over countries like India?

Here are the facts

There were four separate amendments, two of which were proposed by Belarus, China and Russia (referred to as L85 and L86 in this article) and the other two by Belarus, China, Russia and Iran (referred to as L87 and L88).  None of these amendments comment on the paragraph in the resolution that condemns intentional disruption of access to or dissemination of internet services. So the headlines in most of the reports are just plain wrong. Let’s examine each of these four amendments one by one.

In L85, an amendment was suggested to a paragraph that refers to past resolutions by the UNHRC and the UN General Assembly relating to freedom of expression and the right to privacy online. The amendment, which proposed including a reference to a previous UNHRC resolution on the rights of children online, was later withdrawn.

The amendment proposed in L86 both added and removed some text, and was hotly opposed by organisations like ARTICLE19. The proposed amendment said that the same rights people have offline must also be protected online, in particular freedom of expression and the right to privacy, in accordance with articles 17 and 19 of the International Covenant on Civil and Political Rights (ICCPR), a multilateral treaty adopted by the United Nations General Assembly to respect the civil and political rights of individuals. Major additions: some text on the right to privacy, and a reference to Article 17 of the ICCPR, which is about privacy. Major deletions: a reference to the Universal Declaration of Human Rights, and language stating that freedom of expression is “applicable regardless of frontiers and through any media of one’s choice”, which is present in article 19 of the ICCPR.  However, article 19 of the ICCPR is incorporated by reference even in the proposed amendment!  So is there a real loss in purely legal terms?  Not really.

The amendments in L87 sought to replace the term “human rights based approach”, which stressed the need to provide and expand access to the internet, with the term “comprehensive and integrated approach.” The problem is that there is no clarity about what a “human rights based approach” to providing and expanding access to the internet is. What does it even mean? Is there a “human rights based approach” to spectrum auctions and spectrum sharing? Or the laying of fibre optic cables? Or anything else associated with internet access?  If there is, indeed, a human rights based approach to providing and expanding access to the internet, it should be spelt out, rather than simply calling it that. Similarly, the term “comprehensive and integrated approach” is equally vague.

Finally, in L88, the amendments proposed that the UN resolution should acknowledge concerns about using the internet and information technology for spreading ideas about “racial superiority or hatred, incitement to racial discrimination, xenophobia and related intolerance.” In the light of this, it is difficult to understand how adding concerns relating to hate speech to the resolution is seen as “being opposed” to online freedoms, especially when there is no direct action contemplated in the proposed amendment.

Indeed, in Paragraph 9, gender violence is mentioned, and in Paragraph 11, incitement to hatred is mentioned.  Adding an additional, more specific reference can hardly be construed as being opposed to online freedoms. After all, states have a positive obligation to enact laws to prohibit hate speech under Article 20 (2) of the ICCPR, which is a centrepiece of international human rights law.

Even if one harbours reservations about these amendments, none of them could reasonably be characterised as “opposing” the condemnation of Internet shutdowns or “opposing” online freedoms. And factually, no states (including India, China, South Africa, and Russia) voted against the resolution.

A game of Chinese whispers

So why did so many prominent news organisations around the world get it so wrong? My theory is that it happened because organisations like ARTICLE19 put out press releases on what they perceived as the ‘weakening’ of the resolution by the amendments examined above, and their regret that even democratic states like India and South Africa voted for these amendments. This was wrongly portrayed in much of the media as opposition by these countries to the resolution itself, to online freedoms, and particularly to the idea of condemning internet shutdowns. Thanks to the Chinese-whispers nature of news reporting, this mistaken idea spread far and wide without any of the reporters bothering to check the original UN documents.


However, regardless of the faulty reportage, there is a real crisis in India, with organisations like Medianama and the Software Freedom Law Centre having counted at least nine internet shutdowns this year alone, and at least 30 since 2013. It is shameful if India condemns internet shutdowns at the UNHRC while deploying them for purposes such as preventing cheating during examinations, during Ganesha visarjan, during Eid, during wrestling matches, and during protests.

We at the Centre for Internet and Society have previously explained why a Gujarat High Court order allowing for an internet shutdown during riots was wrong in law, and violated our Constitution as well as our international human rights obligations. That is something the Indian media ought to be focussing far more on, but aren’t.

Lastly, it would also be welcome for the individual civil society organisations that signed an open letter to UNHRC members to explain why they too believed that these amendments would have significantly harmed our freedoms online. We see it instead as a case of ‘human rights politics’ being played out, when none of the proposed amendments would have had much of a negative legal impact, only a political one.

Should civil society organisations really get worked up about these?

Edited by: Pranav Dixit

 

The Gay Pride Charade

by Nishant Shah last modified Jul 25, 2016 01:10 AM
For most millennials, news is shaped by trends and what goes viral, and is often open to speculation, projection, manipulation and deceit.

The article was published in the Indian Express on July 3, 2016.


The world of social media can be a minefield of misinformation, and it does get difficult to verify facts and ensure the veracity of the information that comes to us on the winged notifications of our apps. This becomes starkly clear in times of crises. Hence, when the historic and heinous shootout at a gay nightclub in Orlando, USA, shook the world with horror and grief a couple of weeks ago, and the first tweets appeared on my timeline, my initial reaction was denial. Instead of believing those first responders, I was already searching for more credible news lines that could confirm — or hopefully deny — the massacre. It took only a few minutes, though, to realise that #StandWithOrlando was a reality that we would have to accommodate in the story of continued violence and abuse of sexual minorities around the world.

However, not all deception is bad. One of the most fantastic responses to the shootout came from a Quebec-based satirical website called JournalDemourreal.com, which published a photoshopped image showing the Canadian PM Justin Trudeau kissing the leader of the Canadian opposition party, Tom Mulcair, with a headline saying that the two, despite their differences, were “united against homophobia”. I know that I liked this fake story four times on different newsfeeds, half-believing, half-wishing that it was true, before I realised that it was a hoax. Morphed as it might be, the doctored image enabled people to talk about the tragedy as demanding personal and policy-level action, ranging from acceptance and freedom, to gun control and protecting the rights to life and dignity of the sexual minorities who continue to be persecuted around the world.

The image also allowed many queer people in different parts of the world — especially in countries where homosexuality continues to be criminalised and severely punished — to participate not only in the global grief but also to demand that their governments take more responsibility towards their queer populations.

While this photoshopped picture was making the rounds, another tweet showed up on my timeline. This time it was a tweet from our media-savvy PM, Narendra Modi, who claimed that he was “shocked at the shootout in Orlando”, and further added that his “thoughts and prayers are with the bereaved families and the injured”. When I saw this tweet, my reaction, again, was that this must be another joke. Because even as queer rights activists in the country struggle for the decriminalisation of homosexuality through their curative petitions in the Supreme Court of India, PM Modi’s government has continued its hateful diatribe against queer people in the country. His party has called homosexuality “anti-Indian” and “anti-family”. The party’s favourite, Baba Ramdev, continues his hate speech, offering to cure homosexuality through yoga.

Ever since the current government took power, documented hate crimes against queer people have more than doubled in the country. So when the PM decided to offer his condolences to those in Orlando, I figured that either it was a fake Twitter account masquerading as the PM’s, or some kind of hacker troll — maybe Anonymous, the online guerrilla activists who recently took over ISIS-friendly websites and filled them up with information about male homosexuality as a response to the shootout — had taken control of the Twitter account. But it turned out that this piece of information was not photoshopped or hacked. It was actually true, and we were to believe in earnest that while the government doesn’t care about the millions of queer people being denied their rights to live and love in their country, it is heartbroken about what happened in the USA.

It does make you wonder about the world we live in, where a photoshopped image sounded more plausible than an undoctored tweet. It emphasises why Orlando cannot be treated as an isolated instance in another country, but that #WeAreOrlando. For right now, Orlando is also in India. It is a reminder that while we have been fortunate not to have seen such an instance of dramatic violence, there are millions of people in the country who are forced to live and die in deception because of their sexual orientation.

One Pokémon to Rule Them All

by Nishant Shah last modified Jul 25, 2016 01:16 AM
America’s head start on the augmented reality game Pokémon Go shows that the interweb is not an egalitarian space.

The article was published in the Indian Express on July 17, 2016


I was busy writing when a Telegram message trickled in. It was a friend asking me if I had looked at the new Pokémon Go game, which has lately been getting more attention in the USA than national elections and global warfare. A location-based augmented reality game that involves users moving around their physical environments “collecting” Pokémon characters hidden in different locations has a large part of the American population in a frenzy, leading to aching soles, traffic accidents, and involuntary collisions with things and people as the players move around, their eyes glued to their screens. The global release of the game is still in the pipeline, and so the rest of us will have to make do with videos and screen grabs of the game.

While a big Pikachu fan myself, I don’t see myself going crazy over this game as and when my geography allows me to, but the friend who had written to me about it is perturbed. An avid gamer and a self-proclaimed Pokémon fan, he is devastated that users in privileged geographies are going to get a head start on the global leaderboards that he can never catch up with. The interwebz is already abuzz with players sharing hacks, cracks, bugs, cheat codes, and tips to collect more Pokémon, discover hidden powers, and rise quickly in the ranks as they drive, walk, run and jog around their neighbourhoods in their quest to catch those delightful monsters on their phones. While my friend is aware that this cloud-based game will have multiple servers for different geographies, and so there will be relative rankings and customised interfaces for each community of players, he was feeling cheated about living in India and not having access to the first release of the game that has all the attention on the social web right now.

‘It almost makes me want to leave India and move to the USA,’ he said in mock frustration. It made me think about the privilege of geography when it comes to the presumed flatness of the digital world. One of the imaginations of the peer-to-peer architectures of the internet is its promise of flatness. With a series of non-discriminatory principles like #NetNeutrality and #ZeroRating enshrined as fundamental attributes of the digital internet, we are often led to believe that when we are online, we are equal. This idea is so prevalent that in most of our technology-based development practices and policies, we think of access as the “be all”, if not the “end all”, of our activities. The rhetoric promises that if we get everybody online, we will have an egalitarian society where everybody will have equal access to resources, and equity through participation in decision-making processes.

Despite overwhelming evidence that the digital world is anecdotally and systemically a space of exclusion, contestation, and intimidation, we continue to propagate the idea that these are “human” problems. Humans, fragile, frail and foolish in their being, contaminate the digital space. Humans, mired in the analogue systems of hatred and abuse, appropriate technologies to perpetuate these older forms of discrimination. The technological structures are imagined as pure, sterile, and committed to constructing parameters of equality through their neutral promises of universal access and seamless connectivity. Technology is clean, the human being is impure. Technology is robust, the human frail. Technology is flat, human hierarchical. These narratives of a neutral and egalitarian technology consequently lead us to put more importance and faith in algorithmic decisions and data-driven governance and policing. We have come to believe that because technologies are neutral, they will do a better job of regulating us than we do ourselves.

Pokémon Go, with its obvious geographical privilege, reminds us that the digital is not flat. It is oriented towards a very obvious logic of geopolitical, economic, racial, and identity privileging that continues to promote some parts of the world as favoured standards of first access. The exclusive release of Pokémon Go reminds us that the digital is subject to a Euro-American centrism which treats these erstwhile imperial geographies as the beginning points of all digital activities, slowly expanding its fold to other regions through a trickle-down politics and economics. Whether you are waiting impatiently to join the global bandwagon of Pokémon collection, or are ready to shrug this off as another thing that people do on the web, this differential, preferential, and variable access to the internet is something we definitely want to consider as we continue to push for the digital as the solution to human problems.

DIDP Request #10 - ICANN does not know how much each RIR contributes to its Budget

by Asvatha Babu last modified Jul 27, 2016 02:57 PM
In an effort to understand the relationship between the Regional Internet Registries (RIRs) and ICANN, we requested current and historical information on the contract fees paid by the five RIRs (AfriNIC, ARIN, APNIC, LACNIC and RIPE NCC) to ICANN annually.

We acknowledged that the independently audited financial reports on ICANN’s website list the total amount from all RIRs as a lump sum.[1] However, we specifically sought a breakdown of these fees detailing contributions made by each RIR from 1999 to 2014. Not only would this information help us understand the RIR-ICANN relationship, it would also be relevant to the IANA transition.

The request filed by Protyush Choudhury can be found here.

What ICANN said

According to ICANN’s response to our request, the five RIRs (AfriNIC, ARIN, APNIC, LACNIC and RIPE NCC) make a voluntary annual contribution to ICANN’s budget through the Number Resource Organization (NRO).[2] Since Financial Year 2000, this contribution has been made to ICANN as an aggregate amount, without the kind of breakdown requested by us, with the exception of FY03, FY04 and FY05. The breakdown of the contribution for those years is as below:

  • FY03: APNIC - $129,400; ARIN - $159,345; RIPE - $206,255
  • FY04: APNIC - $160,500; ARIN - $144,450; RIPE - $224,700; LACNIC - $5,350
  • FY05: APNIC - $220,976; ARIN - $218,507; RIPE - $358,086; LACNIC - $25,431

The response links back to the independent financial reports mentioned by us in the request. These reports can be found on the ICANN website here.

On closer examination of the audit reports for FY03, FY04 and FY05, it is clear that the information provided in the response is either incomplete or incorrect. According to KPMG’s audit report for FY03, the total contribution from Address Registries is US$535,000, but the breakdown in the response adds up only to $495,000. The response does not account for the extra $40,000. If only APNIC, ARIN and RIPE contributed to ICANN in 2003, where did the other $40,000 come from? Moreover, why is it listed as an Address Registry Fee in the audit report if it was a voluntary contribution?[3]

The “Address Registry Fees” in the audit reports for FY04 and FY05 match the amounts in the response: $535,000 and $823,000 respectively. ICANN's response to our DIDP request may be found here.
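Since the figures are scattered across the response and the audit reports, the reconciliation can be checked mechanically. The following sketch uses only the numbers quoted above (the per-RIR breakdown from ICANN's response and the "Address Registry Fees" from the audit reports):

```python
# Per-RIR contributions as quoted in ICANN's DIDP response (USD)
breakdown = {
    "FY03": {"APNIC": 129_400, "ARIN": 159_345, "RIPE": 206_255},
    "FY04": {"APNIC": 160_500, "ARIN": 144_450, "RIPE": 224_700, "LACNIC": 5_350},
    "FY05": {"APNIC": 220_976, "ARIN": 218_507, "RIPE": 358_086, "LACNIC": 25_431},
}

# "Address Registry Fees" as stated in the audited financial reports (USD)
audit_totals = {"FY03": 535_000, "FY04": 535_000, "FY05": 823_000}

for fy, fees in breakdown.items():
    subtotal = sum(fees.values())
    unexplained = audit_totals[fy] - subtotal
    print(f"{fy}: breakdown sums to {subtotal:,}; "
          f"audit total {audit_totals[fy]:,}; unexplained {unexplained:,}")
# FY03's breakdown sums to 495,000 against an audited total of 535,000,
# leaving 40,000 unexplained; FY04 and FY05 reconcile exactly.
```

Running this shows that only FY03 fails to reconcile, which is the gap discussed above.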

For the reader’s reference, the audit reports for FY00 - FY14 are linked below:


[1] See audited financial reports: https://www.icann.org/resources/pages/governance/current-en

[2] See letter from NRO to ICANN: https://www.icann.org/en/system/files/files/akplogan-to-twomey-23mar09-en.pdf

[3]. See report for FY03 (pg 4): https://www.icann.org/en/system/files/files/financial-report-fye-30jun03-en.pdf

DIDP Request #9 - Exactly how involved is ICANN in the NETmundial Initiative?

by Asvatha Babu last modified Jul 27, 2016 03:53 PM
The importance and relevance of knowing ICANN’s involvement in the NETmundial Initiative cannot be overstated.

It was reported recently that ICANN contributed US$200,000 to the Initiative.[1] Following this report, we requested the details of all expenses incurred by ICANN for the NMI to date. This includes formal contributions to the NMI as well as costs incurred on travel and accommodation for ICANN board and staff attending meetings relevant to the NMI discussion.

Apart from these financial details, we also requested information regarding the number of staff working on NMI from ICANN and the hours clocked by them for the same. We further specified that we would like this information to gauge ICANN’s involvement beyond its technical mandate. The request filed by Geetha Hariharan can be found here.

What ICANN said

In its response, ICANN separated the questions in the request into two categories: a) Expenses incurred by ICANN towards the NETmundial Initiative and b) Other resources (personnel and hours) allocated to the Initiative by ICANN. The first category in the request includes: formal contribution to the NETmundial Initiative; travel costs of ICANN board and staff; and costs of maintenance of other sponsored parties. The second includes the number of staff involved in the NETmundial Initiative from ICANN and the number of hours spent working on it.

To answer both, the response directs us to the Memorandum of Collaboration (MOC)[2] signed by the Brazilian Internet Steering Committee (CGI.br), ICANN and the World Economic Forum (WEF) to set up the NETmundial Initiative in accordance with the outcome document from the initial NETmundial meeting in Sao Paulo, Brazil.

Some of the important takeaways from the MOC that are relevant to our request are the following:

  • Each party to the MOC agrees to pay $201,667 towards operational expenses on signature of the agreement.
  • Total anticipated cost of the NETmundial Initiative is $605,000 (also mentioned in the response).
  • Each party will assign one staff member to the NETmundial Initiative secretariat during the inaugural period to smooth the process. This staff member will commit at least 50% of their time to Secretariat work.
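The two headline figures fit together, on the assumption that the per-party contribution is simply the anticipated total split three ways and rounded to the dollar:

```python
# Figures as quoted in the Memorandum of Collaboration (USD)
anticipated_total = 605_000  # total anticipated cost of the Initiative
per_party = 201_667          # contribution due from each signatory
parties = 3                  # CGI.br, ICANN and the WEF

# The per-party figure is the total divided three ways, rounded to the dollar
assert per_party == round(anticipated_total / parties)

# Rounding up means the three contributions overshoot the total by one dollar
print(parties * per_party - anticipated_total)  # 1
```

In other words, $201,667 is just $605,000 / 3, rounded up, which is why the three contributions together come to one dollar more than the stated total.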

This information is important, but it does not provide a comprehensive answer to our query. It does not, for example, tell us whether ICANN contributed anything more than the $201,667 the MOC specifies. It also does not tell us whether ICANN allotted any staff apart from the designated secretariat member to work on the NETmundial Initiative.

Further, the response states that ICANN does not keep track of costs according to the number of hours or the topic, but rather according to strategic objectives. Since ICANN is not required to create a document that does not already exist to answer a DIDP enquiry,[3] we have no way of knowing the specific amount of time or money spent on the NETmundial Initiative by ICANN. The response instead directs us to the financial presentation at ICANN50, where the costs of attending the NETmundial Meeting at Sao Paulo are detailed. While this is interesting (ICANN spent $1.5 million),[4] it is not a satisfactory answer to our question.

ICANN justifies its lack of direct answers by expressing that not only is the request “overbroad”, it is also “subject to the following DIDP Condition of Nondisclosure: Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; and (iii) complying with which is not feasible.”[5]

ICANN's response to our DIDP request may be found here.


[1] See McCarthy, ‘I’m Begging You To Join’ – ICANN’s NETmundial Initiative gets desperate, THE REGISTER (12 December 2014), http://www.theregister.co.uk/2014/12/12/im begging you to join netmundial initiative gets d esperate/

[2] See MOC: https://www.netmundial.org/sites/default/files/MOC-%20CGI.br,%20ICANN%20&%20WEF.pdf

[3] See Disclosure Policy: https://www.icann.org/resources/pages/didp-2012-02-25-en

[4] See ICANN50 Finance Presentation (Pg 4): https://london50.icann.org/en/schedule/thu-finance/presentation-finance-26jun14-en

[5] See ICANN conditions for non-disclosure: https://www.icann.org/resources/pages/didp-2012-02-25-en

DIDP Request #13: Keeping track of ICANN’s contracted parties: Registries

by Asvatha Babu last modified Jul 28, 2016 03:40 PM
On multiple occasions, Fadi Chehade, then President and CEO of ICANN, emphasized the importance of conducting audits (internal and external) to ensure compliance by ICANN’s contracted parties. At a US congressional hearing, he spoke about the contract monitoring function of ICANN.

In September 2015, we filed two separate DIDP requests regarding ICANN’s Contractual Compliance Goals. The first one, briefed below, is regarding the contracts with registries and the second one is regarding ICANN contracts with registrars. This post contains some additional background information on the Contractual Compliance Goals at ICANN. In our first request, we specifically asked for the following information:

  1. Copies of the registry contractual compliance audit reports for all the audits carried out as well as external audit reports from the last year (2014-2015).
  2. A generic template of the notice served by ICANN before conducting such an audit.
  3. A list of the registries to whom such notices were served in the last year.
  4. An account of the expenditure incurred by ICANN in carrying out the audit process.
  5. A list of the registries that did not respond to the notice within a reasonable period of time.
  6. Reports of the site visits conducted by ICANN to ascertain compliance.
  7. Documents which identify the registry operators who had committed material discrepancies in the terms of the contract.
  8. Documents pertaining to the actions taken in the event that there was found to be some form of contractual non-compliance.

The DIDP request filed by Padmini Baruah can be viewed here.

What ICANN said

ICANN’s Contractual Compliance Goal is to ensure that all the parties that ICANN has entered into a contract with comply with the stipulations of the contract. This is done in several ways, including Contractual Compliance complaints and Audits.[1]

In 2012, ICANN initiated the Three Year Audit plan, under which one-third of registries were selected each year for an audit. In 2014, the third set of registries was audited. In response to Item 1, information about the 2014 audit can be found here: https://www.icann.org/en/system/files/files/contractual-compliance-ra-audit-report-2014-03feb15-en.pdf. At this link, we can also find the list of registries that went through the audit process in 2014 (Item 3). Monthly updates on overall contractual compliance can be found here: https://www.icann.org/resources/pages/update-2013-03-15-en.

ICANN linked us to all the communication templates used during the audit process, including the notice served by ICANN prior to conducting audits. (Item 2) It can be found here: https://www.icann.org/en/system/files/files/audit-communication-template-04dec15-en.pdf

In the operating plan and budget for FY15, ICANN sets aside USD 0.2 million for the New Registry Agreement Audit and USD 0.6 million for the Three Year Audit plan.[2]

Other documents that would answer this item, such as invoices from the external auditing firm, are subject to nondisclosure under DIDP policies. Since all registries responded in a timely manner and no site visits were conducted, there are no documents responsive to Items 5 and 6.

The audit report linked above contains information on deficiencies identified during the audit. ICANN states that registries addressed these deficiencies during the remediation process. However, there is a caveat: the names of the registries associated with these discrepancies remain confidential, subject to the DIDP Defined Conditions for Nondisclosure. (Item 7) ICANN goes on to state that it is not required to confirm whether the registries have taken appropriate action, and thus does not have any documents in response to Item 8. While ICANN’s audit process seems thorough, does this last statement indicate a lack of enforcement mechanisms on ICANN’s part?

ICANN’s response to our request can be found here.


[1]. See Contractual Compliance website: https://www.icann.org/resources/pages/compliance-2012-02-25-en

[2]. See FY15 budget (pg72): https://www.icann.org/en/system/files/files/adopted-opplan-budget-fy15-01dec14-en.pdf

DIDP Request #14: Keeping track of ICANN’s contracted parties: Registrars

by Asvatha Babu last modified Jul 28, 2016 04:34 PM
In September 2015, we filed two separate DIDP requests regarding ICANN’s Contractual Compliance Goals.

The first one, which we have written about here,[1] was regarding ICANN’s contracts with registries, while the second one, about registrars, is briefed below. In our second request, we specifically asked for the following information:

  1. Copies of the registrar contractual compliance audit reports for all the audits carried out as well as external audit reports from the last year (2014-2015).
  2. A generic template of the notice served by ICANN before conducting such an audit.
  3. A list of the registrars to whom such notices were served in the last year.
  4. An account of the expenditure incurred by ICANN in carrying out the audit process.
  5. A list of the registrars that did not respond to the notice within a reasonable period of time.
  6. Reports of the site visits conducted by ICANN to ascertain compliance.
  7. Documents which identify the registrars who had committed material discrepancies in the terms of the contract.
  8. Documents pertaining to the actions taken in the event that there was found to be some form of contractual non-compliance.
  9. A copy of the registrar self-assessment form which is to be submitted to ICANN.

The DIDP request filed by Padmini Baruah can be viewed here.

What ICANN said

Information pertinent to Items 1 and 3 can be found in the 2014 Contractual Compliance Annual Report here: https://www.icann.org/en/system/files/files/annual-2014-13feb15-en.pdf. While this report contains detailed information regarding the audit, individual audit reports are subject to the DIDP Defined Conditions for Nondisclosure.

ICANN provided a link to all the communication templates used during the audit process, including the notice served by ICANN prior to conducting audits (Item 2). It can be found here: https://www.icann.org/en/system/files/files/audit-communication-template-04dec15-en.pdf. As mentioned in an earlier blog post, ICANN set aside USD 0.6 million for the Three Year Audit plan (Item 4).[2]

According to the Audit FAQ on the ICANN website,[3] “If a contracted party reaches the enforcement phase per process, ICANN will issue a notice of breach in which the outstanding issues are noted.” The response links us to the ICANN webpage where these breach notices are listed: https://www.icann.org/compliance/notices#notices-2014. (Item 5) According to the link, 61 registrars received breach notices in 2014; a full explanation has been provided for each notice. (Items 7 and 8) Since no site visits were conducted, ICANN does not possess any documents regarding them.

According to the ICANN website, “The 2013 Registrar Accreditation Agreement (RAA) requires ICANN-accredited registrars to complete an annual self-assessment and provide ICANN with a compliance certification by 20 January.”[4] The form for the same can be found here: https://www.icann.org/resources/pages/approved-with-specs-2013-09-17-en#compliance

ICANN’s response to our request can be found here.


[1] To be linked to the first post

[2] See FY15 budget (pg72): https://www.icann.org/en/system/files/files/adopted-opplan-budget-fy15-01dec14-en.pdf

[3] See Audit FAQ: https://www.icann.org/resources/pages/faqs-2012-10-31-en

[4] See CEO certification: https://www.icann.org/resources/pages/ceo-certification-2014-01-29-en

DIDP Request #15: What is going on between Verisign and ICANN?

by Asvatha Babu last modified Jul 29, 2016 02:01 AM
During a hearing of the House Committee on Energy and Commerce on “Internet Governance Progress After ICANN 53,” the President and CEO of ICANN, Mr Fadi Chehade, indicated that ICANN follows up with registries and registrars on receipt of any complaint against them about violations of their contract with ICANN.

At CIS, we believe that any exchange of dialogue or any outcome from ICANN acting on these complaints needs to be in the public domain. Thus, our 15th DIDP request to ICANN was for documents pertinent to Verisign’s contractual compliance and actions taken by ICANN stemming from any discrepancies in Verisign’s compliance with its ICANN contract.

The DIDP request filed by Padmini Baruah can be found here.

What ICANN said

After sorting through a response designed to obfuscate information, it was clear that ICANN was not going to provide any of the details we requested. As mentioned in their previous responses, individual audit reports and the names of the registries associated with discrepancies are confidential under the DIDP Defined Conditions of Nondisclosure. Nevertheless, some details from the response are worth mentioning.

According to the response, “As identified in Appendix B of the 2012 Contractual Compliance Year One Audit Program Report, the following TLDs were selected for auditing: DotAsia Organisation Limited (.ASIA), Telnic Limited (.TEL), Public Interest Registry (.ORG), Verisign (.NET), Afilias (.INFO), and Employ Media LLC (.JOBS).” The response goes on to state that of these six registries, only five chose to participate in the audit, the identities of which are once again confidential.

However, on further examination, it can be seen that Verisign (.NET) was also selected to participate in the audit the following year, which suggests that Verisign was the registry that sat out the 2012 audit and that 2013 was the year it was actually audited. Unfortunately, that was pretty much all that was relevant to our request in ICANN’s response.

Once again, ICANN was able to invoke the DIDP Defined Conditions of Nondisclosure, especially the following conditions, to avoid answering the public:

  • Information exchanged, prepared for, or derived from the deliberative and decision-making process between ICANN, its constituents, and/or other entities with which ICANN cooperates that, if disclosed, would or would be likely to compromise the integrity of the deliberative and decision-making process between and among ICANN, its constituents, and/or other entities with which ICANN cooperates by inhibiting the candid exchange of ideas and communications.
  • Information provided to ICANN by a party that, if disclosed, would or would be likely to materially prejudice the commercial interests, financial interests, and/or competitive position of such party or was provided to ICANN pursuant to a nondisclosure agreement or nondisclosure provision within an agreement.
  • Confidential business information and/or internal policies and procedures.[1]

ICANN’s response to our request can be found here.


[1] See DIDP https://www.icann.org/resources/pages/didp-2012-02-25-en

DIDP Request #16 - ICANN has no Documentation on Registrars’ “Abuse Contacts”

by Asvatha Babu last modified Jul 29, 2016 02:11 AM
Registrars on contract with ICANN are required to maintain an “abuse contact” - a 24/7 dedicated phone line and e-mail address to receive reports of abuse regarding the registered names sponsored by the registrar.

We wrote to ICANN requesting information on these abuse complaints received by registrars over the last year. We specifically wanted reports of illegal activity on the internet submitted to these abuse contacts as well as details on actions taken by registrars in response to these complaints.

The request filed by Padmini Baruah can be found here.

What ICANN said

Our request to ICANN dealt very specifically with reported illegal activities. However, in their response, ICANN first broadened it to abuse complaints in general, and then failed to give a narrowed-down list of even those complaints.

In their response, ICANN indicated that they do not store records of complaints made to the abuse contacts. These records are stored by the registrars and are available to ICANN only upon request. However, since ICANN is only obliged to publish documents it already has in its possession, we did not receive an answer to our first question.

As for the second item, ICANN gave a familiarly vague answer, linking us to the Contractual Compliance Complaints page with a list of all the breach notices that have been issued by ICANN to registrars. A breach notice is relevant to our request only if it is in response to an abuse complaint, and the abuse complaint specifically deals with illegal activity. Even discounting that, this is not a comprehensive list when you take into account that a breach notice is published only “if a formal contractual compliance enforcement process has been initiated relating to an abuse complaint and resulted in a breach.”[1] What about the rest of the complaints received by the registrar?

In addition, ICANN refused to publish any communication or documentation of ICANN requesting reports of illegal activity under the DIDP non-disclosure conditions.

ICANN's response to our DIDP request may be found here.


[1] See ICANN response here (Pg 4): https://www.icann.org/en/system/files/files/didp-response-20150901-4-cis-abuse-complaints-01oct15-en.pdf

DIDP Request #17 - How ICANN Chooses their Contractual Compliance Auditors

by Asvatha Babu last modified Jul 29, 2016 02:20 AM
At a congressional hearing on internet governance and progress, then President of ICANN Fadi Chehadé indicated that the number of people working on compliance audits grew substantially, from 6 to 24 (we misquoted it as 25), in the span of a few years.

It is clear to us at CIS that the people in charge of these compliance audits perform an important function at ICANN. To that effect, we requested information on the 24 individuals mentioned by Mr Chehadé as well as the third party auditors who perform this powerful watchdog function. More specifically, we requested documents calling for appointments of the auditors and copies of their contracts with ICANN.

The request filed by Padmini Baruah can be found here.

What ICANN said

In their response to the first part of our question, ICANN linked us to a webpage containing the names and titles of all employees working on contractual compliance. This page contains 26 names including the Contractual Compliance Risk and Audit Manager: https://www.icann.org/resources/pages/about-2014-10-10-en

ICANN also described the process of selecting KPMG as their third party auditor in detail. A pre-selection process shortlisted five companies that fit the following criteria: knowledge of ICANN, global presence, size, expertise and reputation. ICANN then issued a targeted Request for Proposal (RFP) to these companies asking for their audit proposals. After a question-and-answer session, a proposal analysis and a rating of the scorecards, a “cross-functional steering committee” decided to go with KPMG. While the process has been described transparently, our questions remain unanswered.

The RFP would qualify as the document requested in the second part of our question, i.e., a “document that calls for appointments to the post of the contractual compliance auditor.” Unfortunately, ICANN has not published the RFP, citing the DIDP Conditions for Non-disclosure. However, the timeline for the RFP and other details were posted here after our DIDP request. In addition, the contract between KPMG and ICANN has also not been published.

ICANN's response to our DIDP request may be found here.

DIDP Request #18 - ICANN’s Internal Website will Stay Internal

by Asvatha Babu last modified Jul 29, 2016 02:53 PM
ICANN maintains an internal website accessible to staff and employees. We requested ICANN to provide us with a document with the contents of that website in the interest of transparency and accountability.

The request filed by Padmini Baruah can be found here. To no one’s surprise, ICANN stated that it does not have such a document in “ICANN's possession, custody, or control,” and that even if it did, the document would be subject to the DIDP conditions for non-disclosure.

ICANN's response to our DIDP request may be found here.

DIDP Request #19 - ICANN’s role in the Postponement of the IANA Transition

by Asvatha Babu last modified Jul 29, 2016 04:37 PM
In March 2014, the National Telecommunications and Information Administration (NTIA) of the United States government announced plans to shift the Internet Assigned Numbers Authority (IANA) functions from ICANN to the global multistakeholder community. The initial deadline set for this was September 2015.

See NTIA announcement here.


In August 2015, NTIA announced that it would not be technically possible to meet this deadline and extended it by a year. NTIA stated,

“Accordingly, in May we asked the groups developing the transition documents how long it would take to finish and implement their proposals. After factoring in time for public comment, U.S. Government evaluation and implementation of the proposals, the community estimated it could take until at least September 2016 to complete this process.”

In our DIDP request, we asked ICANN for all documents it had submitted to NTIA relevant to the IANA transition and its postponement, from the date of the initial announcement (March 14, 2014) to the date of the announcement of the extension (August 17, 2015). We specifically requested the documents requested by NTIA in May 2015, as referenced in this blogpost.

The request filed by Padmini Baruah can be found here.

What ICANN said

ICANN’s response terms our request “broadly worded” and assumes that it relates only to documents about the extension of the deadline. It does not.

After NTIA’s announcement in 2014, ICANN launched a multi-stakeholder process and discussion at ICANN 49 in Singapore to facilitate the transition. The organizational structure of this process has been mapped out according to the different IANA functions that are being transitioned. Accordingly, we have the:

  • IANA Stewardship Transition Coordination Group (ICG)
  • Cross Community Working Group (CWG-Stewardship)
  • Consolidated RIR IANA Stewardship Proposal Team (CRISP Team)
  • IANAPLAN Working Group (IANAPLAN WG)
  • Cross-Community Working Group (CCWG-Accountability)

In addressing our request, ICANN references this multi-stakeholder community overseeing the transition. According to the response document, the ICG, CWG-Stewardship, CRISP Team, IANAPLAN WG and the CCWG-Accountability submitted their responses directly to the NTIA, leaving ICANN with no documents responsive to our request.

ICANN's response to our DIDP request may be found here.

 


DIDP Request #20 - Is Presumptive Renewal of Verisign’s Contracts a Good Thing?

by Asvatha Babu last modified Jul 30, 2016 02:01 AM
ICANN’s contract agreements with different registries contain a presumptive renewal clause. Unless they voluntarily give up their rights or there is a material breach by the registry operator, their contract with ICANN will be automatically renewed.

See the base registry agreement here.


In light of this, we filed a request asking ICANN for documents that discuss the rationale behind including the presumptive renewal clause. We also asked them for documents specific to the renewal of Verisign (.com and .net domains) and PIR (.org) contracts. The request filed by Padmini Baruah can be found here.

What ICANN said

ICANN provided a surprisingly comprehensive response, supplying documents and stating the rationale for including a presumptive renewal clause. According to the response,

“Absent countervailing reasons, there is little public benefit, and some significant potential for disruption, in regular changes of a registry operator. In addition, a significant chance of losing the right to operate the registry after a short period creates adverse incentives to favor short term gain over long term investment.”

ICANN explains that the contracts have been drawn such that they balance the concerns above with the ability to replace a registry that doesn’t serve the community as it is obliged to do. The response also offers links to various documents substantiating this rationale.

We were provided an effective answer to our second question as well. ICANN’s response links us to various documents for the 2001, 2006 and 2012 renewals of Verisign’s contract for the .com domain. This includes a summary of the 2012 renewal, public comments for all three renewals and the proposed agreements.

For the .net domain, a presumptive renewal clause was not included in the 2001 Verisign contract which opened up the process to select an operator in 2005. ICANN chose to continue its relationship with Verisign and included the clause. The documents relevant to the 2011 renewal of the contracts have been provided.

After Verisign relinquished its rights over the .org domain in 2001, ICANN chose the Public Interest Registry (PIR) to operate the domain. While there was no presumptive renewal clause in 2002, documents relevant to the 2006 and 2013 renewals have been provided.

ICANN's response to our DIDP request may be found here.

DIDP Request #21 - ICANN’s Relationship with the RIRs

by Asvatha Babu last modified Jul 30, 2016 03:42 AM
At CIS, we wanted a clearer understanding of ICANN’s relationship with the five Regional Internet Registries (RIRs). The large amount contributed by the RIRs to ICANN’s funding led us to question the nature of this relationship as well as of the payments. We wrote to ICANN asking them for these details.

The request filed by Padmini Baruah can be found here.

What ICANN said

ICANN’s response linked us to the Memorandum of Understanding signed by ICANN and the Number Resource Organization (NRO) which represents the 5 RIRs. The MoU replaces the ones signed by ICANN and the individual RIRs. The response also links us to a series of letters written by the NRO to ICANN reaffirming their commitment to the MoU. Interestingly, the MoU does not mention anything about payments or monetary contributions.

In response to the second part of our request focusing on their financial relationship, ICANN gave us the same information as they did earlier. However, as pointed out in this post, that information is either incomplete or inaccurate. Further, they reject the idea that providing anything more than the audited financial reports is necessary for public benefit. According to them, “the burden of compiling the requested documentary information from 2000 to the present would require ICANN to expend a tremendous amount of time and resources.” Therefore, they classified our request as falling under this condition for non-disclosure:

“Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual.”

We fail to see how an organization like ICANN does not already have its receipts and documentation in order. If they do, it would not be burdensome to publish them and if they don’t, well, that’s worrying for a lot of different reasons.

ICANN's response to our DIDP request may be found here.

DIDP Request #22 - Reconsideration Requests from Parties affected by ICANN Action

by Asvatha Babu last modified Jul 30, 2016 03:52 AM
According to ICANN bylaws, ICANN has the responsibility to respond to reconsideration requests filed by those directly affected by its actions.

See ICANN bylaws here


The board governance committee must submit an annual report to the board containing the following information (paraphrased):

  • Number and nature of Reconsideration Requests received including an identification of whether they were dismissed, acted upon or are pending.
  • If pending, the length of time they have been pending, with an explanation if it exceeds 90 days.
  • Explanation of other mechanisms ICANN has made available to ensure its accountability to those directly affected by its actions.

CIS requested copies of documents containing all this information. The request filed by Padmini Baruah can be found here.

What ICANN said

ICANN stated that all the information we sought can be found in its annual reports, and linked us to those: https://www.icann.org/resources/pages/annual-reports-2012-02-25-en

ICANN's response to our DIDP request may be found here.

 

DIDP Request #23 - ICANN does not Know how Diverse its Comment Section Is

by Asvatha Babu last modified Jul 30, 2016 05:55 AM
While researching ICANN and the IANA Stewardship Transition Coordination Group (ICG), we came across a diversity analysis report of a public comment section.

See ICG report here.


We requested ICANN for similar reports on the ICANN public comment section. The request filed by Padmini Baruah can be found here.

What ICANN said

ICANN stated that they do not conduct diversity analyses of their comment sections. This is a shame, given that the one from the ICG was so informative, clear and concise. Instead, they provided us with links to reports and analyses of the different topics that were up for comment and an annual report on public comments.

ICANN’s public comments section is one of the important ways in which different stakeholders and community members get involved with the organization. A diversity analysis of this section for different topics could help inform the public about which parts of the world actually get involved in ICANN through this mechanism. We suggest that ICANN make it a regular part of their reports.

ICANN's response to our DIDP request may be found here.

See the ICG public comment summary: https://www.ianacg.org/icg-files/documents/Public-Comment-Summary-final.pdf

DIDP Request #25 - Curbing Sexual Harassment at ICANN

by Asvatha Babu last modified Jul 30, 2016 06:14 AM
Markus Kummer at Public Forum 2 mentioned that ICANN has standards of behavior regarding sexual harassment that are applicable for its staff.

Marrakech Public Forum 2

In light of that statement, CIS requested ICANN to publish the following information:

  • Information about the individual or organization conducting ICANN’s sexual harassment training
  • Materials used during this training
  • ICANN’s internal sexual harassment policy

The request filed by Padmini Baruah can be found here.

What ICANN said

ICANN’s response answered our questions adequately. The organization conducting their sexual harassment training is NAVEX Global. The training is an interactive online course, and as such, all materials reside within that platform. Moreover, ICANN could not publish these materials, as doing so would infringe NAVEX Global’s intellectual property rights. ICANN also attached its internal sexual harassment policy to the response.

ICANN's response to our DIDP request (and the attached policy document) may be found here.

DIDP Request #27 - On ICANN’s support to new gTLD Applicants

by Asvatha Babu last modified Jul 30, 2016 08:03 AM
In order to promote access to the New gTLD Program in developing regions, ICANN set up the New gTLD Applicant Support Program (Program) which seeks to facilitate cooperation between gTLD applicants from developing countries and those willing and able to support them financially (and in kind).

Click for Applicant Support Directory


We requested ICANN for information about this program. Specifically, we asked them for information on:

  • The number of applicants to the program and the amount received by them;
  • The basis on which these applicants were selected;
  • The amount that has been utilized thus far for this program;
  • Contributions by donors;
  • What “in kind” support means and includes.

The request filed by Padmini Baruah can be found here.

What ICANN said

ICANN answered all our questions in a satisfactory manner. There were three applicants to the program. Two of these, Nameshop and Ummah Digital Ltd, did not meet the eligibility criteria listed in the handbook, and therefore only the third applicant, DotKids, received financial support. Of the USD 2,000,000 set aside, USD 135,000 was awarded to them.

The eligibility criteria are listed in the New gTLD Financial Assistance Handbook, and candidates are evaluated by the Support Applicant Review Panel (SARP), “which was comprised of five volunteer members from the community with experience in the domain name industry, in managing small businesses, awarding grants, and assisting others on financial matters in developing countries.”

The USD 2,000,000 allotted to this program was set aside by ICANN’s board, and since it has not been exhausted, no external contributions (in cash or in kind) were sought by ICANN. However, ICANN failed to explain what “in kind” contributions would include.

DIDP Request #28 - ICANN renews Verisign’s RZM Contract?

by Asvatha Babu last modified Jul 30, 2016 08:10 AM
Our request to ICANN was related to our (mistaken) assumption that Verisign and ICANN had signed an agreement for Root Zone Maintenance and had recently renewed it. In that context we had asked for information such as documents reflecting the decision making process, copy of the current RZM agreement, public comments and an audit report of Verisign’s RZM functions.

The request filed by Padmini Baruah can be found here.

What ICANN said

ICANN clarified that it has never been party to the RZM agreement, which was made between NTIA and Verisign. According to an ICANN-Verisign joint document, root zone management involves “ICANN as the IANA Functions Operator (IFO), Verisign, as the Root Zone Maintainer (RZM), and the National Telecommunications and Information Administration (NTIA) at the U.S. Department of Commerce (DoC), as the Root Zone Administrator (RZA).” The only agreement related to this is the cooperative agreement between Verisign and the NTIA.

Accordingly, as the role of NTIA is transitioned to the multi-stakeholder community, Verisign and ICANN are working out the terms and conditions of their own agreement to facilitate this transition together. In response to NTIA’s request for a proposal for this transition, Verisign and ICANN submitted this document. Besides these, ICANN states that it does not have any documents responsive to our requests.

ICANN's response to our DIDP request may be found here.



Analysis of the Report of the Group of Experts on Developments in the Field of Information and Telecommunications in the Context of International Security and Implications for India

by Elonnai Hickok and Vipul Kharbanda — last modified Aug 11, 2016 09:58 AM
This paper analyses the report of the Group of Experts and India’s compliance with its recommendations based on existing laws and policies. Given the global nature of these challenges and the need for nations to holistically address such challenges from a human rights and security perspective, CIS believes that the Group of Experts and similar international forums are useful and important forums for India to actively engage with.

 

The United Nations Group of Experts on ICT issued their report on Developments in the Field of Information and Telecommunications in the Context of International Security in June 2015. This paper analyses the report of the Group of Experts and India’s compliance with its recommendations based on existing laws and policies. CIS believes that the report of the Group of Experts provides important minimum standards that countries could adhere to in light of challenges to international security posed by ICT developments. Given the global nature of these challenges and the need for nations to holistically address such challenges from a human rights and security perspective, CIS believes that the Group of Experts and similar international forums are useful and important forums for India to actively engage with.

Download: PDF (627 kb)


1. Introduction

2. Analysis of the Recommendations

2a. Consistent with the purposes of the United Nations, including to maintain international peace and security, States should cooperate in developing and applying measures to increase stability and security in the use of ICTs and to prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security

2b. In case of ICT incidents, States should consider all relevant information, including the larger context of the event, the challenges of attribution in the ICT environment and the nature and extent of the consequences

2c. States should not knowingly allow their territory to be used for internationally wrongful acts using ICTs

2d. States should consider how best to cooperate to exchange information, assist each other, prosecute terrorist and criminal use of ICTs and implement other cooperative measures to address such threats. States may need to consider whether new measures need to be developed in this respect

2e. States, in ensuring the secure use of ICTs, should respect Human Rights Council resolutions 20/8 and 26/13 on the promotion, protection and enjoyment of human rights on the Internet, as well as General Assembly resolutions 68/167 and 69/166 on the right to privacy in the digital age, to guarantee full respect for human rights, including the right to freedom of expression

2f. A State should not conduct or knowingly support ICT activity contrary to its obligations under international law that intentionally damages critical infrastructure or otherwise impairs the use and operation of critical infrastructure to provide services to the public

2g. States should take appropriate measures to protect their critical infrastructure from ICT threats, taking into account General Assembly resolution 58/199 on the creation of a global culture of cybersecurity and the protection of critical information infrastructures, and other relevant resolutions

2h. States should respond to appropriate requests for assistance by another State whose critical infrastructure is subject to malicious ICT acts. States should also respond to appropriate requests to mitigate malicious ICT activity aimed at the critical infrastructure of another State emanating from their territory, taking into account due regard for sovereignty

2i. States should take reasonable steps to ensure the integrity of the supply chain so that end users can have confidence in the security of ICT products. States should seek to prevent the proliferation of malicious ICT tools and techniques and the use of harmful hidden functions

2j. States should encourage responsible reporting of ICT vulnerabilities and share associated information on available remedies to such vulnerabilities to limit and possibly eliminate potential threats to ICTs and ICT-dependent infrastructure

2k. States should not conduct or knowingly support activity to harm the information systems of the authorized emergency response teams (sometimes known as computer emergency response teams or cyber security incident response teams) of another State. A State should not use authorized emergency response teams to engage in malicious international activity

3. Conclusion


1. Introduction

Cyberspace[1] touches every aspect of our lives; it has enormous benefits, but is also accompanied by a number of risks. The international community at large has realized that cyberspace can be made stable and secure only through international cooperation. Though there are a number of bilateral agreements and other forms of cooperation, the foundation of this cooperation has traditionally been international law and the principles of the Charter of the United Nations.

To this end, on December 27, 2013, the United Nations General Assembly adopted Resolution No. 68/243, requesting the "Secretary General, with the assistance of a group of governmental experts, … to continue to study, with a view to promoting common understandings, existing and potential threats in the sphere of information security and possible cooperative measures to address them, including norms, rules or principles of responsible behaviour of States and confidence-building measures, the issues of the use of information and communications technologies in conflicts and how international law applies to the use of information and communications technologies by States … and to submit to the General Assembly at its seventieth session a report on the results of the study." In pursuance of this resolution, the Secretary General established a Group of Experts on Developments in the Field of Information and Telecommunications in the Context of International Security; the report was agreed upon by the Group of Experts in June 2015. On 23 December 2015, the UN General Assembly unanimously adopted resolution 70/237,[2] which welcomed the outcome of the Group of Experts and requested the Secretary-General to establish a new GGE that would report to the General Assembly in 2017.

The report, developed by governmental experts from 20 States, addresses existing and emerging threats from uses of ICTs by States and non-State actors alike. These threats have the potential to jeopardize international peace and security. The experts’ recommendations build on the consensus reports issued in 2010 and 2013, and offer ideas on norm-setting, confidence-building, capacity-building and the application of international law to the use of ICTs by States. Among other things, the Report lays down recommendations for States for voluntary, non-binding norms, rules or principles of responsible behaviour to promote an open, secure, stable, accessible and peaceful ICT environment.

As larger international dialogues around cross-border sharing of information and cooperation for cyber security purposes take place between the US and the EU, it is critical that India begin to participate in these discussions.[3] It is also necessary for India to implement internal practices and policies that are recognized internationally and that set strong standards.

This paper marks the beginning of a series of questions we will be asking and processes we will be analysing with the aim of understanding the role of international cooperation in cyber security and the interplay between privacy and security. It analyses existing Indian norms against the recommendations in the Report of Experts to assess how interoperable Indian law and policy are with those recommendations, and suggests ways in which India can enhance national policies, practices, and approaches to enable greater collaboration at the international level on issues concerning ICTs and security.

2. Analysis of the Recommendations

The Group of Experts took into account existing and emerging threats, risks and vulnerabilities, in the field of ICT and offered the following recommendations for consideration by States for voluntary, non-binding norms, rules or principles of responsible behaviour.

2a. Consistent with the purposes of the United Nations, including to maintain international peace and security, States should cooperate in developing and applying measures to increase stability and security in the use of ICTs and to prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security

1. India has been working with a number of countries, such as Belarus, Canada, China, Egypt, and France, on a number of ICT-related issues, thereby increasing international cooperation in the ICT sector. Examples include:

(i) setting up the India-Belarus Digital Learning Centre (DLC-ICT) to promote the development of ICT in Belarus;

(ii) sending an official business delegation to Canada to attend the 2nd Joint Working Group meeting on ICTE;

(iii) holding Joint Working Groups on ICT with China.[4]

As can be seen from this, most of the cooperation with other countries is currently government-to-government (or government institution to government institution) cooperation. However, it must be noted that the digital revolution necessarily involves ICT companies, and the role of the private sector in participating in these negotiations, as well as the responsibilities of private sector ICT companies in cross-border cooperation, must therefore be considered. Furthermore, the above examples are only a few of the many agreements, Memoranda of Understanding (MoUs), and negotiations that India has with other countries on cross-border cooperation. It is important that, to the extent possible, these negotiations are transparent and easily accessible to the public.

2. The primary legislation governing ICT in India is the Information Technology Act, 2000 ("IT Act"), which was passed to provide legal recognition for transactions carried out by means of electronic data interchange and other means of electronic communication. The IT Act contains a number of provisions that declare illegal those activities that threaten ICT infrastructure, data, and individuals, and provide penalties for the same. These activities are:

Section 43 - Penalty and Compensation for damage to computer, computer system, etc.: If any person without permission: (i) accesses a computer, computer system or network; (ii) downloads, copies or extracts any data from such computer, computer system or network; (iii) introduces any computer contaminant or computer virus into, destroys, deletes or alters any information on, damages or disrupts any computer, computer system or network; (iv) denies or causes the denial of access to any computer, computer system or network by any means; (v) helps any person to access a computer, computer system or network in contravention of the Act; (vi) charges the services availed of by a person to the account of another person through manipulation; or (vii) steals, conceals, destroys or alters or causes any person to steal, conceal, destroy or alter any computer source code used for a computer resource with an intention to cause damage, he shall be liable to pay damages by way of compensation to the person so affected.

Section 66 - Computer Related Offences: If any person, dishonestly or fraudulently, does any act referred to in section 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to Rs. 5,00,000/- or with both.

Section 66B - Punishment for dishonestly receiving stolen computer resource or communication device: Whoever dishonestly receives or retains any stolen computer resource or communication device knowing or having reason to believe the same to be stolen computer resource or communication device, shall be punished with imprisonment of either description for a term which may extend to three years or with fine which may extend to Rs. 1,00,000/- or with both.

Section 66C - Punishment for identity theft: Whoever, fraudulently or dishonestly make use of the electronic signature, password or any other unique identification feature of any other person, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to rupees one lakh.

Section 66D - Punishment for cheating by personation by using computer resource: Whoever, by means of any communication device or computer resource cheats by personation, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to Rs. 1,00,000/-.

Section 66E - Punishment for violation of privacy: Whoever, intentionally or knowingly captures, publishes or transmits the image of a private area of any person without his or her consent, under circumstances violating the privacy of that person, shall be punished with imprisonment which may extend to three years or with fine not exceeding Rs. 2,00,000 or with both.

Section 66F - Punishment for cyber terrorism: (1) Whoever,- (A) with intent to threaten the unity, integrity, security or sovereignty of India or to strike terror in the people or any section of the people by -

  • Denying or cause the denial of access to computer resource; or
  • Attempting to penetrate a computer resource; or
  • Introducing or causing to introduce any computer contaminant and by means of such conduct causes or is likely to cause death or injuries to persons or damage to or destruction of property or disrupts or knowing that it is likely to cause damage or disruption of supplies or services essential to the life of the community or adversely affect the critical information infrastructure, or

(B) knowingly or intentionally penetrates a computer resource and by doing so obtains access to information that is restricted for reasons of the security of the State or foreign relations; or any restricted information with reasons to believe that such information may be used to cause or likely to cause injury to the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation or incitement to an offence, or to the advantage of any foreign nation, group of individuals or otherwise, commits the offence of cyber terrorism.

(2) Whoever commits or conspires to commit cyber terrorism shall be punishable with imprisonment which may extend to imprisonment for life.

Section 67 - Publishing of information which is obscene in electronic form: Whoever publishes or transmits in the electronic form any material which is lascivious or appeals to the prurient interest, or if its effect is such as to tend to deprave and corrupt persons, shall be punished on first conviction with imprisonment of up to three years and a fine of up to Rs. 5,00,000, and for a second or subsequent conviction with imprisonment of up to five years and a fine of up to Rs. 10,00,000.

Section 67A - Punishment for publishing or transmitting of material containing sexually explicit act, etc. in electronic form: Whoever publishes or transmits in the electronic form any material which contains a sexually explicit act or conduct shall be punished on first conviction with imprisonment of up to five years and a fine of up to Rs. 10,00,000, and for a second or subsequent conviction with imprisonment of up to seven years and a fine of up to Rs. 10,00,000.

Section 67B - Punishment for publishing or transmitting of material depicting children in sexually explicit act, etc. in electronic form: Whoever,- (a) publishes or transmits material in any electronic form which depicts children engaged in a sexually explicit act or conduct; or (b) creates text or digital images, collects, seeks, browses, downloads, advertises, promotes, exchanges or distributes material in any electronic form depicting children in an obscene or indecent or sexually explicit manner; or (c) cultivates, entices or induces children into an online relationship with one or more children for a sexually explicit act, or in a manner that may offend a reasonable adult, on a computer resource; or (d) facilitates abusing children online; or (e) records in any electronic form his own abuse or that of others pertaining to a sexually explicit act with children, shall be punished on first conviction with imprisonment of up to five years and a fine of up to Rs. 10,00,000, and in the event of a second or subsequent conviction with imprisonment of up to seven years and a fine of up to Rs. 10,00,000.[5]

Section 72 - Breach of confidentiality and privacy: Any person who, in pursuance of any of the powers conferred under this Act, has secured access to any electronic record, book, register, correspondence, information, document or other material and, without the consent of the person concerned, discloses the same to any other person, shall be punished with imprisonment for a term which may extend to two years, or with a fine which may extend to Rs. 1,00,000, or with both.

Section 72-A - Punishment for Disclosure of information in breach of lawful contract: Any person including an intermediary who, while providing services under the terms of lawful contract, has secured access to any material containing personal information about another person, with the intent to cause or knowing that he is likely to cause wrongful loss or wrongful gain discloses such material to any other person shall be punished with imprisonment for a term which may extend to three years, or with a fine which may extend to Rs. 5,00,000 or with both.
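The penal provisions quoted above can be condensed, purely for illustration, into a small lookup table. The sketch below is a non-authoritative summary of the maximum penalties on first conviction as stated in the text (the dictionary and helper names are our own), not a statement of law:

```python
# Non-authoritative summary of the IT Act penal provisions quoted above:
# maximum imprisonment and fine on FIRST conviction. "life" marks
# s.66F (cyber terrorism), which carries up to life imprisonment.
IT_ACT_PENALTIES = {
    "66D": {"years": 3, "fine_rs": 100_000},    # cheating by personation
    "66E": {"years": 3, "fine_rs": 200_000},    # violation of privacy
    "66F": {"years": "life", "fine_rs": None},  # cyber terrorism
    "67":  {"years": 3, "fine_rs": 500_000},    # obscene material
    "67A": {"years": 5, "fine_rs": 1_000_000},  # sexually explicit material
    "67B": {"years": 5, "fine_rs": 1_000_000},  # material depicting children
    "72":  {"years": 2, "fine_rs": 100_000},    # breach of confidentiality
    "72A": {"years": 3, "fine_rs": 500_000},    # disclosure in breach of contract
}

def max_fine_rs(section: str):
    """Maximum fine (in rupees) on first conviction, or None if unspecified."""
    return IT_ACT_PENALTIES[section]["fine_rs"]
```

Such a table makes the relative severity of the offences easy to compare at a glance, e.g. `max_fine_rs("67A")` returns the ten-lakh cap for sexually explicit material.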

3. The broad language and wide terminology used in the IT Act seem to cover most of the cyber crimes faced in India as of now, though the technical abilities to prevent such crimes still leave a lot to be desired. The prevention of cyber crime is not the domain of the IT Act but rather the responsibility of the law enforcement authorities (note: there is no specific authority created under the IT Act; the Act is enforced by the police and other law enforcement authorities). That said, it may be a useful exercise to briefly compare these provisions with the crimes mentioned in the Convention on Cybercrime, 2001 (Budapest Convention), an international treaty that seeks to address threats in cyber space by promoting the harmonization of national laws and cooperation across jurisdictions, to examine if there are any that are not covered by the IT Act. A comparison of the principles in the Budapest Convention and the IT Act is below:

1. Article 2 (Illegal Access): covered by Section 43(a) read with Section 66.

2. Article 3 (Illegal Interception): covered by Section 69 of the IT Act read with Section 45, as well as Section 24 of the Telegraph Act, 1885.

3. Article 4 (Data Interference): covered by Sections 43(d) and 43(f) read with Section 66.

4. Article 5 (System Interference): covered by Sections 43(d), (e) and (f) read with Section 66.

5. Article 6 (Misuse of Devices): not specifically covered.

6. Article 7 (Computer-related Forgery): not specifically covered, but should such a case come to light, the provisions of Section 43 read with Section 66, as well as provisions of the Indian Penal Code, 1860, would likely be pressed into service.

7. Article 8 (Computer-related Fraud): not specifically covered by the IT Act, but as with forgery, Section 43 read with Section 66 and provisions of the Indian Penal Code, 1860 would likely be pressed into service.

8. Article 9 (Offences relating to Child Pornography): covered by Section 67B.

As can be seen from the above comparison, most of the criminal acts elucidated in the Budapest Convention are covered under the IT Act, except for the provision on misuse of devices, which concerns the production, sale, trading, etc. of devices whose sole objective is to violate the provisions of the IT Act. Even here, it is possible that the provisions of the Indian Penal Code, 1860 dealing with conspiracy and abetment may be pressed into service to cover such incidents.
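The comparison above can also be read as a simple mapping from Convention articles to IT Act provisions. The sketch below is illustrative only; the shorthand strings are our own labels, not statutory text:

```python
# Illustrative encoding of the Budapest Convention / IT Act comparison.
# None marks articles with no specific IT Act provision (for Articles
# 6-8, IPC provisions may be pressed into service, per the discussion).
BUDAPEST_TO_IT_ACT = {
    2: "S.43(a) read with S.66",                        # illegal access
    3: "S.69 read with S.45; S.24 Telegraph Act, 1885", # illegal interception
    4: "S.43(d), 43(f) read with S.66",                 # data interference
    5: "S.43(d), (e), (f) read with S.66",              # system interference
    6: None,  # misuse of devices - not specifically covered
    7: None,  # computer-related forgery - S.43/S.66 + IPC may apply
    8: None,  # computer-related fraud - S.43/S.66 + IPC may apply
    9: "S.67B",                                         # child pornography
}

# Articles the IT Act does not specifically cover:
uncovered = sorted(a for a, p in BUDAPEST_TO_IT_ACT.items() if p is None)
```

Scanning the mapping for `None` values immediately surfaces the coverage gaps (Articles 6, 7 and 8) that the surrounding text discusses.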

4. Further, a number of laws deal with critical infrastructure in India; however, since these are mostly sectoral laws dealing with specific infrastructure sectors, the one most relevant to ICT is the Telegraph Act, 1885, which makes it illegal to interfere with or damage critical telegraph infrastructure. The specific penal provisions are listed below:

Section 23 - Intrusion into signal-room, trespass in telegraph office or obstruction: If any person - (a) without permission of competent authority, enters the signal room of a telegraph office of the Government, or of a person licensed under this Act, or (b) enters a fenced enclosure round such a telegraph office in contravention of any rule or notice not to do so, or (c) refuses to quit such room or enclosure on being requested to do so by any officer or servant employed therein, or (d) wilfully obstructs or impedes any such officer or servant in the performance of his duty, he shall be punished with fine which may extend to Rs. 500.

Section 24 - Unlawfully attempting to learn the contents of messages: If any person does any of the acts mentioned in section 23 with the intention of unlawfully learning the contents of any message, or of committing any offence punishable under this Act, he may (in addition to the fine with which he is punishable under section 23) be punished with imprisonment for a term which may extend to one year.

Section 25 - Intentionally damaging or tampering with telegraphs: If any person, intending - (a) to prevent or obstruct the transmission or delivery of any message, or (b) to intercept or to acquaint himself with the contents of any message, or (c) to commit mischief, damages, removes, tampers with or touches any battery, machinery, telegraph line, post or other thing whatever, being part of or used in or about any telegraph or in the working thereof, he shall be punished with imprisonment for a term which may extend to three years, or with fine or with both.

Section 25A - Injury to or interference with a telegraph line or post: If, in any case not provided for by section 25, any person deals with any property and thereby wilfully or negligently damages any telegraph line or post duly placed on such property in accordance with the provisions of this Act, he shall be liable to pay the telegraph authority such expenses (if any) as may be incurred in making good such damage, and shall also, if the telegraphic communication is by reason of the damage so caused interrupted, be punishable with a fine which may extend to Rs. 1,000.

5. Telecom service providers in India have to sign a license agreement with the Department of Telecommunications for the right to provide telecom services in various parts of India. The telecom regulatory regime in India has gone through considerable turmoil and evolution; currently, any service provider wanting to provide telecom services is issued a Unified License (UL) and has to abide by its terms. Whilst most of the prohibited activities under the UL refer to specific terms of the UL itself, such as non-payment of fees and non-fulfilment of obligations under the UL, section 38 provides for certain specific prohibited activities which may be relevant for the ICT sector. These prohibited activities include:

(i) Carrying objectionable, obscene, unauthorized or any other content, messages or communications infringing copyright and intellectual property right etc., which may be prohibited by the laws of India;

(ii) Provide tracing facilities to trace nuisance, obnoxious or malicious calls, messages or communications transported through his equipment and network, to the authorised government agencies;

(iii) Ensuring that the Telecommunication infrastructure or installation thereof, carried out by it, should not become a safety or health hazard and is not in contravention of any statute, rule, regulation or public policy;

(iv) Not permitting any telecom service provider whose license has been revoked to use its services; where such services are already provided, i.e. connectivity already exists, the licensee is required to sever connectivity immediately.

2b. In case of ICT incidents, States should consider all relevant information, including the larger context of the event, the challenges of attribution in the ICT environment and the nature and extent of the consequences

The Department of Electronics and Information Technology (DEITY) has released the XIIth Five Year Plan on the information technology sector and the report of the Sub-Group on Cyber Security in the plan recognizes that cyber security threats emanate from a wide variety of sources and manifest themselves in disruptive activities that target individuals, businesses, national infrastructure and Governments alike. [6] The primary objectives of the plan for securing the country's cyber space are preventing cyber attacks, reducing national vulnerability to cyber attacks, and minimizing damage and recovery time from cyber attacks. The plan takes into account a number of focus areas to achieve its stated objectives, which are described briefly below:

  • Enabling Legal Framework - Setting up think tanks in Public-Private mode to identify gaps in the existing policy and frameworks and take action to address them including addressing the privacy concerns of online users.
  • Security Policy, Compliance and Assurance - Enhancement of IT product security assurance mechanism (Common Criteria security test/evaluation, ISO 15408 & Crypto Module Validation Program), establishing a mechanism for national cyber security index leading to national risk management framework.
  • Security Research & Development (R&D) - Creation of Centres of Excellence in identified areas of advanced Cyber Security R&D and a Centre for Technology Transfer to facilitate transition of R&D prototypes to production, supporting R&D projects in thrust areas.
  • Security Incident - Early Warning and Response - Comprehensive threat assessment and attack mitigation by means of net traffic analysis and deployment of honey pots, development of vulnerability database.
  • Security awareness, skill development and training - Launching formal security education, skill building and awareness programs.
  • Collaboration - Establishing a collaborative platform/ think-tank for cyber security policy inputs, discussion and deliberations, operationalisation of security cooperation arrangements with overseas CERTs and industry, and seeking legal cooperation of international agencies on cyber crimes and cyber security.

2c. States should not knowingly allow their territory to be used for internationally wrongful acts using ICTs

As mentioned in response to (a) above, the primary legislation in India that deals with information technology, and hence with ICTs, is the Information Technology Act, 2000. The IT Act contains a number of penal provisions which make it illegal to engage in practices such as hacking and online fraud, which have been recognised internationally as wrongful acts using ICTs (please refer to the answer under section (a) above for details of the penal provisions). Further, section 1(2) of the IT Act provides that the Act also applies to any offence or contravention thereunder committed outside India by any person. This means that the IT Act also covers internationally wrongful acts using ICTs.

2d. States should consider how best to cooperate to exchange information, assist each other, prosecute terrorist and criminal use of ICTs and implement other cooperative measures to address such threats. States may need to consider whether new measures need to be developed in this respect

There are a number of ways in which states can share information by using widely accepted formal processes precisely for this purpose. Some of the most common methods of international exchange used by India are given below.

MLATs

Although the exact process by which intelligence agencies in India share information with their counterparts abroad is unclear, India is a member of Interpol, and the Central Bureau of Investigation, a central investigating agency functioning under the Department of Personnel & Training of the Central Government, is designated as the National Central Bureau of India. A very useful tool for establishing cross-border cooperation is the Mutual Legal Assistance Treaty (MLAT). MLATs are extremely important for law enforcement agencies, governments and the private sector, since they act as formal mechanisms for access to data which falls under different jurisdictions. India currently has MLATs with 39 countries.[7]

Although MLATs are considered to be a useful mechanism to ensure international cooperation, there are certain criticisms of the MLAT mechanism, such as:

  • The Lack of Clear Timetables: Although MLATs provide for broad time frames, they do not set specific timetables and usually have no provision for an expedited process. For example, requests to the U.S. are believed to take from six weeks (for requests with minimal issues complying with U.S. legal standards) to ten months.[8] Such long time frames are clearly a burden on the investigation process, and MLATs have been criticised as ineffectual because they may not provide information fast enough;
  • Variation in Legal Standards: The legal standards for requesting information, e.g. the circumstances under which information can be requested or what information can be requested, differ from jurisdiction to jurisdiction. These differences are often not understood by requesting nations, causing problems in accessing information;[9]
  • Inefficient Legal Process: The legal process to carry out requests through the MLAT mechanism is often considered too cumbersome and inefficient;
  • Non-incorporation of Technological Challenges: MLATs have not been updated to meet the challenges brought about by technology, especially the advent of networked infrastructure and ICTs, which raise issues of attribution and cross-jurisdictional access to information.[10]

Extradition

Extradition generally refers to the surrender of an alleged or convicted criminal by one State to another. More precisely, it may be defined as the process by which one State upon the request of another surrenders to the latter a person found within its jurisdiction for trial and punishment or, if he has been already convicted, only for punishment, on account of a crime punishable by the laws of the requesting State and committed outside the territory of the requested State. Extradition plays an important role in the international battle against crime and owes its existence to the so-called principle of territoriality of criminal law, according to which a State will not apply its penal statutes to acts committed outside its own boundaries except where the protection of special national interests is at stake. India currently has extradition treaties with 37 countries and extradition arrangements with an additional 8 countries.[11]

Letters Rogatory

A Letter Rogatory is a formal communication in writing sent by the Court in which an action is pending to a foreign court or Judge requesting that the testimony of a witness residing within the jurisdiction of that foreign court be formally taken under its direction and transmitted to the issuing court making the request for use in a pending legal contest or action. This request entirely depends upon the comity of courts towards each other and usages of the court of another nation.

Apart from the above methods, India also regularly signs bilateral MoUs with various countries on law enforcement and information sharing, especially in cases related to terrorism. India also regularly assists and receives assistance from Interpol, the International Criminal Police Organisation, for purposes of investigation, arrests and sharing of information.[12]

Other than these formal methods, states sometimes share information on an informal basis, where the parties help each other purely on the basis of goodwill, or sometimes even coercion. A recent example of informal cooperation between the security agencies of India and Nepal, although not in the realm of cyber space, was the arrest of Yasin Bhatkal, leader of the banned organisation Indian Mujahideen (IM), where the Indian security agencies allegedly sought informal help from their Nepalese counterparts to arrest a person who had long been wanted by the Indian security agencies.[13]

In the current environment of growing ICT use and increased cross-border information sharing between individuals, the role of the private companies who carry this information has become much more pronounced. This changed dynamic raises new problems, especially because a number of these companies do not have a physical presence in all the countries where they offer services over the internet. This leads to problems for states in terms of law enforcement, especially if they want information from companies which have no incentive or desire to provide it. These circumstances lead to a number of prickly situations in which states, often frustrated in using legal and formal means, resort to informal pressure to get the companies to agree to data localisation requests, encryption/decryption standards and keys, back doors, and other demands. The most famous of these in the Indian context was the heated exchange between the Indian government and the Canada-based BlackBerry Limited (formerly Research in Motion) over data requests on its BlackBerry enterprise platform.

2e. States, in ensuring the secure use of ICTs, should respect Human Rights Council resolutions 20/8 and 26/13 on the promotion, protection and enjoyment of human rights on the Internet, as well as General Assembly resolutions 68/167 and 69/166 on the right to privacy in the digital age, to guarantee full respect for human rights, including the right to freedom of expression

Right to Privacy

  1. The right to privacy has been recognised as a constitutionally protected fundamental right in India through judicial interpretation of the right to life, which is specifically guaranteed under the Constitution of India. Since the right to privacy was read into the Constitution by judicial pronouncements, it can be said that the right to privacy in India is a creature of the courts. For this reason it is useful to list some of the major cases which deal with the right to privacy in India:

    i. Kharak Singh v. State of U.P.,[14] (1962)

    a. For the first time, the courts recognized the right to privacy as a fundamental right, although in a minority opinion.

    b. Located the right to privacy in both the right to personal liberty as well as the freedom of movement.

    ii. Govind v. State of M.P.,[15] (1975)

    a. Adopted the minority opinion of Kharak Singh as the opinion of the Supreme Court and held that the right to privacy is a fundamental right.

    b. Derived the right to privacy from both the right to life and personal liberty as well as the freedoms of speech and movement.

    c. The right to privacy was said to encompass and protect the personal intimacies of the home, the family marriage, motherhood, procreation and child rearing.

    d. Established that the right to privacy can be curtailed in the following circumstances: (i) an important countervailing interest which is superior, (ii) a compelling state interest, and (iii) a compelling public interest.

    iii. R. Rajagopal v. State of Tamil Nadu,[16] (1994)

    a. Recognised that the right to privacy is a part of the right to personal liberty guaranteed under the Constitution.

    b. Recognised that the right to privacy can be both a tort (actionable claim) as well as a fundamental right.

    c. Established that a citizen has a right to safeguard his own privacy and that of his family, marriage, procreation, motherhood, child-bearing and education, among other matters, and nobody can publish anything regarding the same unless (i) he consents or voluntarily thrusts himself into controversy, (ii) the publication is made using material which is in public records (except in cases of rape, kidnapping and abduction), or (iii) he is a public servant and the matter relates to the discharge of official duties.

    iv. People's Union for Civil Liberties v. Union of India,[17] (1996)

    a. Extended the right to privacy to include communications privacy.

    b. Laid down guidelines which form the backbone for checks and balances in interception provisions.

    v. District Registrar and Collector, Hyderabad and another v. Canara Bank and another, [18] (2004)

    a. Refers to personal liberty, freedom of expression and freedom of movement as the fundamental rights which give rise to the right to privacy.

    b. Held that the right to privacy deals with persons and not places.

    c. Intrusion into privacy may be by - (1) legislative provisions, (2) administrative/executive orders and (3) judicial orders.

    vi. Selvi and others v. State of Karnataka and others,[19] (2010)

    a. The Court acknowledged the distinction between bodily/physical privacy and mental privacy.

    b. Subjecting a person to techniques such as narcoanalysis, polygraph examination and the Brain Electrical Activation Profile (BEAP) test without consent violates the subject's mental privacy.

  2. Although the judgments in the above cases (except People's Union for Civil Liberties v. Union of India) were not delivered in a telecommunications context, the ease with which these principles were applied in People's Union for Civil Liberties v. Union of India suggests that they would, where applicable, be applied in the context of ICTs as well, and are not limited to the non-digital world.

  3. Due to some incongruities in the interpretation of the earlier judgments, the Supreme Court has recently referred the matter of the existence and scope of the right to privacy in India to a larger bench, so as to bring clarity regarding the exact scope of the right to privacy in Indian law. The very proposition that the Constitution of India guarantees a right to privacy was challenged owing to an "unresolved contradiction" in judicial pronouncements. This contradiction arose because in M.P. Sharma & Others v. Satish Chandra & Others[20] and Kharak Singh v. State of U.P. & Others[21] (decided by benches of eight and six judges respectively), the majority judgments of the Supreme Court had categorically denied the existence of a right to privacy under the Indian Constitution.

    However, the later case of Gobind v. State of M.P. and another[22] (decided by a two-judge bench of the Supreme Court) relied upon the minority opinion of two judges in Kharak Singh to hold that a right to privacy does exist and is guaranteed as a fundamental right under the Constitution of India, without addressing the fact that this was a minority opinion and that the majority had denied the existence of the right. Thereafter, a large number of cases have held the right to privacy to be a fundamental right, the most important of which are R. Rajagopal & Another v. State of Tamil Nadu & Others[23] (popularly known as Auto Shanker's case) and People's Union for Civil Liberties (PUCL) v. Union of India & Another.[24] However, as the Supreme Court noticed in its August 11, 2015 order, all these judgments were decided by benches of only two or three judges, which could not have overturned the judgments given by larger benches.[25] It was to resolve this judicial incongruity that the Supreme Court referred the issue to a larger bench to decide on the existence and scope of the right to privacy in India.

Freedom of Expression

  1. Freedom of expression is one of the most important fundamental rights guaranteed under the Constitution and has been vigorously protected by the judiciary on a number of occasions whenever it has been threatened. With the advent of social media, the dynamics of the freedom of speech and expression have changed: it is now possible for any individual with an internet connection and a Facebook/Twitter/WhatsApp account to reach millions of people at no extra cost. As the internet removed the otherwise filtering effects of geography and made it easier for people to communicate with each other, social media made it easier for them to communicate with a larger number of people at the same time. This ability also gave rise to "debates" which often turn ugly, highlighting how easy it is to harass people on social media.

  2. This concern over harassment led a number of people to call for greater censorship of social media, and it was perhaps this concern which gave rise to the biggest challenge to the freedom of speech and expression in the online world: section 66A of the Information Technology Act, 2000, which made it an offence to send information which was "grossly offensive" (s. 66A(a)) or caused "annoyance" or "inconvenience" while being known to be false (s. 66A(c)). Online activists, including the Centre for Internet and Society, widely regarded this section as a tool for the government to silence those who criticised it. In fact, statistics compiled by the National Crime Records Bureau for 2014 revealed that 2,402 people, including 29 women, were arrested in 4,192 cases under section 66A, accounting for nearly 60% of all arrests under the IT Act and 40% of arrests for cyber crimes in 2014.[26]

  3. The section was finally struck down by the Supreme Court in 2015 in Shreya Singhal v. Union of India[27] on the ground of being too vague. This decision was seen as a huge victory for the campaign for freedom of speech and expression in the virtual world, since the section was frequently used by the state (or rather the government in power) to muzzle speech against the incumbent government or political leaders. Section 66A made it an offence to send any information that was "grossly offensive or has menacing character" or "which he knows to be false, but for the purpose of causing annoyance, inconvenience, danger, obstruction, insult, injury, criminal intimidation, enmity, hatred, or ill will, persistently by making use of such computer resource or a communication device". The Court held these terms to be too vague and wide, falling foul of the limited restrictions constitutionally imposed on the freedom of expression, and therefore struck down section 66A.

2f. A State should not conduct or knowingly support ICT activity contrary to its obligations under international law that intentionally damages critical infrastructure or otherwise impairs the use and operation of critical infrastructure to provide services to the public

The researchers of this report could not locate any norms in India which address this issue. To the best of their knowledge, India does not support any ICT activity that intentionally damages critical infrastructure or impairs the use and operation of critical infrastructure.

2g. States should take appropriate measures to protect their critical infrastructure from ICT threats, taking into account General Assembly resolution 58/199 on the creation of a global culture of cybersecurity and the protection of critical information infrastructures, and other relevant resolutions

1. Section 70 of the IT Act gives the government the authority to declare any computer system which directly affects any critical information infrastructure to be a protected system. The term "critical information infrastructure" (CII) is defined in the IT Act as "the computer resource, the incapacitation or destruction of which, shall have debilitating impact on national security, economy, public health or safety." Once the government declares a computer resource to be a protected system, it gets the authority to prescribe information security practices for such a system as well as to identify the persons authorised to access it. Any person who accesses a protected system in contravention of Section 70 of the IT Act is liable to imprisonment for up to ten years and a fine. Further, section 70A of the IT Act gives the government the power to name a national nodal agency in respect of CII and to prescribe the manner in which such agency performs its duties. In pursuance of the powers under section 70A, the government has designated the National Critical Information Infrastructure Protection Centre (NCIIPC), situated in the JNU campus, as the nodal agency.[28] This agency is a part of, and under the administrative control of, the National Technical Research Organisation (NTRO).[29]

2. The functions of the NCIIPC, and the manner of performing them, have been prescribed in the Information Technology (National Critical Information Infrastructure Protection Centre and Manner of Performing Functions and Duties) Rules, 2013.[30] According to these Rules, the functions of the NCIIPC include, inter alia, (i) protecting CII and giving advice aimed at reducing its vulnerability to cyber terrorism, cyber warfare and other threats; (ii) identification of all critical infrastructure elements so that they can be notified by the government; (iii) providing strategic leadership and coherence across the government in responding to cyber security threats against CII; (iv) coordinating, sharing, monitoring, analysing and forecasting national-level threats to CII for policy guidance, expertise sharing and situational awareness for early warning alerts; (v) assisting in the development of appropriate plans, adoption of standards, sharing of best practices and refining of procurement processes for CII; (vi) undertaking and funding research and development to innovate future technologies and collaborating with PSUs, academia and international partners for the protection of CII; (vii) organising training and awareness programmes and developing audit and certification agencies for the protection of CII; (viii) developing and executing national and international cooperation strategies for the protection of CII; (ix) issuing guidelines, advisories and vulnerability notes relating to CII and to practices, procedures, prevention and responses, in consultation with CERT-In and other organisations; (x) exchanging information with CERT-In, especially in relation to cyber incidents; and (xi) calling for information from, and giving directions to, critical sectors or persons having a critical impact on CII, in the event of any threat to CII.[31]

3. In 2013, the NCIIPC released (though not publicly) the Guidelines for the Protection of National Critical Information Infrastructure [32] (CII Guidelines), which presented forty controls and respective guiding principles for the protection of CII. It is expected that these controls and guiding principles will help critical sectors draw up a CII protection roadmap to achieve safe, secure and resilient CII for India. The 'Guidelines for forty Critical Controls' is considered by the NCIIPC to be a significant milestone in its efforts for the protection of the nation's critical information assets. These forty controls can be found in Section 6 (Best Practices, Controls and Guidelines) of the CII Guidelines. It must be noted that the CII Guidelines were drafted after taking inputs from a number of stakeholders such as the National Stock Exchange, the Airports Authority of India, the National Thermal Power Corporation, the Reserve Bank of India, the Indian Railways, the Telecom Regulatory Authority of India, Bharat Sanchar Nigam Limited, etc. This exercise of taking inputs from different stakeholders, as well as developing a standard covering as many as forty aspects of security, suggests that the NCIIPC is taking steps in the right direction.

4. The Recommendations on Telecommunication Infrastructure Policy issued by the Telecom Regulatory Authority of India in April 2011 are silent on the issue of security of critical information infrastructure. However, the National Policy on Information Technology, 2012 (NPIT) does address the issue of security of cyber space, saying that the government should make efforts to do the following:

"9.1 To undertake policy, promotion and enabling actions for compliance to international security best practices and conformity assessment (product, process, technology & people) and incentives for compliance.

9.2 To promote indigenous development of suitable security techniques & technology through frontier technology research, solution oriented research, proof of concept, pilot development etc. and deployment of secure IT products/processes

9.3 To create a culture of cyber security for responsible user behavior & actions including building capacities and awareness campaigns.

9.4 To create, establish and operate an 'Information Security Assurance Framework'."

5. The Department of Information Technology has formed the Indian Computer Emergency Response Team (CERT-In) to enhance the security of India's communications and information infrastructure through proactive action and effective collaboration. The Information Security Policy on Protection of Critical Infrastructure released by CERT-In considers information recorded, processed or stored in electronic media a valuable asset and is geared towards the protection of this "valuable asset". The policy recognises the importance of critical information infrastructure networks and says that any disruption of the operation of such networks is likely to have devastating effects. The policy prescribes that personnel with programme delivery responsibilities should also recognise the importance of the security of information resources and their management. Due to this recognition of the increasingly networked nature of government and critical organisations, and the need for proper vulnerability analysis as well as effective management of information security risks, the Department prescribes the following information security policy:

"In order to reduce the risk of cyber attacks and improve upon the security posture of critical information infrastructure, Government and critical sector organizations are required to do the following on priority:

  • Identify a member of senior management, as Chief Information Security Officer (CISO), knowledgeable in the nature of information security & related issues and designate him/her as a 'Point of contact', responsible for coordinating security policy compliance efforts and to regularly interact with the Indian Computer Emergency Response Team (CERT-In), Department of Information Technology (DIT), which is the nodal agency for coordinating all actions pertaining to cyber security;
  • Prepare information security plan and implement the security control measures as per ISI/ISO/IEC 27001: 2005 and other guidelines/standards, as appropriate;
  • Carry out periodic IT security risk assessments and determine acceptable level of risks, consistent with criticality of business/functional requirements, likely impact on business/ functions and achievement of organisational goals/objectives;
  • Periodically test and evaluate the adequacy and effectiveness of technical security control measures implemented for IT systems and networks. Especially, Test and evaluation may become necessary after each significant change to the IT applications/systems/networks and can include, as appropriate the following:

➢ Penetration Testing (both announced as well as unannounced)

➢ Vulnerability Assessment

➢ Application Security Testing

➢ Web Security Testing

  • Carry out Audit of Information infrastructure on an annual basis and when there is major upgradation/change in the Information Technology Infrastructure, by an independent IT Security Auditing organization; ...
  • Report to CERT-In the cyber security incidents, as and when they occur and the status of cyber security, periodically."

6. The Department of Electronics and Information Technology (DEITY) released the National Policy on Electronics in 2012, which contained the government's take on the electronics industry in India. Section 5 of the said policy deals with cyber security and states that to create a completely secure cyber eco-system in the country, careful and due attention is required for the creation of well-defined technology and systems, the use of appropriate technology and, more importantly, the development of appropriate products and solutions. The priorities for action should be the suitable design and development of indigenous products through frontier technology/product oriented research, and the testing and validation of the security of products meeting the protection profile requirements needed to secure the ICT infrastructure and cyber space of the country.

7. In addition, CERT-In has issued an Information Security Management Implementation Guide for Government Organisations. [33] CERT-In has also prescribed progressive steps for the implementation of an Information Security Management System in government and critical sectors as per ISO 27001. The steps prescribed are as follows:

  • Identification of a Point-of-Contact (POC) / Chief Information Security Officer (CISO) for coordinating information security policy implementation efforts and communication with CERT-In
  • Information Security Awareness Programme
  • Determination of the general risk environment of the organization (low / medium / high) depending on the nature of the web and networking environment, criticality of business functions and impact of information security incidents on the organization, business activities, assets / resources and individuals
  • Status appraisal and gap analysis against ISO 27001 based best information security practices
  • Risk assessment covering evaluation of threat perception and technical and operational vulnerabilities
  • Comprehensive risk mitigation plan including selection of appropriate information security controls as per ISO 27001 based best information security practices
  • Documentation of agreed information security control measures in the form of information security policy manual, procedure manual and work instructions
  • Implementation of information security control measures (managerial, technical and operational)
  • Testing & evaluation of technical information security control measures for their adequacy & effectiveness and audit of IT applications/systems/networks by an independent information security auditing organization (penetration testing, vulnerability assessment, application security testing, web security testing, LAN audits, etc)
  • Information Security Management assessment and certification against ISO 27001 standard, preferably by an independent & accredited organization
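The "general risk environment" determination in the steps above (low / medium / high, based on web exposure, criticality of business functions and the impact of incidents) could, for instance, be operationalised as a simple scoring rule. The sketch below is purely illustrative: the criterion names, ratings and thresholds are assumptions of this example and are not drawn from the CERT-In steps or from ISO 27001.

```python
# Hypothetical sketch of a risk-environment classification.
# The 1-3 ratings and the score thresholds are assumptions,
# not prescribed by CERT-In or ISO 27001.

def risk_environment(web_exposure: int, business_criticality: int,
                     incident_impact: int) -> str:
    """Each criterion is rated from 1 (minimal) to 3 (severe)."""
    for score in (web_exposure, business_criticality, incident_impact):
        if not 1 <= score <= 3:
            raise ValueError("each criterion must be rated 1-3")
    total = web_exposure + business_criticality + incident_impact
    if total <= 4:
        return "low"
    if total <= 6:
        return "medium"
    return "high"

# Example: an internet-facing system supporting a critical business
# function, where incidents would have severe organisational impact.
print(risk_environment(3, 3, 3))  # high
```

In practice, of course, the determination is a management judgement informed by a risk assessment, not a mechanical score; the point of the sketch is only that the three criteria named in the step combine into a single low / medium / high rating.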

8. The Unified License for providing various telecommunication services also contains certain terms on how to deal with telecommunication infrastructure in light of national security, which include the following requirements:

  • Providing necessary facilities to the Government to counteract espionage, subversive act, sabotage or any other unlawful activity;
  • Giving full access to its network and equipment to the authorised persons for technical scrutiny and inspection;
  • Obtaining security clearance for all foreign nationals deployed for installation, operation and maintenance of the network;
  • Being completely responsible for the security of its network and having organizational policy on security and security management of its network including Network forensics, Network Hardening, Network penetration test, Risk assessment;
  • Auditing its network or getting the network audited from security point of view once in a financial year from a network audit and certification agency;
  • Inducting only those network elements into its telecommunications network which have been tested as per relevant contemporary Indian or international security standards;
  • Including all contemporary security related features (including communication security) as prescribed under relevant security standards while procuring the equipment and implementing all such contemporary features into the network;
  • Keeping requisite records of operations in the network;
  • Monitoring all intrusions, attacks and frauds on its technical facilities and providing reports on the same to the Licensor.

Further, statutory restrictions on tampering with critical infrastructure are already contained in the Telegraph Act and have been discussed above, though the penalties provided may need to be increased if they are to act as a deterrent in this age where the stakes are much higher.

2h. States should respond to appropriate requests for assistance by another State whose critical infrastructure is subject to malicious ICT acts. States should also respond to appropriate requests to mitigate malicious ICT activity aimed at the critical infrastructure of another State emanating from their territory, taking into account due regard for sovereignty.

There is yet to be a publicly acknowledged request from a foreign government asking the Indian government to take steps to prevent malicious ICT acts originating from its territory.

2i. States should take reasonable steps to ensure the integrity of the supply chain so that end users can have confidence in the security of ICT products. States should seek to prevent the proliferation of malicious ICT tools and techniques and the use of harmful hidden functions;

 

Section 4 of the National Policy on Electronics, 2012 deals with "Developing and Mandating Standards" and says that, in order to curb the inflow of sub-standard and unsafe electronic products, the government should mandate technical and safety standards which conform to international standards, and do the following:

  • Develop Indian standards to meet specific Indian conditions including climatic, power supply, and handling and other conditions etc., by suitably reviewing existing standards.
  • Mandate technical standards in the interest of public health and safety.
  • Set up an institutional mechanism within Department of Information Technology for mandating compliance to standards for electronics products.
  • Develop a National Policy Framework for enforcement and use of Standards and Quality Management Processes.
  • Strengthen the lab infrastructure for testing of electronic products and encouraging development of conformity assessment infrastructure by private participation.
  • Create awareness amongst consumers against sub-standard and spurious electronic products.
  • Build capacity within the Government and public sector for developing and mandating standards.
  • Actively participate in the international development of standards in the Electronic System Design and Manufacturing sector.

2j. States should encourage responsible reporting of ICT vulnerabilities and share associated information on available remedies to such vulnerabilities to limit and possibly eliminate potential threats to ICTs and ICT-dependent infrastructure

Under section 70B of the IT Act, India has established a Computer Emergency Response Team (CERT-In) to serve as the national agency for incident responses. The functions mandated to be performed by CERT-In as per the IT Act are:

  • Collection, analysis and dissemination of information on cyber incidents;
  • Forecasting and alerts of cyber security incidents;
  • Emergency measures for handling cyber security incidents;
  • Coordination of cyber incidents response activities;
  • Issuing guidelines, advisories, vulnerability notes and white papers relating to information security practices, procedures, prevention, response and reporting of cyber incidents;
  • Such other functions relating to cyber security as may be prescribed.

CERT-In also publishes information regarding various cyber threats on its website so as to keep internet users aware of the latest threats in the online world. Such information can be accessed either on the main page of the CERT-In website or under the Advisories section of the website. [34]

2k. States should not conduct or knowingly support activity to harm the information systems of the authorized emergency response teams (sometimes known as computer emergency response teams or cyber security incident response teams) of another State. A State should not use authorized emergency response teams to engage in malicious international activity.

There are no official or public reports of India using CERT-In to harm the information systems of another state, although it is highly unlikely that any state would publicly acknowledge such activities even if it were engaging in them.

3. Conclusion

As can be seen from the discussion above, the statutory, regulatory and policy regime in India does address most of the cyber security norms in some manner or the other, but these efforts almost always fall short of fully meeting some of the norms. While the Information Technology Act and the Rules thereunder, being the umbrella legislation for digital transactions in India, address some of the issues mentioned above, they do not address some of the problems that arise out of a greater reliance on the internet, such as spamming, trolling and online harassment. Although some of these acts may be addressed by applying regular legislation in the online world, this does not always take into account the unique features and complexities of committing these acts or crimes online.

In the area of exchange of information between states, India has entered into a number of MLATs and extradition treaties, and frequently issues Letters Rogatory. However, these mechanisms may not be adequate to address the needs of crime prevention in the age of ICT, as crime prevention often requires exchange of information on a real-time basis, which is not possible with the bureaucratic procedures involved in the MLAT process. There also need to be stronger standards applicable to ICT equipment, including imported equipment, especially in light of the fact that security concerns related to Chinese ICT equipment have been raised quite frequently in the past. There also needs to be a better system of reporting ICT vulnerabilities to CERT-In or other authorised agencies so that mitigation measures can be implemented in time.

It should be noted that the work of the Group of Experts is not complete, since the General Assembly has asked the Secretary General to form a new Group of Experts which would report back to the Secretary General in 2017. It is imperative that the Government of India realise the importance of the work being done by the Group of Experts and take measures to ensure that a representative from India is included in it, or at least that the comments and concerns of India are addressed by the Group of Experts. Meanwhile, India can begin by strengthening domestic privacy safeguards, improving the transparency and efficiency of relevant policies and processes, and looking towards solutions that respect rights and strengthen security. Brute force solutions such as demands for back doors, unfair and unreasonable encryption regulation, and data localisation requirements will not help propel India forward in international discussions, dialogues, or agreements on cross-border sharing of information. Though the recommendations from the Group of Experts are welcome, beyond a preliminary mention of privacy and freedom of expression, the rights of individuals, and the ways in which they can be protected, including through redress, transparency and due process measures, were inadequately addressed.


[1] The term "cyberspace" is defined in the Oxford English Dictionary as the notional environment in which communication over computer networks occurs. Although the scope of this paper does not extend to discussing the meaning of this term, it was felt that a simple definition would be useful to better define the parameters of the discussion.

[3] https://www.justsecurity.org/29203/british-searches-america-tremendous-opportunity/

[5] Provided that the provisions of section 67, section 67A and this section does not extend to any book, pamphlet, paper, writing, drawing, painting, representation or figure in electronic form-

(i) The publication of which is proved to be justified as being for the public good on the ground that such book, pamphlet, paper writing, drawing, painting, representation or figure is in the interest of science, literature, art or learning or other objects of general concern; or

(ii) which is kept or used for bona fide heritage or religious purposes

Explanation: For the purposes of this section, "children" means a person who has not completed the age of 18 years.

[7] List of the countries is available at http://cbi.nic.in/interpol/mlats.php

[9] Peter Swire & Justin D. Hemmings, "Re-Engineering the Mutual Legal Assistance Treaty Process", http://www.heinz.cmu.edu/~acquisti/SHB2015/Swire.docx, cf. https://www.lawfareblog.com/mlat-reform-some-thoughts-civil-society .

[10] MLATS and International Cooperation for Law Enforcement Purposes, available at http://cis-india.org/internet-governance/blog/presentation-on-mlats.pdf

[11] The full list of the countries with which India has signed an extradition treaty is available at http://cbi.nic.in/interpol/extradition.php

[20] AIR 1954 SC 300. In para 18 of the Judgment it was held: "A power of search and seizure is in any system of jurisprudence an overriding power of the State for the protection of social security and that power is necessarily regulated by law. When the Constitution makers have thought fit not to subject such regulation to constitutional limitations by recognition of a fundamental right to privacy, analogous to the American Fourth Amendment, we have no justification to import it, into a totally different fundamental right, by some process of strained construction."

[21] AIR 1963 SC 1295. In para 20 of the judgment it was held: "… Nor do we consider that Art. 21 has any relevance in the context as was sought to be suggested by learned counsel for the petitioner. As already pointed out, the right of privacy is not a guaranteed right under our Constitution and therefore the attempt to ascertain the movement of an individual which is merely a manner in which privacy is invaded is not an infringement of a fundamental right guaranteed by Part III."

[22] (1975) 2 SCC 148.

[23] (1994) 6 SCC 632.

[24] (1997) 1 SCC 301.

[31] Rule 4 of the Information Technology (National Critical Information Infrastructure Protection Centre and Manner of Performing Functions and Duties) Rules, 2013.

[32] Since these Guidelines were not publicly released they are not available on any government website. In this paper we have relied on a version available on a private website at http://perry4law.org/cecsrdi/wp-content/uploads/2013/12/Guidelines-For-Protection-Of-National-Critical-Information-Infrastructure.pdf


List of Acronyms

  • ICTs – Information and Communication Technologies
  • GGE – Group of Governmental Experts
  • EU – European Union
  • DLC-ICT – India-Belarus Digital Learning Center
  • IT Act – Information Technology Act, 2000
  • UL – Unified License
  • DEITY – Department of Electronics and Information Technology
  • IT – Information Technology
  • ISO – International Organization for Standardization
  • CERT – Computer Emergency Response Team
  • CERT-In – Indian Computer Emergency Response Team
  • MLAT – Mutual Legal Assistance Treaty
  • CII – Critical Information Infrastructure
  • NCIIPC – National Critical Information Infrastructure Protection Centre
  • NTRO – National Technical Research Organisation
  • NPIT – National Policy on Information Technology
  • CISO – Chief Information Security Officer

Book Review: Apocalypse Now Redux

by Nishant Shah last modified Aug 06, 2016 04:16 AM
My review for Arundhati Roy and John Cusack's new book that captures their encounter with Edward Snowden, 'Things that can and cannot be said' is now out. It's an engaging, if somewhat freewheeling, political critique of the times we live in.

The review was published in the Indian Express on August 6, 2016.


Book: Things That Can and Cannot Be Said
Authors: Arundhati Roy & John Cusack
Publication: Juggernaut
Pages: 132
Price: Rs 250

The title of the book — Things That Can and Cannot be Said — demands an imperative. It is as if Arundhati Roy and John Cusack, aware of their internal turmoil in dealing with a world that is rapidly becoming unintelligible, though not incomprehensible, are demanding an order where none exists. Hence, they are advocating for certainty and assurance, only to undermine it, ironically, through their own freely associative writing that mimics linear time and causative narrative. This deep-seated irony of needing to say something, but knowing that saying it is not going to shine a divining light on the sordid realities of the world that is being managed through the production of grand structures like valorous nation states, virtuous civil societies, the obsequious NGO-isation of radical action, and the persistent neutering of justice through the benign vocabulary of human rights, defines the oeuvre, the politics and the poetics of the book. Written like a scrap book, filled with excerpts from long conversations scattered over time and space, annotated by reminiscences of books read long ago that have seared their imprints on the mind, and events that are simultaneously platitudinous for their status as global landmarks and fiercely personal for the scars that they have left on the minds of the authors, the book remains an engaging, if a somewhat freewheeling, ride into a political critique that makes itself all the more palatable and disconcerting for the levity, irreverence and the dark sense of humour that accompanies it.

Composed in alternating chapters, the first half of the book is about Cusack and Roy laying themselves bare. They spare no words, square no edges, and put their personal, political and collective wounds on display with humble pride and proud humility. Cusack’s experience as a screenplay writer comes in handy — he rescues what could have been a long tirade, into a series of conversations. The familiar narratives are rehistoricised and de-territorialised, put into new contexts while eschewing the older ones, thus providing a large landscape that refers to state-sponsored genocide, structural reorganisation of nation states, the dying edge of political action, the overwhelming but invisible presence of capital, and the dithering state of social justice that treats human beings like things. Cusack, identifying the poetic genius of Roy, gives her centre stage, making her the voice in command.

Roy, for her part, seems to have enjoyed this moment on the soapbox — something that she has been doing quite effectively and provocatively for a national and global audience — and gives it her all. There are moments when the text feels indulgent, when the voice feels a little relentless, when the almost schizophrenic global and historical references become a litany of mixed-up events that might have required further nuance and deeper interpretation. However, the whimsical style of Roy’s narrative, with her sense of what is right, and her demeanour that remains friendly, curious and disarming, saves the text from being heavy handed, even when it does dissolve into cloying poignancy and makes you pause, just so that you can breathe.

Surprisingly, it is the second part of the book, where the two encounter Edward Snowden along with Daniel Ellsberg, the “Snowden of the 1960s” who had leaked the Pentagon papers, that falters. Snowden had jocularly mentioned that Roy was there to “radicalise him”. She does that, but in a way that doesn’t give us anything more than what we already know. While Cusack and Roy were committed to getting to know Snowden beyond his systems-man image, there wasn’t much that they could uncover, either in dialogue or in discourse, that could have told us more, endeared us further to possibly the most over-exposed person in recent times. However, one realises that the genius of the narrative is actually in reminding us how transparent Edward Snowden has become to us. We know all kinds of things about this young man — from his girlfriends past to his actions future, from his values and convictions to his opinion on the NSA watching people’s naked pictures — and yet, what has been missing in the Snowden files, has been the larger arc of global politics, social reordering, and perhaps, a glimpse of the post-nation future that Snowden might have seen in his act of whistleblowing that is going to remain the landmark moment that defines the rest of this century.

Once you have gotten over the fact that this is not a book about Snowden, the expectations are better tailored for what is to come, and suddenly, the long prelude to the meeting falls into place. Snowden matches Roy and Cusack in whimsy, irony, political conviction, and the sacred faith in human values that make you want to give them all a fierce hug of hesitant reassurance. What Snowden says, what Roy and Cusack make of it, and how they leave us, almost abruptly at the end, breathless, unnerved, and severely conflicted about some of the 20th century structures like society, activism, nation states, governance, communication, technologies, sharing and caring is what the book has to be read for. The tight screen-writing skills of Cusack meet the perfect timing of Roy’s prose, and all of it becomes surreal, futuristic and indelibly real when it gets anchored on the physical presence of Snowden, who, in exile, talks achingly of the home that has thrown him out and the home that he can never really call his own.

And while there are lapses — fragments, translations and evocations which might have needed more explanations to have their pedagogic intent shine through — there is no denying that, in all its flaws, much like the narrators, the book manages to first immerse you in the cold shock of a sobering reality, clearly positioning the apocalypse as the now, and then drags you out and wraps you up in a warm blanket, opening up forms of critique, formats of intervention, and functions of political commitment towards saying things that have and have not been said. The book should have, perhaps, been titled what could, would, should have been said, but can’t, won’t, shan’t be said — not because of anything else, but because it seems futile.

New Approaches to Information Privacy – Revisiting the Purpose Limitation Principle

by Amber Sinha last modified Nov 09, 2016 01:54 PM
Article on Aadhaar throwing light on privacy and data protection.

 

This was published in Digital Policy Portal on July 13, 2016.


Introduction

Last year, Mukul Rohatgi, the Attorney General of India, called into question existing jurisprudence of the last 50 years on the constitutional validity of the right to privacy.1 Rohatgi was rebutting the arguments on privacy made against Aadhaar, the unique identity project initiated and implemented in the country without any legislative mandate.2 The question of the right to privacy becomes all the more relevant in the context of events over the last few years—among them, the significant rise in data collection by the state through various e-governance schemes,3 systematic access to personal data by various wings of the state through a host of surveillance and law enforcement initiatives launched in the last decade,4 the multifold increase in the number of Indians online, and the ubiquitous collection of personal data by private parties.5

These developments have led to a call for comprehensive privacy legislation in India and the adoption of the National Privacy Principles as laid down by the Expert Committee led by Justice AP Shah.6 There is some privacy-protecting legislation currently in place, such as the Information Technology Act, 2000 (IT Act), which was enacted to govern digital content and communication and provide legal recognition to electronic transactions. This legislation has provisions that can safeguard—and dilute—online privacy. At the heart of the data protection provisions in the IT Act lie Section 43A and the rules framed under it, i.e., the rules on reasonable security practices and procedures and sensitive personal data or information.7 Section 43A mandates that body corporates which receive, possess, store, deal with, or handle any personal data implement and maintain ‘reasonable security practices’, failing which they are liable to compensate those affected. The rules drafted under this provision also impose a number of data protection obligations on corporations, such as the need to seek consent before collection, to specify the purposes of data collection, and to restrict the use of data to such purposes only. Questions have been raised about the validity of the Section 43A Rules, as they seek to do much more than what is mandated by the parent provision, Section 43A, which only requires entities to maintain reasonable security practices.

Privacy as control?

Even setting aside the issue of legal validity, the kind of data protection framework envisioned by the Section 43A Rules is proving to be outdated in the context of how data is now collected and processed. The focus of the Section 43A Rules—as well as that of draft privacy legislations in India8—is based on the idea of individual control. Most apt is Alan Westin’s definition of privacy: “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”9 Westin and his followers rely on the normative idea of “informational self-determination”: the notion of a pure, disembodied, and atomistic self, capable of making rational and isolated choices in order to assert complete control over personal information. Increasingly, this has proved to be a fiction, especially in a networked society.

Much before the need for governance of information technologies had reached a critical mass in India, Western countries were already dealing with the implications of the use of these technologies on personal data. In 1973, the US Department of Health, Education and Welfare appointed a committee to address this issue, leading to a report called ‘Records, Computers and Rights of Citizens.’10 The Committee’s mandate was to “explore the impact of computers on record keeping about individuals and, in addition, to inquire into, and make recommendations regarding, the use of the Social Security number.” The Report articulated five principles which were to be the basis of fair information practices: transparency; use limitation; access and correction; data quality; and security. Building upon these principles, the Committee of Ministers of the Organization for Economic Cooperation and Development (OECD) arrived at the Guidelines on the Protection of Privacy and Transborder Flows of Personal Data in 1980.11 These principles— Collection Limitation, Data Quality, Purpose Specification, Use Limitation, Security Safeguards, Openness, Individual Participation and Accountability—are what inform most data protection regulations today including the APEC Framework, the EU Data Protection Directive, and the Section 43A Rules and Justice AP Shah Principles in India.

Fred Cate describes the import of these privacy regimes as such:

“All of these data protection instruments reflect the same approach: tell individuals what data you wish to collect or use, give them a choice, grant them access, secure those data with appropriate technologies and procedures, and be subject to third-party enforcement if you fail to comply with these requirements or individuals’ expressed preferences”12

This is in line with Alan Westin’s idea of privacy exercised through individual control. The focus of these principles, therefore, is on empowering individuals to exercise choice, not on protecting individuals from harmful or unnecessary practices of data collection and processing. The author of this article has earlier written13 about the sheer inefficacy of this framework, which places the responsibility on individuals. Other scholars like Daniel Solove,14 Jonathan Obar15 and Fred Cate16 have also written about the failure of the traditional data protection practices of notice and consent. While those essays dealt with the privacy principles of choice and informed consent, this paper will focus on the principle of purpose limitation.

Purpose Limitation and Impact of Big Data

The principle of purpose limitation, or purpose specification, seeks to ensure the following four objectives:

  1. Personal information collected and processed should be adequate and relevant to the purposes for which it is processed.
  2. Entities should collect, process, disclose, make available, or otherwise use personal information only for the stated purposes.
  3. In case of a change in purpose, the data subject needs to be informed and their consent has to be obtained.
  4. After personal information has been used in accordance with the identified purpose, it has to be destroyed as per the identified procedures.

The purpose limitation principle, along with the data minimisation principle—which requires that no more data may be processed than is necessary for the stated purpose—aims to limit the use of data to what has been agreed to by the data subject. These principles are in direct conflict with new technologies which rely on ubiquitous collection and indiscriminate uses of data. The main import of Big Data technologies lies in the inherent value of data, which can be harvested not through the primary purposes of data collection but through various secondary purposes involving repeated processing of the data.17 Further, instead of destroying the data once its purpose has been achieved, the intent is to retain as much data as possible for secondary uses. Importantly, as these secondary uses are of an inherently unanticipated nature, it becomes impossible to account for them at the stage of collection and to provide the choice to the data subject.

Followers of the discourse on Big Data would be well aware of its potential impacts on privacy. De-identification techniques to protect the identities of individuals in datasets face a threat from the increase in the amount of data available, publicly or otherwise, to a party seeking to reverse-engineer an anonymised dataset and re-identify individuals.18 Further, Big Data analytics promise to find patterns and connections that can contribute to the knowledge available to the public to make decisions. What is also likely is that they will reveal insights about people that they would have preferred to keep private.19 In turn, as people become more aware of being constantly profiled through their actions, they will self-regulate and ‘discipline’ their behaviour. This can lead to a chilling effect.20 Meanwhile, Big Data is also fuelling an industry that incentivises businesses to collect more data, as it has a high and growing monetary value. However, Big Data also promises a completely new kind of knowledge that can prove to be revolutionary in fields as diverse as medicine, disaster management, governance, agriculture, transport, service delivery, and decision-making.21 As long as there is a sufficiently large and diverse amount of data, there could be invaluable insights locked in it, access to which can provide solutions to a number of problems. In light of this, it is important to consider what kind of regulatory framework could best facilitate some of the promised benefits of Big Data while mitigating its potential harms. This, coupled with the fact that the existing data protection principles have, by most accounts, run their course, makes the examination of alternative frameworks even more important. Below, this article examines some alternatives proposed to the existing framework of purpose limitation.

Harms-based approach

Some scholars like Fred Cate22 and Daniel Solove23 have argued that the primary focus of data protection law needs to move from control at the stage of data collection to actual use cases. In his article on the failure of Fair Information Practice Principles,24 Cate puts forth a proposal for ‘Consumer Privacy Protection Principles.’ Cate envisions a more interventionist role for data protection authorities, regulating information flows when required in order to protect individuals from risky or harmful uses of information. Cate’s attempt is to extend to data protection the consumer protection law principles of prevention and remedy of harms.

In a re-examination of the OECD Privacy Principles, Cate and Viktor Mayer-Schönberger attempt to discard the restriction of personal data use to only the purposes specified. They felt that restricting the use of personal data to only specified purposes could significantly threaten various research and beneficial uses of Big Data. Instead of articulating a positive obligation of what collected personal data could be used for, they attempt to arrive at a negative obligation of use cases prevented by law. Their working definition of the use specification principle broadens the scope of use cases by only preventing the use of data “if the use is fraudulent, unlawful, deceptive or discriminatory; society has deemed the use inappropriate through a standard of unfairness; the use is likely to cause unjustified harm to the individual; or the use is over the well-founded objection of the individual, unless necessary to serve an over-riding public interest, or unless required by law.”25

While most standards in the above definition have an established understanding in jurisprudence, the concept of unjustified harm is what we are interested in. Any theory of a harms-based approach goes back to John Stuart Mill’s dictum that the only justifiable purpose for exerting power over the will of an individual is to prevent harm to others. Therefore, any regulation that seeks to control or limit the autonomy of individuals (in this case, the ability of individuals to allow data collectors to use their personal data, and the ability of data collectors to do so, without any limitation) must clearly demonstrate the harm to the individuals in question.

Fred Cate articulates the following steps to identify tangible harm and respond to its presence:26

  1. Focus on Use — Actual use of the data should be considered, not mere possession. The assumption is that the collection, possession, or transfer of information does not significantly harm people; rather, it is the use of information following such collection, possession, or transfer that does.
  2. Proportionality — Any regulatory measure must be proportional to the likelihood and severity of the harm identified.
  3. Per se Harmful Uses — Uses which are always harmful must be prohibited by law.
  4. Per se not Harmful Uses — Uses which can be considered inherently not harmful should not be regulated.
  5. Sensitive Uses — In cases where uses are neither per se harmful nor per se not harmful, individual consent must be sought for using the data for those purposes.

Cate’s proposal argues for what is called a ‘use-based system’, which is extremely popular with American scholars. Under this system, data collection itself is not subject to restrictions; rather, only the use of data is regulated. This argument has great appeal both for businesses, which can reduce their overheads significantly if consent obligations are done away with as long as they use the data in ways which are not harmful, and for critics of the current data protection framework which relies on informed consent. Lokke Moerel explains the philosophy of the ‘harms-based approach’ or ‘use-based system’ in the United States by juxtaposing it against the ‘rights-based approach’ in Europe.27 In Europe, the rights of individuals with regard to the processing of their personal data are fundamental human rights, and therefore a precautionary principle is followed, with much greater top-down control upon data collection. In the United States, by contrast, there is a far greater reliance on market mechanisms and self-regulating organisations to check inappropriate processing activities, and government intervention is limited to cases where a clear harm is demonstrable.28

Continuing research by the Centre for Information Policy Leadership under its Privacy Risk Framework Project looks at a system for articulating the harms and risks arising from the use of collected data. It has arrived at a matrix of threats and harms. Threats are categorised as: a) inappropriate use of personal information, and b) personal information in the wrong hands. More importantly for our purposes, harms are divided into: a) tangible harms, which are physical or economic in nature (bodily harm, loss of liberty, damage to earning power and economic interests); b) intangible harms which can be demonstrated (chilling effects, reputational harm, detriment from surveillance, discrimination and intrusion into private life); and c) societal harms (damage to democratic institutions and loss of social trust).29 For any harms-based system, a matrix like the one above needs to emerge clearly so that regulation can focus on mitigating the practices leading to the harms.

Legitimate interests

Lokke Moerel and Corien Prins, in their article “Privacy for Homo Digitalis: Proposal for a new regulatory framework for data protection in the light of Big Data and Internet of Things”,30 use the idea of responsive regulation, which considers empirically observable practices and institutions while determining the regulation and enforcement required. They state that current data protection frameworks—which rely on mandating certain principles of how data has to be processed—are exercised through merely procedural notification and consent requirements. Further, Moerel and Prins feel that data protection law cannot involve a consideration of individual interests alone, but also needs to take into account collective interests. Therefore, the test must be a broader assessment than merely whether the purpose limitation, which articulates the interests of the parties directly involved, is satisfied: it must ask whether a legitimate interest is achieved.

Legitimate interest has been put forth as an alternative to purpose limitation. Legitimate interest is not a new concept: it has been a part of the EU Data Protection Directive and also finds a place in the new General Data Protection Regulation. Article 7(f) of the EU Directive31 provided for legitimate interest, balanced against the interests or fundamental rights and freedoms of the data subject, as the last justifiable ground for use of data. Due to confusion in its interpretation, the Article 29 Working Party, in 2014,32 looked into the role of legitimate interest and arrived at the following factors to determine its presence: a) the status of the individual (employee, consumer, patient) and the controller (employer, company in a dominant position, healthcare service); b) the circumstances surrounding the data processing (contractual relationship of data subject and processor); and c) the legitimate expectations of the individual.

Federico Ferretti has criticised the legitimate interest principle as vague and ambiguous. The balancing of the legitimate interest in using the data against the fundamental rights and freedoms of the data subject gives data controllers some degree of flexibility in determining whether data may be processed; however, this also reduces the legal certainty that data subjects have of their data not being used for purposes they have not agreed to.33 However, it is this paper’s contention that it is not the intent of the legitimate interest criterion but the lack of consensus on its application which creates the ambiguity. Moerel and Prins articulate a test for using legitimate interest which is cognizant of the need to use data for the purposes of Big Data processing, while ensuring that the rights of data subjects are not harmed.

As demonstrated earlier, the processing of data and its underlying purposes have become exceedingly complex, and the conventional tools for describing these processes, privacy notices, are too lengthy, too complex and too numerous to have any meaningful impact.34 The idea of informational self-determination, as contemplated by Westin in American jurisprudence, is not achieved under the current framework. Moerel and Prins recommend five factors35 as relevant in determining legitimate interest. Of the five, the following three are relevant to the present discussion:

  1. Collective Interest — A cost-benefit analysis should be conducted, which examines the privacy implications for the data subjects as well as for society as a whole.
  2. The nature of the data — Rather than having specific categories of data, the nature of the data needs to be assessed contextually to determine legitimate interest.
  3. Contractual relationship and consent not independent grounds — This test has two parts. First, in the case of a contractual relationship between the data subject and the data controller, the more specific the contractual relationship, the more restrictions apply to the use of the data. Second, consent does not function as a separate principle which, once satisfied, need not be revisited. The nature of the consent (the opportunities made available to the data subject, opt-in/opt-out, and others) will continue to play a role in determining legitimate interest.

Conclusion

Replacing the purpose limitation principle with a use-based system as articulated above poses the danger of allowing governments and the private sector to carry out indiscriminate data collection under the blanket guise that any and all data may be of some use in the future. The harms-based approach has many merits, and there is a stark need for greater use of risk assessment techniques and privacy impact assessments in data governance. However, it is important that it merely add to the existing controls imposed at data collection, and not replace them in their entirety. On the other hand, the legitimate interests principle, especially as put forth by Moerel and Prins, is more cognizant of the different factors at play: the inefficacy of the existing purpose limitation principle, the need for businesses to use data for purposes unidentified at the stage of collection, and the need to ensure that the principle is not misused for indiscriminate collection and purposes. However, it also places a much heavier burden on data controllers to take into account various factors before determining legitimate interest. If legitimate interest is to emerge as a realistic alternative to purpose limitation, there needs to be greater clarity on how data controllers must apply this principle.

Endnotes

  1. Prachi Shrivastava, “Privacy not a fundamental right, argues Mukul Rohatgi for Govt as Govt affidavit says otherwise,” Legally India, July 23, 2015, http://www.legallyindia.com/Constitutional-law/privacy-not-a-fundamental-right-argues-mukul-rohatgi-for-govt-as-govt-affidavit-says-otherwise.
  2. Rebecca Bowe, “Growing Mistrust of India’s Biometric ID Scheme,” Electronic Frontier Foundation, May 4, 2012, https://www.eff.org/deeplinks/2012/05/growing-mistrust-india-biometric-id-scheme.
  3. Lisa Hayes, “Digital India’s Impact on Privacy: Aadhaar numbers, biometrics, and more,” Centre for Democracy and Technology, January 20, 2015, https://cdt.org/blog/digital-indias-impact-on-privacy-aadhaar-numbers-biometrics-and-more/.
  4. “India’s Surveillance State,” Software Freedom Law Centre, http://sflc.in/indias-surveillance-state-our-report-on-communications-surveillance-in-india/.
  5. “Internet Privacy in India,” Centre for Internet and Society, http://cis-india.org/telecom/knowledge-repository-on-internet-access/internet-privacy-in-india.
  6. Vivek Pai, “Indian Government says it is still drafting privacy law, but doesn’t give timelines,” Medianama, May 4, 2016, http://www.medianama.com/2016/05/223-government-privacy-draft-policy/.
  7. Information Technology (Intermediaries Guidelines) Rules, 2011,
    http://deity.gov.in/sites/upload_files/dit/files/GSR314E_10511%281%29.pdf.
  8. Discussion Points for the Meeting to be taken by Home Secretary at 2:30 pm on 7-10-11 to discuss the draft Privacy Bill, http://cis-india.org/internet-governance/draft-bill-on-right-to-privacy.
  9. Alan Westin, Privacy and Freedom (New York: Atheneum, 2015).
  10. US Secretary’s Advisory Committee on Automated Personal Data Systems, Records, Computers and the Rights of Citizens, http://www.justice.gov/opcl/docs/rec-com-rights.pdf.
  11. OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, http://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm
  12. Fred Cate, “The Failure of Fair Information Practice Principles,” in Consumer Protection in the Age of the Information Economy, ed. Jane K. Winn (Aldershot: Ashgate, 2006), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1156972.
  13. Amber Sinha and Scott Mason, “A Critique of Consent in Informational Privacy,” Centre for Internet and Society, January 11, 2016, http://cis-india.org/internet-governance/blog/a-critique-of-consent-in-information-privacy.
  14. Daniel Solove, “Privacy Self-Management and the Consent Dilemma,” Harvard Law Review 126 (2013): 1880.
  15. Jonathan Obar, “Big Data and the Phantom Public: Walter Lippmann and the fallacy of data privacy self management,” Big Data and Society 2(2), (2015), doi: 10.1177/2053951715608876.
  16. Supra Note 12.
  17. Supra Note 14.
  18. Paul Ohm, “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization” available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1450006; Arvind Narayanan and Vitaly Shmatikov, “Robust De-anonymization of Large Sparse Datasets” available at https://www.cs.utexas.edu/~shmat/shmat_oak08netflix.pdf.
  19. D. Hirsch, “That’s Unfair! Or is it? Big Data, Discrimination and the FTC’s Unfairness Authority,” Kentucky Law Journal, Vol. 103, available at: http://www.kentuckylawjournal.org/wp-content/uploads/2015/02/103KyLJ345.pdf
  20. A Marthews and C Tucker, “Government Surveillance and Internet Search Behavior”, available at http://ssrn.com/abstract=2412564; Danah Boyd and Kate Crawford, “Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon”, Information, Communication & Society, Vol. 15, Issue 5, (2012).
  21. Scott Mason, “Benefits and Harms of Big Data”, Centre for Internet and Society, available at http://cis-india.org/internet-governance/blog/benefits-and-harms-of-big-data#_ftn37.
  22. Cate, “The Failure of Fair Information Practice Principles.”
  23. Solove, “Privacy Self-Management and the Consent Dilemma,” 1882.
  24. Cate, “The Failure of Fair Information Practice Principles.”
  25. Fred Cate and Viktor Mayer-Schönberger, “Notice and Consent in a world of Big Data,” International Data Privacy Law 3(2), (2013): 69.
  26. Solove, “Privacy Self-Management and the Consent Dilemma,” 1883.
  27. Lokke Moerel, “Netherlands: Big Data Protection: How To Make The Draft EU Regulation On Data Protection Future Proof”, Mondaq, March 11, 2014, http://www.mondaq.com/x/298416/data+protection/Big+Data+Protection+How+To+Make+The+Dra%20ft+EU+Regulation+On+Data+Protection+Future+Proof%20al%20Lecture.
  28. Moerel, “Netherlands: Big Data Protection.”
  29. Centre for Information Policy Leadership, “A Risk-based Approach to Privacy: Improving Effectiveness in Practice,” Hunton and Williams LLP, June 19, 2014, https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/white_paper_1-a_risk_based_approach_to_privacy_improving_effectiveness_in_practice.pdf.
  30. Lokke Moerel and Corien Prins, “Privacy for Homo Digitalis: Proposal for a new regulatory framework for data protection in the light of Big Data and Internet of Things”, Social Science Research Network, May 25, 2016, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2784123.
  31. EU Directive 95/46/EC – The Data Protection Directive, https://www.dataprotection.ie/docs/EU-Directive-95-46-EC-Chapter-2/93.htm.
  32. Article 29 Data Protection Working Party, “Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC,” http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp217_en.pdf.
  33. Federico Ferretti, “Data protection and the legitimate interest of data controllers: Much ado about nothing or the winter of rights?,” Common Market Law Review 51 (2014): 1-26, http://bura.brunel.ac.uk/bitstream/2438/9724/1/Fulltext.pdf.
  34. Sinha and Mason, “A Critique of Consent in Informational Privacy.”
  35. Moerel and Prins, “Privacy for Homo Digitalis.”

Policy Brief on the Report of the UN Group of Governmental Experts on ICT

by Elonnai Hickok and Vipul Kharbanda — last modified Aug 23, 2016 03:37 PM
In light of the complex challenges and threats posed to, and by, the field of information and telecommunications in cyberspace, in 1998 a draft resolution was introduced in the First Committee of the UN General Assembly and adopted without a vote (A/RES/53/70). Since then, the General Assembly has invited annual reports on the issue from the Secretary-General.

The most recent report, Developments in the Field of Information and Telecommunications in the Context of International Security, was published in June 2015. The 2015 Report touches upon a number of issues, including international cooperation, norms and principles for responsible state behavior, confidence building measures, cross-border exchange of information, and capacity building measures.

Annual reports will continue to be accepted by the General Assembly, and the 2016/2017 Group of Governmental Experts will have its first meeting in August 2016. India was a member of the Group of Governmental Experts in 2013.

The Centre for Internet and Society (CIS) has written an article analyzing India’s alignment with the recommendations of the report of the Group of Governmental Experts. This policy brief attempts to articulate the major policy actions that may be considered by India to further incorporate and implement the principles enunciated in the Report.

CIS believes that the report of the Group of Governmental Experts provides important minimum standards that countries could adhere to in light of challenges to international security posed by ICT developments. Given the global nature of these challenges and the need for nations to holistically address such challenges from a human rights and security perspective, CIS believes that the Group of Governmental Experts and similar international forums are useful and important forums for India to continue to actively engage with.

Below are our specific recommendations:

(a) Consistent with the purposes of the United Nations, including to maintain international peace and security, States should cooperate in developing and applying measures to increase stability and security in the use of ICTs and to prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security;

India has entered into treaties on ICT issues with countries such as Belarus, Canada, China, Egypt, and France. Additionally, India’s IT Act addresses a number of the cyber crimes listed in the Budapest Convention. However, India is not yet a signatory to the Convention. This leaves scope for India to consider further forums and means of international cooperation to better realise this principle.

India has been invited to accede to the Budapest Convention in the past but, for various tactical and political reasons, has not yet agreed to do so. Although whether or not to accede to an international convention is usually a well-discussed and thought-out policy decision of a country’s diplomatic corps, the Convention’s mutual assistance framework, however flawed it may be, would offer India a better opportunity for international cooperation in increasing the stability and security of ICTs and preventing harmful ICT practices, as envisaged in the Report of the Group of Governmental Experts.

(b) In case of ICT incidents, States should consider all relevant information, including the larger context of the event, the challenges of attribution [of cybercrime] in the ICT environment and the nature and extent of the consequences;

While the Department of Electronics and Information Technology (DEITY) as well as the Computer Emergency Response Team, India (CERT-In) have a number of policies which talk about maintaining security and means of addressing threats in the ICT environment, most ICT incidents, crimes or illegal activities using ICT, unless they involve large or government institutions, are handled by the regular police establishment of the country. The lack of capacity, both in terms of infrastructure and skill, of the regular police to adequately address most cyber crimes is an area that needs to be strengthened. The need for cyber security capacity building in India was highlighted in 2015 by the Standing Committee on Information Technology. It would be useful for dedicated cyber crime departments to be established in all districts. This would be a step in the right direction towards providing the requisite capacity and resources to deal with the various technical issues, such as attribution and jurisdiction, arising out of ICT incidents.

(d) States should consider how best to cooperate to exchange information, assist each other, prosecute terrorist and criminal use of ICTs and implement other cooperative measures to address such threats. States may need to consider whether new measures need to be developed in this respect;

Owing to the growing irrelevance of physical and political borders in the age of globally networked devices, one of the most important issues arising out of ICTs and cyber crimes is the need for greater and more efficient exchange of information between nations. It has been widely accepted that sharing of information on a regular and sustained basis between nation states would be a very important tool. Limitations in the traditional mechanisms (MLATs, Letters Rogatory, etc.), such as delays in accessing information and denial of access due to differences in legal standards, present hurdles to the efficacy of law enforcement agencies, and only emphasize the urgency of developing a new mechanism of international information sharing that would be able to deal with ICT incidents while at the same time protecting the freedoms and privacy rights of the citizens of the world. Exploration of, and participation in, the dialogues and solutions evolving at the international level around cross-border sharing of information is key.

(i) States should take reasonable steps to ensure the integrity of the supply chain [of ICT equipment] so that end users can have confidence in the security of ICT products. States should seek to prevent the proliferation of malicious ICT tools and techniques and the use of harmful hidden functions;

While the National Electronics Policy of 2012 states that the government should mandate technical and safety standards in order to curb the inflow of sub-standard and unsafe electronic products, the government is yet to mandate any broad standards in the Indian market for ICT equipment. Considering the enormous security implications of compromised ICT equipment, this is an area the government should prioritize and act on immediately. Mandating standards may require the establishment of a monitoring or enforcement mechanism to ensure that the standards are being implemented. This should be done with the aim of ensuring security while not hindering innovation or the flow of business. To achieve such a balance, research and discussion are needed within the government to formulate a mechanism which would ensure the safety and quality of ICT tools while at the same time ensuring that industry is not hindered.

Conclusion

The suggestions given above are some of the major lessons from the analysis of the UN Report on ICT which CIS believes the government of India could adopt and pursue to strengthen its alignment with the recommendations of the Report. It is also imperative that the Government of India continues to recognise the importance of the work being done by the Group of Governmental Experts and takes measures to ensure that a representative from India is included in future Groups. Meanwhile, India can take positive steps by strengthening domestic privacy safeguards, improving the transparency and efficiency of relevant policies and processes, and looking towards solutions that respect rights and strengthen security.

Report on Understanding Aadhaar and its New Challenges

by Japreet Grewal, Vanya Rakesh, Sumandro Chattapadhyay, and Elonnai Hickok — last modified Mar 16, 2019 04:42 AM
The Trans-disciplinary Research Cluster on Sustainability Studies at Jawaharlal Nehru University collaborated with the Centre for Internet and Society and other individuals and organisations to organise a two-day workshop on “Understanding Aadhaar and its New Challenges” at the Centre for Studies in Science Policy, JNU, on May 26 and 27, 2016. The objective of the workshop was to bring together experts from various fields who have been rigorously following the developments in the Unique Identification (UID) Project, so as to align their perspectives and develop a shared understanding of the status of the UID Project and its impact. Through this exercise, the workshop also sought to develop a plan of action to address the welfare exclusion issues that have arisen due to the implementation of the UID Project.

 

Report: Download (PDF)


This Report is a compilation of the observations made by participants at the workshop relating to the myriad issues under the UID Project and the various strategies that could be pursued to address these issues. In this Report, we have classified the observations and discussions into the following themes:

1. Brief Background of the UID Project

2. Legal Status of the UIDAI Project

3. National Identity Projects in Other Jurisdictions

4. Technologies of Identification and Authentication

5. Aadhaar for Welfare?

6. Surveillance and UIDAI

7. Strategies for Future Action

Annexure A Workshop Agenda

Annexure B Workshop Participants


1. Brief Background of the UID Project

In 2009, the UIDAI was established and the UID project was conceived by the Planning Commission under the UPA government to provide a unique identification for each resident in India, to be used for the delivery of government welfare services in an efficient and transparent manner, as well as a tool to monitor government schemes. The objective of the scheme has been the issuance of a unique identification number by the Unique Identification Authority of India, which can be authenticated and verified online. It was conceptualized and implemented as a platform to facilitate identification, avoid fake identities, and enable the delivery of government benefits based on the demographic and biometric data available with the Authority.

The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (the “Act”) was passed as a money bill on March 16, 2016 and was notified in the gazette on March 25, 2016 upon receiving the assent of the President. However, no date of enforcement has been notified, due to which the Act has not yet come into force.

The Act provides that the Aadhaar number can be used to validate a person's identity, but it cannot be used as proof of citizenship. Also, the government can make it mandatory for a person to authenticate her/his identity using the Aadhaar number before receiving any government subsidy, benefit, or service. At the time of enrolment, the enrolling agency is required to give notice to the individual of how the information will be used, the types of entities it will be shared with, and the individual's right to access it. Consent of the individual must be obtained for the use of his/her identity information during enrolment as well as authentication, and the individual must be informed of the nature of information that may be shared. The Act clearly lays down that the identity information of a resident shall not be used for any purpose other than that specified at the time of authentication, and that disclosure of information can be made only pursuant to an order of a court not inferior to that of a District Judge, or in the interest of national security.

2. Legal Status of the UIDAI Project

In this section, we have summarised the discussions on the procedural issues with the passage of the Act. The participants had criticised the passage of the Act as a money bill in the Parliament. The participants also assessed the litigation pending in the Supreme Court of India that would be affected by this law. These discussions took place in the session titled, ‘Current Status of Aadhaar’ and have been summarised below.

Procedural Issues with Passage of the Act

The participants contested the introduction of the Act in the form of a money bill. The rationale behind this was explained at the session and is briefly set out here. Article 110(1) of the Constitution of India defines a money bill as one containing provisions only regarding the following matters, or any matters incidental to them: a) imposition, regulation and abolition of any tax; b) borrowing or other financial obligations of the Government of India; c) custody of, withdrawal from or payment into the Consolidated Fund of India (CFI) or the Contingency Fund of India; d) appropriation of money out of the CFI; e) expenditure charged on the CFI; or f) receipt, custody or audit of money into the CFI or the public account of India. The Act makes references to benefits, subsidies and services funded by the CFI; however, the main objective of the Act is to create a right to obtain a unique identification number and to provide a statutory mechanism to regulate this process. The Act merely establishes an identification mechanism (the Aadhaar number) that facilitates the distribution of benefits and subsidies funded by the CFI, and this does not give it the character of a money bill. Further, money bills can be introduced only in the Lok Sabha, and the Rajya Sabha cannot make amendments to such bills passed by the Lok Sabha. The Rajya Sabha can suggest amendments, but it is the Lok Sabha's choice to accept or reject them. This leaves the Rajya Sabha with no effective role to play in the passage of the bill.

The participants also briefly examined the writ petition filed by former Union minister Jairam Ramesh challenging the constitutionality and legality of the treatment of this Act as a money bill, which has raised the question of the judiciary's power to review the decisions of the Speaker. Article 122 of the Constitution of India bars courts from questioning the validity of parliamentary proceedings on the ground of alleged irregularity of procedure. The question remains whether the Supreme Court will rule that it can nonetheless examine the constitutionality of the Speaker's decision on the manner in which the Act was introduced in the Lok Sabha. A few participants mentioned that similar circumstances had arisen in Mohd. Saeed Siddiqui v. State of U.P. [1], where the Supreme Court refused to interfere with the decision of the Uttar Pradesh Legislative Assembly Speaker certifying as a money bill an amendment bill to increase the tenure of the Lokayukta, despite the fact that the bill amended the Uttar Pradesh Lokayukta and Up-Lokayuktas Act, 1975, which had been passed as an ordinary bill by both houses. The Court held that the decision of the Speaker was final and that the proceedings of the legislature, being an important legislative privilege, could not be inquired into by courts. The Court added: “the question whether a bill is a money bill or not can be raised only in the state legislative assembly by a member thereof when the bill is pending in the state legislature and before it becomes an Act.”

However, it is necessary to draw a distinction between the Rajya Sabha and a State Legislative Council. Unlike a State Legislative Council, the constitution of the Rajya Sabha is not optional, and the significance of the two bodies in the parliamentary process therefore cannot be considered the same. Participants also made another significant observation about a similar bill on the UID project, the National Identification Authority of India (NIDAI) Bill, which had been introduced earlier by the UPA government in 2010 and was deemed unacceptable by the Standing Committee on Finance, headed by Yashwant Sinha. That bill was subsequently withdrawn.

Status of Related Litigation

A panellist in this session briefly summarised all the litigation that was related to or would be affected by the Act. The panellist also highlighted several Supreme Court orders in the case of KS Puttuswamy v. Union of India [2] which limited the use of Aadhaar. We have reproduced the presentation below.

  • KS Puttuswamy v. Union of India - This petition was filed in 2012 with primary concern about providing Aadhaar numbers to illegal immigrants in India. It was contended that this could not be done without a law establishing the UIDAI and amendment to the Citizenship laws. The petitioner raised concerns about privacy and fallibility of biometrics.
  • Sudhir Vombatkere & Bezwada Wilson [3] - This petition was filed in 2013 on grounds of infringement of right to privacy guaranteed under Article 21 of the Constitution of India and the security threat on account of data convergence.
  • Aruna Roy & Nikhil Dey [4] - This petition was filed in 2013 on the grounds of large-scale exclusion of people from access to basic welfare services caused by UID. After this petition, a number of intervention applications were filed, including the following:
  • Col. Mathew Thomas [5] - This petition was filed on the grounds of threat to national security posed by the UID project particularly in relation to arrangements for data sharing with foreign companies (with links to foreign intelligence agencies).
  • Nagrik Chetna Manch [6] - This petition was filed in 2013 and led by Dr. Anupam Saraph on the grounds that the UID project was detrimental to financial service regulation and financial inclusion.
  • S. Raju [7] - This petition was filed on the grounds that the UID project had implications on the federal structure of the State and was detrimental to financial inclusion.
  • Beghar Foundation - This petition was filed in 2013 in the Delhi High Court on the grounds of invasion of privacy and exclusion, specifically in relation to the homeless. It subsequently joined the petition filed by Aruna Roy and Nikhil Dey as an intervener.
  • Vickram Crishna – This petition was originally filed in the Bombay High Court in 2013 on the grounds of surveillance and invasion of privacy. It was later transferred to the Supreme Court.
  • Somasekhar – This petition was filed on the grounds of procedural unreasonableness of the UID project and also exclusion & privacy. The petitioner later intervened in the petition filed by Aruna Roy and Nikhil Dey in 2013.
  • Rajeev Chandrashekhar - This petition was filed on the ground of lack of legal sanction for the UID project. He later intervened in the petition filed by Aruna Roy and Nikhil Dey in 2013. His position has since changed.
  • Further, a petition was filed by Mr. Jairam Ramesh initially challenging the passage of the Act as a money bill; it has subsequently been amended to include issues of violation of the right to privacy and exclusion of the poor, and advocates for the five amendments to the Aadhaar Bill suggested by the Rajya Sabha.

Relevant Orders of the Supreme Court

There are six orders of the Supreme Court which are noteworthy.

  • Order of September 23, 2013 - The Supreme Court directed that: 1) no person shall suffer for not having an aadhaar number, even if a circular by an authority makes it mandatory; 2) it should be checked whether a person applying voluntarily for an aadhaar number is entitled to it under the law; and 3) precaution should be taken that it is not issued to illegal immigrants.
  • Order of 26th November, 2013 – Applications were filed by UIDAI, Ministry of Petroleum & Natural Gas, Govt of India, Indian Oil Corporation, BPCL and HPCL for modifying the September 23rd order and sought permission from the Supreme Court to make aadhaar number mandatory. The Supreme Court held that the order of September 23rd would continue to be effective.
  • Order of 24th March, 2014 - This order was passed by the Supreme Court in a special leave petition filed in the case of UIDAI v CBI [8], wherein UIDAI had been asked to share the biometric information of all residents of a particular place in Goa to facilitate a criminal investigation involving charges of rape and sexual assault. The Supreme Court restrained UIDAI from transferring any biometric information of an individual to any other agency without his consent in writing. The Supreme Court also directed all authorities to modify their forms, circulars and the like so as not to make the aadhaar number mandatory.
  • Order of 16th March, 2015 - The SC took notice of widespread violations of the order passed on September 23rd, 2013 and directed the Centre and the states to adhere to these orders to not make aadhaar compulsory.
  • Orders of August 11, 2015 - In the first order, the Central Government was directed to publicise the fact that aadhaar was voluntary. The Supreme Court further held that the provision of benefits due to a citizen of India would not be made conditional upon obtaining an aadhaar number, and restricted the use of aadhaar to the PDS Scheme, in particular the distribution of foodgrains and cooking fuel such as kerosene, and the LPG Distribution Scheme. The Supreme Court also held that information of an individual collected in order to issue an aadhaar number would not be used for any purpose except when directed by the Court for criminal investigations. Separately, the status of the fundamental right to privacy was contested, and the Supreme Court accordingly directed that the issue be taken up before the Chief Justice of India.
  • Orders of October 16, 2015 - The Union of India, the states of Gujarat, Maharashtra, Himachal Pradesh and Rajasthan, and authorities including SEBI, TRAI, CBDT, IRDA and RBI applied for a hearing before the Constitution Bench to modify the order passed by the Supreme Court on August 11 and allow the use of the aadhaar number in schemes like the Mahatma Gandhi National Rural Employment Guarantee Scheme (MGNREGS), the National Social Assistance Programme (Old Age Pensions, Widow Pensions, Disability Pensions), the Prime Minister's Jan Dhan Yojana (PMJDY) and the Employees' Provident Fund Organisation (EPFO). The Bench allowed the use of the aadhaar number for these schemes but stressed the need to keep the aadhaar scheme voluntary until the matter was finally decided.

Status of These Orders

The participants discussed the possible impact of the law on the operation of these orders. A participant pointed out that the matters in the Supreme Court had not become infructuous, because the fundamental issues being heard there had not been resolved by the passage of the Act. Several participants believed that the aforementioned orders remained effective because the law had not come into force; therefore, the aadhaar number could only be used for purposes specified by the Supreme Court and could not be made mandatory. Participants also highlighted that even when the Act was implemented, it would not nullify the orders of the Supreme Court unless the Union of India specifically asked the Supreme Court to modify them and the Court sanctioned that.

3. National Identity Projects in Other Jurisdictions

In the session titled ‘Aadhaar - International Dimensions’, a panellist provided a brief overview of similar identification programmes launched in recent years in other jurisdictions, including Pakistan, the United Kingdom, France, Estonia and Argentina. The presentation mainly sought to assess the incentives that drove the governments in these jurisdictions to formulate these projects, whether their adoption was mandatory, and how popular they proved. The Report reproduces the presentation here.

Pakistan

The National Database and Registration Authority (NADRA), established in Pakistan in 2000, regulates government databases and manages the sensitive registration database of the citizens of Pakistan. It is also responsible for issuing national identity cards to citizens. Although the card is not legally compulsory for a Pakistani citizen, it is mandatory for:

  • Voting
  • Obtaining a passport
  • Purchasing vehicles and land
  • Obtaining a driver licence
  • Purchasing a plane or train ticket
  • Obtaining a mobile phone SIM card
  • Obtaining electricity, gas, and water
  • Securing admission to college and other post-graduate institutes
  • Conducting major financial transactions

Therefore, the card is effectively necessary for basic civic life in the country. In 2012, NADRA introduced the Smart National Identity Card, an electronic identity card implementing 36 security features. The following information can be found on the card, and correspondingly in the central database: Legal Name, Gender (male, female, or transgender), Father's Name (Husband's Name for married females), Identification Mark, Date of Birth, National Identity Card Number, Family Tree ID Number, Current Address, Permanent Address, Date of Issue, Date of Expiry, Signature, Photo, and Fingerprint (Thumbprint). NADRA also records the applicant's religion, but this is not noted on the card itself. The system remains operational in Pakistan.

United Kingdom

The Identity Cards Act was introduced in the wake of the terrorist attacks of 11 September 2001, amidst rising concerns about identity theft and the misuse of public services. The card was to be used to obtain social security services, but the ability to reliably tie a person to their true identity was central to the proposal, with wider implications for the prevention of crime and terrorism. The cards were linked to a central database, the National Identity Register (NIR), which would store information about all holders of the cards. The concerns raised by human rights lawyers, activists, security professionals, IT experts and politicians had less to do with the cards than with the NIR. The Act specified 50 categories of information that the NIR could hold, including up to 10 fingerprints, a digitised facial scan and iris scan, and the current and past UK and overseas places of residence of all residents of the UK throughout their lives. The central database was said to be a prime target for cyber attacks, as well as a violation of the right to privacy of UK citizens. The Act was passed by the Labour Government in 2006, and repealed by the Conservative-Liberal Democrat Coalition Government as part of its measures to “reverse the substantial erosion of civil liberties under the Labour Government and roll back state intrusion.”

Estonia

The Estonian ID card is a smart card issued to Estonian citizens by the Police and Border Guard Board. All Estonian citizens and permanent residents are legally obliged to possess this card from the age of 15. The card stores data such as the user's full name, gender, national identification number, and cryptographic keys and public key certificates. Since 15 December 2000, a digital signature created with the card has been legally equivalent to a handwritten signature. The following are a few examples of what the card is used for:

  • As a national ID card for legal travel within the EU for Estonian citizens
  • As the national health insurance card
  • As proof of identification when logging into bank accounts from a home computer
  • For digital signatures
  • For i-voting
  • For accessing government databases to check one’s medical records, file taxes, etc.
  • For picking up e-Prescriptions

The system remains operational in Estonia.

France

The biometric ID card proposed in France was to include a compulsory chip containing personal information such as fingerprints, a photograph, home address, height, and eye colour. A second, optional chip was to be implemented for online authentication and electronic signatures, to be used for e-government services and e-commerce. The law was passed with the purpose of combating “identity fraud”. It was referred to the Constitutional Council by more than 200 members of the French Parliament, who challenged the compatibility of the bill with citizens' fundamental rights, including the right to privacy and the presumption of innocence. The Council struck down the law, citing the issue of proportionality: “Regarding the nature of the recorded data, the range of the treatment, the technical characteristics and the conditions of consultation, the provisions of article 5 touch the right to privacy in a way that cannot be considered proportionate to the intended purpose.”

Argentina

The Documento Nacional de Identidad or DNI (National Identity Document) is the main identity document for Argentine citizens, as well as for temporary or permanent resident aliens. It is issued at a person's birth and updated at 8 and 14 years of age, in a single format: a card (DNI tarjeta). It is valid wherever identification is required, and is required for voting. The front side of the card states the name, sex, nationality, specimen issue, date of birth, date of issue, date of expiry, and transaction number, along with the DNI number and the portrait and signature of the card's bearer. The back side shows the address of the bearer along with their right thumb fingerprint. The front side of the DNI also shows a barcode, while the back shows machine-readable information. The DNI is a valid travel document for entering Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Paraguay, Peru, Uruguay, and Venezuela. The system remains operational in the country.

4. Technologies of Identification and Authentication

The panel in the session titled ‘Aadhaar: Science, Technology, and Security’ explained the technical aspects of use of biometrics and privacy concerns, technology architecture for identification and inadequacy of infrastructure for information security. In this section, we have summarised the presentation and the ensuing discussions on these issues.

Use of Biometric Information for Identification and Authentication

The panellists explained with examples that identification and authentication are different things. Identity provides an answer to the question “who are you?”, while authentication is a challenge-response process that provides proof of a claim of identity. Common examples of identity are user IDs (login IDs), cryptographic public keys and ATM or smart cards, while common authenticators are passwords (including OTPs), PINs and cryptographic private keys. Identity is public information, but an authenticator must be private and known only to the user. Authentication must necessarily be a conscious process, with active participation by the user, and it should always be possible to revoke an authenticator.

After providing this understanding of the two processes, the panellist examined whether biometric information could be used for identification or authentication under the UID Project. Biometric information is clearly public information, and it is questionable whether it can be revoked; therefore it should never be used for authentication, but only for identity verification. Under the UID Project there is a possibility of authentication by fingerprints without the conscious participation of the user: one could lift the fingerprints of an individual from any surface the individual has touched. Authentication must therefore be done by other means. The panellist pointed out that there were five kinds of authentication under the UID Project, of which two-factor authentication and one-time passwords were considered suitable, but the use of biometric and demographic information was extremely threatening and must be withdrawn.
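The challenge-response notion described by the panellists can be sketched in a few lines. This is a minimal illustration of the general pattern, not the UID Project's actual protocol; all names and key sizes are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    # The verifier sends a fresh random nonce so responses cannot be replayed.
    return secrets.token_bytes(16)

def respond(authenticator: bytes, challenge: bytes) -> bytes:
    # The user proves knowledge of the private authenticator without revealing it.
    return hmac.new(authenticator, challenge, hashlib.sha256).digest()

def verify(authenticator: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(authenticator, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The identity (a user ID) is public; the authenticator stays private and,
# unlike a fingerprint, can be revoked and reissued at any time.
user_secret = secrets.token_bytes(32)  # e.g. a PIN-derived key, never sent on the wire
challenge = issue_challenge()
assert verify(user_secret, challenge, respond(user_secret, challenge))
```

The sketch also shows why biometrics fail as authenticators: a fingerprint cannot be kept secret like `user_secret`, and it cannot be replaced if compromised.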

Architectures of Identification

A panellist explained the architecture of the UID Project that has been designed for identification purposes, highlighted its limitations and suggested alternatives. His explanations are reproduced below.

Under the UID Project, there is a centralised means of identification: the aadhaar number and biometric information are stored in one place, the Central Identities Data Repository (CIDR). For the preservation of our civil liberties, it is better to have multiple means of identification than one, as contemplated under the UID Project. The question is what the available alternatives are. A web of trust is one way of operationalising distributed identification, but the challenge is how to bring people from all social levels to participate in it. Registrars who sign keys, and public databases for this purpose, would be needed.
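The web-of-trust idea can be pictured as a graph of signatures: an identity is accepted if a chain of signatures connects it to a party you already trust. The toy sketch below illustrates only this graph structure; real systems such as OpenPGP add key verification, trust levels and revocation, and all names here are hypothetical.

```python
# Toy web of trust: signatures[a] is the set of identities that a has signed.
signatures = {
    "registrar": {"alice", "bob"},
    "alice": {"carol"},
    "bob": set(),
    "carol": set(),
}

def is_trusted(identity: str, trusted_root: str) -> bool:
    """Search for a chain of signatures from the trusted root to the identity."""
    seen, frontier = {trusted_root}, [trusted_root]
    while frontier:
        signer = frontier.pop()
        for signee in signatures.get(signer, set()):
            if signee == identity:
                return True
            if signee not in seen:
                seen.add(signee)
                frontier.append(signee)
    return False

print(is_trusted("carol", "registrar"))  # True: registrar signed alice, alice signed carol
```

Unlike a central repository, no single database holds everyone's identity: trust is assembled from many independent signatures, which is the civil-liberties argument the panellist makes for distributed identification.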

The aadhaar number functions as a common index and facilitates correlation of data across Government databases. While this is tremendously attractive it raises several privacy concerns as more and more information relating to an individual is available to others and is likely to be abused.

The aadhaar number is available in human readable form. This raises the risk of identification without consent and unauthorised profiling. It cannot be revoked. Potential for damage in case of identity theft increases manifold.

Under the UID Project, for the purpose of information security, Authentication User Agencies (“AUA”) are required to use local identifiers instead of aadhaar numbers, but they are also required to map these local identifiers to the aadhaar numbers. Aadhaar numbers are not cryptographically secured; in fact they are publicly available. Hence this exercise does little to secure the information. An alternative would be to issue different identifiers for different domains and cryptographically embed a “master identifier” (in this case, the equivalent of the aadhaar number) into each local identifier.
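The alternative described above, deriving unlinkable per-domain identifiers from a single master identifier, can be sketched with a keyed hash. This is an illustration of the idea only, not UIDAI's design; the domain names, keys and the 12-digit placeholder number are all hypothetical.

```python
import hashlib
import hmac

def domain_identifier(master_id: str, domain: str, domain_key: bytes) -> str:
    """Derive a per-domain identifier that cryptographically embeds the
    master identifier: without the domain key, identifiers issued in two
    different domains cannot be linked back to the same person."""
    message = f"{domain}:{master_id}".encode()
    return hmac.new(domain_key, message, hashlib.sha256).hexdigest()

# Hypothetical per-domain secrets, each held only by the issuing authority.
key_health = b"secret-key-held-by-health-dept"
key_bank = b"secret-key-held-by-bank"

master = "123456789012"  # placeholder 12-digit master identifier
id_health = domain_identifier(master, "health", key_health)
id_bank = domain_identifier(master, "bank", key_bank)
assert id_health != id_bank  # the two local identifiers are unlinkable without the keys
```

The point of the design is that a breach of one domain's database reveals nothing that correlates with records held in another domain, unlike a plaintext aadhaar number used as a common index.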

All field devices (for example POS machines) should be registered and must communicate directly with UIDAI. In fact, UIDAI must verify the authenticity (tamper proof) of the field device during run time and a UIDAI approved authenticity certificate must be issued for field devices. This certificate must be made available to users on demand. Further, the security and privacy frameworks within which AUAs work must be appropriately defined by legal and technical means.

Security Infrastructure of CIDR

The panellists also enumerated the security features of the UID Project and highlighted the flaws in these features. These are summarised below.

The security and privacy infrastructure of UIDAI has the following main features:

  • 2048 bit PKI encryption of biometric data in transit
  • End-to-end encryption from enrolment/POS to CIDR
  • HMAC based tamper detection of PID blocks
  • Registration and authentication of AUAs
  • Within the CIDR, only a SHA-1 hash of the Aadhaar number is stored
  • Audit trails are stored as SHA-1 hashes (tamper detection unclear)
  • Only hashes of passwords and PINs are stored (though biometric data is stored in original form!)
  • Authentication requests have unique session keys and HMAC
  • Resident data stored using 100-way sharding (vertical partitioning), with the first two digits of the Aadhaar number as shard keys
  • All enrolment and update requests link to partitioned databases using Ref IDs (coded indices)
  • All accesses through a hardware security module
  • All analytics carried out on anonymised data
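Two of the listed features, storing only a hash of the Aadhaar number and 100-way sharding keyed on its first two digits, can be illustrated with a small sketch. This is purely illustrative, not UIDAI code, and the 12-digit number is a placeholder.

```python
import hashlib

NUM_SHARDS = 100

def shard_key(aadhaar_number: str) -> int:
    # The first two digits of the number select one of 100 vertical partitions.
    return int(aadhaar_number[:2]) % NUM_SHARDS

def stored_form(aadhaar_number: str) -> str:
    # Per the listed design, only a SHA-1 hash of the number is kept in the CIDR.
    return hashlib.sha1(aadhaar_number.encode()).hexdigest()

record = "234512345678"  # placeholder 12-digit number
print(shard_key(record))         # → 23
print(len(stored_form(record)))  # → 40 (a SHA-1 hex digest is 40 characters)
```

Note that SHA-1 is no longer considered collision-resistant, and a hash of a 12-digit number can be reversed by brute force over at most 10^12 candidates, which echoes the design concerns the panellists raise below.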

The panellists pointed out concerns about information security arising from design flaws, a lack of procedural safeguards, the openness of the system and the amount of trust placed in multiple players. All symmetric and private keys and hashes are stored somewhere within UIDAI, which indicates that trust is implicitly assumed: a glaring design flaw. There is no well-defined approval procedure for data inspection, whether for investigation or for data analytics. There is a likelihood of system hacks, insider leaks, and tampering of authentication records and audit trails; the ensuing discussions highlighted that the UIDAI had admitted to these security risks. The enrolment agencies and enrolment devices cannot be trusted, and AUAs cannot be trusted with biometric and demographic data, nor with sensitive user data of a private nature. There is a need for an independent third-party auditor responsible for distributed key management; auditing and approving UIDAI programs, including those for data inspection and analytics; whitebox cryptographic compilation of critical parts of the UIDAI programs; issue of cryptographic keys to UIDAI programs for functional encryption; and challenge-response for run-time authentication and certification of UIDAI programs. The panellist recommended putting in place a suitable legal framework to enable this.

The participants also discussed that the information infrastructure must not be built on proprietary software (which raises the possibility of backdoors accessible to the US), and that there must be a third-party audit with a non-negotiable clause for public audit.

5. Aadhaar for Welfare?

The Report has summarised the discussions that took place in the sessions on ‘Direct Benefits Transfers’ and ‘Aadhaar: Broad Issues - II’ where the panellists critically analysed the claims of benefits and inclusion of Aadhaar made by the government in light of the ground realities in states where Aadhaar has been adopted for social welfare schemes.

Social Welfare: Modes of Access and Exclusion

Under the Act, a person may be required to authenticate, or give proof of, the aadhaar number in order to receive a subsidy from the government (Section 7). A person is required to place their fingerprints on point-of-sale (POS) machines in order to receive their entitlements under social welfare schemes such as LPG and PDS. It was pointed out in the discussions that various states, including Rajasthan and Delhi, had witnessed fingerprint errors while disbursing benefits at ration shops under the PDS scheme. People have failed to receive their entitled benefits because of these fingerprint errors, resulting in the exclusion of beneficiaries [9]. A panellist pointed out that in Rajasthan, dysfunctional biometrics had led to further corruption in ration shops. Ration shop owners often lied to beneficiaries about the functioning of the biometric (POS) machines and kept the ration for sale in the market, making a lot of money at the expense of uninformed beneficiaries and depriving them of their entitlements.

Another participant organisation pointed out similar circumstances in ration shops in the Patparganj and New Delhi constituencies. Here, the dealers had maintained records of beneficiaries in three categories: beneficiaries whose biometrics did not match, beneficiaries whose biometrics matched and who were provided their entitlements, and beneficiaries who never visited the ration shop. It was observed that there were no entries in the category of beneficiaries whose biometrics did not match; however, the beneficiaries had a different story to tell. They complained that their biometrics did not match despite several attempts, and that there was no mechanism for a manual override. Consequently, they had not been able to receive any entitlements for months. The discussions also pointed out that the food authorities had placed complete reliance on the authenticity of the POS machines and claimed that this system would weed out families not entitled to the benefits. The MIS was also suffering technical glitches; as a result, information about these transactions was not being registered and no records of these problems had been created with the State authority.

A participant also discussed the plight of 30,000 widows in Delhi who were entitled to pensions and used to collect them from post offices, but faced exclusion due to transition problems under the Jan Dhan Yojana (after Jan Dhan was launched, the money was transferred to their bank accounts in order to resolve the problem of misappropriation of money at the hands of post office officials). These widows were asked to open bank accounts to receive their entitlements, and those who did not open such accounts and did not inform the post office were considered bogus.

In the discussions, the participants also noted that the unreliability of fingerprints as a means of authenticating an individual's identity had been highlighted at a meeting of the Empowered Group of Ministers in 2011 by J Dsouza, a biometrics scientist. He used his wife's fingerprints to demonstrate that fingerprints may change over time, and that in such an event one would no longer be able to use the POS machine, as the machine would only recognise the impressions collected initially.

Participants who had been working in the field contributed to the discussions by busting the myth that the UID Project helps to identify who is poor and resolves the problem of exclusion caused by leakages in social welfare programs. These discussions are summarised below.

  • It is important to understand that the UID Project is merely an identification and authentication system. It only helps in verifying whether an individual is entitled to benefits under a social security scheme; it does not ensure the plugging of leakages and the reduction of corruption in social security schemes, as the Government has claimed. The reduction in leakage in the PDS, for instance, should be attributed to digitisation, not UID. The Government claims that it has saved INR 15,000 crore in the provision of LPG by identifying 3.34 crore inactive accounts on account of the UID Project. This is untrue, because the accounts were weeded out using mechanisms completely unrelated to the UID Project. Consequently, the savings on account of UID are only INR 120 crore, not INR 15,000 crore.
  • The UID Project has resulted in the exclusion of people either because they do not have an aadhaar number, or because they have a wrong identification, or because of errors of classification or wilful misclassification. About 99.7% of people who were given aadhaar numbers already had an identification document. In fact, during enrolment a person is required to produce one of 14 identification documents listed under the law in order to get an aadhaar number, which makes it very difficult for a person with no existing identity documents to become entitled to a social welfare scheme.

A participant condemned the Government’s claim that the UID Project had helped in removing fake, bogus and duplicate cards and said that these terms could not be used synonymously and the authorities had no clarity about the difference between the meanings of these terms. The UID Project had only helped in removal of duplicate cards but had not helped in combating the use of fake and bogus cards.

Financial Inclusion and Direct Benefits Transfer

The participants also engaged in discussions about the impact of the UID Project on financial inclusion in India in the sessions titled ‘Aadhaar: Broad Issues - I & II’. We have summarised these discussions below.

The UID Project seeks to transfer money directly to bank accounts in order to combat corruption. The discussions highlighted that this was nothing but the introduction of a neoliberal thrust in social policy, and that it was not feasible for various reasons. First, 95% of rural India did not have functioning banks, and the banks that existed were quite far away. Second, to combat this dearth of banks, the idea of business correspondents, who handle banking transactions and help open bank accounts, had been introduced, and this had created various problems. The Reserve Bank of India reported a dearth of business correspondents, as there was very little incentive to become one; their salary is merely INR 4,000. Third, there were concerns about how an aadhaar number was considered a valid document for Know Your Customer (KYC) checks: the documents submitted at the time of enrolment required scrutiny and auditing, which, in the present scheme of things, could not be carried out. Fourth, there were no restrictions on the number of bank accounts that could be opened with a single aadhaar number, which gave rise to the possibility of opening multiple and shell accounts against a single aadhaar number. As a result, records only showed money being transferred from one aadhaar number to another, as opposed to an account-to-account transfer. The discussion relied on NPCI data, which shows which bank an aadhaar number is associated with, but does not show whether a transaction by an aadhaar number is overwritten by another bank account belonging to the same aadhaar number.

6. Surveillance and UIDAI

The participants discussed the possibility of an alternative purpose behind Aadhaar enrolment in the session titled ‘Privacy, Surveillance, and Ethical Dimensions of Aadhaar’. The discussion traced the history of the project to gain insight into this issue. We have summarised the key takeaways from this discussion below.

There are claims that the main objective of launching the UID Project is not to facilitate implementation of social security schemes but to collect personal (financial and non-financial) information of the citizens and residents of the country to build a data monopoly. For this purpose, PDS was chosen as a suitable social security scheme as it has the largest coverage. Several participants suggested that numerous reports authored by FICCI, KPMG and ASSOCHAM contained proposals for establishing a national identity authority which threw some light on the commercial intentions behind information collection under the UID Project.

It was also pointed out that there was documented proof that information collected under the UID Project might have been shared with foreign companies. There are suggestions about links established between proponents of the UID Project and companies backed by CIA or the French Government which run security projects and deal in data sharing in several jurisdictions.

7. Strategies for Future Action

The participants laid down a list of measures to take the discussions forward. We have enumerated these recommendations below.

  • Prepare and compile an anthology of articles as an output of this workshop.
  • Prepare position papers on specific issues related to the UID Project.
  • Prepare pamphlets/brochures on issues with the UID Project for public consumption.
  • Prepare counter-advertisements for Aadhaar.
  • Publish existing empirical evidence on the flaws in Aadhaar.
  • Set up an online portal dedicated to providing updates on the UID Project and allowing discussions on specific issues related to Aadhaar.
  • Use Social Media to reach out to the public. Regularly track and comment on social media pages of relevant departments of the government.
  • Create groups dedicated to research and advocacy of specific aspects of the UID Project.
  • Create a Coordination Committee, preferably based in Delhi, which would be responsible for regularly holding meetings and for preparing a coordinated plan of action. Employ permanent staff to run the Committee.
  • Organise an advocacy campaign against use of Aadhaar in collaboration with other organisations and build public domain acceptance.
  • The campaign must specifically focus on the unfettered scope and expanse of UID, the misrepresentation of Aadhaar’s success (by highlighting the real savings), its technological flaws, the status of pilot programs, and the increase in corruption on account of the UID Project.
  • Prepare a statement of public concern regarding the UID Project and collect signatures from eminent persons including academics, technical experts, civil society groups and members of parliament.
  • Organise events and discussions on issues relating to Aadhaar and invite members of government departments to speak and discuss the issues.
  • Write to Members of Parliament and Members of Legislative Assemblies raising questions on their or their parties’ support for Aadhaar and silence on the problems created by the UID Project.
  • Organise public hearings in states like Rajasthan to observe and document ground realities of the UID Project and share these outcomes with the state government and media.
  • Plan a national social audit and public hearing on the working of UID Project in the country.
  • File Contempt Petitions in the Supreme Court and High Courts against mandatory use of Aadhaar number for services not allowed by the Supreme Court.
  • Reach out to and engage with various foreign citizens and organisations that have been fighting on similar issues. The organisations and individuals who could be approached include EPIC, the Electronic Frontier Foundation, David Moss (UK), Roger Clarke (Australia), Prof. Ian Angell, Snowden, Assange and Chomsky.
  • Work towards increasing awareness about the UID Project and gaining support from the student and research community, student organisations, trade unions, and other associations and networks in the unorganised sector.

Annexure A – Workshop Agenda

May 26, 2016

9:00-9:30

Registration

9:30-10:00

Prof. Dinesh Abrol - Welcome
Self-introduction and expectations of participants
Dr. Usha Ramanathan - Overview of the Workshop

10:00-11:00

Session 1: Current Status of Aadhaar
Dr. Usha Ramanathan, Legal Researcher, New Delhi - What the 2016 Law Says, and How it Came into Being
S. Prasanna, Advocate, New Delhi - Status and Force of Supreme Court Orders on Aadhaar
Discussion

11:00-11:30

Tea Break

11:30-13:30

Session 2: Direct Benefits Transfers
Prof. Reetika Khera, Indian Institute of Technology, Delhi - Welfare Needs Aadhaar like a Fish Needs a Bicycle
Prof. R. Ramakumar, Tata Institute of Social Sciences, Mumbai - Aadhaar and the Social Sector: A critical analysis of the claims of benefits and inclusion
Ashok Rao, Delhi Science Forum - Cash Transfers Study
Discussion

13:30-14:30

Lunch

14:30-16:00

Session 3: Aadhaar: Science, Technology, and Security
Prof. Subashis Banerjee, Dept of Computer Science & Engineering, IIT, Delhi - Privacy and Security Issues Related to the Aadhaar Act
Pukhraj Singh, Former National Cyber Security Manager, Aadhaar, New Delhi - Aadhaar: Security and Surveillance Dimensions
Discussion

16:00-16:30

Tea Break

16:30-17:30

Session 4: Aadhaar - International Dimensions
Joshita Pai, Center for Communication Governance, National Law University, Delhi - Biometrics and Mandatory IDs in Other Parts of the World
Dr. Gopal Krishna, Citizens Forum for Civil Liberties - International Dimensions of Aadhaar
Discussion

17:30-18:00

High Tea

May 27, 2016

9:30-11:00

Session 5: Privacy, Surveillance and Ethical Dimensions of Aadhaar
Prabir Purkayastha, Free Software Movement of India, New Delhi - Surveillance Capitalism and the Commodification of Personal Data
Arjun Jayakumar, SFLC - Surveillance Projects Amalgamated
Col Mathew Thomas, Bengaluru - The Deceit of Aadhaar
Discussion

11:00-11:30

Tea Break

11:30-13:00

Session 6: Aadhaar - Broad Issues I
Prof. G Nagarjuna, Homi Bhabha Center for Science Education, Tata Institute of Fundamental Research, Mumbai - How to prevent linked data in the context of Aadhaar
Dr. Anupam Saraph, Pune - Aadhaar and Moneylaundering
Discussion

13:00-14:00

Lunch

14:00-15:30

Session 7: Aadhaar - Broad Issues II
Prof. MS Sriram, Visiting Faculty, Indian Institute of Management, Bangalore - Financial Inclusion
Nikhil Dey, MKSS, Rajasthan - Field witness: Technology on the Ground
Prof. Himanshu, Centre for Economic Studies & Planning, JNU - UID Process and Financial Inclusion
Discussion

15:30-16:00

Session 8: Conclusion

16:00-18:00

Informal Meetings

Annexure B – Workshop Participants

Anjali Bhardwaj, Satark Nagrik Sangathan

Dr. Anupam Saraph

Arjun Jayakumar, Software Freedom Law Centre

Ashok Rao, Delhi Science Forum

Prof. Chinmayi Arun, National Law University, Delhi

Prof. Dinesh Abrol, Jawaharlal Nehru University

Prof. G Nagarjuna, Homi Bhabha Center for Science Education, Tata Institute of Fundamental Research, Mumbai

Dr. Gopal Krishna, Citizens Forum for Civil Liberties

Prof. Himanshu, Jawaharlal Nehru University

Japreet Grewal, the Centre for Internet and Society

Joshita Pai, National Law University, Delhi

Malini Chakravarty, Centre for Budget and Governance Accountability

Col. Mathew Thomas

Prof. MS Sriram, Indian Institute of Management, Bangalore

Nikhil Dey, Mazdoor Kisan Shakti Sangathan

Prabir Purkayastha, Knowledge Commons and Free Software Movement of India

Pukhraj Singh, Bhujang

Rajiv Mishra, Jawaharlal Nehru University

Prof. R Ramakumar, Tata Institute of Social Sciences, Mumbai

Dr. Reetika Khera, Indian Institute of Technology, Delhi

Dr. Ritajyoti Bandyopadhyay, Indian Institute of Science Education and Research, Mohali

S. Prasanna, Advocate

Sanjay Kumar, Science Journalist

Sharath, Software Freedom Law Centre

Shivangi Narayan, Jawaharlal Nehru University

Prof. Subhashis Banerjee, Indian Institute of Technology, Delhi

Sumandro Chattapadhyay, the Centre for Internet and Society

Dr. Usha Ramanathan, Legal Researcher

Note: This list is only indicative, and not exhaustive.


[1] Civil Appeal No. 4853 of 2014

[2] WP(C) 494/2012

[3] WP(C) 829/2013

[4] WP(C) 833/2013

[5] WP (C) 37/2015; (Earlier intervened in the Aruna Roy petition in 2013)

[6] WP (C) 932/2015

[7] Transferred from Madras HC 2013.

[8] SLP (Crl) 2524/2014 filed against the order of the Goa Bench of the Bombay HC in CRLWP 10/2014 wherein the High Court had directed UIDAI to share biometric information held by them of all residents of a particular place in Goa to help with a criminal investigation in a case involving charges of rape and sexual assault.

[9] See: http://scroll.in/article/806243/rajasthan-presses-on-with-aadhaar-after-fingerprint-readers-fail-well-buy-iris-scanners

 

We Truly are the Product being Sold

by Vidushi Marda last modified Sep 01, 2016 02:08 AM
WhatsApp has announced it will begin sharing user data such as names, phone numbers, and other analytics with its parent company, Facebook, and with the Facebook family of companies. This change to its terms of service was effected in order to enable users to “communicate with businesses that matter” to them. How does this have anything to do with Facebook?

The change to WhatsApp’s terms of service to begin sharing user data with Facebook was effected in order to enable users to “communicate with businesses that matter” to them. (Reuters)

The article was published in the Hindustan Times on August 31, 2016.


WhatsApp clarifies in its blog post, “... by coordinating more with Facebook, we’ll be able to do things like track basic metrics about how often people use our services and better fight spam on WhatsApp. And by connecting your phone number with Facebook’s systems, Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them.”

WhatsApp further clarifies that it will not post your number on Facebook or share this data with advertisers. This means little, because it will share your number with Facebook for advertising. It is simply doing indirectly what it has said it won’t do directly. This new development also leads to the collapsing of the different personae of a user, even making public the private life that they have so far chosen not to share online. Last week, Facebook published a list of 98 data points it collects on users. These data points, combined with your WhatsApp phone number, profile picture, status message, last seen status, frequency of conversation with other users, and the names of these users (and their data), could lead to a severely uncomfortable invasion of privacy.

Consider a situation where you have spoken to a divorce lawyer in confidence over WhatsApp’s encrypted channel, and are then flooded with advertisements for marriage counselling and divorce attorneys when you next log in to Facebook at home. Or, you are desperately seeking loans and get in touch with several loan officers; and when you log in to Facebook at work, colleagues notice your News Feed flooded with ads for loans, articles on financial management, and support groups for people in debt.

It is no secret that Facebook makes money off interactions on its platform, and the more information that is shared and consumed, the more Facebook is benefitted. However, the company’s complete disregard for user consent in its efforts to grow is worrying, particularly because Facebook is a monopoly. In order for one to talk to friends and family and keep in touch, Facebook is the obvious, if not the only, choice. It is also increasingly becoming the most accessible way to engage with government agencies. For example, Indian embassies around the world have recently set up Facebook portals, the Bangalore Traffic Police is most easily contacted through Facebook, and heads of states are also turning to the platform to engage with people. It is crucial that such private and collective interactions of citizens with their respective government agencies are protected from becoming data points to which market researchers have access.

Given Facebook’s proclivity for unilaterally compromising user privacy, the Federal Trade Commission (FTC) in 2011 charged the company with deceiving consumers by misleading them about the privacy of their information. Following these charges, Facebook reached an agreement to give consumers clear notice and obtain their express consent before extending privacy settings that they had established. The latest modification to WhatsApp’s terms of service seems to amount to a clear violation of this agreement and brings out the grave need to treat user consent more seriously.

There is a way to opt out of sharing data for Facebook ad targeting, outlined by WhatsApp on its blog, and it is a textbook example of invasion-of-privacy-by-design. WhatsApp plans to ask users to untick a small green arrow, and then click on a large green button that says “Agree” (which is the only button), in order to indicate that they are opting out. The interface of the notice seems consciously designed to confuse users by exploiting the power of the default option. For most users, agreeing to terms and conditions is a hasty click on a box, the last step of an installation process. Predictably, most users go with the default options, and this specific design of the opt-out option is not meaningful at all.

In 2005, Facebook’s default profile settings were such that anyone on Facebook could see your name, profile picture, gender and network. Your photos, wall posts and friends list were viewable by people in your network. Your contact information, birthday and other data could be seen by friends and only you could view the posts that you liked. Fast forward to 2010, and the entire internet, not just all Facebook users, can see your name, profile picture, gender, network, wall posts, photos, likes, friends list and other profile data. There hasn’t been a comprehensive study since 2010, but one can safely assume that Facebook’s privacy settings will only get progressively worse for users, and exponentially better for Facebook’s revenues. The service is free and we truly are the product being sold.

Indians Ask: Is Visiting a Torrent Site Really A Crime?

by Subhashish Panigrahi last modified Sep 06, 2016 02:09 PM
India has banned various large-scale torrent sites for a long time — this is old news. But under a new federal policy in India, one can be jailed for three years and fined 300,000 Indian Rupees (~US $4464) for downloading content on any of these blocked websites.

The blog post was first published in Global Voices on September 5, 2016.


Screenshot of a Bittorent client. Image by Carl Sagan via Wikimedia Commons. CC BY-SA 3.0

Netizens who regularly use these and similar services have become anxious about what the rule may mean for them. Last week, a new legal notice concerning copyright violations sparked widespread rumors that users could be penalized for simply viewing torrent sites.

The notice now appears when one visits any of the banned websites. It reads:

This URL has been blocked under the instructions of the Competent Government Authority or in compliance with the orders of a Court of competent jurisdiction. Viewing, downloading, exhibiting or duplicating an illicit copy of the contents under this URL is punishable as an offence under the laws of India, including but not limited to under Sections 63, 63-A, 65 and 65-A of the Copyright Act, 1957 which prescribe imprisonment for 3 years and also fine of upto Rs. 3,00,000/-. Any person aggrieved by any such blocking of this URL may contact at [email protected] who will, within 48 hours, provide you the details of relevant proceedings under which you can approach the relevant High Court or Authority for redressal of your grievance.

Soon after news of the notice began to circulate, the Chennai High Court (one of the oldest courts in India) issued a John Doe order to block as many as 830 websites, including several torrent websites such as thepiratebay.se, torrenthound.com, and kickasstorrents.come.in.

Indian tech news portal Medianama published a blog post arguing that it is the downloading of pirated content from certain banned websites, and not the mere accessing of those websites, that should lead to legal issues. The problem, it seems, lies in the poor wording of the notice. Medianama described this as “bizarre by any rational standard” and noted that, taken literally, it does not comply with the Indian Copyright Act.

Digital piracy legislation in India (in particular, Sections 63, 63A and 65 of the Copyright Act, 1957) has been amended considerably in recent times, especially over the last five years. But it has not been enforced with such force in the past.

What is a torrent?

A torrent is part of a peer-to-peer (“P2P”) file sharing system used to distribute data and electronic files over the Internet. Known as BitTorrent, this file distribution system is one of the most common technical protocols for transferring large files, such as digital video files containing TV shows or video clips, or digital audio files containing songs.

Within this system, files labelled with the .torrent extension contain metadata about the files to be shared: their names, sizes, folder structure, and cryptographic hash values for integrity verification. They do not contain the content to be distributed, but without them, the system does not work. (via Wikipedia)
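As a rough illustration of this metadata format: .torrent files use a simple encoding called bencoding (integers, byte strings, lists, and dictionaries), which a few lines of Python can decode. The sample below is a hand-made toy dictionary, not a real torrent file.

```python
# Minimal bencode decoder, illustrating the metadata format used by
# .torrent files: integers, byte strings, lists, and dictionaries.

def bdecode(data: bytes):
    """Decode a bencoded byte string into Python objects."""
    value, _ = _decode(data, 0)
    return value

def _decode(data: bytes, i: int):
    c = data[i:i + 1]
    if c == b'i':                       # integer: i<digits>e
        end = data.index(b'e', i)
        return int(data[i + 1:end]), end + 1
    if c == b'l':                       # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b'e':
            item, i = _decode(data, i)
            items.append(item)
        return items, i + 1
    if c == b'd':                       # dictionary: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b'e':
            key, i = _decode(data, i)
            val, i = _decode(data, i)
            d[key] = val
        return d, i + 1
    # byte string: <length>:<bytes>
    colon = data.index(b':', i)
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

# A toy torrent-like dictionary: a tracker URL plus an "info" dict
# carrying a file name, its length, and the piece size.
sample = b'd8:announce9:tracker-x4:infod4:name8:file.txt6:lengthi1024e12:piece lengthi256eee'
meta = bdecode(sample)
print(meta[b'info'][b'name'])   # b'file.txt'
```

The decoder only reads the metadata; actually fetching content would require a full BitTorrent client that contacts the tracker and peers.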

This is not the first time India has put a blanket ban on such sites. In December 2014, 32 websites, including the code repository Github, the video streaming sites Vimeo and Dailymotion, the online archive Internet Archive, and the free software hosting site Sourceforge, were banned in India. They were later unblocked after agreeing to remove some ISIS-related content.

As they have in the past, tech-savvy netizens began suggesting hacks to mask or fake one's IP address. Sumiteshwar Choudhary, a practicing criminal and matrimony lawyer, described on Quora how the law had existed for quite some time but the government had never fully enforced it:

[..] The only reason that India has not been able to successfully ban these services is because the servers rest outside India and we don’t have any law to extend our jurisdiction to that extent today. As an end user if you download a pirated version of things you are not entitled to, you can be booked criminally under this Act and can face prison for up to 2 years…

Twitter user Prisma Mama Thakur criticized the ban, arguing that it should be a low priority in a moment when India has many other important problems to solve:

[Embedded tweet]

Glaring Errors in UIDAI's Rebuttal

by Pranesh Prakash last modified Sep 18, 2016 03:22 AM
This response note by Pranesh Prakash questions Unique Identification Authority of India’s reply to Hans Verghese Mathews' article titled “Flaws in the UIDAI Process” (EPW, March 12, 2016), which found “serious mathematical errors” in the article.

 

The article was published in Economic & Political Weekly Vol. 51, Issue No. 36, September 3, 2016.


While I am not a statistician, I have followed the technical debate between Hans Verghese Mathews and the UIDAI closely, and see a number of glaring errors in the latter’s so-called rebuttal in EPW (March 12, 2016).

The UIDAI alleges that Mathews ignored the evidence that the Receiver Operating Characteristic (ROC) curve "flattens" with more factors. However, Mathews cannot be accused of ignoring the flattening of the ROC if it is not relevant to his argument. In simple terms, the ROC curve is used to choose the appropriate "threshold distance", which determines false positives and false negatives, and this belongs to a stage that precedes the estimation of the false positive identification rate (FPIR).
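To illustrate this distinction with a toy example (the match scores below are made up, not UIDAI data): sweeping the threshold over genuine and impostor score distributions traces the ROC, an operating point is then fixed, and only afterwards does the question of estimating FPIR at that point arise.

```python
# Toy illustration: the ROC describes the trade-off made when choosing
# a matching threshold, a step that precedes any FPIR estimate.
# All scores below are synthetic.

genuine  = [0.91, 0.88, 0.95, 0.65, 0.90]   # same-person match scores
impostor = [0.40, 0.55, 0.62, 0.48, 0.71]   # different-person scores

def rates(threshold):
    """False match rate and false non-match rate at one threshold."""
    fmr  = sum(s >= threshold for s in impostor) / len(impostor)
    fnmr = sum(s <  threshold for s in genuine)  / len(genuine)
    return fmr, fnmr

# Sweeping the threshold traces the ROC curve; raising it trades
# false matches for false non-matches.
for t in (0.5, 0.7, 0.8):
    fmr, fnmr = rates(t)
    print(f"threshold={t}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```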

However, Mathews has used the FPIR estimates provided by the UIDAI (based on evidence from the enrolment of 84 million persons), and calculated how the FPIR changes when extrapolated for a population of 1.2 billion persons. In other words, he did not need to look at the ROC curve as that factor is not relevant to his argument, since he has used UIDAI data (which has presumably been estimated on the basis of all 12 factors: 10 fingerprints and 2 irises).

Further, the UIDAI asks why Mathews has assumed a linear curve for his extrapolation. Mathews has done no such thing. In fact, in their paper "Role of Biometric Technology in Aadhaar Enrollment," the UIDAI states: "FPIR rate grows linearly with the database size" (nd, 19). Thus, this is an assumption formerly made by the UIDAI itself (without providing any rationale for the curve being linear rather than anything else). Mathews mathematically derives bounds for the FPIR in his paper, that is, the range within which the FPIR lies. One gets a linear curve only by using the upper bound, not otherwise. So while Mathews does, as he explains, provide the results of the calculation based on the upper bound for the sake of simplicity, he nowhere asserts or assumes a linear curve.
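The linear upper-bound extrapolation at issue can be sketched numerically. The FPIR value below is a hypothetical placeholder, not the UIDAI's reported figure; the point is only that, under the linear growth the UIDAI itself states, the rate measured at 84 million enrolments scales by the ratio of the database sizes.

```python
# Illustrative extrapolation of a false positive identification rate
# (FPIR) under the linear-growth (upper-bound) assumption discussed
# above. The measured FPIR below is a placeholder, not UIDAI data.

sample_size = 84_000_000        # gallery size at which FPIR was measured
target_size = 1_200_000_000     # full enrolment considered by Mathews

fpir_sample = 0.0005            # hypothetical measured FPIR (0.05%)

# Linear upper bound: FPIR scales proportionally with database size.
fpir_target = fpir_sample * (target_size / sample_size)

print(f"Extrapolated FPIR at full enrolment: {fpir_target:.2%}")

# Each false positive forces a manual de-duplication check, so even a
# small per-enrolment rate implies large absolute numbers at this scale.
expected_false_matches = fpir_target * target_size
print(f"Expected enrolments flagged falsely: {expected_false_matches:,.0f}")
```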

If, as the UIDAI claims, one cannot perform such an extrapolation and needs to depend on “empirical evidence” instead, the question arises as to how the UIDAI decided to scale up the programme to 1.3 billion people given the error rates. One could also ask if the machines being used to capture biometrics are good enough for the enlargement. Surely they would have performed some extrapolations to decide this.

In their paper they note that "although it [FPIR] is expected to grow as the database size increases, it is not expected to exceed manageable values even at full enrolment of 120 crores" (UIDAI nd, 13). They do not illustrate the extent to which the FPIR is expected to grow, neither in their initial paper nor in their rebuttal to Mathews, whereas Mathews provides a method of estimating the increase in FPIR. Even if the UIDAI is correct in its appraisal that the FPIR will not exceed "manageable values," they need to either show their calculations or release the latest data. They have done neither, and that is quite unfortunate.


References

UIDAI (nd): “Role of Biometric Technology in Aadhaar Enrollment,” Unique Identification Authority of India, Government of India, New Delhi, viewed on 18 August 2016, https://uidai.gov.in/images/FrontPageUpdates/role_of_biometric_technology
 

Internet Rights and Wrongs

by Pranesh Prakash last modified Sep 22, 2016 11:36 PM
With a rise in PILs seeking unwarranted censorship, do we need to step back and ask whether it is time these unreasonable trends were checked?

Internet Rights & Wrongs, picture by India Today

The article was published in India Today on September 1, 2016. The original piece can be read here.


Over the last few weeks, there have been a number of cases of egregious censorship of websites in India. Many people started seeing notices that (incorrectly) gave an impression that they may end up in jail if they visited certain websites. However, these notices weren't an isolated phenomenon, nor one that is new. Worryingly, the higher judiciary has been drawn into these questionable moves to block websites as well.

Since 2011, numerous torrent search engines and communities have been blocked by Indian internet service providers (ISPs). Torrent search engines provide the same functionality for torrents that Google provides for websites. Are copyright infringing materials indexed and made searchable by Google? Yes. Do we shut down Google for this reason? No. However, that is precisely what private entertainment companies have done over the past five years in India. Companies hired by the producers of the Tamil movies Singham and 3 managed to get video-sharing websites like Vimeo and Dailymotion and numerous torrent search engines blocked even before the movies were released, without showing that even a single case of copyright infringement existed on any of them. During the FIFA World Cup, Sony even managed to get Google Docs blocked. In some cases, these entertainment companies have abused 'John Doe' orders (generic orders that allow copyright enforcement against unnamed persons) and have asked ISPs to block websites. The ISPs, instead of ignoring such requests as instances of private censorship, have complied. In other cases (like Sony's FIFA World Cup case), courts have ordered ISPs to block hundreds of websites without any copyright infringement being proven against them. High court judges haven't even developed a coherent theory on whether or how Indian law allows them to block websites for alleged copyright infringement. Still they have gone ahead and blocked.

In 2012, hackers got into Reliance Communications servers and released a list of websites blocked by them. The list contained multiple links that sought to connect Satish Seth, a group MD in Reliance ADA Group, to the 2G scam: a clear case of secretive private censorship by RCom. Further, visiting some of the YouTube links which pertained to Satish Seth showed that they had been removed by YouTube due to dubious copyright infringement complaints filed by Reliance BIG Entertainment. Did the department of telecom, whose licences forbid ISPs from engaging in private censorship, take any action against RCom? No. Earlier this year, Tata Sky filed a complaint against YouTube in the Delhi High Court, noting that there were videos on it that taught people how to tweak their set-top boxes to get around the technological locks that Tata Sky had placed. The Delhi HC ordered YouTube "not to host content that violates any law for the time being in force", presuming that the videos in question did in fact violate Indian law. Two sections are cited: Section 65A of the Copyright Act and Section 66 of the Information Technology Act. The first explicitly allows a user to break technological locks of the kind that Tata Sky has placed, for dozens of reasons (and allows a person to teach others how to engage in such breaking), whereas the second requires a finding of "dishonesty" or "fraud" along with "damage to a computer system, etc.", and an intention to violate the law, none of which were found. The court effectively blocked videos on YouTube without any finding of illegality, thus once again siding with censorial corporations.

In 2013, Indore-based lawyer Kamlesh Vaswani filed a PIL in the Supreme Court calling for the government to undertake proactive blocking of all online pornography. Normally, a PIL is only admissible under Article 32 of the Constitution on the basis of a violation of a fundamental right (the rights listed in Part III of our Constitution). Vaswani's petition, which I have had the misfortune of reading carefully, does not at any point complain that the state is violating a fundamental right by not blocking pornography. Yet the petition seeks to curb the fundamental right to freedom of expression, since the government is by no means in a position to determine what constitutes illegal pornography and what doesn't.

The larger problem extends to the now-discredited censor board (headed by the notorious Pahlaj Nihalani), as also the self-censorship practised on TV by the private Indian Broadcasters Federation (which even bleeps out words and phrases like 'Jesus', 'period', 'breast cancer' and 'beef'). 'Swachh Bharat' should not mean sanitising all media to be unobjectionable to the person with the lowest outrage threshold. So who will file a PIL against excessive censorship?

Services like TwitterSeva aren’t the silver bullets they are made out to be

by Sunil Abraham last modified Oct 06, 2016 04:31 PM
TwitterSeva is great, but it should not be considered a sufficient replacement for proper e-governance systems, because the TwitterSeva approach has several serious shortcomings. It is no wonder that enthusiastic police officers and bureaucrats are somewhat upset with the slow deployment of e-governance applications. They are also right to be frustrated with the lack of usability and scalability of existing applications, which makes private sector platforms that hold out the promise of serving citizens better seem attractive.

Sunil Abraham, executive director of the Centre for Internet and Society, wrote this in response to the FactorDaily story on TwitterSeva, a special feature developed by Twitter’s India team to help citizens connect better with government services. Sunil's article in FactorDaily can be read here.


Let’s take a look at why the TwitterSeva approach is not adequate:

1. Vendor and Technology Neutrality: Providing a level ground for competing technologies in e-governance has been a globally accepted best practice for about 15 years now. This is usually done by using open standards policies and interoperability frameworks.

India does have a national open standards policy, but the National Informatics Centre (NIC) has only published one chapter of the Interoperability Framework for e-Governance.

The thing is, while Twitter might be the preferred choice of urban elites and the middle class, it might not be the choice of the millions of Indians coming online. By implicitly signalling to citizens that Twitter complaints will be taken more seriously than e-mail or SMS complaints, the government is becoming a salesperson for Twitter. Ideally, all interactions that the state has with citizens should let citizens choose which vendor and technology they would like to use. Ideally, the government should have its own workflow so that it can harvest complaints, feedback and other communications from all social media platforms, be it Twitter or Identica, Facebook or Diaspora, and publish responses back onto them.

By implicitly signalling to citizens that Twitter complaints will be taken more seriously than e-mail or SMS complaints, the government is becoming a salesperson for Twitter

Apart from undermining the power of choice for citizens, lack of vendor and technology neutrality in government use of technology undermines the efficient functioning of a competitive free market, which is the bedrock of future innovation.

When it comes to micro-blogging, Twitter has established a near monopoly in India. There are no clear signs of harm and therefore it would not be wise to advocate that the Competition Commission of India investigate Twitter. However, if the government helps Twitter tighten its grip over the Indian market, it is preventing the next cycle of creative destruction and disruption. Therefore, e-governance applications should ideally only “loosely couple” with the APIs of private firms so that competition and innovation are protected.
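As a rough sketch of what such "loose coupling" might look like, consider the following Python outline. Everything here is illustrative: the class names, data fields, and the in-memory stand-in adapter are hypothetical, and no real Twitter or government API is invoked. The point is only that the government workflow depends on a platform-neutral interface, so a new platform means writing a new adapter, not rewriting the workflow.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Complaint:
    """A citizen complaint, normalised away from any platform's data model."""
    platform: str
    external_id: str
    author: str
    text: str


class PlatformAdapter(ABC):
    """Thin boundary between the government workflow and a private platform.

    The workflow depends only on this interface, so adding Twitter, Facebook,
    Diaspora, or an SMS gateway means adding an adapter, not a rewrite.
    """

    @abstractmethod
    def fetch_complaints(self) -> List[Complaint]:
        ...

    @abstractmethod
    def publish_response(self, complaint: Complaint, response: str) -> None:
        ...


class InMemoryAdapter(PlatformAdapter):
    """Stand-in adapter used here in place of a real platform client."""

    def __init__(self, platform: str, raw_posts: List[dict]):
        self.platform = platform
        self.raw_posts = raw_posts
        self.published: List[str] = []

    def fetch_complaints(self) -> List[Complaint]:
        # Translate the platform's raw post format into the neutral model.
        return [
            Complaint(self.platform, p["id"], p["user"], p["text"])
            for p in self.raw_posts
        ]

    def publish_response(self, complaint: Complaint, response: str) -> None:
        self.published.append(f"@{complaint.author}: {response}")


def harvest(adapters: List[PlatformAdapter]) -> List[Complaint]:
    """Collect complaints from every registered platform uniformly."""
    complaints: List[Complaint] = []
    for adapter in adapters:
        complaints.extend(adapter.fetch_complaints())
    return complaints
```

A workflow built this way never imports a vendor SDK directly, which is precisely what keeps the state from becoming captive to any one firm's API.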

2. Holistic Approach and Accountability: Ideally, as the Electronic Service Delivery Bill 2011 had envisaged, every agency within the government was supposed to (within 180 days of the enactment of the Act) do several things: publish a list of services that will be delivered electronically with a deadline for each service; commit to service-level agreements for each service and provide details of the manner of delivery; provide an agency-level grievance redressal mechanism for citizens unhappy with the delivery of these electronic services.

Notwithstanding the 180-day commitment, the Bill required that “all public services shall be delivered in electronic mode within five years” after the enactment of the Bill with a potential three-year extension if the original deadline was not met. The Bill also envisaged the constitution of a Central Electronic Service Delivery Commission with a team of commissioners who “monitor the implementation of this Bill on a regular basis” and publish an annual report which would include “the number of electronic service requests in response to which service was provided in accordance with the applicable service levels and an analysis of the remaining cases.”

The Electronic Service Delivery Bill 2011 had a much more comprehensive and accountable plan for e-governance adoption in the country

Citizens suffering from non-compliance with the provisions of the Bill and unsatisfied with the response from the agency-level grievance redressal mechanism could appeal to the Commission. The state or central commissioners, after giving the government officials an opportunity to be heard, were empowered to impose a fine of Rs 5,000.

Unlike the piecemeal approach of TwitterSeva, the Bill had a much more comprehensive and accountable plan for e-governance adoption in the country.

3. Right To Transparency: Some of the interactions that the government has with citizens and firms may have to be disclosed to the public or to a requesting party under obligations arising from the Right to Information Act. Therefore it is important that the government take its own steps for the retention of all data and records — independent of the goodwill and lifecycles of private firms.

Twitter is only 10 years old. It took 10 years for Orkut to shut down. Maybe Twitter will shut down in the next 10 years. How then will the government comply with RTI requests? Even if the government is not keen on pushing for data portability as a right for consumers (just like mobile number portability in telecom, so that consumers can seamlessly shift between competing service providers), it absolutely should insist on data portability for all government use.

Twitter is only 10 years old. It took 10 years for Orkut to shut down. Maybe Twitter will shut down in the next 10 years. How then will the government comply with RTI requests?

This will allow it to a) support multiple services, b) shift to competing or emerging services, and c) incrementally build its own infrastructure, while also complying with the requirements of the Right to Information Act.
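The record-retention point above can be made concrete with a small sketch: archiving every citizen interaction in an open, platform-neutral format (plain JSON here) so that RTI requests can be answered even after the originating platform disappears. The function name and record fields are hypothetical, chosen purely for illustration.

```python
import json
from datetime import datetime, timezone


def archive_interactions(interactions, out_path):
    """Write platform interactions to an open, platform-neutral JSON file.

    Keeping an independent archive in an open format means the government
    can answer RTI requests even if the originating platform shuts down.
    Returns the number of records archived.
    """
    archive = {
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "records": [
            {
                # Fields are normalised so records from Twitter, SMS, or
                # e-mail all share one schema in the government's archive.
                "platform": i["platform"],
                "timestamp": i["timestamp"],
                "citizen_handle": i["handle"],
                "text": i["text"],
            }
            for i in interactions
        ],
    }
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(archive, f, ensure_ascii=False, indent=2)
    return len(archive["records"])
```

Because the archive format is open and vendor-independent, the same records can later be migrated to a competing service or to the government's own infrastructure without loss.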

4. Privacy: Unfortunately, thanks to the techno-utopians behind the Aadhaar project, the current government is infected with “data ideology.” There is an obsession with collecting as much data as possible from citizens, storing it in centralized databases and providing “dashboards” to bureaucrats and politicians. This is diametrically opposed to the view of the security community.

Unfortunately, thanks to the techno-utopians behind the Aadhaar project, the current government is infected with “data ideology”

For example, Bruce Schneier posted on his blog in March this year (in a piece titled ‘Data Is a Toxic Asset’): “What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous.” This idea has long been part of data protection law, starting with the 1995 EU Data Protection Directive, expressed as the principle of “data minimization” or “collection limitation”. More recently, technologists and policymakers have also used the phrase “privacy by design”. Introducing an unnecessary intermediary or gatekeeper into what are essentially transactions between citizens and the state is an egregious violation of a key privacy principle.

5. Middle Class and Elite Capture: The use of Twitter amplifies the voices of English-speaking, elite, and middle class citizens at the expense of the voices of the poor. While elites don’t exhibit fear when tagging police IDs and making public complaints from the comforts of their gated communities, with private security guards shielding them from the violence of the state, this might be a very intimidating option for the poor and disempowered.

While elites don’t fear tagging police IDs and making public complaints from the comforts of their gated communities, it’s intimidating for the disempowered

While the system may not be discriminatory in its design, it will have disparate impact on different sections of our society. In other words, the introduction of TwitterSeva will exacerbate power asymmetries in our society rather than ameliorating them.

The canonical scholarly reference for this is Kate Crawford’s analysis of the City of Boston’s StreetBump smartphone app, which resulted in over-reporting of potholes in elite neighbourhoods and under-reporting from poor and elderly residents. This meant that efficiency in the allocation of the city’s resources was only a cover for increased discrimination against the powerless.

6. Security: The most important conclusion to draw from the Snowden disclosures is that the tin-foil-hat conspiracy theorists we used to dismiss as lunatics were correct. What has been established beyond doubt is that the United States of America is the world leader when it comes to conducting mass surveillance on netizens across the globe. It is still unclear exactly how much access the NSA has to the databases of American social media giants. When the entire police force of a state starts to use Twitter for the delivery of services to the public, it may become possible for foreign intelligence agencies to use this information to undermine our sovereignty and national security.

Internet Democratisation: IANA Transition Leaves Much to be Desired

by Vidushi Marda — last modified Nov 03, 2016 07:52 AM
At best, the IANA transition is symbolic of Washington’s oversight over ICANN coming to an end. It is also symbolic of the empowerment of the global multistakeholder community. In reality, it fails to do either meaningfully.

 

The article was published in the Hindustan Times on October 6, 2016.


Many suspect Washington’s 2014 announcement of handing over control of the IANA contract to be fuelled by the outcry following Edward Snowden’s revelations of the extent of US government surveillance. Source: AFP

September 30, 2016, marked the expiration of a contract between the US government and the Internet Corporation for Assigned Names and Numbers (ICANN) to carry out the Internet Assigned Numbers Authority (IANA) functions.

In simpler, acronym-free terms, Washington’s formal oversight over the Internet’s address book has come to an end with the expiration of this contract, with control now being passed on to the “global multistakeholder community”.

ICANN was incorporated in California in 1998 to manage the backbone of the Internet, which included the domain name system (DNS), allocation of IP addresses and root servers. After an agreement with the US National Telecommunications and Information Administration (NTIA), ICANN was tasked with operating the IANA functions, which includes maintenance of the root zone file of the DNS. Over the years Washington has rejected calls to hand over the control of IANA functions, but in March 2014 it announced its intentions to do so and laid down conditions for the handover. Many suspect the driving force behind this announcement to be the outcry following Edward Snowden’s revelations of the extent of US government surveillance.

The conditions laid down by the NTIA were met, and the US government accepted the transition proposal, amidst much political pressure and opposition, most notably from Senator Ted Cruz.

This transition is a step in the right direction, but in reality, it changes very little as it fails to address two critical issues: jurisdiction and accountability.

Jurisdiction is important while considering the resolution of contractual disputes, application of labour and competition laws, disputes regarding ICANN’s decisions, consumer protection, financial transparency, etc. Many of these questions, although not all, will depend on where ICANN is located. ICANN’s new bylaws mention that it will continue to be incorporated in California, and subject to California law just as it was pre-transition. Having the DNS subject to the laws of a single country can only add to its fragility. ICANN’s US jurisdiction also means that it is not free from political pressures from the US Senate and, in turn, the toxic effects of American party politics that were made visible in the events leading up to September 30.

Another critical issue that the transition does not address is that of ICANN accountability. Post-transition, ICANN’s board will continue to be the ultimate decision-making authority, thus controlling the organisation’s functioning, and ICANN staff will be accountable to the board alone.

To put things in perspective, look at the board’s track record in the recent past. In August, an Independent Review Panel (IRP) found that ICANN’s board had violated ICANN’s own bylaws and had failed to discharge its transparency obligations when it failed to look into staff misbehaviour. Following this, in September, ICANN decided to respond to such allegations of mismanagement, opacity and lack of accountability by launching a review. The review, however, would not look into the issues, failures and false claims of the board, but would instead focus on the process by which ICANN staff was able to engage in such misbehaviour. This, ironically, will take the form of an internal review that will pass through ICANN staff — the subjects of the investigation — before being taken up to the board.

At best, the transition is symbolic of Washington’s oversight over ICANN coming to an end. It is also symbolic of the empowerment of the global multistakeholder community. In reality, it fails to do either meaningfully.

IANA Transition: A Case of the Emperor’s New Clothes?

by Vidushi Marda — last modified Nov 03, 2016 06:20 AM
Transparency is key to engaging meaningfully with ICANN. CIS has filed the most number of Documentary Information Disclosure Policy (DIDP) requests with ICANN, covering a range of subjects including its relationships with contracted parties, financial disclosure, revenue statements, and harassment policies. Asvatha Babu, an intern at CIS, analysed all responses to our requests and found that only 14% of our requests were answered fully.

 

The post was published by Digital Asia Hub on October 6, 2016.


In March 2014, the US Government committed to ending its contract with ICANN to run the Internet Assigned Numbers Authority (IANA), and also announced that it would hand over control to the “global multistakeholder community”. The conditions for this handover were that the changes must be developed by stakeholders across the globe, with broad community consensus.

Further, it was indicated that any proposal must support and enhance the multistakeholder model; maintain the security, stability, and resiliency of the Internet Domain Name System (DNS); meet the needs and expectations of the global customers and partners of the IANA services; and maintain the openness of the Internet. Finally, it must not replace the NTIA’s role with a solution that is government-led or led by an inter-governmental organisation.

These conditions were met, ICANN’s Supporting Organisations (SOs) and Advisory Committees (ACs) accepted transition proposals, and these proposals were then accepted by the ICANN Board as well, putting the transition in motion. But not quite. The “global multistakeholder community” still had to wait for approval from the NTIA and the US government, both of whom eventually approved the proposal. The latter’s approval was confirmed after considerable uncertainty due to Senator Ted Cruz’s efforts to stop the transition, rooted in his belief that the transition amounted to the US government handing over control of the internet to foreign governments. Notwithstanding this, on September 29, the US Senate passed a short-term bill to keep the US government funded till the end of the year, without a rider on the IANA transition. The next hurdle was a lawsuit filed in federal court in Texas by the attorneys general of four states to stop the handover of the IANA contract. On September 30, the court denied the plaintiffs’ application, thus allowing the transition to proceed.

What does this transition mean? What does it change? The transition, while a welcome step, leaves much to be desired in terms of tangible change, primarily because it fails to address the most important question, that of ICANN jurisdiction. It is important to have the Internet’s core Domain Name System (DNS) functioning free from the pressures and control of a single country or even a few countries; the transition does not ensure this, as the Post Transition IANA entity (PTI) will be under Californian jurisdiction, just like ICANN was pre-transition. The entire ICANN community has been witness to a single American political figure almost derailing its meticulous efforts simply because he could; and in many ways these events cemented the importance of having diversity in terms of legal jurisdiction of ICANN, the PTI and the root zone maintainer.

My colleague Pranesh Prakash has identified 11 reasons why the question of jurisdiction is important to consider during the IANA transition. Some of these issues depend on where ICANN, the PTI and the root zone maintainer are situated, some depend on the location of the office in question, and still others depend on contracts that ICANN enters into. ICANN’s new bylaws state that it will be situated in California, and the post-transition IANA entity’s bylaws also make Californian jurisdiction integral to its functioning. As an alternative, the Centre for Internet & Society has called for the “jurisdictional resilience” of ICANN, encompassing three crucial points: legal immunity for core technical operators of Internet functions (as opposed to policymaking venues) from legal sanctions or orders from the state in which they are legally situated; division of core Internet operators among multiple jurisdictions; and jurisdictional division of policymaking functions from technical implementation functions.

Transparency is also key to engaging meaningfully with ICANN. CIS has filed the most Documentary Information Disclosure Policy (DIDP) requests with ICANN, covering a range of subjects including its relationships with contracted parties, financial disclosure, revenue statements, and harassment policies. Asvatha Babu, an intern at CIS, analysed all responses to our requests and found that only 14% of our requests were answered fully. 40% of our requests had no relevant answers disclosed at all (these were mostly to do with complaints and contractual compliance). To illustrate the importance of engaging with ICANN transparency, CIS has focused on understanding ICANN’s sources of income since 2014. This is because we believe that conflict of interest can only be properly understood by following the money in a granular fashion. This information was not publicly available, and in fact, it seemed like ICANN didn’t know where it got its money from, either. It is only through the DIDP process that we were able to get ICANN to disclose sources of income, and figures along with those sources, for a single financial year.

ICANN prides itself on being transparent and accountable, but in reality it is not. The most often used exception to avoid answering DIDP requests has been “Confidential business information and/or internal policies and procedures”, which in itself is a testament to ICANN’s opacity. Another condition for non-disclosure allows ICANN to reject “Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual”. These exemptions are not only vague, they are also extremely subjective: again, demonstrative of the need for enhanced accountability and transparency within ICANN. Key issues have not been addressed even at the time that the transition is formally underway. The grounds for denying DIDP requests are still vague and wide, effectively giving ICANN the discretion to decline answering difficult questions, which is unacceptable from an entity that is at the center of the multi-billion dollar domain name industry.

ICANN’s jurisdictional resilience and enhanced accountability are particularly vital for countries in Asia. Its policies, processes and functioning have historically been skewed towards western and industry interests, and ICANN can neither be truly global nor multistakeholder till such countries can engage meaningfully with it in a transparent fashion. The IANA transition is, of course, largely political, and may symbolise a transition to the global multistakeholder community, but in reality, it changes very little, if anything.

MLATs and the proposed Amendments to the US Electronic Communications Privacy Act

by Vipul Kharbanda and Elonnai Hickok — last modified Dec 28, 2016 01:09 AM
In continuance of our blog post on mutual legal assistance treaties (MLATs), we examine a new approach to international bilateral cooperation being suggested in the United States, by creating a mechanism for certain foreign governments to directly approach the data controllers.

Published under Creative Commons License CC BY-SA. Anyone can distribute, remix, tweak, and build upon this document, even for commercial purposes, as long as they credit the creator of this document and license their new creations under the terms identical to the license governing this document.


In the previous article on MLATs we discussed, in some detail, what MLATs are and why they are needed. One area briefly focused upon in that article was the limitations and criticisms of the MLAT mechanism, chief among them the problems caused by differing legal standards across jurisdictions and the time taken to process a request for information sent from one country to another. In the United States specifically, where most internet companies are headquartered and hold large amounts of data, it typically takes months to process requests under MLATs, and foreign governments often struggle to comprehend and comply with the legal standards in the United States for obtaining data for use in their investigations.[1] The requirement that a government must seek permission from, and comply with the legal requirements of, another country simply because the data it needs happens to be controlled by a service provider based there strikes many foreign law enforcement officials as damaging to security and law enforcement efforts, especially when they are requesting data pertaining to a crime between two of their own citizens that primarily took place on their own soil.[2]

These inefficiencies of the MLAT process lead to further problems, with foreign governments attempting to apply their search and surveillance laws in an extraterritorial manner. For example, in 2014 the UK passed the Data Retention and Investigatory Powers Act, 2014, which gives the government the power to directly access data from foreign service providers if the data is sought for specific purposes and the request is approved by the Secretary of State or another specified executive branch official.[3] Another possible response is that courts in foreign states, frustrated by the inefficiencies of the existing systems, start assuming extraterritorial jurisdiction, as happened when a District Court in Vishakhapatnam restrained Google from complying with a subpoena issued by the Superior Court of California ordering Google to share the password of the Gmail account belonging to an Indian citizen residing in Vishakhapatnam.[4]

Solution proposed in the United States

In order to overcome these inefficiencies, at least in the American context, the Department of Justice has proposed legislation that seeks to streamline the process by which foreign governments obtain information from US-based entities, by amending the provisions of the Electronic Communications Privacy Act (ECPA) of the United States (the “Amendment”). These amendments have been proposed primarily to effectuate a proposed bilateral agreement between the US and the UK, whereby the UK government would be able to approach US companies directly with requests for information without going through the MLAT process or obtaining an order from a US court.

The Amendment seeks to ensure that requests from foreign governments for information from US entities are answered smoothly, by including those requests in the process for seeking information under the ECPA itself. This move would, no doubt, make it easier for foreign governments to access data in the US, but it can be criticised on the ground that it would then allow all states, irrespective of their legal standards on privacy and related matters, to get access to such information. The Amendment addresses this problem by adding a new section to Title 18 which would allow the Attorney General, with the concurrence of the Secretary of State, to certify to Congress that the legal standards of the contracting state being given access to the mechanism under the ECPA satisfy certain requirements specified in the chapter (discussed below). Only after Congress has received such a certification would a contracting state be able to receive the benefits granted under the Amendment.

It is important to note that the US administration is looking to use the US-UK Agreement as a template for similar potential agreements with a number of other countries, wherein agencies in those countries could request information from US-based entities through court orders under a properly specified legal framework. Though to our knowledge India has not been formally approached by the US government to enter into such an agreement, it is important to ask, if it were approached:

  1. Does India's present legal system meet the standards laid down in the amendment to the ECPA?
  2. And if it does, should India also seek to enter into such an Agreement with the United States?
  3. And if India does, what could be the implications for citizens and for countries in a similar position as India?

We hope to be able to answer the above three questions, or at least throw some light on them, in the conclusion of this paper by relying upon the discussions contained herein.

Criticisms of the Amendment

While such a mechanism may be very effective in addressing the needs of security agencies in investigation and prevention of criminal activities, one cannot accept such an overarching change in cross border enforcement without analyzing the consequences that such a proposal will have on the right to privacy. Some of these consequences have been highlighted by experts responding to the amendment:

Lack of Judicial Authorisation: The Amendment requires only that foreign governments have a process whereby a person could seek post-disclosure review by an independent entity, instead of a warrant issued by a court.[5] Although a court order is not the norm for interception even under Indian law, American law currently extends this protection to data held by American companies even where the data belongs to Indian citizens, and this protection will no longer be available if the Amendment is passed.

Vague Standard for Requests: Under the domestic law of any state there is usually a large body of jurisprudence regarding when search orders can be issued, such as the “probable cause” standard followed in the United States or similar standards followed in other jurisdictions. This ensures that even when the wording of the law is not precise, which it cannot be for such a subjective issue, there is still some clarity around when and under what circumstances such warrants may be issued. In contrast, the Amendment requires that orders be based on “requirements for a reasonable justification based on articulable and credible facts, particularity, legality, and severity regarding the conduct under investigation.” Although this language may seem reasonable, in the absence of any jurisprudence backing it, it is vague and susceptible to misuse.

Disclosure without a Warrant: Under the current MLAT process as followed in the United States, a judge in the U.S. must issue a warrant based on probable cause in order for a U.S. company to turn over content to a foreign government. This requirement protects individuals abroad by requiring their governments to meet certain standards when seeking information held by U.S. companies. The Amendment seeks to remove this essential safeguard of a judicial warrant: it does not require requests from foreign governments to be based on prior judicial authorisation, since a large number of countries (including India) do not always require judicial orders for such requests.[6]

Allows Real-Time Surveillance by Foreign Governments: American privacy rights activists have raised the concern that the Amendment would allow foreign governments to conduct ongoing surveillance by asking American companies to turn over data in real time. The requirements that foreign governments would have to fulfil to execute such an order are less stringent than those which American security agencies must fulfil if they want to engage in similar activities. When the U.S. government wants to conduct real-time surveillance, it must comply with the Wiretap Act, which imposes heightened privacy protections.[7] The court orders for this purpose also require minimisation of irrelevant information, are strictly time-limited, are only available for certain serious crimes, etc.[8] Under Indian law, any such request, apart from being time-limited and being available only for certain specified purposes, must also establish that interception is the only reasonable means of acquiring such information.

Process to determine which countries can make demands is not credible: Under the Amendment, the Attorney General and the Secretary of State would decide whether the laws and practices of a foreign government adequately meet the standards set forth in the legislation for entering into a bilateral agreement. Their decisions would not be subject to review by a court or through any administrative procedure. They could make their determinations based on information that is not available to the public, and the criteria for making the decision are vague and flexible. Further, these criteria have been described as “factors” and not “requirements”,[9] so that even if some of them are not satisfied, the certification process can still be completed.

Companies do not have the resources to determine if a request complies with the terms of the agreement: The Amendment does not provide any oversight to ensure that technology companies are only turning over information permitted under a specific bilateral agreement. For example, a bilateral agreement may permit disclosure of information only in response to orders that do not discriminate on the basis of religion; however, it may not be possible for the companies receiving the request to determine whether a particular request complies with that condition or not. The Amendment does not require that individual companies put in place requisite processes to weed out requests that may be non-compliant with the provisions of the agreement; nor are there periodic audits to ensure that companies are properly responding to foreign government information requests.[10]

Non compliance with Human Rights Standards: Under international human rights law, governments are allowed to conduct surveillance only based on individualized and sufficient suspicion; authorized by an independent and impartial decision-maker; necessary and proportionate to achieve a legitimate aim, including by being the least intrusive means possible.[11] However the mechanism proposed by the Amendment falls woefully short of these standards.[12]

One must not lose sight of the fact that most of the criticisms of the proposal that have been discussed above have been made in the context of, and based on the standards of privacy protection that are available to American citizens. If we look at it from an Indian perspective most of those protections are not available to Indian citizens in any case since independent judicial oversight is not a sine qua non for access to information by the security agencies in India. Although the Amendment leaves open the question of how a request would be made by the foreign government to the individual Agreements, it may be safe to assume that were India to enter into such an Agreement with the United States, it would require the orders for access to comply with the standards laid down under Indian law before the relevant authorities send the request to the US based data controllers. At the least, this would ensure that the rights of Indian citizens currently guaranteed under Indian law, howsoever flawed they might be, would in all likelihood be safeguarded as per Indian law.

Certification from the Attorney General to the US Congress

Against this background, if India were to enter into the agreement with the U.S. Government, then apart from actually negotiating and signing that Agreement, the Indian government would also have to ensure (if the Amendment is passed) that the Attorney General of the United States, with the concurrence of the Secretary of State, gives a certificate to Congress that Indian law satisfies the requirements set forth in the proposed section XXXX of Title 18.

It must be kept in mind that if the negotiations between India and the United States in this regard reach such a mature stage that the certification from the Attorney General is required, then that would mean that there is enough political will on both sides to ensure that such an arrangement actually comes to fruition. In this context it would not be unfair to assume that the Attorney General may have a slight bias towards opining that Indian laws do conform to the requirements of the Amendment, as the Attorney General would want to support the decision taken by the administration, and our analysis shall have a similar bias in order to be more contextual.

The certification would, inter alia, contain the determination of the Attorney General:

  • That the domestic law of India affords robust substantive and procedural protections for privacy and civil liberties in light of the data collection and activities of the Indian government that will be subject to the agreement. It should be noted that the Amendment specifies various factors that should be taken into account to reach such a determination, which include whether the Indian government:

  • has adequate substantive and procedural laws on cybercrime and electronic evidence, as demonstrated through accession to the Budapest Convention on Cybercrime, or through domestic laws that are consistent with the definitions and requirements set forth in Chapters I and II of that Convention;

Although India is not a signatory to the Budapest Convention, the Information Technology Act, 2000 (the main legislation dealing with cybercrime) contains penal provisions that borrow heavily from the provisions of the Budapest Convention.

  • demonstrates respect for the rule of law and principles of nondiscrimination;

The provisions of Articles 14 and 21 of the Constitution of India demonstrate that the legal regime in India is committed to the rule of law and the principle of non-discrimination.

  • adheres to applicable international human rights obligations and commitments or demonstrates respect for international universal human rights (including but not limited to protection from arbitrary and unlawful interference with privacy; fair trial rights; freedoms of expression, association and peaceful assembly; prohibitions on arbitrary arrest and detention; and prohibitions against torture and cruel, inhuman, or degrading treatment or punishment);

India is a signatory to a number of international human rights conventions and treaties: it has acceded to the International Covenant on Civil and Political Rights (ICCPR), 1966 and the International Covenant on Economic, Social and Cultural Rights (ICESCR), 1966; ratified the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), 1965 with certain reservations; and signed the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), 1979 with certain reservations, the Convention on the Rights of the Child (CRC), 1989, and the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT), 1984. Further, the right to life guaranteed under Article 21 of the Constitution takes within its fold a number of human rights, such as the right to privacy. Freedom of expression, the right to a fair trial, freedom of assembly, and the rights against arbitrary arrest and detention are all fundamental rights guaranteed under the Constitution of India.

  • has clear legal mandates and procedures governing those entities of the foreign government that are authorized to seek data under the executive agreement, including procedures through which those authorities collect, retain, use, and share data, and effective oversight of these activities;

India has a number of legislations governing interception and requests for information, such as the Information Technology Act, 2000, the Indian Telegraph Act, 1885, and the Code of Criminal Procedure, 1973, which put in place mechanisms governing the authorities and entities that can ask for information.

  • has sufficient mechanisms to provide accountability and appropriate transparency regarding the government’s collection and use of electronic data; and

The Right to Information Act, 2005 gives citizens the right to access any public document unless access is prohibited under the specific exemptions provided in the Act. It may be noted here that the provisions of the Right to Information Act are often frustrated by the bureaucracy through exceptions such as “national security”. However, for the purposes of this write-up we are already assuming a bias towards fulfilment of these factors, and therefore, as long as there is even some evidence of compliance, the conditions will be considered fulfilled by the Attorney General for the purposes of his certificate.

  • demonstrates a commitment to promote and protect the global free flow of information and the open, distributed, and interconnected nature of the Internet.

The Telecom Regulatory Authority of India, which regulates telecom services in India, has also issued the Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016, which prohibit service providers from charging discriminatory tariffs for data services on the basis of content.

Other than Indian law, the Attorney General's certificate will also have to cover certain issues that would need to be addressed in the bilateral agreement itself, viz.:

  • That the Indian government has adopted appropriate procedures to minimize the acquisition, retention, and dissemination of information concerning United States persons subject to the agreement.
  • That the agreement requires the following with respect to orders subject to the agreement:

(i) The Indian government may not intentionally target a United States person or a person located in the United States, and must adopt targeting procedures designed to meet this requirement;

(ii) The Indian government may not target a non–United States person located outside the United States if the purpose is to obtain information concerning a United States person or a person located in the United States;

(iii) The Indian government may not issue an order at the request of or to obtain information to provide to the United States government or a third-party government, nor shall the Indian government be required to share any information produced with the United States government or a third-party government;

(iv) Orders issued by the Indian government must be for the purpose of obtaining information relating to the prevention, detection, investigation, or prosecution of serious crime, including terrorism;

(v) Orders issued by the Indian government must identify a specific person, account, address, or personal device, or any other specific identifier as the object of the Order;

(vi) Orders issued by the Indian government must be in compliance with the domestic laws of India, and any obligation for a provider of an electronic communications service or a remote computing service to produce data shall derive solely from Indian law;

(vii) Orders issued by the Indian government must be based on requirements for a reasonable justification based on articulable and credible facts, particularity, legality, and severity regarding the conduct under investigation;

(viii) Orders issued by the Indian government must be subject to review or oversight by a court, judge, magistrate, or other independent authority;

(ix) Orders issued by the Indian government for the interception of wire or electronic communications, and any extensions thereof, must be for a fixed, limited duration; interception may last no longer than is reasonably necessary to accomplish the approved purposes of the order; and orders may only be issued where that same information could not reasonably be obtained by another less intrusive method;

(x) Orders issued by the Indian government may not be used to infringe freedom of speech;

(xi) The Indian government must promptly review all material collected pursuant to the agreement and store any unreviewed communications on a secure system accessible only to those trained in applicable procedures;

(xii) The Indian government must segregate, seal, or delete, and not disseminate material found not to be information that is, or is necessary to understand or assess the importance of information that is, relevant to the prevention, detection, investigation, or prosecution of serious crime, including terrorism, or necessary to protect against a threat of death or serious bodily harm to any person;

(xiii) The Indian government may not disseminate the content of a communication of a U.S. person to U.S. authorities unless the communication (a) may be disseminated pursuant to Section 4(a)(3)(xii) and (b) relates to significant harm, or the threat thereof, to the United States or U.S. persons, including but not limited to crimes involving national security such as terrorism, significant violent crime, child exploitation, transnational organized crime, or significant financial fraud;

(xiv) The Indian government must afford reciprocal rights of data access to the United States government;

(xv) The Indian government must agree to periodic review of its compliance with the terms of the agreement by the United States government; and

(xvi) The United States government must reserve the right to render the agreement inapplicable as to any order for which it concludes the agreement may not properly be invoked.

Conclusion

It is clear from the discussion above that the proposed Amendment is a controversial piece of legislation which will affect the way law enforcement is carried out on the internet. While there is no doubt that an alternative to the existing inefficient MLAT structure is the need of the hour, whether the mechanism proposed in the Amendment, with all its negative implications for privacy, is the right way forward is far from certain.

As for the three questions that we set out to answer at the beginning of this paper, we would not go so far as to say that Indian law definitely conforms to all the requirements listed in the Amendment, but it can safely be said that, if the governments of India and the United States so wish, it would not be difficult for the Attorney General of the United States to give the certification to Congress required under the proposed Amendment.

The other two questions, whether India should opt for such an arrangement if given the chance and what the consequences would be for its people, are related, in the sense that it is only by examining the consequences for its citizens that we can decide whether India should opt for such an arrangement. The level of protection offered to Indian citizens under Indian law, in terms of shielding their private data from government surveillance, is lower than that offered to American citizens under American law. The growing influence of the internet is also changing the citizen-state dynamic: more and more private data of individual citizens is being uploaded to the internet and controlled by private actors such as telecom companies and social media sites, so governments increasingly have to approach these private actors when they want access to this information in order to carry out their function of providing security. The fact that the government has to approach private actors for data gives private citizens some leverage to ask for better privacy protections in the context of state surveillance.

Although the proposed Amendment may not affect domestic surveillance laws in India, it would definitely affect the way citizens' data is protected and accessed by the government.


[1] Explanation by the Assistant Attorney General attached to the proposed Amendment.

[2] https://www.justsecurity.org/24145/u-s-u-k-data-sharing-treaty/

[3] https://www.justsecurity.org/24145/u-s-u-k-data-sharing-treaty/

[4] http://spicyip.com/2012/04/clash-of-courts-indian-district-court.html

[5] https://www.justsecurity.org/32529/foreign-governments-tech-companies-data-response-jennifer-daskal-andrew-woods/

[6] https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing

[7] https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing

[8] https://www.justsecurity.org/32529/foreign-governments-tech-companies-data-response-jennifer-daskal-andrew-woods/

[9] https://www.justsecurity.org/32529/foreign-governments-tech-companies-data-response-jennifer-daskal-andrew-woods/

[10] https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing

[11] International Covenant on Civil and Political Rights, art. 17, Dec. 19, 1966, U.N.T.S 999, cf. https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing

[12] https://www.aclu.org/letter/aclu-amnesty-international-usa-and-hrw-letter-opposing-doj-proposal-cross-border-data-sharing

RBI Directions on Account Aggregators

by Vipul Kharbanda and Elonnai Hickok — last modified Oct 21, 2016 03:25 PM
The Reserve Bank of India's (RBI) Directions for account aggregator services in India lay great emphasis on data security by allowing only direct access between institutions and doing away with data scraping techniques.

These days, people manage their finances in diverse ways while dealing with a large number of financial service providers, each offering one or more services that the user may need, such as banking, credit card services, or investment services. This multiplicity of providers can make it inconvenient for users to keep track of their finances, since all the information is not available in one place. Account aggregators seek to solve this problem by presenting all the financial data of the user in a single place. Account aggregation is the consolidation of online financial account information (e.g., from banks, credit card companies, etc.) for online retrieval at one site. In a typical arrangement, an intermediary (e.g., a portal) agrees with a third-party service provider to provide the service to consumers; the intermediary then generally private-labels the service and offers consumers access to it at the intermediary's website.[1] There are two major ways in which account aggregation takes place: (i) direct access, wherein the account aggregator gets direct access to the user's data residing in the computer systems of the financial service provider; and (ii) scraping, where the user provides the account aggregator the usernames and passwords for its accounts with the different financial service providers, and the aggregator scrapes the information off their websites/portals.
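The difference between the two models can be made concrete with a short sketch. All class, method, and credential names below (Provider, api_get_balance, "valid-token", etc.) are hypothetical, invented purely for illustration; they do not correspond to any real banking API. The essential point is what each party must hold: an access token issued to the aggregator in the direct-access model, versus the user's own password in the scraping model.

```python
# Illustrative sketch only: all names here are hypothetical, not a real banking API.

class Provider:
    """A toy financial-service provider holding users' balances."""
    def __init__(self, balances):
        self._balances = balances  # e.g. {"alice": 1200}

    # Direct-access model: the provider exposes a purpose-built interface
    # that returns data to an authorised aggregator; no user password needed.
    def api_get_balance(self, user, access_token):
        if access_token != "valid-token":   # stand-in for real authorisation
            raise PermissionError("aggregator not authorised")
        return self._balances[user]

    # Scraping model: the aggregator logs in *as the user*, with the user's
    # own credentials, and reads whatever the user could see.
    def login_and_view(self, user, password):
        if password != "hunter2":           # the user's actual password
            raise PermissionError("bad credentials")
        return self._balances[user]

def aggregate_direct(providers, user, token):
    """Consolidate balances via each provider's API (no credential sharing)."""
    return sum(p.api_get_balance(user, token) for p in providers)

def aggregate_by_scraping(providers, user, password):
    """Consolidate balances by impersonating the user at each provider."""
    return sum(p.login_and_view(user, password) for p in providers)
```

The sketch also shows why scraping is the riskier design: the aggregator must hold and replay the user's real password at every provider, which is exactly what the Directions discussed below prohibit.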

Since account aggregation involves the use and exchange of financial information there could be a number of potential risks associated with it such as (i) loss of passwords; (ii) frauds; (iii) security breaches at the account aggregator, etc. It is for this reason that on the advice of the Financial Stability and Development Council,[2] the Reserve Bank of India (“RBI”) felt the need to regulate this sector and on September 2, 2016 issued the Non-Banking Financial Company - Account Aggregator (Reserve Bank) Directions, 2016 to provide a framework for the registration and operation of Account Aggregators in India (the “Directions”). The Directions provide that no company shall be allowed to undertake the business of account aggregators without being registered with the RBI as an NBFC-Account Aggregator. The Directions also specify the conditions that have to be fulfilled for consideration of an entity as an Account Aggregator such as:

  1. the company should have a net owned fund of not less than rupees two crore, or such higher amount as the Bank may specify;
  2. the company should have the necessary resources and wherewithal to offer account aggregator services;
  3. the company should have adequate capital structure to undertake the business of an account aggregator;
  4. the promoters of the company should be fit and proper individuals;
  5. the general character of the management or proposed management of the company should not be prejudicial to the public interest;
  6. the company should have a plan for a robust Information Technology system;
  7. the company should not have a leverage ratio of more than seven;
  8. the public interest should be served by the grant of certificate of registration; and
  9. any other condition that may be specified by the Bank from time to time.[3]

The Directions further set out the responsibilities of Account Aggregators, specifying duties such as: (a) providing services to a customer based on the customer's explicit consent; (b) ensuring that the provision of services is backed by appropriate agreements/authorisations between the Account Aggregator, the customer and the financial information providers; (c) ensuring proper customer identification; (d) sharing the financial information only with the customer or any other financial information user specifically authorised by the customer; and (e) having a Citizen's Charter explicitly guaranteeing protection of the rights of a customer.[4]

Account Aggregators are also prohibited from certain activities, such as: (a) supporting transactions by customers; (b) undertaking any business other than the business of account aggregation; (c) keeping or “residing” with itself the financial information of the customer accessed by it; (d) using the services of a third party to undertake its business activities; (e) accessing user authentication credentials of customers; and (f) disclosing or parting with any information it may come to acquire from/on behalf of a customer without the explicit consent of the customer.[5] The prohibition on accessed information actually residing with the Account Aggregator will ensure greater security and protection of the information.

Consent Framework

The Directions specify that the function of obtaining, submitting and managing the customer’s consent should be performed strictly in accordance with the Directions and that no information shall be retrieved, shared or transferred without the explicit consent of the customer.[6] The consent is to be taken in a standardized artefact, which can also be obtained in electronic form,[7] and shall contain details as to (i) the identity of the customer and optional contact information; (ii) the nature of the financial information requested; (iii) purpose of collecting the information; (iv) the identity of the recipients of the information, if any; (v) URL or other address to which notification needs to be sent every time the consent artefact is used to access information; (vi) Consent creation date, expiry date, identity and signature/ digital signature of the Account Aggregator; and (vii) any other attribute as may be prescribed by the RBI.[8] The account aggregator is required to inform the customer of all the necessary attributes to be contained in the consent artefact as well as the customer’s right to file complaints with the relevant authorities.[9] The customers shall also be provided an option to revoke consent to obtain information that is rendered accessible by a consent artefact, including the ability to revoke consent to obtain parts of such information.[10]
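The attributes of the consent artefact can be sketched as a simple record with a completeness check. The field names below are our own shorthand for the attributes listed above; the Directions do not prescribe this format, and the actual artefact schema is left to the RBI and ecosystem specifications.

```python
# Hypothetical shorthand for the consent-artefact attributes in Clause 6.3;
# not a prescribed format.
REQUIRED_FIELDS = {
    "customer_identity",      # (i) identity of the customer
    "financial_information",  # (ii) nature of the financial information requested
    "purpose",                # (iii) purpose of collecting the information
    "recipients",             # (iv) identity of the recipients, if any
    "notification_url",       # (v) address notified each time the artefact is used
    "creation_date",          # (vi) consent creation date...
    "expiry_date",            #      ...and expiry date
    "aggregator_signature",   #      signature/digital signature of the Account Aggregator
}

def validate_consent_artefact(artefact: dict) -> list:
    """Return a sorted list of required attributes missing from the artefact."""
    return sorted(REQUIRED_FIELDS - artefact.keys())
```

A financial information provider could run such a check before honouring a request, rejecting any artefact for which the returned list is non-empty.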

Comments: While the Directions have specific provisions regarding how financial data shall be dealt with, it is pertinent to note that the consent artefact itself also contains personal information, and it is not clear whether Account Aggregators are allowed to disclose that information to third parties or not.

Disclosure and sharing of financial information

Financial information providers such as banks, mutual funds, etc. are allowed to share information with account aggregators only upon being presented with a valid consent artifact and also have the responsibility to verify the consent as well as the credentials of the account aggregator.[11] Once the verification is done, the financial information provider shall digitally sign the financial information and transmit the same to the Account Aggregator in a secure manner in real time, as per the terms of the consent.[12] In order to ensure smooth flow of data, the Directions also impose an obligation on financial information providers to:

  • implement interfaces that will allow an Account Aggregator to submit consent artefacts, and authenticate each other, and enable secure flow of financial information;
  • adopt means to verify the consent including digital signatures;
  • implement means to digitally sign the financial information; and
  • maintain a log of all information sharing requests and the actions performed pursuant to such requests, and submit the same to the Account Aggregator.[13]
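The sign-then-verify step in the obligations above can be sketched as follows. As a simplification we use an HMAC over a shared key as a stand-in for the digital signature the Directions contemplate; a real implementation would use asymmetric signatures (e.g. RSA or ECDSA) so that the Account Aggregator can verify the provider's signature without holding any signing secret. The record layout is invented for illustration.

```python
import hashlib
import hmac
import json

# Sketch of the provider-side signing and aggregator-side verification of a
# financial-information record. HMAC is used here as a stand-in for a proper
# digital signature, purely to keep the example within the standard library.

def sign_financial_information(record: dict, key: bytes) -> str:
    """Serialise the record canonically and compute an integrity tag over it."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_financial_information(record: dict, signature: str, key: bytes) -> bool:
    """Recompute the tag and compare in constant time to detect tampering."""
    expected = sign_financial_information(record, key)
    return hmac.compare_digest(expected, signature)
```

Any modification of the record in transit changes the recomputed tag, so verification fails, which is the property the Directions' requirement for signed, securely transmitted information is meant to guarantee.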

Comments: The Directions provide that the Account Aggregator will not support any transactions by the customers and this seems to suggest that in case of any mistakes in the information the customer would have to approach the financial information provider and not the Account Aggregator.

Use of Information

The Directions provide that in cases where financial information has been provided by a financial information provider to an Account Aggregator for transferring the same to a financial information user with the explicit consent of the customer, the Account Aggregator shall transfer the same in a secure manner in accordance with the terms of the consent artefact only after verifying the identity of the financial information user.[14] Such information, as well as information which may be provided for transferring to the customer, shall not be used or disclosed by the Account Aggregator or the Financial Information user except as specified in the consent artefact.[15]

Data Security

The Directions specify that the business of an Account Aggregator will be entirely Information Technology (IT) driven, and Account Aggregators are required to adopt the IT framework and interfaces needed to ensure secure data flows from the financial information providers to their own systems and onwards to the financial information users.[16] This technology should be scalable to cover any other financial information or financial information providers that the RBI may specify in the future.[17] The IT systems should also have adequate safeguards to ensure they are protected against unauthorised access, alteration, destruction, disclosure or dissemination of records and data.[18] An Information System Audit of the internal systems and processes should be conducted at least once every two years by CISA-certified external auditors, whose report is to be submitted to the RBI.[19] Account Aggregators are prohibited from asking for or storing customer credentials (like passwords, PINs, or private keys) that may be used for authenticating customers to the financial information providers, and their access to customers' information is to be based only on consent-based authorisation, rather than on scraping.[20]

Grievance Redressal

The Directions require the Account Aggregator to put in place a policy for handling/ disposal of customer grievances/ complaints, which shall be approved by its Board and also have a dedicated set-up to address customer grievances/ complaints which shall be handled and addressed in the manner prescribed in the policy.[21] The Account Aggregator also has to display the name and details of the Grievance Redressal Officer on its website as well as place of business.[22]

Supervision

The Directions require Account Aggregators to put in place various internal checks and balances to ensure that their business does not violate any laws or regulations, such as the constitution of an Audit Committee, a Nomination Committee to ensure the “fit and proper” status of its Directors, and a Risk Management Committee, as well as the establishment of a robust and well-documented risk management framework.[23] The Risk Management Committee is required to (a) give due consideration to factors such as reputation, customer confidence, consequential impact and legal implications with regard to investment in controls and security measures for computer systems, networks, data centres, operations and backup facilities; and (b) have oversight of technology risks and ensure that the organisation's IT function is capable of supporting its business strategies and objectives.[24] Further, the RBI has the power to inspect any Account Aggregator at any time.[25]

Penalties

The Directions themselves do not provide for any penalties for non-compliance. However, since the Directions are issued under Section 45JA of the Reserve Bank of India Act, 1934 (“RBI Act”), any contravention of them will be punishable under Section 58B of the RBI Act, which provides for imprisonment of up to three years as well as a fine for contravention of such directions.

Conclusion

The RBI's Directions impose a number of regulations and checks on Account Aggregators with a view to ensuring the safety of customer financial data. These Directions appear to be quite trendsetting: in most other jurisdictions, such as the United States or even Europe, there are no specific regulations governing account aggregators, and their activities are mainly governed under existing privacy or consumer protection legislation.[26]

The entire regulatory regime for Account Aggregators seems to suggest that the RBI wants Account Aggregators to be like funnels to channel information from various platforms right to the customer (or financial information user) and it does not want to take a chance with the information actually residing with the Account Aggregators. Further, by prohibiting Account Aggregators from accessing user authentication credentials, the RBI is trying to eliminate the possibility of this information being leaked or stolen. Although this may make it more onerous for Account Aggregators to provide their services, it is a great step to ensure the safety and security of customer data.

In recent months the RBI has been actively engaging with the various new products being introduced in the financial sector owing to technological advancements, be it the circular informing the public about the risks of virtual currencies including Bitcoin, the consultation paper on P2P lending platforms, or these guidelines on Account Aggregators. These actions suggest that the RBI is well aware of the technological advancements in the financial sector and is keeping a keen eye on these technologies and products, while taking a cautious and measured approach to dealing with them.


[1] Ann S. Spiotto, Financial Account Aggregation: The Liability Perspective, Fordham Journal of Corporate & Financial Law, 2006, Volume 8, Issue 2, Article 6, available at http://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1181&context=jcfl

[2] https://rbi.org.in/scripts/BS_PressReleaseDisplay.aspx?prid=34345

[3] Clause 4.2.2 of the Directions.

[4] Clause 5 of the Directions.

[5] Clause 5 of the Directions.

[6] Clauses 6.1 and 6.2 of the Directions.

[7] Clause 6.4 of the Directions.

[8] Clause 6.3 of the Directions.

[9] Clause 6.5 of the Directions.

[10] Clause 6.6 of the Directions.

[11] Clauses 7.1 and 7.2 of the Directions.

[12] Clauses 7.3 and 7.4 of the Directions.

[13] Clause 7.5 of the Directions.

[14] Clause 7.6.1 of the Directions.

[15] Clause 7.6.2 of the Directions.

[16] Clause 9(a) of the Directions.

[17] Clause 9(c) of the Directions.

[18] Clause 9(d) of the Directions.

[19] Clause 9(f) of the Directions.

[20] Clause 9(b) of the Directions.

[21] Clauses 10.1 and 10.2 of the Directions.

[22] Clause 10.3 of the Directions.

[23] Clauses 12.2, 12.3 and 12.4 of the Directions.

[24] Clause 12.4 of the Directions.

[25] Clause 15 of the Directions.

[26] http://www.canadiancybersecuritylaw.com/2016/07/german-regulator-finds-banks-data-rules-impede-non-bank-competitors/

How Long Have Banks Known About The Debit Card Fraud?

by tiwari — last modified Oct 22, 2016 08:06 AM
The recent security breach at an Indian payment switch provider, confirmed earlier this week by the National Payments Corporation of India Ltd (NPCI), has forced domestic banks into damage control mode over the past few days.

The article was published by Bloomberg on October 22, 2016.


The breach was detected when various customers began to lodge complaints with their banks about unauthorised transactions on their accounts, which upon investigation were found to originate from foreign locations such as China. The security breach has directly affected at least 641 customers to the tune of Rs 1.8 crore, with lakhs more being affected by the pro-active measures (including card revocation) being taken by banks to prevent further financial losses.

Surprisingly little is known, however, about the nature of the attack responsible for the breach, the extent and scope of the damage it has caused, and the sufficiency of the countermeasures being initiated by the banks. This article discusses these aspects of the attack and also suggests measures that can be taken to minimise harm and prevent such attacks in the future.

The Modus Operandi

According to reports, the compromise may have happened at the level of Hitachi Payment Services, a payment services provider which operates, among other financial services, ATMs for a variety of banks across the country. One or more ATMs were apparently compromised by malware, which then infected the payment service provider's network, giving the malware a far larger potential target area than just the physical ATMs. The malware could have reached the payment switch provider by being physically uploaded onto vulnerable ATM machines, which are known to run outdated embedded operating systems with various documented loopholes that are rarely patched. The malware could then have recorded the details of the cards used on the infected ATMs (or even on the network generally) and, via the same compromised network, transmitted confidential details, including ATM PINs and CVV numbers, to its operators.

Malware

The attack could also have originated from some other vulnerable part of the payment network, such as a payment switch within a bank itself. That would make it far more dangerous, as the malware may still be active on parts of the network within the bank, with access to a far wider range and variety of information than a mere ATM. There is no real way to know whether the threat has even been contained, let alone neutralised, as the audits being carried out by PCI DSS authorised agencies have been ongoing for the past month and their reports are not due for at least another 15 days, as intimated by NPCI.

Massive Financial Implications

The compromise of these details, regardless of its source, has massive financial implications. Various international services allow debit/credit cards to be used with only the card number, expiry date, name and CVV number; they do not require an ATM PIN or an OTP (one-time password) sent to a mobile phone for online transactions. In fact, unlike in India, where the RBI mandates OTPs for debit card transactions, this simplified CVV-based usage is the standard practice for online card payments in most of the developed world.

This means that merely changing ATM PINs, something SBI alleges less than 7 percent of its customers had done before all 6 lakh cards were blocked, would offer almost no protection if the cards are enabled for international online transactions. The fact that most of the dubious, unauthorised transactions are originating from foreign locations suggests that it is these internationally enabled cards that are being targeted in this attack.

Are Banks Concealing Information?

The absence of data/security breach laws in India is being sharply felt, as there has been an abject lack of clarity and information from the banking sector and the government regarding the attack. Over 47 states in the USA and most countries in the EU have enacted strict data security breach laws that mandate public intimation and disclosure of key information about an attack, along with detailed containment measures. Such a law in India would have gone a long way towards preventing the breach from staying under wraps for so long (it occurred at the bank level in September, almost a month ago) and would also have ensured far more vigilant compliance by corporations and banks with international security standards and best practices. For now, the only true countermeasure to prevent further harm to affected card holders is for banks to revoke all affected cards and issue new ones to affected customers.

Constant vigilance and comprehensive security audits by banks to detect affected cards, along with active protection for customers using financial and identity insurance services such as AllClear ID Plus (used by Sony after the 2011 PlayStation hack), will go a long way in mitigating the harm of the breach. The banking industry, government and security agencies should all learn from this breach; a combination of new legislation, updated industry practices and consumer awareness is necessary for both proactive and reactive action in the future.

Request for Specifics: Rebuttal to UIDAI

by Hans Varghese Mathews — last modified Oct 30, 2016 03:06 PM
Responding to the Unique Identification Authority of India’s article that found “serious mathematical errors” in “Flaws in the UIDAI Process” (EPW 12 March 2016), the main mathematical argument used to arrive at the number of duplicates in the biometric database is explained.

The article was published in the Economic & Political Weekly on September 3, 2016, Vol.51, Issue No.36.


The author of a technical paper will be alarmed when he is convicted of “serious mathematical errors” by someone who has not bothered himself with “going too deep into the mathematics” used. The man must possess miraculous powers of divination one feels: fears rather. The UIDAI seems to have even such formidable diviners in their employ: who have dismissed just so peremptorily, in their rebuttal, the calculations made in my paper titled Flaws in the UIDAI process. The paper appeared in the issue of this journal dated to February 27 of this year. The rebuttal was published in the issue dated to the 12th of March. The interested reader can confirm that I have only repeated what was said there. The rebuttal does not specify, in any way, the mathematical mistakes I am supposed to have made. So I shall rehearse the relevant calculations very broadly: and the experts of the UIDAI will then exhibit, I trust, the specific mistakes they impute to me.[*]


[*] My reply to the UIDAI's attempted rebuttal was sent to the EPW a few days after the rebuttal appeared in print, and was published as a "web exclusive" article in Volume 51, Issue No. 36 of the EPW, on September 3, 2016.

Read the Full Article

If the DIDP Did Its Job

by Asvatha Babu — last modified Nov 07, 2016 12:57 PM

Over the course of two years, the Centre for Internet and Society sent 28 requests to ICANN under its Documentary Information Disclosure Policy (DIDP). A part of ICANN’s accountability initiatives, DIDP is “intended to ensure that information contained in documents concerning ICANN's operational activities, and within ICANN's possession, custody, or control, is made available to the public unless there is a compelling reason for confidentiality.”

 

Through the DIDP, any member of the public can request information contained in documents from ICANN. We’ve written about the process here, here and here. As a civil society group that does research on internet governance-related topics, CIS had a variety of questions for ICANN. The 28 DIDP requests we have sent cover a range of subjects: from revenue and financial information to ICANN’s relationships with its contracted parties, its contractual compliance audits, harassment policies and the diversity of participants in its public forum. We have blogged about each DIDP request and summarized ICANN’s responses.

 

Here are the DIDP requests we sent in, filed in batches in December 2014, January/February 2015, August/September 2015, November 2015, and April/May 2016:

  • ICANN meeting expenditure
  • Revenue from gTLD auction
  • Implementation of NETmundial principles
  • IANA transition postponement
  • Board Governance Committee Reports
  • Granular revenue statements
  • Globalisation Advisory Groups
  • Raw data - Granular income data
  • Presumptive renewal of registries
  • Diversity Analysis
  • ICANN cyber attacks
  • Organogram
  • Compliance audits - registries
  • ICANN-RIR relationship
  • Compliance audits
  • Implementation of NETmundial outcome document
  • Involvement in NETmundial Initiative
  • Compliance audits - registrars
  • Harassment policy
  • Complaints to ICANN ombudsman
  • RIR contract fees
  • Registrar abuse contact
  • DIDP statistics *
  • Verisign contractual violations
  • gTLD applicant support program
  • Contractual auditors
  • Root Zone Maintenance agreements
  • Internal website

ICANN’s responses were analyzed and rated from 0 to 4 based on the amount of information disclosed. The reasons given for the lack of full disclosure were also studied.

 

DIDP response ratings

  • 0: No relevant information disclosed
  • 1: Very little information disclosed; DIDP preconditions and/or other reasons for non-disclosure used
  • 2: Partial information disclosed; DIDP preconditions and/or other reasons for non-disclosure used
  • 3: Adequate information disclosed; DIDP preconditions and/or other reasons for non-disclosure used
  • 4: All information disclosed

ICANN has defined a set of preconditions under which it is not obligated to answer a request, and it invokes these generously to justify less-than-comprehensive answers. The wording of the policy also allows ICANN to avoid answering a request if it does not already have the relevant documents in its possession. We also classified the responses by the number of times each DIDP condition for non-disclosure was invoked. As we will see, these conditions weaken ICANN’s accountability initiatives.

 

[Chart: ratings of ICANN's responses to CIS's 28 DIDP requests]

Of the 28 DIDP requests, only 14% were answered fully, without the use of the DIDP conditions of non-disclosure. Seven out of 28, or 25%, received a 0-rated answer, which reflects extremely poorly on the DIDP mechanism itself. Of the 7 responses that received a 0-rating, 4 were related to complaints and contractual compliance: we had asked for details of the complaints received by the ombudsman, of contractual violations by Verisign, and of the abuse contacts maintained by registrars for filing complaints. We received no relevant information.
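As a quick sketch of the arithmetic, the percentages follow directly from the reported counts (4 fully answered and 7 zero-rated responses out of 28); the ratings assigned to the remaining requests below are illustrative placeholders, not published figures.

```python
# Tally of CIS's 28 DIDP requests by response rating (0-4).
# Only the 4-rated and 0-rated counts are reported in the post;
# the remaining requests get a placeholder mid-range rating.
from collections import Counter

TOTAL_REQUESTS = 28
ratings = Counter({4: 4, 0: 7})                       # fully answered; nothing disclosed
ratings[2] = TOTAL_REQUESTS - sum(ratings.values())   # assumed partial answers

def pct(n: int) -> int:
    """Share of all requests, as a rounded percentage."""
    return round(n / TOTAL_REQUESTS * 100)

print(f"fully answered (rating 4): {pct(ratings[4])}%")  # 14%
print(f"zero-rated (rating 0): {pct(ratings[0])}%")      # 25%
```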

 

We have written earlier about the extensive and broad nature of the 12 conditions for non-disclosure that ICANN uses. These conditions were invoked in 24 of the 28 responses: ICANN avoided fully answering over 85% of the DIDP requests it received from CIS. This is alarming, especially for an organization that claims to be fully transparent and accountable. The conditions for non-disclosure are listed in this document and can be referred to while reading the following graph.

 

Reading the conditions for non-disclosure, it seems ICANN can refuse to answer any DIDP request it wishes. The exclusions are numerous and vaguely worded, and cover a broad range of information that should legitimately be in the public domain: correspondence, internal information, information related to ICANN’s relationships with governments, information derived from deliberations among ICANN constituents, information provided to ICANN by private parties and, the kicker, information that would be too burdensome for ICANN to collect and disseminate.

[Graph: number of times each DIDP condition for non-disclosure was invoked]

 

As we can see from the graph, the condition ICANN most often uses to refuse a DIDP request is condition F. Predictably, it is the most vaguely worded of the lot: “Confidential business information and/or internal policies and procedures.” It is up to ICANN to decide what information is confidential, with no justification needed or provided. ICANN used this condition 11 times in responding to our 28 requests.

 

It is also necessary to pay attention to condition L, which allows ICANN to reject “Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual.” This is perhaps the weakest point in the entire list, due to its subjective nature. First, by whose standards must an information request be reasonable? If the point of a transparency mechanism is to ensure that information sought by the public is disseminated, should ICANN be allowed to withhold information because it is too burdensome to collect? Even if that is fair given the time constraints of the DIDP mechanism, it must not be used as liberally as it has been. The last sub-point is perhaps the most subjective: if a staff member dislikes a particular requestor, it would justify refusing to answer a request regardless of its validity. This hardly seems fair or transparent. This condition was used 9 times across our 28 requests.

 

Besides the non-disclosure conditions, ICANN also has an excuse built into the definition of the DIDP. Since it is not obliged to create or summarize documents under the DIDP process, it can simply claim not to have the specific document requested and thus disclaim any responsibility to respond. This is what ICANN did with one of our requests for raw financial data. For our research, we needed raw data on ICANN's expenditure on staff and board members for travel and attendance at meetings. For an organization answerable to multiple stakeholders, including governments and the public, it is reasonable to expect that financial records of such items are kept in a systematic manner. We were therefore surprised to learn that ICANN does not store these in a form it can attach or publish; instead, it directed us to its audited financial reports, which did little for our research. Yet in response to our later request for granular data on revenue from domain names, ICANN explained that while it did not have such a document in its possession, it would create one. The distinction between the two requests seems arbitrary to us, since we consider both to be important to the public.

 

Nevertheless, there were some interesting outcomes from our experience filing DIDPs. We learnt that no substantive work has been done to inculcate the NETmundial principles at ICANN, that ICANN has no idea which regional internet registry contributes the most to its budget, and that it does not store (or is not willing to reveal) any raw financial data. These outcomes do not inspire confidence in the organization.

 

ICANN has an opportunity to reform this particular transparency mechanism at its Workstream 2 discussions. ICANN must make use of this opportunity to listen and work with people who have used the DIDP process in order to make it useful, effective and efficient. To that effect, we have some recommendations from our experience with the DIDP process.

 

That ICANN does not currently possess a particular document is not an excuse if it has the ability to create one. In its response to our questions on the IANA transition, ICANN indicated that it did not have the necessary documents because the multistakeholder body it set up is the one conducting the transition; that is somewhat justified. But in response to a request for financial details, ICANN must not be able to plead that it has no such document in its possession. It, and it alone, can create the document, and in response to a request from the public, it should.

 

ICANN must also revamp its conditions for non-disclosure and make them tighter. It must reduce the number of exclusions to its disclosure policy and ensure that exclusions are not applied arbitrarily. Specifically, with respect to condition F, ICANN must clarify how information comes to be classified as confidential and why that classification is distinct from everything else on the list of conditions.

 

Further, ICANN should not be able to use condition L to reject a DIDP request outright. Instead, there must be a way for the requester and ICANN to come to terms about the request: an extension of the one-month deadline, financial compensation by the requester for any expenditure incurred by ICANN in answering, or a compromise on the scope of the request. The sub-point about requests made “by a vexatious or querulous individual” must be removed from condition L, or at least separated from it, so that it is clear why a request for disclosure was denied.

 

ICANN should also set up a redressal mechanism specific to the DIDP. While ICANN has the Reconsideration Request process to rectify wrongdoing by staff or board members, this is not adequate for determining whether a DIDP request was rejected on justifiable grounds. A separate mechanism that deals only with DIDP requests and wrongful use of the non-disclosure conditions would be helpful. According to the ICANN bylaws, in addition to Requests for Reconsideration, ICANN has also established an independent third-party review of allegations against the board and/or staff. A similar mechanism solely for reviewing whether ICANN's refusal to answer a DIDP request is justified would be extremely useful.

 

A strong transparency mechanism must ensure that its objective is to provide answers, not to find ways to justify the lack of them. With this in mind, we hope that the revamp of transparency mechanisms after the Workstream 2 discussions leads to a better DIDP process than the one we are used to.

 

Internet's Core Resources are a Global Public Good - They Cannot Remain Subject to One Country's Jurisdiction

by Vidushi Marda last modified Nov 14, 2016 06:39 AM
This statement was issued by 8 Indian civil society organizations involved with internet governance issues, supported by 2 key global networks, to the meeting of ICANN in Hyderabad, India from 3 to 9 November 2016. The Centre for Internet & Society was one of the 8 organizations that drafted this statement.

Recently, the US gave up its role of signing entries to the Internet's root zone file, which represents the addressing system for the global Internet. This is about the Internet addresses that end with .com, .net, and so on, and the numbers associated with each of them that help us navigate the Internet. We thank and congratulate the US government for taking this important step in the right direction. However, the organisation that manages this system, ICANN,[1] a US non-profit, continues to be under US jurisdiction, and hence subject to its courts, legislature and executive agencies. Keeping such an important global public infrastructure under US jurisdiction is expected to become a very problematic means of extending US laws and policies across the world.

We the undersigned therefore appeal that urgent steps be taken to transition ICANN out of its current US jurisdiction. Only then can ICANN become a truly global organisation.[2] We would like to make it clear that our objection is not directed particularly against the US; we are simply against an important global public infrastructure being subject to a single country's jurisdiction.

Domain name system as a key lever of global control
A few new top level domains, like .xxx and .africa, are already under litigation in the US, where there is every chance that US law could interfere with ICANN's (global) policy decisions. Businesses in different parts of the world seeking top level domain names like .amazon or, hypothetically, .ghanaiancompany, will have to be mindful of the de facto extension of US jurisdiction over them. US agencies can nullify the allocation of such top level domain names, causing damage to a business similar to losing a trade name, plus losing all the 'connections', including email-based ones, linked to that domain name. Consider, for instance, the risks an Indian generic drugs company with a top level domain such as .genericdrugs would remain exposed to.

Sector-specific top level domain names like .insurance, .health, .transport, and so on are emerging, with clear rules for inclusion and exclusion. These can become de facto global regulatory rules for the sector concerned. .Pharmacy has been allocated to a US pharmaceutical group, which decides who gets domain names under it. Public advocacy groups have protested [3] that these rules will be employed to impose US drug-related intellectual property standards globally. Similar problematic possibilities can be imagined in other sectors; ICANN could set “safety standards”, as per US law, for obtaining .car.

Country domain names like .br and .ph remain subject to US jurisdiction. Some US private parties recently sought to have Iran's .ir seized because of alleged Iranian support for terrorism. Although the plea was turned down, another court in another case may decide otherwise. With the 'Internet of Things', almost everything in every country, including critical infrastructure, will be on the network. Other countries cannot feel comfortable having, at the core of the Internet's addressing system, an organisation that can be dictated to by one government.

ICANN must become a truly global body
Eleven years ago, in 2005, the Civil Society Internet Governance Caucus at the World Summit on the Information Society demanded that ICANN should “negotiate an appropriate host country agreement to replace its California Incorporation”.

A process is currently underway within ICANN to consider the jurisdiction issue. It is important that this process produces recommendations that enable ICANN to become a truly global body, appropriate to the governance of very important global public goods.

Below are some options, and there could be others, for how ICANN could move out of US jurisdiction.

  1. ICANN can get incorporated under international law. Any such agreement should make ICANN an international (not intergovernmental) body, fully preserving current ICANN functions and processes. This does not mean instituting intergovernmental oversight over ICANN.
  2. ICANN can distribute the core internet operators among multiple jurisdictions: ICANN (the policy body for Internet identifiers), PTI [4] (the operational body) and the Root Zone Maintainer could be spread across different jurisdictions. With three different jurisdictions over these complementary functions, the possibility of any single one being able to fruitfully interfere in ICANN's global governance role is minimized.
  3. ICANN can institute a fundamental bylaw that its global governance processes will brook no interference from US jurisdiction. If any such interference is encountered, the parameters of which can be clearly pre-defined, a process of shifting ICANN to another jurisdiction automatically sets in. A full set-up – with registered HQ, root file maintenance system, etc – will be kept ready as a redundancy in another jurisdiction for this purpose. [5] The chances are overwhelming that, given the existence of this bylaw and a fully workable exit option kept ready at hand, no US state agency, including its courts, would consider it meaningful to try to enforce its writ. This arrangement could therefore act in perpetuity as a guarantee against jurisdictional interference, without ICANN actually having to move out of the US.
  4. The US government can give ICANN jurisdictional immunity under the United States International Organisations Immunities Act. There is precedent for the US granting such immunity to non-profit organisations like ICANN. [6] Such immunity must be designed in a way that still ensures ICANN's accountability to the global community, protecting the community's enforcement power and mechanisms. It extends only to the application of US public law to ICANN's decisions, and not to private law as chosen by contracting parties. US registries/registrars can, with the assent of ICANN, choose the jurisdiction of any US state for adjudicating their contracts with ICANN; similarly, registries/registrars from other countries should be able to choose their respective jurisdictions for such contracts.

We do acknowledge that, over the years, there has been appreciable progress in internationalising participation in ICANN's processes, including participation from governments in the Governmental Advisory Committee. Positive as this is, however, it does not address the problem of a single country having overall jurisdiction over ICANN's decisions.

Issued by the following India-based organisations:

  • Centre for Internet and Society, Bangalore
  • IT for Change, Bangalore
  • Free Software Movement of India, Hyderabad
  • Society for Knowledge Commons, New Delhi
  • Digital Empowerment Foundation, New Delhi
  • Delhi Science Forum, New Delhi
  • Software Freedom Law Centre - India, New Delhi
  • Third World Network - India, New Delhi

Supported by the following global networks:

  • Association For Progressive Communications
  • Just Net Coalition


For any clarification or inquiries you may write to or call:


[1] Internet Corporation for Assigned Names and Numbers

[2] The “NetMundial Multistakeholder Statement” , endorsed by a large number of governments and other stakeholders, including ICANN and US government, called for ICANN to become a “truly international and global organization”.

[3] See, https://www.techdirt.com/articles/20130515/00145123090/big-pharma-firms-seeking-pharmacy-domain-to-crowd-out-legitimate-foreign-pharmacies.shtml

[4] Public Technical Identifier, a newly incorporated body to carry out the operational aspects of managing Internet's identifiers.

[5] This can be at one of ICANN's existing non-US global offices, or the location of one of the 3 non-US root servers. Section 24.1 of the ICANN Bylaws says, “The principal office for the transaction of the business of ICANN shall be in the County of Los Angeles, State of California, United States of America. ICANN may also have an additional office or offices within or outside the United States of America as it may from time to time establish”.

[6] E.g., the International Fertilizer Development Center was designated a public, nonprofit, international organisation by US Presidential Decree, granting it immunities under the United States International Organisations Immunities Act. See https://archive.icann.org/en/psc/corell-24aug06.html

How Workstream 2 Plans to Improve ICANN's Transparency

by Asvatha Babu — last modified Nov 11, 2016 10:05 AM
The Centre for Internet and Society has worked extensively on ICANN’s transparency policies. We are perhaps the single largest user of the Documentary Information Disclosure Policy. Our goal in doing so is not to be a thorn in ICANN’s side, but to ensure that ICANN, the organisation, as well as the ICANN community, have access to the data required to carry out the task of regulating the global domain name system.

The transparency subgroup of ICANN’s Workstream 2 dialogue is examining how the organization's transparency and accountability can be effectively improved. The main document under scrutiny at the moment is the draft Transparency Report, published a few days before the 57th ICANN meeting in Hyderabad.

The report begins with an acknowledgement of the value of taking tips from the Right to Information policies of other institutions and governments. My colleague Padmini Baruah had earlier written a blog post comparing the exclusion policy of ICANN’s DIDP and the Indian Government’s RTI, where she found that “the net cast by the DIDP exclusions policy is more vast even than that of a democratic state’s transparency law.”[1] The WS2 report not only discusses the DIDP process, but also ICANN’s proactive disclosures (with regard to lobbying etc.) and whistleblower policies. This article focuses solely on the first.

As our earlier blog posts have mentioned, CIS sent in 28 DIDP requests over the last two years. Our experience with the DIDP has been less than satisfactory, and we are pleased that DIDP reform was an important part of this subgroup's discussions. The report proposes some concrete structural changes to the DIDP process but skirts around some of the more controversial ones.

The recommendation to make the process of submitting requests clearer is a good one: there are currently no instructions on the follow-up process or on what ICANN requires of requestors. The report also recommends capping any extension of the original 30-day limit at an additional 30 days. While this is good, we further recommend that ICANN stay in touch with the requestor and assist them to the best of its ability; the correspondence should not be limited to a notification that an extension is required, and any clarifications sought by the requestor must be resolved by ICANN. We commend the report for pointing out that the status quo, in which there is no outer limit on extensions beyond the mandated 30 days, is problematic, as it allows ICANN staff to give lower priority to responding to DIDP requests. We strongly suggest that extensions be restricted to a maximum of 7 days beyond the 30-day period, after which liability should be strictly imposed on ICANN in the form of an individual fine, analogous to India’s RTI regime.[2]
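For concreteness, the timelines under discussion reduce to simple date arithmetic. In this sketch the filing date is hypothetical; the 30-day response window, the report's proposed 30-day cap on extensions, and our recommended 7-day cap are the figures discussed above.

```python
# Deadline arithmetic for a DIDP request, under the figures discussed above.
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=30)  # DIDP's mandated response period
REPORT_CAP = timedelta(days=30)       # WS2 report: extension capped at 30 more days
CIS_CAP = timedelta(days=7)           # CIS recommendation: cap extensions at 7 days

def due_dates(filed: date) -> tuple[date, date, date]:
    """Return (base deadline, latest under the report's cap, latest under CIS's cap)."""
    base = filed + RESPONSE_WINDOW
    return base, base + REPORT_CAP, base + CIS_CAP

# Hypothetical request filed on 1 November 2016:
base, ws2_latest, cis_latest = due_dates(date(2016, 11, 1))
print(base, ws2_latest, cis_latest)  # 2016-12-01 2016-12-31 2016-12-08
```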

One of the major areas of focus for this report and for our earlier analysis was the problematic nature of the exclusions to the DIDP. I had written that the conditions were "numerous, vaguely worded and contain among them a broad range of information that should legitimately be in the public domain.”[3] This is echoed by the report which calls for a deletion of two clauses that we found most used in denying our requests for information.

The report also calls into question the subjective nature of the last condition, which states that ICANN can deny information if it finds requests “not reasonable, excessive or overly burdensome, not feasible, abusive or vexatious or made by a vexatious or querulous individual.” As our blog posts show, we firmly believe that such a subjective condition has no place in a robust information disclosure policy. Requiring the Ombudsman’s consent to invoke it is a good first step. In addition, we strongly encourage that objective guidelines specifying when a requestor is considered “vexatious” be drawn up and made public.

The most disappointing aspect of this report is that it does not delve into details about having an independent party dedicated to reviewing the DIDP process to address grievances. We believe that this must not be left to the Ombudsman who cannot devote all their time to this process. We are of the opinion that an independent party would also be able to more effectively oversee the tracking and periodic review of the DIDP mechanism.

In conclusion, we believe that this report is a good start but does not comprehensively answer all of our issues with the DIDP process as it is. We look forward to more engagement with the Transparency subgroup to close all loopholes within the DIDP process.


[1] Padmini Baruah, Peering behind the veil of ICANN’s DIDP, (September 21, 2015), available at http://cis-india.org/internet-governance/blog/peering-behind-the-veil-of-icann2019s-didp (Last visited on November 9, 2016).

[2] Section 20(1), Right to Information Act, 2005.

[3] Asvatha Babu, If the DIDP Did Its Job, (November 3, 2016), available at http://cis-india.org/internet-governance/blog/if-the-didp-did-its-job (Last Visited on November 9, 2016).

Privacy after Big Data: Compilation of Early Research

by Saumyaa Naidu — last modified Nov 12, 2016 01:37 AM
Evolving data science, technologies, techniques, and practices, including big data, are enabling shifts in how the public and private sectors carry out their functions and responsibilities, deliver services, and allow innovative production and service models to emerge. In this compilation we have put together a series of articles that we have developed as we explore the impacts – positive and negative – of big data. This is a growing body of research that we are exploring and is relevant to multiple areas of our work, including privacy and surveillance. Feedback and comments on the compilation are welcome and appreciated.

 

Download the Compilation (PDF)


Privacy after Big Data

Evolving data science, technologies, techniques, and practices, including big data, are enabling shifts in how the public and private sectors carry out their functions and responsibilities, deliver services, and allow innovative production and service models to emerge. For example, in the public sector, the Indian government has considered replacing the traditional poverty line with targeted subsidies based on individual household income and assets. The my.gov.in platform aims to enable the participation of connected citizens by pulling in online public opinion in a structured manner on key governance topics in the country. The 100 Smart Cities Mission looks to leverage big data analytics and techniques to deliver services and govern citizens within city sub-systems. In the private sector, emerging financial technology companies are developing credit scoring models using big, small, social, and fragmented data, so that people with no formal credit history can be offered loans. These models promote efficiency and cost reduction through personalization, and are powered by a wide variety of data sources, including mobile data, social media data, web usage data, and data collected passively from IoT or connected devices.

These data technologies and solutions are enabling business models based on the ideals of ‘less’: cash-less, presence-less, and paper-less. This push towards an economy premised upon a foundational digital ID, in the prevailing absence of legal frameworks, leads to a substantive loss of anonymity and privacy for individual citizens and consumers vis-a-vis both the state and the private sector. Indeed, the present use of these techniques runs contrary to the notion of the ‘sunlight effect’: the individual is made fully transparent (often without their knowledge) to the state and private sector, while the algorithms and means of reaching a decision remain opaque and inaccessible to the individual.

These techniques, characterized by the volume of data processed, the variety of sources data is drawn from, and the ability both to contextualize (learning new insights from disconnected data points) and to de-contextualize (finding correlation rather than causation), have also increased the value of all forms of data. In some ways, big data has put all data on a level playing field as far as monetisation and joining-up are concerned: metadata can be just as valuable to an entity as content data. As data science techniques evolve to find new ways of collecting, processing, and analyzing data, the benefits are clear and tangible, while the harms are less clear but significantly present.

Is it possible for an algorithm to discriminate? Will incorrect decisions be made based on the data collected? Will populations be excluded from necessary services if they do not engage with certain models, or do emerging models overlook certain populations? Can such tools be used to surveil individuals at a level of granularity that was formerly not possible, and before a crime occurs? Can such tools be used to violate rights, for example to target certain types of speech or groups online? And, importantly, when these practices are opaque to the individual, how can one seek appropriate and effective remedy?

Traditionally, data protection standards have defined and established protections for certain categories of data. Yet data science techniques have evolved beyond data protection principles. It is now far harder to obtain informed consent from an individual when collected data can be used for multiple purposes by multiple bodies. Providing notice for every use is also more difficult – as is fulfilling requirements of data minimization. Some say privacy is dead in the era of big data. Others say privacy needs to be re-conceptualized, while still others say that protecting privacy now, more than ever, requires a ‘regulatory sandbox’ that brings together technical design, markets, legislative reforms, self-regulation, and innovative regulatory frameworks. It also demands an expansion of the narrative around privacy – one that has largely been focused on harms such as misuse of data or unauthorized collection – to include discrimination, marginalization, and competition harms.

In this compilation we have put together a series of articles that we have developed as we explore the impacts – positive and negative – of big data. This includes looking at India’s data protection regime in the context of big data, reviewing literature on the benefits and harms of big data, studying emerging predictive policing techniques that rely on big data, and analyzing closely the impact of big data on specific privacy principles such as consent. This is a growing body of research that we are exploring, and it is relevant to multiple areas of our work, including privacy and surveillance. Feedback and comments on the compilation are welcome and appreciated.

Elonnai Hickok
Director - Internet Governance

 

Conference on the Digitalization of the Indian Legal System

by Leilah Elmokadem — last modified Nov 16, 2016 03:34 PM
On Legal Services Day, November 9, 2016, LegalDesk.com collaborated with iSPIRT to host a conference on the “Digitalization of the Indian Legal System”. The event invited prominent speakers to present their organizations’ work and to participate in a panel discussion followed by a Q&A period for the audience.

The co-founder of DAKSH Society of India, Kishore Mandyam, opened the event with a thought-provoking presentation on the efficiency of the current legal system and the kinds of progress that technological reforms could bring about. Members of LegalDesk.com then presented their ideas and introduced their newest white paper on legal digitalization, providing a brief overview of the study and summarizing its most relevant sections. A panel discussion followed, moderated by Sanjay Khan Nagra, a policy expert at iSPIRT Foundation, who facilitated an insightful discussion of the advantages, disadvantages, risks and incentives of digitalizing the Indian legal system. On the discussion panel were Kishore Mandyam from DAKSH Society and Prabhuling K Navadgi, the Additional Solicitor General of India.

The objectives of the conference, as per its website, were to: (1) examine the current legal framework and the possibility of amendments in laws to facilitate digitalization of the system, (2) assess the potential of India Stack in digitalizing the legal system, (3) identify statutes which require amendment, (4) identify the hurdles and roadblocks in the path towards digital reform of the legal ecosystem, and (5) suggest amendments to the act and potential areas of improvement. With those objectives in mind, this blog post provides a brief overview of the main narratives shared at the conference and identifies some of the gaps and unanswered questions that I was left with by the end.

Improved efficiency is the dominant narrative used to advocate for the digitalization of the Indian legal system. According to LegalDesk.com, the current Indian legal system relies mostly on paperwork, resulting in thousands of courts and over a million advocates accumulating lakhs of ongoing cases and an enormous pile of pending cases, mostly due to insufficient information. The argument is that traditional methods of legal documentation, paperwork and court work must change through awareness, technology and pursuance by the government, implemented throughout the country. The key idea here is that digital transactions are faster and simplify the process of storing information. The ultimate desired outcome, then, is increased efficiency and transparency.

One must question, however, whether this narrative is overly generous in the credit it gives to technology. IT systems, like many other man-made structures, are bound to glitch and crash. It is worth asking whether the legal system can afford the complications that inevitably accompany a digital transformation. If portals or servers fail at critical times (i.e. when a person needs to confirm their trial date, submit a document before a deadline, or complete other pressing procedures), the consequences may in fact outweigh the convenience brought about by overall digitalization. This is not to imply that the legal system cannot or should not undergo a digital transformation. Rather, it is to ask whether the government will dedicate sufficient funds and expertise towards developing a resilient and reliable IT system for the courts. The conference was strongly centered on the idea that technology is always the way forward. This is a positive idea, but one must pay special attention to the complications that may arise in digitalizing a system that must function in a particularly time-sensitive manner – and ensure that these complications can be managed efficiently and effectively should they arise. This requires more than a mere push for digitalization: introducing new technological platforms is a positive step, but there is a need for a detailed, government-authorized plan on how the judicial system will undergo this digital transformation in a sustainable and resilient manner.

A presenter from LegalDesk.com mentioned Estonia’s model of complete digital governance as an example of successful digitalization: “If a small country like Estonia can do it, why can’t we?” While it is useful to draw examples and lessons from other countries, it is also crucial to recognize the contextual differences between them. The presenter’s point was that Estonia is small in both size and population and gained independence as recently as 1991, yet has nonetheless been able to undergo technological reform and completely digitalize its governance systems. India’s case is very different: digital inclusion is harder to accomplish for large, spatially dispersed populations. Furthermore, the socioeconomic disparities in India, particularly in income and literacy, contribute to an immense digital divide that Estonia – digitalizing governance for some 1.3 million individuals – did not face to any comparable extent. This is not to suggest that India cannot become a world leader in digital governance, or become comparable to Estonia. Rather, it is to highlight the importance of recognizing historical, political and sociocultural differences between countries when comparing governance models and digitalization processes. There is a need to indigenize digital reform strategies and platforms in India to cater to its unique context and vast diversity, by addressing the language of digital governance, ensuring sufficient distribution of access to public digital platforms, and prioritizing the inclusion of all socioeconomic classes. I would argue that digitalization could come at a greater cost than benefit if it perpetuates the exclusion of the underprivileged members of society, especially from a system as critical as the judiciary. These topics were alarmingly overlooked at the conference.

The topic of privacy was also largely overlooked at the conference. As a step towards digital transformation, LegalDesk.com presented its new eNotary technology, which would be implemented using a combination of Aadhaar-based authentication, eSign, DigiLocker systems such as India Stack, and video/audio recorded interviews. With the eNotary system, attestation, authentication and verification of legal instruments can be done remotely. This is expected to make paperwork easier, faster and more secure, as individuals would log into digital platforms using their Aadhaar numbers to perform their judicial procedures. A member of the audience asked about privacy concerns associated with digitalizing individuals’ legal records or property ownership information. Kishore Mandyam, from DAKSH, answered confidently that privacy is not a pressing issue here, asserting that privacy concerns are a western construct that has been adopted in urban parts of India but is not a concern for the majority of locals. It is clear, however, from examples such as the United States’ predictive policing practices, that accumulating data on the legal affiliations of individuals can result in discriminatory practices if this data does not remain strictly confidential. This is not to mention the other forms of discrimination that can arise from the accumulation of such data, such as the targeting of certain demographics by corporate marketing and credit scoring practices that rely on trends in big data. To keep citizens’ legal records and affairs out of these databases, a digital legal system must be securely encrypted and protected by rigorous privacy policies. India may have a different context that leads to different privacy concerns with regard to a digital legal system.
In any case, special attention must be given to the privacy and security rights of individuals as their Aadhaar numbers become attached to all their online personal data, including their legal records and judicial affairs.

CERT-In's Proactive Mandate - A Report on the Indian Computer Emergency Response Team’s Proactive Mandate in the Indian Cyber Security Ecosystem

by tiwari — last modified Nov 19, 2016 04:14 AM
CERT-In’s proactive mandate is defined in the IT Act, 2000, as well as in the Information Technology (The Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules, 2013 (CERT-In Rules, 2013), both of which postdate the existence of the organisation itself, which has been operational since 2004.

Published under CC BY-SA

Regarding the proactive mandate, the IT Act and CERT-In Rules include the following areas where CERT-In is required to carry out proactive measures in the interests of cyber security:

  1. Forecast and alert cyber security incidents (IT Act, 2000) & Predict and prevent cyber security incidents (CERT-In Rules, 2013)
  2. Issue guidelines, advisories and vulnerability notes etc. relating to information security practices, procedures, prevention, response and reporting (IT Act, 2000)
  3. Information Security Assurance (CERT-In Rules, 2013)

This article will track and analyse CERT-In’s operations in each of these areas over the past twelve years, by analysing the information available on CERT-In’s website as well as other media in the public domain.

The analysis will be carried out using a mixed methodology. The basic quantitative analysis of the information available on CERT-In’s website will take the form of simple comparatives of updates, bulletins and other forms of publicly available interaction and critical information dispersal on CERT-In’s website. The qualitative sections, on the other hand, will contain a comparative analysis of the content of CERT-In’s technical documents against the equivalent documentation (where present) of similar bodies in the USA and EU. Each section will then offer normative suggestions as to how CERT-In’s performance of that respective obligation can be improved to better serve its cyber security mandate.
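The simple comparative tallies described above can be sketched in a few lines of code. The snippet below is our own illustration, not part of the report's methodology: the `count_by_year` helper and the date list are hypothetical stand-ins for the publication dates one would scrape from CERT-In's advisory listings.

```python
from collections import Counter
from datetime import date

def count_by_year(publication_dates):
    """Tally published items (advisories, bulletins, vulnerability
    notes) per calendar year for a simple year-on-year comparison."""
    return Counter(d.year for d in publication_dates)

# Illustrative, made-up publication dates -- not actual CERT-In data.
advisories = [
    date(2014, 3, 1), date(2014, 7, 15),
    date(2015, 1, 20), date(2015, 5, 2), date(2015, 11, 9),
    date(2016, 2, 28),
]

counts = count_by_year(advisories)
for year in sorted(counts):
    print(year, counts[year])
```

The same tally, run separately per document category, would yield the kind of comparatives of updates and bulletins the methodology refers to.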


Read the full article

The image is published under Creative Commons License CC BY-SA. Anyone can distribute, remix, tweak, and build upon this document, even for commercial purposes, as long as they credit the creator of this document and license their new creations under the terms identical to the license governing this document.

Demonetisation Survey Limits the Range of Feedback that can be Provided by the User

by tiwari — last modified Nov 24, 2016 02:50 PM
The government has faced increasingly targeted attacks by the Opposition and the public on the merits of the demonetisation move carried out a fortnight ago. In an attempt to placate this ire and to create a feedback loop that directly engages with the public, the government has decided to conduct a mass survey to gauge public perception. The survey is hosted on the Narendra Modi mobile application that can be found on the Android and iOS app stores. This article will attempt to analyse the mobile application by looking at the design principles followed in the survey and the scope given to survey takers to express their true opinion of the demonetisation move.

The article was published by First Post on November 24, 2016.


At the time of writing, 90 percent of respondents expressed the feeling that the government's move was 'brilliant/nice'. However, one must look into the merits of the survey and its limitations to understand the true value and nature of the results of the survey.

The first step required to take the survey is downloading the application itself, which forces the user to automatically grant access to the Contacts, Phone and Storage functions of their phone. While there are ostensible reasons for these permissions (sharing data from within the application, storing downloaded information, etc.), unless the user is running Android 6.0 or above, the user has no choice in granting them. This leaves the application with the potential to collect the user’s entire phone book as well as access any files stored on the user’s device. This is independent of the survey and provides scope for massive data collection from any user who merely chooses to install the application in the first place. It is easily possible to create a version of the application that carries out the vast majority of its current functions without these permissions, and the government (along with the application developer) should endeavour to do so at the earliest. In the alternative, they should have a clear and distinct privacy policy that informs users of the data collection and its possible use.

The second major step required to take the survey is the long and tedious registration process, which requires all sorts of details with massive privacy implications. These include name, email ID, phone number, residency details, profession and interests, all of which are compulsory fields. Why all of these details are necessary to take a supposedly simple survey, and what possible use this information can be put to by the government, is both unclear and problematic. It is also possible to register using Google, Facebook, Twitter and other social networking sites, from which a varying standard of equally private and unnecessary information is collected by the application. There are no privacy notices or consent forms that govern this information collection, nor is there any indication of how this information will be put to use beyond the scope of the survey. The generic, standard-form privacy policy (less than 10 lines long) on the Narendra Modi website is hidden at the bottom of the application download page (not in the application itself) and leaves a lot to be desired in safeguarding user interests.

Once registration is complete, the user is presented with the survey, which has a total of 10 questions in 3 broad categories. 6 of these questions have multiple choice answers, 3 have a sliding rating meter and 1 has a general comments/suggestions page. The article will now look at these categories and analyze the design of the questions, the extent of the choice they give to users and, finally, whether the survey has a coercive or limiting effect on the feedback that users can give via the application regarding the demonetisation move.


Choice limiting multiple choice questions.

The first category of questions, the multiple choice questions (MCQs), offers varying degrees of choice to the user. However, regardless of the number of options, their exact nature is severely limiting and makes it almost impossible to express a truly negative opinion of the move. This is done in two ways: first, the explicit restriction of choices, and second, the more subtle negative colouring of responses through cleverly phrased questions. An example of the explicit restriction of choices can be seen in Question No. 7, “Demonetisation will bring real estate, higher education, healthcare in common man’s reach”, which has three options: “Completely Agree”, “Partially Agree” and “Can’t Say”. There is no option to disagree with the paradigm set by the question, nor is there an option for the user to further explain or elucidate upon the answer if he/she chooses “Can’t Say”. This also means that no responses will record a “No” to a fairly open-ended question that could have a myriad of answers. The same can be said for Question No. 6, regarding the demonetisation move’s effectiveness in curbing illegal activities, to which, once again, “No” is not an available answer, with “Don’t Know” being the best a disagreeing user can do.

The second, more subtle aspect of the MCQs is questions that serve as bait for a positive answer, which can later be used to bolster the survey's results in a positive light. For example, Question No. 1 reads “Do you think Black Money exists in India” and Question No. 2 reads “Do you think the evil of Corruption & Black Money needs to be fought and eliminated?”, both of which have a simple “Yes” and “No” as the only two possible responses. These rhetorical questions, which demand a positive answer, provide almost no scope for the user to subtly or explicitly disagree with the motivating factor behind the demonetisation move. The placement of these questions and the lack of choice in the responses that can be given to them leave huge potential to tilt the survey results in favour of the government’s move. For example, you cannot simultaneously agree that black money is a problem and think the demonetisation move is a bad idea, simply because you cannot express that view in a single question within the survey.


Positive bias driven multiple choice question.

The other two categories of questions do not suffer from the overt positive bias of the MCQs, but leave a fair bit to be desired in their outlook towards individuals who disagree with the move. In the sliding rating meter questions, there are strong visual cues hinting that disagreeing with the demonetisation move is a negative, undesirable idea: a large, danger-red frown is used as the icon for Question No. 5, which asks for the survey taker’s opinion on the ban on old 500 and 1000 rupee notes. The same goes for Question No. 3, which deals with the government’s broader moves to tackle black money. This casts any opinion or answer that disagrees with the validity of the move in a negative light. Similarly, the general comments/suggestions section in Question No. 10 is the only place for anyone to express a negative or non-concurring opinion, which there is no way to measure statistically and which will most likely not be counted in the final survey results.

Visual cues.


All of the above points clearly show that the design of both the Narendra Modi mobile application and its survey has huge potential for coercing a biased viewpoint upon any survey taker, and ensures that it is almost impossible to express a stark, negative opinion against the demonetisation move via the survey. This can and should be remedied by the government to allow a more open, conducive and critical discourse regarding the move to take place among the public. It is only when such opinion is allowed to exist in the first place that the government can understand, engage with and respond to the various valid critiques of the move. The chilling effect of the survey in its current form would be counterproductive to the original intent behind its creation: a direct, constructive feedback loop between the public and the government.

Navigating the 'Reconsideration' Quagmire (A Personal Journey of Acute Confusion)

by Padmini Baruah and Geetha Hariharan — last modified Nov 30, 2016 01:48 PM
An earlier analysis of ICANN’s Documentary Information Disclosure Policy (DIDP) already brought to light our concerns about the lack of transparency in ICANN’s internal mechanisms. Carrying that research forward, I sought to understand the mechanisms used to appeal a denial of DIDP requests. In this post, I aim to provide a brief account of my experiences with the Reconsideration Request process that ICANN provides as a tool for appeal.

Backdrop: What is the Reconsideration Request Process?

The Reconsideration Request process is laid down in Article IV, Section 2 of the ICANN Bylaws. Some of the key aspects of this provision are outlined below[1]:

  • ICANN is obligated to institute a process by which a person materially affected by ICANN action/inaction can request review or reconsideration.
  • To file this request, one must have been adversely affected by actions of the staff or the board that contradict ICANN’s policies, or actions of the Board taken up without the Board considering material information, or actions of the Board taken up by relying on false information.
  • A separate Board Governance Committee was created with the specific mandate of reviewing Reconsideration requests, and conducting all the tasks related to the same.
  • The Reconsideration Request must be made within 15 days of:
    • FOR CHALLENGES TO BOARD ACTION: the date on which information about the challenged Board action is first published in a resolution, unless the posting of the resolution is not accompanied by a rationale, in which case the request must be submitted within 15 days from the initial posting of the rationale;
    • FOR CHALLENGES TO STAFF ACTION: the date on which the party submitting the request became aware of, or reasonably should have become aware of, the challenged staff action, and
    • FOR CHALLENGES TO BOARD OR STAFF INACTION: the date on which the affected person reasonably concluded, or reasonably should have concluded, that action would not be taken in a timely manner.
  • The Board Governance Committee is given the power to summarily dismiss a reconsideration request if:
    • the requestor fails to meet the requirements for bringing a Reconsideration Request;
    • it is frivolous, querulous or vexatious; or
    • the requestor had notice and opportunity to, but did not, participate in the public comment period relating to the contested action, if applicable.
  • If not summarily dismissed, the Board Governance Committee proceeds to review and reconsider.
  • A requester may ask for an opportunity to be heard, and the decision of the Board Governance Committee in this regard is final.
  • The basis of the Board Governance Committee’s action is the public written record: information submitted by the requester, by third parties, and so on.
  • The Board Governance Committee is to take a decision on the matter and make a final determination or recommendation to the Board within 30 days of receipt of the Reconsideration Request, unless impractical, and it is accountable to the Board to explain the circumstances that caused any delay.
  • The determination is to be made public and posted on the ICANN website.
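The two deadlines that drive this process can be sketched in code. This is a minimal illustration of the calendar arithmetic only (the function names and the simple calendar-day counting are our own assumptions; the Bylaws do not prescribe an algorithm):

```python
from datetime import date, timedelta

FILING_WINDOW_DAYS = 15   # window to file a Reconsideration Request
BGC_DECISION_DAYS = 30    # BGC must normally decide within this period

def filing_deadline(trigger_date: date) -> date:
    """Last day to file, counting 15 calendar days from the trigger
    event (resolution posting, awareness of staff action, etc.)."""
    return trigger_date + timedelta(days=FILING_WINDOW_DAYS)

def bgc_deadline(receipt_date: date) -> date:
    """Date by which the BGC should make its determination."""
    return receipt_date + timedelta(days=BGC_DECISION_DAYS)

# Request 15-22, discussed below, was filed on 2 November 2015; thirty
# days later is 2 December 2015, the deadline the BGC itself cites.
print(bgc_deadline(date(2015, 11, 2)))  # 2015-12-02
```

As the worked date shows, the 30-day clock for our own request ran out on 2 December 2015, well before the BGC actually acted.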

ICANN has provided a neat infographic to explain this process in a simple fashion, and I am reproducing it here:

Reconsideration

(Image taken from https://www.icann.org/resources/pages/accountability/reconsideration-en)

Our Tryst with the Reconsideration Process

The Grievance
Our engagement with the Reconsideration process began with the rejection of two of our requests (made on September 1, 2015) under ICANN’s Documentary Information Disclosure Policy. The requests sought information about the registry and registrar compliance audit process that ICANN maintains, and asked for various documents pertaining to the same[2]:

  • Copies of the registry/registrar contractual compliance audit reports for all the audits carried out, as well as external audit reports, from the last year (2014-2015).
  • A generic template of the notice served by ICANN before conducting such an audit.
  • A list of the registrars/registries to whom such notices were served in the last year.
  • An account of the expenditure incurred by ICANN in carrying out the audit process.
  • A list of the registrars/registries that did not respond to the notice within a reasonable period of time.
  • Reports of the site visits conducted by ICANN to ascertain compliance.
  • Documents which identify the registries/registrars who had committed material discrepancies in the terms of the contract.
  • Documents pertaining to the actions taken in the event of contractual non-compliance.
  • A copy of the registrar self-assessment form which is to be submitted to ICANN.

ICANN integrated both requests and addressed them in a single response on 1 October 2015 (which can be found here). In their response, ICANN inundated us with already available links on their website explaining the compliance audit process, the processes ancillary to it, and the broad goals of the programme, none of which we had asked for in our request. ICANN then went on to provide us with information on their Three-Year Audit programme, and gave us access to some of the documents we had sought, such as the pre-audit notification template, the list of registries/registrars that received an audit notification, the expenditure incurred (to some extent), and so on.

Individual contracted party reports were denied to us on the basis of ICANN’s grounds for non-disclosure. Further, and more disturbingly, ICANN refused to provide us with the names of the contracted parties found under the audit process to have committed discrepancies. Therefore, a large part of our understanding of how the compliance audit process works remains incomplete.

What we did

Dissatisfied with this response, I went on to file a Reconsideration Request (number 15-22) as per their standard format on November 2, 2015. (The request filed can be accessed here.) As grounds for reconsideration, I stated that “As a part of my research I was tracking the ICANN compliance audit process, and therefore required access to audit reports in cases where discrepancies were formally found in their actions. This is in the public interest and therefore requires to be disclosed... While providing us with an array of detailed links explaining the compliance audit process, the ICANN staff has not been able to satisfy our actual requests with respect to gaining an understanding of how the compliance audits help in regulating actions of the registrars, and how they are effective in preventing breaches and discrepancies.” Therefore, I requested them to make the records in question publicly available: “We request ICANN to make the records in question, namely the audit reports for individual contracted parties that reflect discrepancies in contractual compliance, which have been formally recognised as a part of your enforcement process. We further request access to all documents that relate to the expenditure incurred by ICANN in the process, as we believe financial transparency is absolutely integral to the values that ICANN stands by.”

The Board Governance Committee’s response

The determination of the Board Governance Committee was that our claims did not merit reconsideration, as I was unable to identify any “misapplication of policy or procedure by the ICANN Staff”; my only issue was with the substance of the DIDP response itself, and substantial disagreements with a DIDP response are not proper bases for reconsideration (emphasis supplied).

The response of the Board Governance Committee was educative of the way in which it determines Reconsideration Requests. Analysing the DIDP process, it held that ICANN was well within its powers to deny information under its defined Conditions for Non-Disclosure, and that denial of substantive information did not amount to a procedural violation. Therefore, since the staff had adhered to established procedure under the DIDP, there was no basis for our grievance, and our request was dismissed.

Furthermore, as a postscript, it is interesting to note that the Board Governance Committee delayed its response by over a month, by its own admission: “In terms of the timing of the BGC’s recommendation, it notes that Section 2.16 of Article IV of the Bylaws provides that the BGC shall make a final determination or recommendation with respect to a reconsideration request within thirty days, unless impractical. To satisfy the thirty-day deadline, the BGC would have to have acted by 2 December 2015. However, due to the timing of the BGC’s meetings in November and December, the first practical opportunity for the BGC to consider Request 15-22 was 13 January 2016.”

Whither do I wander now?

To me, this entire process reflected the absurdity of the Reconsideration Request structure as an appeal mechanism under the Documentary Information Disclosure Policy. As our experience indicated, there does not seem to be any way out if one’s issue is with the substance of ICANN’s response. ICANN, commendably, is particular about following procedure with respect to the DIDP. However, what is the way forward for a party aggrieved by the flaws in the existing policy? As I had analysed earlier, the grounds on which ICANN may decline to disclose information are vast, and are used to deny a large chunk of the information requests it receives. How is the hapless requester to file a meaningful appeal against the outcome of a bad policy, if the only ground for appeal is non-compliance with the procedure of said bad policy? This is a serious challenge to transparency, as there is no other way for a requester to acquire information that ICANN may choose to withhold under one of its myriad clauses. It cannot be denied that a good information disclosure law ought to balance the free disclosure of information with the holding back of information that truly needs to be kept private.[3][4] However, it is this writer’s firm opinion that even in instances where information is withheld, there has to be a stronger explanation for the same; moreover, an appeals process that does not take into account substantive issues which might adversely affect the appellant falls short of the desirable levels of transparency.
Global standards dictate that grounds for appeal need to be broad, so that all failures to apply the information disclosure law/policy may be remedied.[4] Various laws across the world relating to information disclosure often include the following grounds for appeal: an inability to lodge a request, failure to respond to a request within the set time frame, a refusal to disclose information in whole or in part, excessive fees, and failure to provide information in the form sought, as well as a catch-all clause for other failures.[4]

Furthermore, independent oversight is the heart of a proper appeal mechanism in such situations[5]; the power to decide the appeal must not rest with those who also have the discretion to disclose the information, as is clearly the case with ICANN, where the Board Governance Committee is constituted and appointed by the ICANN Board itself (one of the bodies against whom a grievance may be raised).

Suggestions

We believe ICANN, in keeping with its global, multistakeholder, accountable spirit, should adopt these standards as well, especially now that the transition looms around the corner. Only then will the standards of open, transparent and accountable governance of the Internet, upheld by ICANN itself as the ideal, be truly and meaningfully realised. Accordingly, the following standards ought to be met:

  1. Establishment of an independent appeals authority for information disclosure cases.
  2. Broader grounds for appeal of DIDP requests.
  3. Inclusion of disagreement with the substantive content of a DIDP response as a ground for appeal.
  4. Provision of proper reasoning to justify any withholding of information that is claimed to be necessary in the public interest.

[1] Article IV, Section 2, ICANN Bylaws, 2014, available at https://www.icann.org/resources/pages/governance/bylaws-en/#IV

[2] Copies of the request can be found here and here.

[3] Katherine Chekouras, Balancing National Security with a Community's Right-to-Know: Maintaining Public Access to Environmental Information Through EPCRA's Non-Preemption Clause, 34 B.C. Envtl. Aff. L. Rev. 107 (2007).

[4] Toby Mendel, Freedom of Information: A Comparative Legal Study 151 (2nd edn, 2008).

[6] Id., at 152.

[7] Available at https://www.icann.org/en/system/files/files/reconsideration-15-22-cis-final-determination-13jan16-en.pdf

[5] Mendel, supra note 4.

Comments to the BIS on Smart Cities Indicators

by Elonnai Hickok, Rohini Lakshané and Udbhav Tiwari — last modified Dec 11, 2016 07:56 AM
The Bureau of Indian Standards released the Smart Cities - Indicators on 30 September 2016. The Centre for Internet & Society (CIS) presented its views.

View the PDF


Name of the Commentator/ Organisation: The Centre for Internet and Society, India[1]

PRELIMINARY

  1. This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the Smart Cities - Indicators (dated 30 September 2016), released by the Bureau of Indian Standards (“BIS”).
  2. CIS is thankful for the opportunity to put forth its views.
  3. This submission is divided into three main parts. The first part, ‘Preliminary’, introduces the document; the second part, ‘About CIS’, is an overview of the organization; and, the third part contains the ‘Comments’.

ABOUT CIS

  1. CIS is a non-profit organisation[2] that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, freedom of speech and expression, intermediary liability, digital privacy, and cybersecurity.
  2. CIS values the fundamental principles of justice, equality, freedom and economic development. This submission is consistent with CIS' commitment to these values, the safeguarding of general public interest and the protection of India's national interest at the international level. Accordingly, the comments in this submission aim to further these principles.

COMMENTS

Clause/ Para/ Table/ Figure No. commented

Comments/Modified Wordings

Justification of Proposed Change

General Comment

The indicators could generally utilise more smart data, from both analog and digital sources, to better reflect the performance of the various indicators.

Technology should be used to gather information, rather than limiting the scope of the indicators to existing, mostly non-digital sources of data. There is a lot of potential information, already collected, that simply goes unused or underutilised. Principled use of such information to make informed decisions on key aspects of urban development will lead to ‘truly’ smart cities. Further, the indicators should include actionable aspects and avenues to leverage research to better their performance. Moreover, indicators that allow for audits for rights and transparency should be focused on as core indicators.

General Comment

Indicators are limited in scope to basic sustainability.

The indicators in their current form restrict themselves to sustainability, focused on basic sustenance, which limits the scope of the Smart Cities project. There could be a core set of indicators that is more relevant to India, alongside an optional, more ambitious set of indicators for cities to become truly advanced and for the standard to be more dynamic. Cities should be encouraged to leverage technology in a sustainable, human-welfare- and development-oriented approach, which the indicators can inculcate.

Further, information on policy decisions driven by these indicators could be published to make decision-making in smart cities more transparent and accountable.

Economy

Granularity of information pertaining to macro-level economic indicators

All the indicators in the Economic section pertain to macro-level standards/ indicators. Their limitation is that they provide very little information about the diversity of the economy of a city, the factors responsible for positive or negative effects and offer no real way to encourage microeconomic changes that can lead to the improvement of the economic condition of a city, aided by modern technology. Example indicators could be: average GDP of districts within a city, and total number of operating businesses and merchants in sub-localities in the city. ​ All of this data can also be used to drive micro policies to enable localized development.

Education

Include data at city-level and indicators for higher education.

The indicators measured in the Education section only look at city level information about schools, ignoring district and even school level information already recorded and present in the system. Teacher and student attendance rates, level of basic infrastructure present in schools, presence of toilets for both genders, provisions for meals, etc. are some of the parameters that can be included in the indicator list.

Further, the list completely excludes college education (both degree and diploma level) as a relevant indicator, and does not include indicators for the average education level of the city's population, both of which can be easily measured using census data.

Further, data that allows for a holistic decision-making process (poverty levels, distance to schools, transportation levels, access to higher learning, etc.) can also be used as supporting indicators. These could be drawn from existing studies that identify the relevant factors.

5. Education 5.1, 5.2, 5.3, 5.5

Include gender-specific indicators for students completing primary education, secondary education, and higher education, and enrolled in education institutions.

Change the term “survival rate” to “retention rate”.

Indicators for the “survival rate” (may be better represented as retention rate) of students who identify as female or transgender in schools and universities, and enrollment of school-aged and college-aged girls, women and transgender students would help work towards an inclusive smart city.

Energy

Better utilisation of data from digital electricity meters.

The advent of digital meters allows for home/business level capturing of energy usage. This information can be leveraged to better target energy leaks, theft, repair work, pricing and even renewable energy incentives.

Finance

Indicators for digital and cashless payment and transaction systems.

The strong push by the government towards digital payments could also be reflected on the list of indicators, such as the “number of establishments accepting (and not accepting) digital payment systems” being a supporting indicator. Similar standards can be extended to include microfinance (number of avenues available for lending, successful payback of loans, et cetera.)

Governance

Recommended inclusion of indicators pertaining to the Right to Information Act, 2005

The number of requests made under the Right to Information Act, 2005, and the time taken by government offices in the city to respond to them (in number of days) can be included as relevant factors to gauge the transparency and accountability of governance structures. The same can also be extended to map the parliamentary performance of the elected officials from the city at the state and national level, especially on the interests of the city. Parliamentary performance here would mean attendance records, number of questions raised, resources spent on constituency development, et cetera.

10. Governance 10.2, 10.3, 10.6

Indicators for the number of women and transgender persons elected to public office in the city and employed in the government workforce in the city in reserved positions. Indicators for women and transgender voters registered as a percentage of the voting-age population.

In the interest of inclusive smart cities, this indicator would help fathom whether positions reserved for women and transgender persons are filled, and the possible reasons, if any, for some of them going vacant. The number of women and transgender voters would help track their participation in democracy. Further, inclusion of indicators that check voter fraud, political participation levels and technologies that enable secure voter participation and involvement would also be beneficial.

Health

“Cost of basic health services” and number of healthcare facilities as a supporting indicator.

The cost, quality and access of public primary healthcare services, which can be easily measured using digital systems, should also be included in the overall scheme as a supporting indicator.

Recreation

“Utilisation of public spaces” as a supporting indicator.

Information about the utilisation of public spaces, such as parks and grounds, can be included as a supporting indicator. Relevant information could include footfalls per month or year, number of public events held at these locations, et cetera.

Most of this information is already present via figures for ticket sales, while the rest could be collected using digital attendance systems. Other supporting indicators could include green space per resident, play area/park space per child, and quality of the public space (lack of garbage, sewage, etc.).

Safety

“Overall crime reporting statistics” as a core indicator.

The overall incidence rates of various crimes reported, crimes solved, and data regarding investigations (such as mapping of crimes to locations, number of FIRs filed and not filed, outcomes of investigations, etc.) should all be included as core indicators to better gauge the safety record of the city.

Safety 13.3

Include “crimes carried out using technology or the Internet, as per the Criminal Procedure Code and the Information Technology (Amendment) Act, 2008”.

This indicator will expand the scope of crimes against women to include acts of crime carried out using the Internet as well.

Safety 13.4

Include “Response time of the police department from the initial call in instances of crimes against women”

This would include crimes against women as defined in 13.3. This indicator gives more granular information about safety in general and women’s safety in particular, and of the perception of certain kinds of crimes not being serious enough for the police to respond to.

Shelter

Expansion of indicators to include per capita living space, basic amenities within the houses.

The scope of shelter should be expanded to include per capita living space in housing units as well as the availability of basic home amenities, to provide a more complete view of the living situation in a city. Some basic amenities that could be included are electricity uptime, water distribution (in litres per household), number of residents in the household, kind of house roofing, etc.

Telecommunication and Innovation

Inclusion of indicators on mobile phone usage, mobile network connectivity and computer literacy.

There are no indicators for mobile phone usage and computer literacy, both of which are essential for the healthy functioning of any city. Indicators to gauge this could include the number of mobile phone users, number of (active) mobile connections, number of computer-literate people, etc. Similar indicators should also be included for cellphone network coverage, public WiFi connectivity and digital public service provision. Indicators for these could be the number of neighbourhoods/localities/suburbs covered by 2G/3G/4G/5G out of the total number in the city, total number of public WiFi spots per unit area, etc.

Transportation

Inclusion of indicators for efficiency, sustainability and planning of city-level transportation.

The current set of indicators does not measure the efficiency, fuel consumption, sustainability and reach of public transport, especially in the outskirts or suburban areas. These can be included as supporting indicators: the ratio of GPS-connected public transport vehicles to the total number, number of vehicles equipped with panic buttons, quantum of vehicles in the city using renewable energy sources as fuel, automation of toll booths, and automation of points where traffic offences (e.g. illegal honking or overspeeding) can be logged.

Urban Planning

Digital information, such as geospatial data, remote sensing and digital mapping can be used to provide better and more sustainable core indicators.

Geo-spatial information (from surveys and satellites) can be utilised to provide macro-level data that can then be utilised to factor city expansions, illegal structures, suburban development, etc. Digital mapping and remote sensing capabilities can be leveraged to provide this information and the utilisation of such information in city development can be made a supporting indicator.

Sewerage and Sanitation

Indicators governing community hygiene and sanitation.

Information about covered toilets per capita, sewage treatment plants, etc. is either absent or too vaguely detailed in the current set of indicators, despite the government's push for the Swachh Bharat programme. These should be included as core indicators to encourage sanitation at the citizen level.

Water Supply

Indicators for digital measurement of water consumption per capita and at the city level.

Digital water meters are starting to become pervasive and can provide detailed information about water consumption at a household level that was previously unavailable in city planning. At a minimum, a supporting indicator can be included to further bolster information-aware governance in the field.


[1] This submission is authored, in alphabetical order, by Elonnai Hickok ([email protected]), Rohini Lakshané ([email protected]) and Udbhav Tiwari ([email protected]) on behalf of the Centre for Internet and Society, India.

[2] See The Centre for Internet and Society, available at http://cis-india.org, for details of the organization and our work.

The Technology behind Big Data

by Geethanjali Jujjavarapu and Udbhav Tiwari — last modified Dec 04, 2016 09:53 AM
The authors undertake a high-level literature review of the most commonly used technological tools and processes in the big data life cycle. The big data life cycle is a conceptual construct that can be used to study the various stages that typically occur in collecting, storing and analysing big data, along with the principles that can govern these processes.

 

Download the Paper (PDF, 277 kb)


Introduction

While defining big data is a disputed area in the field of computer science[1], there is some consensus on a basic structure to its definition[2]. Big data is data that is collected in the form of datasets that meet three main criteria: size, variety and velocity, all of which operate at an immense scale[3]. It is ‘big’ in size, often running into petabytes of information, has vast variety within its components, and is created, captured and analysed at an incredibly rapid velocity. All of this also makes big data difficult to handle using traditional technological tools and techniques.

This paper will attempt to perform a high-level literature review of the most commonly used technological tools and processes in the big data life cycle. The big data life cycle is a conceptual construct that can be used to study the various stages that typically occur in collecting, storing and analysing big data, along with the principles that can govern these processes. The big data life cycle consists of four components, which will also be the key structural points of the paper, namely: Data Acquisition, Data Awareness, Data Analytics and Data Governance. The paper will focus on the aspects that the authors believe are relevant for analysing the impact of big data on both technology itself and society at large.

Scope: The scope of the paper is to study the technology used in big data, using the "Life Cycle of Big Data" as a model structure to categorise and study the vast range of technologies involved. However, the paper will be limited to the study of technology related directly to the big data life cycle. It shall specifically exclude the use/utilisation of big data from its scope, since big data is most often fed into other, unrelated technologies for consumption, leading to rather limitless possibilities.

Goal: The goal of the paper is twofold: a.) to use the available literature on the technological aspects of big data to present a brief overview of the technology in the field, and b.) to frame the relevant research questions for studying the technology of big data and its possible impact on society.

Data Acquisition

Acquiring big data has two main subcomponents, the first being sensing the existence of the ‘data’ itself and the second being the stage of collecting and storing this data. Both of these subcomponents are incredibly diverse fields, with rapid change occurring in the technology utilised to carry out these tasks. This section will provide a brief overview of the subcomponents and then discuss the technology used to fulfil the tasks.

Data Sensing

Data does not exist in a vacuum and is always created as a part of a larger process, especially in the aspect of modern technology. Therefore, the source of the data itself plays a vital role in determining how it can be captured and analysed in the larger scheme of things. Entities constantly emit information into the environment that can be utilised for the purposes of big data, leading to two main kinds of data: data that is “born digital” or “born analogue.”[4]

Born Digital Data

Information that is “born digital” is created, by a user or by a digital system, specifically for use by a computer or data-processing system. This is a vast range of information, and newer fields are being added to this category on a daily basis. It includes, as a short, indicative list: email and text messaging; any form of digital input, including keyboards, mouse interactions and touch screens; GPS location data; data from daily home appliances (Internet of Things); etc. All of this data can be tracked and tagged to users as well as aggregated to form a larger picture, massively increasing the scope of what may constitute the ‘data’ in big data.

Some indicative uses of how such born digital data is catalogued by technological solutions on the user side, prior to being sent for collection/storage are:

a.) Cookies - These are small, often plain-text, files left on user devices by websites in order to associate a visit, task or action (for example, logging into an email account) with a subsequent event (for example, revisiting the website).[5]

b.) Website Analytics[6] - Various services, such as Google Analytics, Piwik, etc., can use JavaScript and other web development languages to record a very detailed, intimate track of a user's actions on a website, including how long a user hovers above a link, the time spent on the website/application and, in some cases, even the time spent on specific parts of the page.

c.) GPS[7] - With the almost pervasive usage of smartphones with basic location capabilities, GPS sensors on these devices are used to provide regular, minute-by-minute updates to applications, operating systems and even third parties about the user's location. Modern variations such as A-GPS can be used to provide basic positioning information even without satellite coverage, vastly expanding the indoor capabilities of location collection.

All of these instances of sensing born digital data are common terms, used in daily parlance by billions of people all over the world, which is symbolic of just how deeply they have pervaded our daily lives. Apart from raising privacy and security concerns, this in turn leads to an exponential increase in the data available for collection by any interested party.
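The cookie mechanism in item a.) above can be sketched with Python's standard library. This is a minimal, illustrative example, not any particular website's implementation; the cookie name "session_id" and its value are hypothetical.

```python
from http.cookies import SimpleCookie

# The server sets a cookie to associate this visit with later ones.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # hypothetical identifier
cookie["session_id"]["max-age"] = 3600   # persist for one hour

# The Set-Cookie header the server would emit in its HTTP response:
header = cookie.output()

# On a subsequent visit, the browser returns the cookie verbatim and the
# server parses it back, linking the two events to the same user.
returned = SimpleCookie()
returned.load("session_id=abc123")
```

The round trip of this small text file is all the linkage a tracker needs: the same identifier arriving on two requests ties them to one device.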

Sensor Data

Information is said to be “analogue” when it contains characteristics of the physical world, such as images, video, heartbeats, etc. Such information becomes electronic when processed by a “sensor,” a device that can record physical phenomena and convert them into digital information. Some examples to better illustrate information that is born analogue but collected via digital means are:

a.) Voice and/or video content on devices - Apart from phone calls and other forms of communication, video- and voice-based interactions have started to be regularly captured to provide enhanced services. These include Google Now[8], Cortana[9] and other digital assistants, as well as voice-guided navigation systems in cars, etc.

b.) Personal health data such as heartbeats, blood pressure, respiration, velocity, etc. - This personal, potentially very powerful information is collected by dedicated sensors on devices such as Fitbit[10], Mi Band[11], etc. as well as by increasingly sophisticated smartphone applications such as Google Fit that can do so without any special device.

c.) Cameras on Home Appliances - Cameras and sensors on devices such as video game consoles (Kinect[12] being a relevant example) can record detailed human interactions, which can be mined for vast amounts of information beyond the basic interactions with the device itself.

While not as vast a category as born digital data, the increasingly lower costs of technology and the ubiquitous usage of digital, networked devices are leading to information that was traditionally analogue in nature being captured for use at a rapidly increasing rate.

Data Collection & Storage

Traditional data was normally processed using the Extract, Transform, Load (ETL) methodology, which was used to collect data from outside sources, modify it to fit needs, and then upload it into the data storage system for future use.[13] Technology such as spreadsheets, RDBMS databases, Structured Query Language (SQL), etc. was initially used to carry out these tasks, more often than not manually.[14]

However, for big data, the traditional methodology is both inefficient and insufficient to meet the demands of modern use. Therefore, the Magnetic, Agile, Deep (MAD) process is used to collect and store data[15][16]. The needs and benefits of such a system are: attracting all data sources regardless of their quality (magnetic); logical and physical contents of storage systems adapting to the rapid data evolution in big data (agile); and supporting the complex algorithmic statistical analysis required of big data at very short notice (deep)[17].

The technology used to perform data storage using the MAD process requires vast amounts of processing power, which is very difficult to create in a single physical space/unit for non-state or research entities who cannot afford supercomputers. Therefore, most solutions used in big data rely on two major components to store data: distributed systems and Massively Parallel Processing[18] (MPP) running on non-relational (in-memory) database systems. Database performance and reliability is traditionally gauged using pure performance metrics (floating-point operations per second, etc.) as well as the atomicity, consistency, isolation, durability (ACID) criteria.[19] The most commonly used database systems for big data applications are given below. The specific operational qualities and performance of each of these databases are beyond the scope of this review, but the common criteria that make them well suited for big data storage have been delineated.

Non-relational databases

Databases traditionally used to be structured entities that operated solely on the ability to correlate information stored in them using explicitly defined relationships. Even prior to the advent of big data, this outlook was turning out to be a limiting factor in how large amounts of stored information could be leveraged; this led to the evolution of non-relational database systems. Before going into them in detail, a basic primer on their data transfer protocols will be helpful in understanding their operation.

A protocol is a model that structures instructions in a particular manner so that they can be reproduced from one system to another[20][21]. The protocols which govern data transfer in the case of big data have gone through many stages of evolution, starting off with simple HTML-based systems[22], which evolved into XML-driven SOAP systems[23], which in turn led to JavaScript Object Notation, or JSON[24], the format currently used in most big database systems. JSON is an open format used to transfer data objects using human-readable text, and is the basis for most of the commonly used non-relational database management systems. Examples of non-relational databases, also known as NoSQL databases, include MongoDB[25], Couchbase[26], etc. They were developed for both managing and storing unstructured data, and aim for scaling, flexibility, and simplified development. Such databases focus instead on high-performance, scalable data storage, and allow tasks to be written in the application layer rather than in database-specific languages, allowing for greater interoperability.[27]
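The "document" stored by a NoSQL system is essentially a JSON object, which can be sketched with Python's standard library. This is a generic illustration, not the API of MongoDB or Couchbase; all field names here are hypothetical.

```python
import json

# A schema-free document: fields can be nested and vary between records,
# unlike a fixed relational table row.
doc = {
    "user": "alice",
    "visits": 42,
    "devices": [{"type": "phone", "os": "Android"}],
}

# JSON's human-readable text form is what actually moves between systems.
wire = json.dumps(doc)

# The receiving system reconstructs the same object from the text.
restored = json.loads(wire)
```

Because the structure travels with the data, two systems need only agree on the JSON format itself, not on a shared schema, which is what gives NoSQL stores their flexibility.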

In-Memory Databases

In order to overcome the performance limitations of traditional database systems, some modern databases are now in-memory databases. These systems manage the data in the RAM of the server, thus eliminating storage disk input/output. This allows for almost real-time responses from the database, in comparison to the minutes or hours required on traditional database systems. This improvement in performance is so massive that entirely new applications are being developed to use IMDB systems.[28] These IMDB systems are also being used for advanced analytics on big data, especially to increase the access speed to data and the scoring rate of analytic models.[29] Examples of IMDBs include VoltDB[30], NuoDB[31], SolidDB[32] and Apache Spark[33].

Hybrid Systems

These are the two major systems used to store data prior to it being processed or analysed in a big data application. However, the divide between data storage and data management is a slim one, and most database systems also contain various unique attributes that suit them to specific kinds of analysis (as can be seen from the IMDB example above). One example of a very commonly used hybrid system that deals with both storage and awareness of the data is Apache Hadoop, which is detailed below.

Apache Hadoop

Hadoop consists of two main components: the HDFS for the big data storage, and MapReduce for big data analytics, each of which will be detailed in their respective section.

  1. The HDFS[34][35] storage function in Hadoop provides a reliable distributed file system, stored across multiple systems for processing and redundancy reasons. The file system is optimized for large files, as single files are split into blocks and spread across systems known as cluster nodes.[36] Additionally, the data is protected among the nodes by a replication mechanism, which ensures availability even if a node fails. Further, there are two types of nodes: Data Nodes and Name Nodes.[37] Data is stored in the form of file blocks across multiple Data Nodes, while the Name Node acts as an intermediary between the client and the Data Nodes, directing the requesting client to the particular Data Node which contains the requested data.

This operating structure for storing data also has various variations within Hadoop, such as HBase for key/value pair type queries (a NoSQL-based system), Hive for relational-type queries, etc. Hadoop's redundancy, speed, ability to run on commodity hardware, industry support and rapid pace of development have led to it being almost synonymous with big data.[38]
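The block-splitting and replication scheme described above can be sketched in a few lines of plain Python. This is a conceptual toy, not the real Hadoop API: the block size is shrunk to a few bytes for illustration (HDFS defaults to large blocks, e.g. 128 MB), the node names are hypothetical, and real HDFS places replicas rack-aware rather than round-robin.

```python
BLOCK_SIZE = 4    # bytes; tiny on purpose, purely for illustration
REPLICATION = 3   # copies of each block, as in HDFS's default
NODES = ["node-a", "node-b", "node-c", "node-d"]

def place_blocks(data: bytes):
    """Name-Node-style map: block index -> (block contents, replica nodes)."""
    placement = {}
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for idx, block in enumerate(blocks):
        # Simple round-robin replica placement across the cluster nodes.
        replicas = [NODES[(idx + r) % len(NODES)] for r in range(REPLICATION)]
        placement[idx] = (block, replicas)
    return placement

table = place_blocks(b"hello big data")
# 14 bytes split into 4 blocks, each living on 3 of the 4 nodes, so any
# single node can fail without losing data.
```

A client reading the file would ask the Name Node for this table and then fetch each block from any of its listed Data Nodes.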

Data Awareness

Data Awareness, in the context of big data, is the task of creating a scheme of relationships within a set of data, to allow different users of the data to determine a fluid yet valid context and utilise it for their desired tasks.[39] It is a relatively new field, in which most of the current work is being done on semantic structures that allow data to gain context in an interoperable format, in contrast to the current system where data is given context using unique, model-specific constructs (such as XML Schemas, etc.).[40]

Some of the original work in this field was carried out using the Resource Description Framework (RDF), which was built at the W3C primarily to allow data to be described in a portable, platform- and system-agnostic manner for the Semantic Web. SPARQL is the language used to query RDF-based designs, but both largely remain underutilised in both the public domain and big data. Authors such as Kurt Cagle[41] and Bob DuCharme[42] predict its explosion in the next couple of years. Companies have also started realising the value of interoperable context, with Oracle Spatial[43] and IBM's DB2[44] having included RDF and SPARQL support in the past three years.

While underutilised at present, the rapid developments taking place in the field could make data awareness as significant to big data as Hadoop, and maybe even SQL. Some aspects of it are already beginning to be used in Artificial Intelligence, Natural Language Processing, etc., with tremendous scope for development.[45]
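The RDF idea, data expressed as (subject, predicate, object) triples queried by pattern matching, can be sketched without any RDF library. This is a toy stand-in for a real triple store and SPARQL, included only to make the model concrete; all identifiers below are hypothetical.

```python
# Each fact is a (subject, predicate, object) triple, the RDF data model.
triples = [
    ("city:Pune", "has_indicator", "ind:air_quality"),
    ("ind:air_quality", "measured_by", "sensor:aq_17"),
    ("city:Pune", "has_indicator", "ind:water_supply"),
]

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard,
    loosely analogous to a variable in a SPARQL query."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which indicators does city:Pune have?"
indicators = match(s="city:Pune", p="has_indicator")
```

Because every fact has the same three-part shape, any consumer can traverse the graph without knowing a schema in advance, which is the interoperable context the section describes.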

Data Processing & Analytics

Data processing largely has three primary goals: a.) to determine if the data collected is internally consistent; b.) to make the data meaningful to other systems or users, using metaphors or analogies they can understand; and c.) (what many consider most important) to provide predictions about future events and behaviours based upon past data and trends.[46]

As this is a very vast field with rapidly changing technologies governing its operation, this section will largely concentrate on the most commonly used technologies in data analytics.

Data analytics requires four primary conditions to be met in order to carry out effective processing: fast data loading, fast query processing, efficient utilisation of storage, and adaptivity to dynamic workload patterns. The analytical model most commonly associated with meeting these criteria, and with big data in general, is MapReduce, detailed below. There are other, more niche models and algorithms (such as Project Voldemort[47], used by LinkedIn) employed in big data, but they are beyond the scope of this review; more information about them can be found in the article linked in the previous citation (Reference architecture and classification of technologies, products and services for big data systems).

MapReduce

MapReduce is a generic parallel programming concept, derived from the “Map” and “Reduce” of functional programming languages, which makes it particularly suited to big data operations. It is at the core of Hadoop,[48] and performs the data processing and analytics functions in other big data systems as well.[49] The fundamental premise of MapReduce is scaling out rather than scaling up, i.e., adding more machines rather than increasing the power of a single system.[50]

MapReduce operates by breaking a task down into steps and executing the steps in parallel across many systems. This brings two advantages: a reduction in the time needed to finish the task, and a decrease in the resources, in both power and energy, expended to perform it. This model makes it ideally suited to the large data sets and quick response times generally required of big data operations.

The first step of a MapReduce job is to correlate the input values to a set of key/value pairs as output. The “Map” function then partitions the processing tasks into smaller tasks and assigns them to the appropriate key/value pairs.[51] This allows unstructured data, such as plain text, to be mapped to a structured key/value pair. As an example, the key could be a punctuation mark in a sentence and the value the number of occurrences of that punctuation mark overall. The output of the Map function is then passed on to the “Reduce” function.[52] Reduce collects and combines this output, using identical key/value pairs, to provide the final result of the task.[53] These steps are carried out using the JobTracker and TaskTracker in Hadoop, but different systems have different methodologies for carrying out similar tasks.
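The Map, shuffle and Reduce steps described above can be sketched in plain Python; this is an illustrative word-count example of the programming model, not Hadoop's actual implementation:

```python
from collections import defaultdict
from functools import reduce

# Map phase: turn each line of unstructured text into (token, 1) pairs.
def map_phase(lines):
    for line in lines:
        for token in line.split():
            yield (token, 1)

# Shuffle: group intermediate pairs by key, as the framework does
# between the Map and Reduce steps.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: combine all values sharing a key into a final result.
def reduce_phase(groups):
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

lines = ["to be or not to be"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["to"])  # → 2
```

In a real cluster each phase runs in parallel across many machines, with the framework handling the shuffle over the network; the programmer supplies only the Map and Reduce functions.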

Data Governance

Data Governance is the act of managing raw big data, as well as the processed information that arises from it, in order to meet legal, regulatory and business-imposed requirements. While there is no standardized format for data governance, there have been increasing calls within various sectors (especially healthcare) to create such a format to ensure reliable, secure and consistent big data utilisation across the board. The following tactics and techniques have been utilised or suggested for data governance, with varying degrees of success:

  1. Zero-knowledge systems: This technological proposal maintains secrecy with respect to the low-level data, while allowing encrypted data to be examined for certain higher-level abstractions.[54] For the system to be zero-knowledge, the client encrypts the data and sends it to the storage provider. The provider thus stores the data in encrypted form and cannot decipher it unless in possession of the key that decrypts the data into plaintext. This allows an individual to store data with a storage provider while keeping the details contained in that information private. However, such systems are only beginning to be used in simple situations; as of now, they do not extend to unstructured and complex cases, and will have to be developed further before they can be used for research and data mining purposes.
  2. Homomorphic encryption: Homomorphic encryption is a privacy-preserving technique which performs searches and other computations over encrypted data while protecting the individual’s privacy.[55] This technique has, however, been considered impractical, and is deemed an unlikely policy alternative for the near future in the context of preserving privacy in the age of big data.[56]
  3. Multi-party computation: In this technique, computation is done on encrypted, distributed data stores.[57] The mechanism is closely related to homomorphic encryption: individual data is kept private using “collusion-robust” encryption algorithms while being used to calculate statistics.[58] Each party is aware of some private data, and all follow a protocol which produces results based on the information they hold and the information they do not, without revealing the latter.[59] Multi-party computation thus helps generate useful data for statistical and research purposes without compromising the privacy of individuals.
  4. Differential privacy: Although related to encryption, this technique follows a different approach. Differential privacy aims to maximize the precision of computations and database queries while reducing the identifiability of the data owners who have records in the database, usually through obfuscation of query results.[60] It is widely applied today to preserve privacy while reaping the benefits of large-scale data collection.[61]
  5. Searchable encryption: Through this mechanism, the data subject can make certain data searchable while minimizing exposure and maximizing privacy.[62] The data owner can make information available through search engines by providing the data in an encrypted format while adding tags consisting of certain keywords which can be deciphered by the search engine. The encrypted data shows up in the search results for these particular keywords, but can only be read by a person in possession of the key required to decrypt the information. This technique provides strong security for the individual’s data and preserves privacy to the greatest possible extent.

  6. K-anonymity: The property of k-anonymity is applied today to preserve privacy and avoid re-identification.[63] A data set is said to possess the property of k-anonymity if individual-specific data can be released and used for various purposes without re-identification. The analysis of the data should be carried out without attributing the data to the individual to whom it belongs, with scientific guarantees to that effect.
  7. Identity management systems: These systems enable individuals to establish and safeguard their identities, describe those identities with the help of attributes, follow the activity of their identities, and delete their identities if they wish.[64] They use cryptographic schemes and protocols to anonymize or pseudonymize the identities and credentials of individuals before the data is analysed.
  8. Privacy-preserving data publishing (PPDP): This is a method in which analysts are provided with an individual’s personal information with the ability to decipher particular information from the database, while being prevented from inferring certain other information which might lead to a breach of privacy.[65] Data which is essential for the analysis is provided for processing, while sensitive data is not disclosed. This tool primarily focuses on microdata.
  9. Privacy-preserving data mining (PPDM): This mechanism uses perturbation and randomization methods along with cryptography to permit data mining on a filtered version of the data which does not contain any sensitive information. Unlike PPDP, PPDM focuses on data mining results.[66]
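As a rough illustration of the k-anonymity property described in the list above, the following sketch (illustrative Python with hypothetical field names, not a production anonymization tool) checks whether every combination of quasi-identifiers is shared by at least k records:

```python
from collections import Counter

# k-anonymity: every combination of quasi-identifier values must be
# shared by at least k records, so no individual can be singled out
# by those attributes alone.
def is_k_anonymous(records, quasi_identifiers, k):
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Hypothetical generalized records (ages bucketed, ZIP codes truncated).
records = [
    {"age_band": "20-29", "zip": "560*", "diagnosis": "flu"},
    {"age_band": "20-29", "zip": "560*", "diagnosis": "cold"},
    {"age_band": "30-39", "zip": "560*", "diagnosis": "flu"},
]

# The third record is unique on (age_band, zip), so 2-anonymity fails.
print(is_k_anonymous(records, ["age_band", "zip"], 2))  # → False
```

In practice, data is generalized or suppressed (coarser age bands, shorter ZIP prefixes) until a check like this passes for the chosen k.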

Conclusion

Studying the technology surrounding big data has led to two major observations: the rapid pace of development in the industry, and the stark lack of industry standards or government regulation directed towards big data technologies. These observations have been the primary motivation for framing further research in the field. Understanding how to deal with big data technologically, rather than merely regulating possible harms after the technological processes have been performed, may be critical for the human rights dialogue as these processes become ever more extensive, opaque and technologically complicated.


[1] EMC: Data Science and Big Data Analytics. In: EMC Education Services, pp. 1–508 (2012)

[2] Bakshi, K.: Considerations for Big Data: Architecture and Approaches. In: Proceedings of the IEEE Aerospace Conference, pp. 1–7 (2012)

[3] Adams, M.N.: Perspectives on Data Mining. International Journal of Market Research 52(1), 11–19 (2010); Elgendy, N.: Big Data Analytics in Support of the Decision Making Process. MSc Thesis, German University in Cairo, p. 164 (2013)

[4] Big Data and Privacy: A Technological Perspective. President’s Council of Advisors on Science and Technology (May 2014)

[5] Chen, Hsinchun, Roger HL Chiang, and Veda C. Storey. "Business Intelligence and Analytics: From Big Data to Big Impact." MIS quarterly 36.4 (2012): 1165-1188.

[6] Chandramouli, Badrish, Jonathan Goldstein, and Songyun Duan. "Temporal analytics on big data for web advertising." 2012 IEEE 28th international conference on data engineering. IEEE, 2012.

[7] Laurila, Juha K., et al. "The mobile data challenge: Big data for mobile computing research." Pervasive Computing. No. EPFL-CONF-192489. 2012.

[8] Lazer, David, et al. "The parable of Google flu: traps in big data analysis." Science 343.6176 (2014): 1203-1205.

[9] ibid

[10] Banaee, Hadi, Mobyen Uddin Ahmed, and Amy Loutfi. "Data mining for wearable sensors in health monitoring systems: a review of recent trends and challenges." Sensors 13.12 (2013): 17472-17500.

[11] ibid

[12] Chung, Eric S., John D. Davis, and Jaewon Lee. "Linqits: Big data on little clients." ACM SIGARCH Computer Architecture News. Vol. 41. No. 3. ACM, 2013.

[13] Kornelson, Kevin Paul, et al. "Method and system for developing extract transform load systems for data warehouses." U.S. Patent No. 7,139,779. 21 Nov. 2006.

[14] Henry, Scott, et al. "Engineering trade study: extract, transform, load tools for data migration." 2005 IEEE Design Symposium, Systems and Information Engineering. IEEE, 2005.

[15] Cohen, Jeffrey, et al. "MAD skills: new analysis practices for big data." Proceedings of the VLDB Endowment 2.2 (2009): 1481-1492.

[17] Elgendy, Nada, and Ahmed Elragal. "Big data analytics: a literature review paper." Industrial Conference on Data Mining. Springer International Publishing, 2014.

[18] Wu, Xindong, et al. "Data mining with big data." IEEE transactions on knowledge and data engineering 26.1 (2014): 97-107.

[19] Supra Note 17

[20] Hu, Han, et al. "Toward scalable systems for big data analytics: A technology tutorial." IEEE Access 2 (2014): 652-687.

[22] Kurt Cagle, Understanding the Big Data Lifecycle - LinkedIn Pulse (2015)

[23] Coyle, Frank P. XML, Web services, and the data revolution. Addison-Wesley Longman Publishing Co., Inc., 2002.

[24] Pautasso, Cesare, Olaf Zimmermann, and Frank Leymann. "Restful web services vs. "big" web services: making the right architectural decision." Proceedings of the 17th international conference on World Wide Web. ACM, 2008.

[25] Banker, Kyle. MongoDB in action. Manning Publications Co., 2011

[26] McCreary, Dan, and Ann Kelly. "Making sense of NoSQL." Shelter Island: Manning (2014): 19-20.

[27] ibid

[28] Zhang, Hao, et al. "In-memory big data management and processing: A survey." IEEE Transactions on Knowledge and Data Engineering 27.7 (2015): 1920-1948.

[29] ibid

[30] ibid

[31] Supra Note 20

[32] Ballard, Chuck, et al. IBM solidDB: Delivering Data with Extreme Speed. IBM Redbooks, 2011.

[33] Shanahan, James G., and Laing Dai. "Large scale distributed data science using apache spark." Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2015; Shvachko, Konstantin, et al. "The hadoop distributed file system." 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST). IEEE, 2010.

[34] Borthakur, Dhruba. "The hadoop distributed file system: Architecture and design." Hadoop Project Website (2007): 21.

[36] ibid

[37] ibid

[38] Zikopoulos, Paul, and Chris Eaton. Understanding big data: Analytics for enterprise class hadoop and streaming data. McGraw-Hill Osborne Media, 2011.

[39] Bizer, Christian, et al. "The meaningful use of big data: four perspectives--four challenges." ACM SIGMOD Record 40.4 (2012): 56-60.

[40] Kaisler, Stephen, et al. "Big data: issues and challenges moving forward." System Sciences (HICSS), 2013 46th Hawaii International Conference on. IEEE, 2013.

[41] Supra Note 21

[42] DuCharme, Bob. "What Do RDF and SPARQL bring to Big Data Projects?." Big Data 1.1 (2013): 38-41.

[43] Zhong, Yunqin, et al. "Towards parallel spatial query processing for big spatial data." Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2012 IEEE 26th International. IEEE, 2012.

[44] Ma, Li, et al. "Effective and efficient semantic web data management over DB2." Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, 2008.

[45] Lohr, Steve. "The age of big data." New York Times 11 (2012).

[46] Pääkkönen, Pekka, and Daniel Pakkala. "Reference architecture and classification of technologies, products and services for big data systems." Big Data Research 2.4 (2015): 166-186.

[47] Sumbaly, Roshan, et al. "Serving large-scale batch computed data with project voldemort." Proceedings of the 10th USENIX conference on File and Storage Technologies. USENIX Association, 2012.

[48] Bar-Sinai, Michael. "Big Data Technology Literature Review." arXiv preprint arXiv:1506.08978 (2015).

[49] ibid

[50] Condie, Tyson, et al. "MapReduce Online." Nsdi. Vol. 10. No. 4. 2010.

[51] Supra Note 47

[52] Dean, Jeffrey, and Sanjay Ghemawat. "MapReduce: a flexible data processing tool." Communications of the ACM 53.1 (2010): 72-77.

[53] ibid

[54] Big Data and Privacy: A Technological Perspective, White House, https://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_big_data_and_privacy__may_2014

[55] Tene, Omer, and Jules Polonetsky. "Big data for all: Privacy and user control in the age of analytics." Nw. J. Tech. & Intell. Prop. 11 (2012): xxvii.

[56] Big Data and Privacy: A Technological Perspective, White House, https://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_big_data_and_privacy__may_2014

[57] Privacy by design in big data, ENISA

[58] Big Data and Privacy: A Technological Perspective, White House, https://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_big_data_and_privacy__may_2014

[59] Id

[60] Id

[61] Tene, Omer, and Jules Polonetsky. "Privacy in the age of big data: a time for big decisions." Stanford Law Review Online 64 (2012): 63.

[62] Lane, Julia, et al., eds. Privacy, big data, and the public good: Frameworks for engagement. Cambridge University Press, 2014.

[63] Crawford, Kate, and Jason Schultz. "Big data and due process: Toward a framework to redress predictive privacy harms." BCL Rev. 55 (2014): 93.

[64] http://homes.esat.kuleuven.be/~sguerses/papers/DanezisGuersesSurveillancePets2010.pdf

[65] Seda Gurses and George Danezis, A critical review of 10 years of privacy technology, August 12th 2010, http://homes.esat.kuleuven.be/~sguerses/papers/DanezisGuersesSurveillancePets2010.pdf

[66] Id

Developer team fixed vulnerabilities in Honorable PM's app and API

by Pranesh Prakash last modified Dec 04, 2016 07:08 PM
The official app of Narendra Modi, the Indian Prime Minister, was found to contain a security flaw in 2015 that exposed millions of people's personal data. A few days ago a very similar flaw was reported again. This post by Bhavyanshu Parasher, who found the flaw and sought to get it fixed last year, explains the technical details behind the security vulnerability.

This blog post has been authored by Bhavyanshu Parasher. The original post can be read here.


What were the issues?

The main issue was how the app was communicating with the API served by narendramodi.in.

  1. I was able to extract private data, like email addresses, of each registered user just by iterating over user IDs.
  2. There was no authentication check on the API endpoints. For example, I was able to comment as any user just by hand-crafting the requests.
  3. The API was still being served over HTTP instead of HTTPS.

Fixed

  1. The most important issue of all: unauthorized access to personal info, like email addresses, is fixed. I have tested it and can confirm it.
  2. A check to verify that a valid user is making the request to the API endpoint has been added. I have tested it and can confirm it.
  3. HTTP is blocked; every response is now served over HTTPS. People on older versions (which were served over HTTP) will get a message about this. I have tested it; it says something like “Please update to the latest version of the Narendra Modi App to use this feature and access the latest news and exciting new features”. It’s good that they have figured out a way to deal with people running older versions of the app. At least now they will update the app.

Detailed Vulnerability Disclosure

I found a major security loophole in how the app accesses the “api.narendramodi.in/api/” API. At the time of disclosure, the API was being served over HTTP as well as HTTPS. People still using the older version of the app were accessing endpoints over HTTP. This was an issue because data (passwords, email addresses) was being transmitted as plain text. In simple terms, your login credentials could easily be intercepted: a MITM attack could easily fetch passwords and email addresses. Also, if your ISP keeps logs of data, which it probably does, then they might already have your email address, password etc. in plain text. So if you were using this app, I would suggest you change your password immediately; we can’t rule out the possibility of it being compromised.

Another major problem was that the token needed to access the API gave developers a false sense of security. The access token could easily be fetched, and anyone could send hand-crafted HTTP requests to the server. This would result in a valid JSON response without authenticating the user making the request. It allowed accessing user data (primarily email addresses, and the Facebook profile pictures of those registered via Facebook) for any user, and posting comments as any registered user of the app. There was no authentication check on the API endpoints. Let me explain with a demo.

The API endpoint to fetch user profile information (email address) was getprofile. Before the vulnerability was fixed, the endpoint was accessible via “http://www.narendramodi.in/api/getprofile?userid=useridvalue&token=sometokenvalue”. As you can see, it only required two parameters: userid, which we could easily iterate over starting from 1, and token, which was a fixed value. There was no authentication check at the API access layer. Hand-crafting such requests resulted in a valid JSON response which exposed critical data, like the email address of each and every user. I quickly wrote a very simple script to fetch some data to demonstrate. Here is the sample output for xrange(1,10).

[Screenshot: sample output of the script]

Not just email addresses: using this method you could spam on any article pretending to be any user of the app. There was no authentication check as to who was making which requests to the API. See:

[Screenshot: comment posted as another user]

They have fixed all these vulnerabilities. I still believe it wouldn’t have taken so long if I had been able to get in touch with the team of engineers directly right from the beginning. In future, I hope they figure out an easier way to communicate. Such issues must be addressed as soon as they are found, but the communication gap cost us a lot of time. The team did a great job of fixing the issues, and that’s what matters.


Disclosure to officials

The email address provided on the Google Play store returned a response stating “The email account that you tried to reach is over quota”. I had to get in touch with the authorities via Twitter.

Vulnerability disclosed to the authorities on 30 September 2015, around 5:30 AM.

Tweet 1

After about 30 hours of reporting the vulnerability

Tweet 2

Proposed Solution

I also consulted @pranesh_prakash regarding the issue.

Tweet 3

After this, I mailed them a solution regarding the issues.


Discussion with developer

I received a phone call from a developer, and we discussed possible solutions to fix the issue.

The solution that I proposed could not be implemented, since the vulnerability is caused by a design flaw that should have been thought about right from the beginning, when they started developing the app. It just proved how difficult it is to fix such issues in mobile apps. For web apps, it’s a lot easier. Why? Because for mobile apps, you need to consider backward compatibility. If they applied my proposed solution, it would crash the app for people running older versions. The main problem is that people don’t upgrade to the latest versions, leaving themselves vulnerable to security flaws. The solution I proposed is, I think, a better way of doing it, but as the developer pointed out, it would break the app for people using older versions. The developers have, however, come up with solutions that I think would fix most of the issues and can be considered an alternative.

Tweet 4

On October 3, I received a mail from one of the developers informing me that they had fixed it. I could not check it at that time as I was busy, but I checked around 5 PM. I can now confirm they have fixed all three issues.


Update 12/02/2016

This vulnerability in the NM app is similar to the one I got fixed last year. As I said before, the vulnerability exists because of how the API has been designed. They released the same patch they did back then. Removing email addresses from the JSON output is not really a patch, and I wonder why they would reintroduce personal information into the JSON output if they knew it is a privacy problem that I reported a year ago. The researcher who reported the new flaw showed how he was able to follow any user while posing as any other user; similarly, I was able to comment on any post using the account of any user of the app. When I talked to the developer back then, he mentioned that it would be difficult to migrate users to a newer, more secure version of the app, so they were releasing that patch in the meantime. It was more of a backward compatibility issue caused by how the API was designed. The only real solution to this problem is to rewrite the API from scratch and add standard authentication methods. That should take care of most of the vulnerabilities.
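To illustrate the kind of standard per-user check such a rewrite would involve, here is a hypothetical sketch (not the app's actual code; the secret and function names are assumptions) that binds a token to a user ID with an HMAC, instead of sharing one static token across all clients:

```python
import hashlib
import hmac

# Hypothetical server-side key; in production this would be a securely
# stored secret, never shipped inside the app binary.
SERVER_SECRET = b"replace-with-a-real-secret"

# Issue a token bound to a specific user id, rather than one fixed
# token shared by every installation of the app.
def issue_token(user_id: str) -> str:
    return hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).hexdigest()

# On every API request, verify that the presented token actually
# belongs to the user whose data is being requested.
def is_authorized(user_id: str, token: str) -> bool:
    return hmac.compare_digest(issue_token(user_id), token)

alice_token = issue_token("1")
print(is_authorized("1", alice_token))  # → True
print(is_authorized("2", alice_token))  # → False (no cross-user access)
```

With a check like this at the API layer, iterating over user IDs with someone else's token simply returns an authorization failure instead of another user's profile.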


Privacy and Security Implications of Public Wi-Fi - A Case Study

by Vanya Rakesh last modified Dec 12, 2016 12:29 PM
Today the internet is an essential necessity for everyday work and, recognizing its vital role, governments across the world, including the Indian government, are providing access to public Wi-Fi. However, the use of public Wi-Fi brings with it certain privacy and security risks. This research paper analyses some of these concerns, along with the privacy policies of key ISPs providing public Wi-Fi service in Bangalore, namely D-VoIS and Tata Docomo, as a case study, in order to provide suitable recommendations.

 

Download (PDF)


Contents

1. Introduction

2. Global Scenario

3. Overview of Public Wi-Fi in India

4. Indian Policy and Legal Conundrum

5. Public Wi-Fi and Privacy Concerns

5.1. Data Theft

5.2. Tracking an Individual

5.3. Makes the Electronic Devices Prone to Hacking and Setting up Fake Networks

5.4. Illegal Use of Data

6. Ranking Digital Rights Project

6.1. D-VoIS, Bangalore

6.2. Tata Docomo, Bangalore

7. Compliance of Privacy Policies with Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011

8. Conclusion and Recommendations

8.1. Commitment

8.2. Freedom of Expression

8.3. Privacy


1. Introduction

Recognizing the internet as a critical tool for day-to-day work, and facilitating increased access to it over the past few years,[1] the Indian Government, along with governments across the world, has rolled out plans for offering public Wi-Fi. However, privacy risks of using public Wi-Fi have also been flagged across jurisdictions, and these will be discussed in this paper. Apart from highlighting key privacy concerns associated with the use of free public Wi-Fi, this case study analyses the privacy policies of two Internet Service Providers in India, namely Tata Docomo[2] and D-VoiS[3], which offer public Wi-Fi services in Bangalore city, against the indicators listed under the Ranking Digital Rights project[4] as well as the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011[5]. Based on this analysis, the paper lists key recommendations to these ISPs for sound privacy policies and practices, with a view to a balanced framework and ecosystem for public Wi-Fi.

2. Global Scenario

Security and privacy concerns around the use of free, public Wi-Fi have been raised in India[6] as well as across the globe. In cities like Bangalore, Delhi, Hyderabad, New York, London and Paris, privacy experts have raised concerns over the public Wi-Fi systems at metro stations, malls, payphones and other public places.[7]

For many years, New York City has been developing a “free” public Wi-Fi project called LinkNYC[8] to bring wireless Internet access to the residents of the city. However, privacy concerns have been raised by users and by privacy advocates like the New York Civil Liberties Union, which issued a letter to the Mayor's office on the matter,[9] as the collection of potentially sensitive personal, locational and behavioural data without adequate safeguards could result in such data being shared without the data subject’s consent or knowledge. For example, one concern raised has been the retention of users' data by CityBridge, the company behind the LinkNYC kiosks, often indefinitely, to build a massive database, which carries a risk of security breaches and unwarranted surveillance by the police.[10] Users are also concerned that their internet browsing history may reveal sensitive information about their political views, religious affiliations or medical issues,[11] since using LinkNYC requires registration: users submit their email addresses and agree to allow CityBridge to collect information about the websites they visit, the duration for which they linger on certain information on a webpage, and the links they click on. Moreover, the privacy policy of CityBridge states that this massive amount of personally identifiable user information would be cleared only after 12 months of user inactivity, raising an alarm in light of privacy concerns.[12]

In 2015, the Information Commissioner’s Office (ICO) conducted a review of public Wi-Fi services on a UK high street and found that the Wi-Fi networks requested varying levels of personal data, which was also processed for marketing purposes. The results highlighted that while some networks did not request any personal data, others asked for varying amounts of information, including name, postal and email address, mobile number, date of birth and gender, with all but gender being mandatory requirements. During the sign-up process, some Wi-Fi networks provided users with the choice to opt in or out of receiving electronic newsletters and updates, while others offered no choice at all.[13] As a result of the review, the ICO notified the Wi-Fi network providers it had reviewed, advised them of improvements they could make to their services, and issued guidance[14] regarding the dangers of using public Wi-Fi.[15] The ICO also recommended that users take time to read all the information given by providers of Wi-Fi services before connecting.

In 2006, the European Data Retention Directive 2006/24/EC[16] was introduced to govern the retention of communications data by providers of public electronic communications services, in the interest of national security. The Directive obliges providers of publicly available electronic communications services and public communications networks to retain traffic and location data for the purpose of the investigation, detection and prosecution of serious crime.[17] The Data Retention (EC Directive) Regulations 2009[18] were introduced to implement the Directive in the UK. However, this was challenged on grounds of insufficient safeguards for the privacy rights of individuals, given the substantial interference with those rights that it facilitated.[19]

To ensure the protection of users’ data and information, the Data Protection Act 1998[20] in the UK obliges businesses retaining people’s data to comply with the law, which involves informing people about what data is being collected and ensuring that the data is stored securely.[21] In the case of ISPs providing public Wi-Fi service, this would relate to the information people provide when they log on, such as their email address. Under the Act, data controllers must comply with the data protection principles, ensuring that the information is used fairly and lawfully, for limited and stated purposes; used in a way that is adequate, relevant and not excessive; kept for no longer than is absolutely necessary; handled according to people’s data protection rights; kept safe and secure; and not transferred outside the European Economic Area without adequate protection.[22] This regime is expected to be updated and brought in line with the European Union’s General Data Protection Regulation (GDPR).

3. Overview of Public Wi-Fi in India

In India, public Wi-Fi has in some cases been offered free for a limited duration in several cities across the country. For example, in 2014, Bangalore became the first city in the country to establish free public Wi-Fi, Namma Wi-Fi (802.11n), as part of efforts to make Bangalore a smart and connected city. The service is offered at MG Road, Brigade Road and four other locations in Bangalore, including the Traffic and Transit Management Centres (TTMCs) at Shanthinagar, Yeshwanthpur, Koramangala and CMH Road in Indiranagar.[23] The internet and Wi-Fi service provider for Namma Wi-Fi is D-VoiS Broadband Ltd, a city-based firm.[24] However, it seems the State Government plans to pull the plug on the project, citing funds, lack of awareness and difficulty of access as key constraints.[25] Tata Docomo has inked an agreement with GMR Airports to offer Wi-Fi services at several international airports in the country, including the Bangalore International Airport. It offers free Wi-Fi access for 45 minutes, after which users are required to pay for the service online to continue using it.[26]

Delhi also introduced free Wi-Fi at its premier shopping hubs of Connaught Place and Khan Market in 2014, and BSNL launched a free Wi-Fi service at Karnataka’s Malpe beach in 2016, making it the first Wi-Fi beach in the three coastal districts of the state.[27] The State Governments of Mumbai, Kolkata, Patna and Ahmedabad also offer free Wi-Fi services in limited areas.[28] As part of the Indian Government's flagship programme, Digital India, the Government announced the rollout of Wi-Fi services by June 2015 at select public places in 25 Indian cities with populations of over 10 lakh, and at tourist destinations by December 2015.[29] The Government also plans to digitise India by rolling out free Wi-Fi in 2,500 towns and cities over a span of 3 years.[30] Google plans to deploy Wi-Fi at 100 railway stations in partnership with RailTel; under this scheme, Mumbai Central was the first station to get free Wi-Fi, in 2016.[31] Google's Project Loon, currently being tested in other countries, also aims to provide internet connectivity in remote and rural areas of India.[32]

4. Indian Policy and Legal Conundrum

In light of national security concerns around the misuse of public Wi-Fi, the Department of Telecommunications (DoT), GoI, published a regulation[33] dated February 2009 defining procedures for the establishment and use of public Wi-Fi, so as to prevent misuse and to enable tracing of the perpetrator in cases of abuse. Indeed, the DoT has stated that “Insecure Wi-Fi networks are capable of being misused without any trail of user at later date”.[34]

As per the 2009 Regulations, the DoT has instructed ISPs to enforce centralized authentication using a login ID and password for each user, so that the identity of the user can be traced.[35] Regarding Wi-Fi services provided at public places, the Regulations state that bulk login IDs shall be created for controlled distribution, with authentication done at a centralized server. Subscribers are required to register for public Wi-Fi access with a temporary user ID and password, through one of the following methods:

  • Obtaining a copy of a photo identity of the subscriber, to be maintained by the Licensee for one year; or
  • Providing details of the user ID and password via SMS to the subscriber's mobile phone, with the mobile number kept on record for one year as his/her identity.
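The SMS-based method above can be sketched as a minimal captive-portal flow. This is an illustrative sketch only: the function names, in-memory store, and retention check are hypothetical and not part of the DoT regulation, which merely requires centralized authentication and a one-year audit trail.

```python
import secrets
import time

RETENTION_SECONDS = 365 * 24 * 60 * 60  # DoT: retain the mobile number for one year

# Hypothetical in-memory store; a real deployment would use the
# centralized authentication server the 2009 Regulations require.
credentials = {}  # user_id -> (password, mobile_number, issued_at)

def register(mobile_number: str) -> tuple:
    """Issue a temporary user ID and password for the subscriber.
    In practice these are delivered by SMS, which is out of scope here."""
    user_id = "wifi-" + secrets.token_hex(4)
    password = secrets.token_urlsafe(8)
    credentials[user_id] = (password, mobile_number, time.time())
    return user_id, password

def authenticate(user_id: str, password: str) -> bool:
    """Check the temporary credentials; the mobile number stored alongside
    them is the audit trail that lets the provider trace the session."""
    entry = credentials.get(user_id)
    if entry is None:
        return False
    stored_password, _mobile, issued_at = entry
    return stored_password == password and time.time() - issued_at < RETENTION_SECONDS
```

The key design point the Regulations impose is that the credential record, not the traffic itself, is what makes the user traceable after the fact.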

Additionally, the data protection regime in India is governed by section 43A of the Information Technology Act, 2000 and the Rules[36] notified under it. It obliges corporate bodies which possess, deal with or handle any sensitive personal data to implement and maintain reasonable security practices, failing which they can be held liable to compensate those affected by any negligence attributable to this failure. The said Rules also define requirements and safeguards that every body corporate is legally required to incorporate into the company's privacy policy. The Rules put restrictions on body corporates collecting sensitive personal information, and also state that they must obtain prior consent from the “provider of information” regarding the “purpose, means and modes of use of the information”, along with limiting disclosure of such information.[37] Most ISPs in India, like D-VoiS and Tata Docomo, are private companies and are therefore obliged to comply with these provisions. Also, under the model License Agreement for Unified License[38] issued by the Ministry of Communication & IT, Department of Telecommunications, Government of India (the Unified Access License Framework allows a single license for multiple services such as telecom, the internet and television), certain security guidelines apply: privacy of communications is to be maintained by the Licensee (the ISPs in this case), and network security practices and audits are mandated, with penalties for contravention in addition to what is prescribed under the Information Technology Act, 2000. The Licensee must also ensure that unauthorized interception of messages does not take place. Therefore, the ISPs providing public Wi-Fi services in various cities across India are governed by the data protection regime and could be held liable under these provisions in case of non-compliance with the stated security measures.

In July 2016, the Telecom Regulatory Authority of India (hereinafter “TRAI”) floated a Consultation Paper on Proliferation of Broadband through Public Wi-Fi Networks[39] with the objective of examining, from a public policy point of view, the need to encourage public Wi-Fi networks in the country, and of discussing the issues in their proliferation as well as possible solutions. The paper recognises that India is still in a greenfield deployment phase in terms of adoption of public Wi-Fi services, and that it requires solutions for resolving the challenges and risks being faced in the process, so as to lay a strong foundation for a meaningful position in the advancement of initiatives related to the Internet of Things, Smart Cities, etc.[40] This is an important step towards fulfilment of the Indian Government's Digital India scheme to ensure better connectivity. In the paper, TRAI advocates the development of a payment platform which allows easy access to Wi-Fi services across internet service providers (ISPs) and through any payment instrument.[41] Besides that, the paper asks which regulatory, licensing or policy measures are required to encourage ubiquitous city-wide Wi-Fi networks and the expansion of Wi-Fi networks in remote or rural areas, and raises the issue of encouraging interoperability between the Wi-Fi networks of different service providers, both within the country and internationally, as well as between cellular and Wi-Fi networks.[42]

5. Public Wi-Fi and Privacy Concerns

Since the proliferation of public Wi-Fi in India is happening at a moderate pace, the paper discusses the key issues involved, one of them being the logistics of deploying the service. It briefly acknowledges privacy and security concerns as an important factor that may be hindering the adoption of public Wi-Fi services in the country. Since there have been numerous cases of security vulnerabilities in public Wi-Fi networks worldwide, the security of networks and cyber crime are key issues for consideration.[43]

The deployment of public wireless access points has made it more convenient for people to access the Internet outside of their offices or homes. Despite advantages like ease of accessibility, connectivity and convenience, public Wi-Fi connections pose serious concerns as well. “The proliferation of public Wi-Fi is one of the biggest threats to consumer data”, says David Kennedy, founder of TrustedSec, a specialised information security consulting company based in the United States of America.[44] These networks become an easier target when there is little public awareness of such threats, with users exposing valuable personal data over Wi-Fi hotspots. The recently released Norton Cyber Security Report 2016[45] shows how the benefit of constant connectivity is often outweighed by consumer complacency, leaving consumers and their Wi-Fi networks at risk. For the report, Norton surveyed 20,000 people (over 1,000 from India), and found that though users in India may be increasingly aware of the cyber threats they face when using public Wi-Fi, they do not fully understand the accompanying risks, and their online behaviour is often contradictory.[46] It is also important to consider that services which claim to be free actually generate revenue through advertisements: the model works by providing free access to the internet in exchange for users' personal and behavioural data, which is subsequently used to target ads at them.[47]

Some of the privacy harms stemming from use of public Wi-Fi are listed below.

5.1. Data Theft

With hackers finding it easy to access the personal information of data subjects, data can be hijacked through unauthorized internet access, by spoofing the MAC and IP addresses of an authenticated user’s device or by exploiting default settings (saved passwords or IPs).[48] The following kinds of data are at risk of being stolen and misused:

  • demographic and locational data[49]
  • forms of personal information acting as identifiers like financial information, social and personal information[50]
  • private information like passwords to social networking sites, email accounts and banking websites[51]
  • historical data from the devices[52]

5.2. Tracking an Individual

Like cell phones, Wi-Fi devices have unique identifiers that can be used for tracking purposes, which creates potential security issues. Tracking via a Wi-Fi hotspot can also lead to third-party harms like stalking.[53] To receive or use a service, websites often require the user to share personal information such as name, age, ZIP code, or personal preferences, which is then frequently shared with advertisers and other third parties without the knowledge or consent of the users.[54]
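To illustrate the mechanism, a stable hardware identifier on its own is enough to link a device's appearances across hotspots. The probe-request log below is invented for the sketch; it simply shows that grouping sightings by a fixed MAC address yields a per-device movement trail, which is precisely why modern operating systems randomize the MAC while scanning.

```python
from collections import defaultdict

# Hypothetical observation log: (MAC address, hotspot location).
# A device broadcasting a fixed MAC is linkable across locations.
sightings = [
    ("aa:bb:cc:dd:ee:01", "MG Road"),
    ("aa:bb:cc:dd:ee:01", "Brigade Road"),
    ("aa:bb:cc:dd:ee:02", "MG Road"),
    ("aa:bb:cc:dd:ee:01", "Koramangala"),
]

def movement_profile(log):
    """Group sightings by MAC address: each key becomes a
    per-device trail of the locations where it was observed."""
    profile = defaultdict(list)
    for mac, location in log:
        profile[mac].append(location)
    return dict(profile)
```

Here `movement_profile(sightings)` links the first device to three separate locations, with no cooperation from the user at all.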

5.3. Exposure of Devices to Hacking and Fake Networks

A recent experiment conducted by the chief scientist at the mobile security firm Appknox at the Bengaluru International Airport, India, found that wireless devices could be easily hacked over the airport’s free Wi-Fi network due to easily exploitable security holes in software made by Apple, Google, and Microsoft.[55] In a similar experiment backed by the European law enforcement agency Europol, a mobile hotspot was created in central London[56] and the hacker was able to gain access to passwords, apps, and even credit card and banking information with ease.[57] The lack of secure software and the prevalence of open, unprotected Wi-Fi have made it fairly easy for hackers to set up fake twin access points that give them access to users’ data histories and personal information.[58] Even where software uses encryption, simple decryption tools can often be used to obtain the information.[59]

5.4. Illegal Use of Data

• By authorities: authorities have easy access to people’s browsing details and habits, and, with justification in the name of national security, this access could be used to monitor people without their consent.[60]

• By the Wi-Fi provider: the provider can sell users’ demographic and location information.[61] A study also revealed that the personal information of users is often transmitted by service providers without encryption; anyone along the path between the user and the service’s data center can then intercept this information, opening users to grave privacy and security risks.[62]

• By hackers: hackers can steal information, hack into unsuspecting victims’ bank accounts, and misuse corporate financial information and secrets.[63]

6. Ranking Digital Rights Project

The "Ranking Digital Rights" project, an ongoing international non-profit research initiative, aims to promote greater respect for freedom of expression and privacy by focusing on the policies and practices of companies in the information and communications technology (ICT) sector[64], to rank such companies in this light, and to undertake research to develop the ranking methodology.[65]

In November 2015, the Ranking Digital Rights project launched the Corporate Accountability Index. Since actors like Internet and telecommunications companies, software producers, and device and networking equipment manufacturers exert growing influence over the political and civil lives of people all over the world, these organisations share a responsibility to respect human rights. For this purpose, 16 Internet and telecommunications companies were evaluated against 31 indicators focused on corporate disclosure of policies and practices that affect users’ freedom of expression and privacy.[66]

The data produced by the Index can help companies improve their policies and practices, and help them identify the challenges they face in meeting their corporate obligations to respect human rights like freedom of expression and privacy in the digital space.[67] Some of the key corporate practices which affect these rights are:

• How companies handle government requests to hand over user data or restrict content;
• How companies enforce their own terms of service;
• What information companies collect about users and how long they retain it; and
• With whom they share, or to whom they sell, user information.[68]

The 2015 Corporate Accountability Index assesses the transparency of the world’s most powerful Internet and telecommunications companies regarding their commitments, policies and practices that affect users’ freedom of expression and privacy, evaluates what the companies disclose about these practices, and offers recommendations for improvement. The methodology relies on publicly available information, so that advocates, researchers, journalists, policy makers, investors, and users can understand the extent to which different companies respect freedom of expression and privacy, and can make appropriate policy, investment, and advocacy decisions. Public disclosures also enable researchers and journalists to investigate and verify the accuracy of company statements.[69]

For the purpose of this research, we apply this Index and its indicators to the internet service providers of public Wi-Fi in Bangalore, D-VoIS Ltd. and Tata Docomo, to understand how comprehensive their privacy policies are when compared to global standards, and to make informed recommendations. Analysing policies against the Index can help these companies identify best practices, as well as the obstacles they face in meeting their corporate obligations to respect human rights in the very digital spheres they helped to create.[70] The information has been gathered and analysed on the basis of publicly available material; this can help companies empower users to make informed decisions about how they use technology, which would help build trust between users and companies in the long run.[71]

6.1. D-VoIS[72], Bangalore

For the purpose of this case study, the Privacy Policies of D-VoIS have been analysed on the basis of the Corporate Accountability Index, and the answers can be accessed in Annex 1.

Summary

On the basis of the indicators and the information available, it can be ascertained that:

• The Company has a freely available and understandable Privacy Policy and Terms of Use, though only in the English language.

• The Company does not commit to notifying users of changes to its privacy policy.

• The Company states the circumstances in which it would restrict use of its services, along with reasons for content restriction.

• The Company commits to the principle of data minimization, discloses the circumstances in which it shares information with third parties, and provides users with options to control the Company’s collection and sharing of their information.

• The Company deploys industry standards for the security of its products and services.

Analysis

• Commitment: D-VoIS fares low on Commitment, since it has made no overarching public commitments to protect users’ freedom of expression or privacy in a manner that meets the Index’s criteria. The Company lacks adequate top-level policy commitments to users’ freedom of expression and privacy, executive and management oversight over these issues, and a process for human rights impact assessment, and it lacks stakeholder engagement and a grievance mechanism.

• Freedom of Expression: The Company also fares low on Freedom of Expression, as its terms of service, though easily available, are only in the English language. It also does not commit to notifying users of changes to the terms of service. While the Company discloses what content and activities it prohibits, it provides no information about how it notifies users of these restrictions.

  Regarding transparency about content restriction requests: while Indian law prevents the Company from disclosing government requests for content removal[73], it does not prevent the Company from publishing more information about private requests for content restriction. D-VoIS does not provide any information in this respect.

• Privacy: D-VoIS is required by law to have a privacy policy available on its website; this policy is available in English, but not in other languages spoken in India. D-VoIS also does not disclose what user information is collected, how and why, nor does it offer users meaningful access to their information. D-VoIS does not disclose any information regarding the retention of user information, and the Company could improve its disclosures about what user information it collects and how long it is retained.

  Though the Company discloses information about its security practices, it does not disclose any information regarding efforts to educate users about security threats. It also does not disclose information regarding requests by non-governmental entities for user data.

6.2. Tata Docomo[74], Bangalore

The Privacy Policy and Terms & Conditions of Tata Docomo have been analysed on the basis of the Corporate Accountability Index, and the answers can be accessed in Annex 2.

Summary

On the basis of the indicators and the information available, it can be ascertained that:

• The Company has a freely available and understandable Data Privacy Policy and Terms of Use, though only in the English language.

• The Company has established electronic and administrative safeguards designed to secure the information collected, to prevent unauthorized access to or disclosure of that information, and to ensure it is used appropriately.

• The Company states the circumstances in which it would restrict use of its services, along with reasons for content restriction. The Company’s disclosed policies and practices demonstrate how it works to avoid contributing to actions that may interfere with the right to freedom of expression, except where such actions are lawful, proportionate and for a justifiable purpose.

• The Company clearly states the kinds of information collected, the ways of collection, and the reasons for collection as well as sharing.

• The Company deploys industry standards for the security of its products and services.

Analysis

• Commitment: Tata Docomo fares low on Commitment, since it has made no overarching public commitments to protect users’ freedom of expression or privacy in a manner that meets the Index’s criteria. Though the Company has established electronic and administrative safeguards designed to secure the information collected, it lacks adequate top-level policy commitments to users’ freedom of expression and privacy, executive and management oversight over these issues, a process for human rights impact assessment, and stakeholder engagement.

• Freedom of Expression: The Company fares low on Freedom of Expression, as its terms of service, though easily available, are only in the English language. It also does not commit to notifying users of changes to the terms of service. While the Company discloses what content and activities it prohibits, it provides no information about how it notifies users of these restrictions.

  Regarding transparency about content restriction requests: while Indian law prevents the Company from disclosing government requests for content removal, it does not prevent the Company from publishing more information about private requests for content restriction. Tata Docomo does not provide any information in this respect.

• Privacy: Tata Docomo is required by law to have a privacy policy available on its website; this policy is available in English, but not in other languages spoken in India. No information is publicly available regarding users’ options to control the Company’s collection of information. Tata Docomo discloses that user information shall be retained as long as required, without mentioning a specific duration. Though the Company discloses information about its security practices, it does not disclose any information regarding efforts to educate users about security threats. It also does not disclose information regarding requests by non-governmental entities for user data.

7. Compliance of Privacy Policies with Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011

The Privacy Policy and Terms & Conditions of D-VoIS and Tata Docomo have been analysed against the security measures and procedures stated under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011, to ascertain how sound the framework is and how compliant it is with the existing data protection regime in India. The comparison can be accessed in Annex 3.

Comparing the requirements listed under the Rules with the policies of both companies, it can be said that though the websites of both companies provide easily accessible privacy policies, these lack crucial information regarding the consent of the user before the collection and sharing of information. Also, though the policies state the purpose of sharing such data with third parties, they do not state the purpose of collecting the information. The policies are also silent on the requirements to be complied with before transferring personal data to another jurisdiction, and there is no information about the companies having a grievance officer. Additionally, though the terms of service of D-VoIS state that the customer may choose to restrict the collection or use of their personal information, neither company specifically provides an opt-out mechanism for its users.

8. Conclusion and Recommendations

To allay the numerous privacy and security concerns with respect to public Wi-Fi, ISPs must have a sound privacy policy in place. For this purpose, adhering to the indicators listed under the Corporate Accountability Index, along with the requirements for the security of personal information stated under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011, and improving their policies accordingly, would greatly contribute to the protection of freedom of expression and ensure the privacy of user information. Ensuring compliance with the existing data protection regime in the country becomes all the more important in light of the growing privacy and security concerns arising from the proliferation of free public Wi-Fi services in India. Adequate measures, like acquiring consent for the collection and sharing of user data, commitment by company executives to protecting the rights of individuals, adoption of security standards, and creating awareness about security concerns, must be considered by such companies to ensure the protection of personal information and reduce the likelihood of a data breach. Both D-VoIS and Tata Docomo should consider the following recommendations in order to meet the criteria set by the Ranking Digital Rights project, ensuring commitment to the protection of users’ rights to freedom of expression and privacy.

8.1. Commitment

• Set in place an oversight mechanism to monitor how the company’s policies and practices affect freedom of expression and privacy. If a company already has this in place, information about it must be made publicly available for greater transparency.
• Conduct regular, comprehensive, and credible due diligence, such as human rights impact assessments, to identify how all aspects of the business impact freedom of expression and privacy.
• Provide a remedy or grievance mechanism; the Telecom Regulatory Authority of India also requires that all service providers have redress mechanisms. If a company already has this in place, information about it must be made publicly available for greater transparency.

8.2. Freedom of Expression

• The Companies should make an effort to make their Terms of Service available in the languages most commonly spoken by their users, besides English.
• The Companies should ensure that meaningful notice is provided to users regarding changes to the terms of service.
• Besides disclosing what content and activities they prohibit, the Companies should disclose how they enforce these prohibitions, and should provide examples of the circumstances under which they may suspend service to individuals or areas, to help users understand such policies.
• The Companies should also disclose their process for evaluating and responding to requests from third parties to restrict content or service. Additionally, they should disclose how long they retain user information, and publish their process for evaluating and responding to requests from governments and other third parties for stored user data and/or real-time communications.

8.3. Privacy

• Though both Companies disclose that user information shall be shared with third parties, and Tata Docomo discloses what information is collected and how, there should be no legal impediment to the Companies improving their disclosures about what user information they collect, with whom it is shared, and how long it is retained, so as to protect the privacy of users.
• While Tata Docomo allows users to review and correct the personal information collected by the Company, D-VoIS should disclose whether users are able to view, download or otherwise obtain all of the information about them that the Company holds. If it does not allow this, the Company should duly change its policy.
• The Companies should also publish information to help users defend against cyber threats.

[1] The Financial Express, ‘Free wi-fi: Digital Dilemma’, February 22, 2015, http://www.financialexpress.com/article/economy/free-Wi-Fi-digital-dilemma/45804/

[2] Tata Docomo, http://www.tatadocomo.com/

[3] D-VoIS Communication Pvt. Ltd., http://www.dvois.com/

[4] Ranking Digital Rights, https://rankingdigitalrights.org/

[5] The Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. Available at: http://www.wipo.int/edocs/lexdocs/laws/en/in/in098en.pdf

[6] See: http://indianexpress.com/article/technology/technology-others/public-wifi-can-be-used-to-steal-private-information-it-security-expert/, http://www.aljazeera.com/indepth/features/2016/03/india-unlocking-public-wi-fi-hotspots-160308072320835.html, http://www.business-standard.com/article/technology/indians-most-willing-to-share-personal-data-over-public-wifi-116083000673_1.html and http://articles.economictimes.indiatimes.com/2015-05-20/news/62413108_1_corporate-espionage-hotspots-bengaluru-airport

[7] Scroll, ‘Free wifi in Delhi is good news but here is the catch’, November 21, 2014, http://scroll.in/article/690755/free-wifi-in-delhi-is-good-news-but-here-is-the-catch

[8] LinkNYC, https://www.link.nyc/

[9] See: http://www.nyclu.org/files/releases/city%20wifi%20letter.pdf

[10] The Huffington Post, ‘Maybe You Shouldn't Use Public Wi-Fi In New York City’, March 16, 2016, http://www.huffingtonpost.in/entry/public-wifi-nyc_us_56e96b1ce4b0b25c9183f74a

[11] NYCLU, ‘City’s Public Wi-Fi Raises Privacy Concerns’, March 16, 2016, http://www.nyclu.org/news/citys-public-wi-fi-raises-privacy-concerns

[12] NYCLU, ‘City’s Public Wi-Fi Raises Privacy Concerns’, March 16, 2016, http://www.nyclu.org/news/citys-public-wi-fi-raises-privacy-concerns

[13] Information Commissioner’s Office Blog, ‘Be wary of public Wi-Fi’, September 25, 2015, https://iconewsblog.wordpress.com/2015/09/25/be-wary-of-public-Wi-Fi/

[14] Information Commissioner’s Office Blog, ‘Be wary of public Wi-Fi’, September 25, 2015, https://iconewsblog.wordpress.com/2015/09/25/be-wary-of-public-Wi-Fi/

[15] Marketing Law, ‘The ICO sounds a warning on public wi-fi and privacy’, November 24, 2015, http://marketinglaw.osborneclarke.com/data-and-privacy/the-ico-sounds-a-warning-on-public-Wi-Fi-and-privacy/

[16] Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32006L0024

[17] Feiler, L., "The Legality of the Data Retention Directive in Light of the Fundamental Rights to Privacy and Data Protection", European Journal of Law and Technology, Vol. 1, Issue 3, 2010, http://ejlt.org/article/view/29/75

[18] The Data Retention (EC Directive) Regulations 2009, http://www.legislation.gov.uk/ukdsi/2009/9780111473894/pdfs/ukdsi_9780111473894_en.pdf

[19] Purple, ‘Update on the legal implications of offering public WiFi in the UK’, September 10, 2014, http://purple.ai/update-legal-implications-offering-public-wifi-uk/

[20] Data Protection Act 1998, http://www.legislation.gov.uk/ukpga/1998/29/contents

[21] Wireless Social, http://www.wireless-social.com/how-it-works/legal-compliance/

[22] Data Protection Act 1998, https://www.gov.uk/data-protection/the-data-protection-act

[23] The Hindu, ‘Free wifi on M.G. Road and Brigade Road from Friday’, January 23, 2014, http://www.thehindu.com/news/cities/bangalore/free-wifi-on-mg-road-and-brigade-road-from-friday/article5606757.ece

[24] The Telegraph, ‘Free Wi-fi on tech city streets- Bangalore offers five public hotspots’, January 25, 2014, http://www.telegraphindia.com/1140125/jsp/nation/story_17863705.jsp#.VwIv_Zx97IU

[25] Economic Times, ‘Karnataka Govt pulls the plug on public Wi-Fi spots in Bengaluru’, March 15, 2016, http://tech.economictimes.indiatimes.com/news/internet/karnataka-govt-pulls-the-plug-on-public-Wi-Fi-spots-in-bengaluru/51404414

[26] Medianama, ‘Why Don’t Indian Airports Offer Free WiFi To Passengers?’, May 22, 2013, http://www.medianama.com/2013/05/223-indian-airports-free-wifi/

[27] Hindustan Times, ‘BSNL launches free public WiFi at Karnataka’s Malpe beach’, January 25, 2016, http://www.hindustantimes.com/tech/bsnl-launches-free-public-wifi-on-karnataka-s-malpe-beach/story-XVM06KQKIcoyqV8CLJoYzJ.html

[28] TechTree, ‘Problems With Free City-Wide Wi-Fi Hotspots In India’, September 28, 2015, http://www.techtree.com/content/features/9914/problems-free-city-wide-Wi-Fi-hotspots-india.html#sthash.2ZSf9kq7.dpuf

[29] India Today, ‘25 Indian cities to get free public Wi-Fi by June 2015’, December 17, 2014, http://indiatoday.intoday.in/technology/story/25-indian-cities-to-get-free-public-Wi-Fi-by-june-2015/1/407214.html

[30] Business Insider, ‘Modi Government To Roll Out Free Wi-Fi In 2,500 Towns And Cities To Make India Digital’, January 23, 2015, http://www.businessinsider.in/Modi-Government-To-Roll-Out-Free-Wi-Fi-In-2500-Towns-And-Cities-To-Make-India-Digital/articleshow/45989339.cms

[31] RailTel launches free high-speed public Wi-Fi service with Google at Mumbai Central, http://www.railtelindia.com/images/Mumbai.pdf

[32] Economic Times, ‘Google may get government nod to conduct pilot for Project Loon in India’, May 24, 2016, http://economictimes.indiatimes.com/tech/internet/google-may-get-government-nod-to-conduct-pilot-for-project-loon-in-india/articleshow/52408455.cms

[33] Department of Telecommunications, Ministry of Communications & IT, Government of India, February 23, 2009, http://www.dot.gov.in/sites/default/files/Wi-%20fi%20Direction%20to%20UASL-CMTS-BASIC%2023%20Feb%2009.pdf

[34] Scroll, ‘Free wifi in Delhi is good news but here is the catch’, November 21, 2014, http://scroll.in/article/690755/free-wifi-in-delhi-is-good-news-but-here-is-the-catch

[35] MojoNetworks, ‘Complying with DoT Regulation on Secure Use of WiFi: Less in Letter, More in Spirit’, http://www.mojonetworks.com/fileadmin/pdf/Implementing_DoT_Regulation_on_WiFi_Security.pdf

[36] Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011

[37] The Centre for Internet & Society, ‘Privacy and the Information Technology Act — Do we have the Safeguards for Electronic Privacy?’, April 7, 2011, http://cis-india.org/internet-governance/blog/privacy/safeguards-for-electronic-privacy

[38] License Agreement for Unified License, http://www.dot.gov.in/sites/default/files/Unified%20Licence.pdf

[39] Telecom Regulatory Authority of India, ‘Consultation Paper on Proliferation of Broadband through Public Wi-Fi Networks’, July 13, 2016, https://www.mygov.in/sites/default/files/mygov_1468492162190667.pdf

[40] Telecom Regulatory Authority of India, ‘Consultation Paper on Proliferation of Broadband through Public Wi-Fi Networks’, July 13, 2016, https://www.mygov.in/sites/default/files/mygov_1468492162190667.pdf

[41] The Economic Times, ‘Trai floats consultation paper to boost broadband through Wi-Fi in public places’, July 14, 2016, http://economictimes.indiatimes.com/articleshow/53195586.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

[42] Telecom Regulatory Authority of India, ‘Consultation Paper on Proliferation of Broadband through Public Wi-Fi Networks’, July 13, 2016, https://www.mygov.in/sites/default/files/mygov_1468492162190667.pdf

[43] Mint, ‘Trai issues paper on public Wi-Fi networks’, July 14, 2016, http://www.livemint.com/Industry/1jVgso2R2Lz4NR5IYFaCtN/Trai-issues-paper-on-public-WiFi-networks.html

[44] Forbes, ‘How To Avoid Data Theft When Using Public Wi-Fi’, March 4, 2014, http://www.forbes.com/sites/amadoudiallo/2014/03/04/hackers-love-public-wi-fi-but-you-can-make-it-safe/#373c75e32476

[45] Symantec, ‘Norton Cyber Security Insights Report’, 2016, https://www.symantec.com/content/dam/symantec/docs/reports/2016-norton-cyber-security-insights-report.pdf

[46] The Indian Express, ‘Indian cybercrime victims don’t learn from past experience: Norton Report’, November 18, 2016, http://indianexpress.com/article/technology/tech-news-technology/indian-users-complacent-when-it-comes-to-cyber-security-norton-report/

[47] Mashable, ‘This is the real price you pay for 'free' public Wi-Fi’, January 26, 2016, http://mashable.com/2016/01/25/actual-cost-free-Wi-Fi/?utm_cid=mash-com-Tw-main-link#WmAJGJ_COiq5

[48] MojoNetworks, ‘Complying with DoT Regulation on Secure Use of WiFi: Less in Letter, More in Spirit’, http://www.mojonetworks.com/fileadmin/pdf/Implementing_DoT_Regulation_on_WiFi_Security.pdf

[49] Network Computing, ‘Public WiFi, Location Data & Privacy Anxiety’, July 4, 2015, http://www.networkcomputing.com/wireless/public-wifi-location-data-privacy-anxiety/1496375374

    [50]Network Computing, ‘Public WiFi, Location Data & Privacy Anxiety’, July 4, 2015, http://www.networkcomputing.com/wireless/public-wifi-location-data-privacy-anxiety/1496375374

    [51]The Indian Express, ‘Public Wifi can be used to steal private information: IT Security Expert’, May 19, 2015, http://indianexpress.com/article/technology/technology-others/public-wifi-can-be-used-to-steal-private-information-it-security-expert/#sthash.xiuWtL6v.dpuf

    [52]Medium, ‘Maybe Better If You Don’t Read This Story on Public WiFi’, October 14, 2014, https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6#.3061h6lsv

    [53]Network Computing, ‘Public WiFi, Location Data & Privacy Anxiety’, July 4, 2015, http://www.networkcomputing.com/wireless/public-wifi-location-data-privacy-anxiety/1496375374

    [54]University of Washington, Computer Science and Engineering, ‘When I am on Wi-Fi, I am Fearless:” Privacy Concerns & Practices in Everyday Wi-Fi Use’, https://djw.cs.washington.edu/papers/wifi-CHI09.pdf

    [55]Breitbart, ‘Fre Public Wi-Fi poses security risks’, May 19, 2015, http://www.breitbart.com/big-government/2015/05/19/free-public-wifi-poses-security-risk/

    [56]The Guardian, ‘Londoners give up eldest children in public Wi-Fi security horror show’, September 29, 2014,  https://www.theguardian.com/technology/2014/sep/29/londoners-Wi-Fi-security-herod-clause

    [57] Medium, ‘Maybe Better If You Don’t Read This Story on Public WiFi’, October 14, 2014, https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6#.3061h6lsv

    [58]ABC13, ‘Hackers set up fake Wi-Fi hotspots to steal your information, July 10, 2015, http://abc13.com/technology/hackers-set-up-fake-Wi-Fi-hotspots-to-steal-your-information/835223/

    [59]Medium, ‘Maybe Better If You Don’t Read This Story on Public WiFi’, October 14, 2014, https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6#.3061h6lsv

    [60] Scroll, ‘Free wifi in Delhi is good news but here is the catch’ November 21, 2014, http://scroll.in/article/690755/free-wifi-in-delhi-is-good-news-but-here-is-the-catch

    [61] Scroll, ‘Free wifi in Delhi is good news but here is the catch’ November 21, 2014, http://scroll.in/article/690755/free-wifi-in-delhi-is-good-news-but-here-is-the-catch

    [62]University of Washington, Computer Science and Engineering, ‘When I am on Wi-Fi, I am Fearless:” Privacy Concerns & Practices in Everyday Wi-Fi Use’, https://djw.cs.washington.edu/papers/wifi-CHI09.pdf

    [63] Breitbart, ‘Fre Public Wi-Fi poses security risks’, May 19, 2015, http://www.breitbart.com/big-government/2015/05/19/free-public-wifi-poses-security-risk/

    [64] Ranking Digital Rights, https://rankingdigitalrights.org/who/frequently-asked-questions/

    [65] Business & Human Rights Resource Centre, ‘Ranking Digital Rights Project’, http://business-humanrights.org/en/documents/ranking-digital-rights-project

    [66] Ranking Digital Rights, https://rankingdigitalrights.org/about/

    [67] Ranking Digital Rights, https://rankingdigitalrights.org/about/

    [68] Ranking Digital Rights, https://rankingdigitalrights.org/who/frequently-asked-questions/

    [69] Ranking Digital Rights, https://rankingdigitalrights.org/who/frequently-asked-questions/

    [70] Ranking Digital Rights, https://rankingdigitalrights.org/about/

    [71] Ranking Digital Rights, https://rankingdigitalrights.org/who/frequently-asked-questions/

    [72] D-VoIS Communication Pvt. Ltd. http://www.dvois.com/

     

    [73]Section 16 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 states that all request and complaints must be kept confidential.

    [74] Tata Docomo, http://www.tatadocomo.com/

     

    Habeas Data in India

    by Vipul Kharbanda and edited by Elonnai Hickok — last modified Dec 10, 2016 04:01 AM
    Habeas Data is a Latin phrase that can be loosely translated as “have the data”. The right has been primarily conceptualized, designed, ratified, and implemented by various nation-states against the background of a shared common history of decades of torture, terror, and other repressive practices under military juntas and other fascist regimes.

    Download the Paper (PDF)


    Introduction

    The writ of habeas data was a distinct response to these recent histories, providing individuals with basic rights to access personal information collected by the state (and sometimes by private agencies of a public nature) and to challenge and correct such data, requiring the state to safeguard the privacy and accuracy of people's personal data.[1]

    The origins of Habeas Data are traced back, unsurprisingly, to the European legal regime, since Europe is considered the fountainhead of modern data protection laws. The inspiration for Habeas Data is often considered to be the Council of Europe's Convention 108 on Data Protection of 1981.[2] The purpose of the Convention was to secure the privacy of individuals with regard to the automated processing of personal data. To this end, individuals were granted several rights, including a right to access their personal data held in an automated database.[3]

    Another source of inspiration behind Habeas Data is considered to be the German legal system, where a constitutional right to informational self-determination was created by the German Federal Constitutional Court through interpretation of the existing rights of human dignity and personality. This is a right to know what data about an individual is stored in manual and automated databases, and it implies that there must be transparency in the gathering and processing of such data.[4]

    Habeas Data is essentially a right or mechanism for an individual complaint presented to a constitutional court, to protect the image, privacy, honour, information self-determination and freedom of information of a person. [5]

    A Habeas Data complaint can be filed by any citizen against any register to find out what information is held about his or her person. That person can request the rectification, updating or even destruction of the personal data held; in most cases it does not matter whether the register is private or public.[6]

    Habeas Data in different jurisdictions

    Habeas Data does not have any one specific definition and has different characteristics in different jurisdictions. Therefore, in order to understand what the right entails, it will be useful to describe the scope of Habeas Data as it has been incorporated in certain jurisdictions:[7]

    Brazil

    The Constitution of Brazil grants its citizens the right to habeas data “a. to assure knowledge of personal information about the petitioner contained in records or data banks of government agencies or entities of a public character; b. to correct data whenever the petitioner prefers not to do so through confidential judicial or administrative proceedings”.[8]

    The place or tribunal where the Habeas Data action is to be filed changes depending on whom it is presented against, which creates a complicated system of venues. Both the Brazilian constitution and the 1997 law stipulate that the court will be:

    • The Superior Federal Tribunal for actions against the President, both chambers of Congress and itself;
    • The Superior Justice Tribunal for actions against Ministers or itself;
    • The regional federal judges for actions against federal authorities;
    • State tribunals according to each state law;
    • State judges for all other cases.[9]

    Paraguay
    The Constitution of Paraguay grants a similar right of habeas data, stating:

    "All persons may access the information and the data that about themselves, or about their assets, [that] is [obren] in official or private registries of a public character, as well as to know the use made of the same and of their end. [All persons] may request before the competent magistrate the updating, the rectification or the destruction of these, if they were wrong or illegitimately affected their rights."[10]

    Compared to the right granted in Brazil, the text of the Paraguay Constitution specifically recognises that the citizen also has the right to know the use his/her data is being put to.

    Argentina

    Article 43 of the Constitution of Argentina grants the right of habeas data, though it has been included under the action of “amparo”.[11] The relevant portion of Article 43 states as follows:

    "Any person may file an amparo action to find out and to learn the purpose of data about him which is on record in public registries or data banks, or in any private [registers or data banks] whose purpose is to provide information, and in case of falsity or discrimination, to demand the suppression, rectification, confidentiality, or updating of the same. The secrecy of journalistic information sources shall not be affected."[12]

    The version of Habeas Data recognised in Argentina includes most of the protections seen in Brazil and Paraguay, such as the right to access data, and to rectify, update or destroy it. The Argentinean constitution also incorporates the Peruvian idea of confidentiality of data, interpreted as a prohibition on broadcasting or transmitting incorrect or false information. Another feature of the Argentinean law is that it specifically excludes the press from the action, which may be considered reasonable or unreasonable depending upon the context and country in which it is applied.[13]

    Venezuela
    Article 28 of the Constitution of Venezuela establishes the writ of habeas data, which expressly permits access to information stored in official and private registries. It states as follows:

    "All individuals have a right to access information and data about themselves and about their property stored in official as well as private registries. Secondly, they are entitled to know the purpose of and the policy behind these registries. Thirdly, they have a right to request, before a competent tribunal, the updating, rectification, or destruction of any database that is inaccurate or that undermines their entitlements. The law shall establish exceptions to these principles. By the same token, any person shall have access to information that is of interest to communities and groups. The secrecy of the sources of newspapers-and of other entities or individuals as defined by law-shall be preserved."[14]

    The Venezuelan writ of habeas data expressly provides that individuals "are entitled to know the purpose of and the policy behind these registries." Also, it expresses a right to "updating, rectification, or destruction of any database that is inaccurate or that undermines their entitlements." Article 28 also declares that the “secrecy of the sources of newspapers and of other entities or individuals as defined by law-shall be preserved."[15]

    Philippines

    The remedy of Habeas Data is not confined to Latin American jurisdictions. In Asia, the writ of Habeas Data has been specifically granted by the Supreme Court of the Philippines vide its resolution dated January 22, 2008, which provides that “The writ of habeas data is a remedy available to any person whose right to privacy in life, liberty or security is violated or threatened by an unlawful act or omission of a public official or employee, or of a private individual or entity engaged in the gathering, collecting or storing of data or information regarding the person, family, home and correspondence of the aggrieved party.” According to the Rule on the Writ of Habeas Data, the petition is to be filed with the Regional Trial Court where the petitioner or respondent resides, or which has jurisdiction over the place where the data or information is gathered, collected or stored, at the option of the petitioner. The petition may also be filed with the Supreme Court, the Court of Appeals or the Sandiganbayan when the action concerns public data files of government offices.[16]

    Two major distinctions are immediately visible between the Philippine right and that in the Latin American jurisdictions discussed above. First, in countries such as Brazil, Argentina and Paraguay, there does not appear to be any prerequisite to filing such an action asking for the information, whereas in the Philippines such a petition can be filed only if an individual's “right to privacy in life, liberty or security is violated or threatened by an unlawful act or omission”. This means that the Philippine concept of habeas data is much more limited in scope and is available to citizens only under certain specific conditions. On the other hand, the Philippine right of Habeas Data is much wider in its applicability in the sense that it is available even against private individuals and entities who are “engaged in the gathering, collecting or storing of data or information regarding the person, family, home and correspondence”. In the Latin American jurisdictions discussed above, the writ appears to be available only against public institutions or private institutions having some public character.

    Main features of Habeas Data

    Thus, from the discussion above, the main features of the writ of habeas data, as applied in various jurisdictions, can be culled out as follows:[17]

    • It is a right of the individual or citizen to ask for his/her information contained in any data registry;
    • It is available only against public (government) entities or employees, or private entities having a public character;[18]
    • Usually it also gives the individual the right to correct any wrong information contained in the data registry;
    • It is a remedy that is usually available by approaching a single judicial forum.

    Since the writ of Habeas Data has been established and has evolved primarily in Latin American countries, there is not much literature on it freely available in the English language, which is a serious hurdle in researching this area. For example, this author did not find many articles discussing the scope of the writ of habeas data, such as whether it is an absolute right and on what grounds it can be denied. The Constitution of Venezuela, for example, specifies that the law shall establish exceptions to these principles, and in fact mentions the secrecy of sources for newspapers as an exception to this rule.[19]

    Similarly in Argentina, there exists a public interest exception to the issuance of the writ of Habeas Data.[20]

    That said, although little literature on the specific exceptions to habeas data is freely available in English, references can still be found to exceptions such as state security (Brazil), secrecy of newspaper sources (Argentina and Venezuela), or other entities defined by law (Venezuela).[21]

    This suggests that, as would be expected, the right to ask for the writ of habeas data is not absolute but is subject to certain exceptions and balanced against other needs such as state security and police investigations.

    Habeas Data in the context of Privacy

    Data protection legislation and mechanisms protect people against the misuse of personal information by data controllers. Habeas Data, a legal figure in use only in certain countries, gives individuals the right to access, correct, and object to the processing of their information.

    In general, privacy is the genus and data protection the species: data protection is a right to personal privacy that people have against the possible use of their personal data by data controllers in an unauthorized manner or contrary to legal requirements. Habeas Data is an action brought before the courts to protect a person's image, privacy, honour, informational self-determination and freedom of information. In that sense, the right of Habeas Data falls within the broader ambit of data protection. It does not require data processors to ensure the protection of the personal data they process; rather, it is a legal action by which the aggrieved person, after filing a complaint with the courts of justice, obtains access to and/or rectification of any personal data which may jeopardize their right to privacy.[22]

    Habeas Data in the Indian Context

    Although a number of judgments of the Apex Court in India have recognised the existence of a right to privacy by interpreting the fundamental rights to life and free movement in the Constitution of India,[23] the writ of habeas data has no legal recognition under Indian law. However, as is evident from the discussion above, a writ of habeas data is very useful in protecting the right to privacy of individuals, and it would be a very useful tool in the hands of citizens. The fact that India has a fairly robust right to information legislation means that at least some facets of the right of habeas data are available under Indian law. We shall now examine the Indian Right to Information Act, 2005 (RTI Act) to see which facets of habeas data are already available under this Act and which aspects are left wanting. As mentioned above, the writ of habeas data has the following main features:

    • It is a right of the individual or citizen to ask for his/her information contained in any data registry;
    • It is available only against public (government) entities or employees, or private entities having a public character;[24]
    • Usually it also gives the individual the right to correct any wrong information contained in the data registry;
    • It is a remedy that is usually available by approaching a single judicial forum.

    We shall now take each of these features and analyse whether the RTI Act provides any similar rights and how they differ from each other.

    Right to seek his/her information contained in a data registry

    Habeas data enables the individual to seek his or her information contained in any data registry. The RTI Act allows citizens to seek “information” which is under the control of or held by any public authority. The term information has been defined under the RTI Act to mean “any material in any form, including records, documents, memos, e-mails, opinions, advices, press releases, circulars, orders, logbooks, contracts, reports, papers, samples, models, data material held in any electronic form and information relating to any private body which can be accessed by a public authority under any other law for the time being in force”.[25]

    Further, the term “record” has been defined to include “(a) any document, manuscript and file; (b) any microfilm, microfiche and facsimile copy of a document; (c) any reproduction of image or images embodied in such microfilm (whether enlarged or not); and (d) any other material produced by a computer or any other device”. It is apparent that the meaning given to the term “information” is quite wide and can include various types of information within its fold. The term as defined in the RTI Act has been further elaborated by the Supreme Court in Central Board of Secondary Education v. Aditya Bandopadhyay,[26] where the Court held that a person's evaluated answer sheet for the board exams held by the CBSE comes under the ambit of “information” and should be accessible to the person under the RTI Act.[27]

    An illustrative list of items that have been considered to be “information” under the RTI Act would be helpful in further understanding the concept:

    1. Asset declarations by Judges;[28]
    2. Copy of inspection report prepared by the Reserve Bank of India about a Co-operative Bank;[29]
    3. Information on the status of an enquiry;[30]
    4. Information regarding cancellation of an appointment letter;[31]
    5. Information regarding transfer of services;[32]
    6. Information regarding donations given by the President of India out of public funds.[33]

    The above list would indicate that any personal information relating to an individual that is available in a government registry would in all likelihood be considered “information” under the RTI Act.

    However, just because the information asked for comes within the ambit of section 2(h) does not mean that the person will be granted access to it; access may be denied if the information falls under any of the exceptions listed in section 8 of the RTI Act. Section 8 provides that information falling into any of the following categories shall not be released in response to an application under the RTI Act:

    "(a) information, disclosure of which would prejudicially affect the sovereignty and integrity of India, the security, strategic, scientific or economic interests of the State, relation with foreign State or lead to incitement of an offence;
    (b) information which has been expressly forbidden to be published by any court of law or tribunal or the disclosure of which may constitute contempt of court;
    (c) information, the disclosure of which would cause a breach of privilege of Parliament or the State Legislature;
    (d) information including commercial confidence, trade secrets or intellectual property, the disclosure of which would harm the competitive position of a third party, unless the competent authority is satisfied that larger public interest warrants the disclosure of such information;
    (e) information available to a person in his fiduciary relationship, unless the competent authority is satisfied that the larger public interest warrants the disclosure of such information;
    (f) information received in confidence from foreign Government;
    (g) information, the disclosure of which would endanger the life or physical safety of any person or identify the source of information or assistance given in confidence for law enforcement or security purposes;
    (h) information which would impede the process of investigation or apprehension or prosecution of offenders;
    (i) cabinet papers including records of deliberations of the Council of Ministers, Secretaries and other officers:
    Provided that the decisions of Council of Ministers, the reasons thereof, and the material on the basis of which the decisions were taken shall be made public after the decision has been taken, and the matter is complete, or over:
    Provided further that those matters which come under the exemptions specified in this section shall not be disclosed;
    (j) information which relates to personal information the disclosure of which has no relationship to any public activity or interest, or which would cause unwarranted invasion of the privacy of the individual unless the Central Public Information Officer or the State Public Information Officer or the appellate authority, as the case may be, is satisfied that the larger public interest justifies the disclosure of such information:
    Provided that the information which cannot be denied to the Parliament or a State Legislature shall not be denied to any person."

    The above-mentioned exceptions seem fairly reasonable and are in fact important, since public records may contain information of a private nature which the data subject would not want revealed; that is exactly why personal information is a specific exception under the RTI Act. When comparing this list to the recognised exceptions under habeas data, it must be remembered that a number of the exceptions listed above, such as commercial secrets and personal information, would not be relevant in a habeas data petition. The exceptions relevant to both the RTI Act and a habeas data writ would be (a) national security or sovereignty, (b) prohibition on publication by a court, (c) endangering the physical safety of a person, and (d) hindrance to the investigation of a crime. It is difficult to imagine a court (especially in India) granting a habeas data writ in violation of these four exceptions.

    Certain other exceptions that may be relevant in a habeas data context but are not mentioned in the common list above are (a) information received in a fiduciary relationship; (b) breach of legislative privilege, (c) cabinet papers; and (d) information received in confidence from a foreign government. These four exceptions are not as immediately appealing as the others listed above because there are obviously competing interests involved here and different jurisdictions may take different points of view on these competing interests.[34]

    Available only against public (government) entities or entities having public character.

    A habeas data writ is maintainable in a court to ask for information relating to the petitioner held by either a public entity or a private entity having a public character. In India, the right to information as defined in the RTI Act means the right to information accessible under the Act that is held by or under the control of any public authority. The term "public authority" has been defined under the Act to mean “any authority or body or institution of self-government established or constituted—

    (a) by or under the Constitution;
    (b) by any other law made by Parliament;
    (c) by any other law made by State Legislature;
    (d) by notification issued or order made by the appropriate Government, and includes any— (i) body owned, controlled or substantially financed; (ii) non-Government organisation substantially financed, directly or indirectly by funds provided by the appropriate Government;"[35]

    Therefore most government departments, as well as statutory and government-controlled corporations, would come under the purview of the term "public authority". For the purposes of the RTI Act, either control or substantial financing by the government is enough to bring an entity under the definition of public authority.[36]

    The above interpretation is further bolstered by the fact that the preamble of the RTI Act contains the term “governments and their instrumentalities".[37]

    Right to correct wrong information
    While certain sectoral legislation, such as the Representation of the People Act and the Collection of Statistics Act, may provide for the correction of inaccurate information, the RTI Act has no such provisions. This stands to reason, because the RTI Act is not geared towards providing people with information about themselves; it is instead a transparency law geared at the dissemination of information, which may or may not relate to an individual.

    Available upon approaching a single judicial forum
    While the right of habeas data is available only upon approaching a judicial forum, the right to information under the RTI Act is realised entirely through the bureaucratic machinery. This also means that individuals have to approach different entities in order to get the information they need, instead of approaching a single centralised entity.

    Conclusion

    There is no doubt that habeas data by itself cannot end the massive electronic surveillance being carried out by various governments in this day and age, or the excessive collection of data by private sector companies; but providing the citizenry with the right to ask for such a writ would provide a critical check on such policies and practices of vast surveillance.[38]

    An informed citizenry, armed with a right such as habeas data, would be better able to learn about the information being collected and kept on them under the garb of law and governance, to access such information, and to demand its correction or deletion when its retention by the government is not justified.

    As we have discussed in this paper, under Indian law the RTI Act gives citizens certain aspects of this right, but with a few notable exceptions. Therefore, if a writ such as habeas data is to be given effect in India, it might be better to amend the existing structure of the RTI Act to grant individuals the right to correct mistakes in their data, along with creating a separate department or mechanism so that applications for access to one's own data can be submitted at one central place rather than to different departments. This approach may be more pragmatic than asking for a change in the Constitution to grant citizens the right to ask for a writ in the nature of habeas data.

    There may be calls to also include private data processors within the ambit of the right to habeas data, but such a right could be challenging to enforce. It is feasible to assume that the government can put in place machinery to find out whether information about a particular individual is held by any of the government's myriad departments and corporations; however, it would be almost impossible for the government to track every single private database and then scan those databases to find out how many of them contain information about any specific individual. This also raises the question whether a right such as habeas data, which originated in the specific context of government surveillance, is appropriate for protecting the privacy of individuals in the private sector. Since, under Indian law, Section 43A of the Information Technology Act and the Rules thereunder, which regulate data protection, already provide for consent and notice as major bulwarks against unauthorised data collection, and limit the purposes for which such data can be utilised, privacy concerns in this context can perhaps be better addressed by strengthening those provisions rather than by trying to extend the concept of habeas data to the private sector.


    [1]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, (2015). Chicago-Kent Law Review, Vol. 90, No. 2, 2015; St. Thomas University School of Law (Florida) Research Paper No. 2015-06. Available at SSRN: http://ssrn.com/abstract=2694803

    [2]. Article 8 of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, 1981, available at https://www.coe.int/en/web/conventions/full-list/-/conventions/rms/0900001680078b37

    [3]. Guadamuz A, 'Habeas Data: The Latin-American Response to Data Protection', 2000 (2) The Journal of Information, Law and Technology (JILT).

    [4]. Id.

    [5]. Speech by Chief Justice Reynato Puno, Supreme Court of Philippines delivered at the UNESCO Policy Forum and Organizational Meeting of the Information for all Program (IFAP), Philippine National Committee, on November 19, 2007, available at http://jlp-law.com/blog/writ-of-habeas-data-by-chief-justice-reynato-puno/

    [6]. Guadamuz A, 'Habeas Data: The Latin-American Response to Data Protection', 2000 (2) The Journal of Information, Law and Technology (JILT).

    [7]. The author does not purport to be an expert on the laws of these jurisdictions and the analysis in this paper has been based on a reading of the actual text or interpretations given in the papers that have been cited as the sources. The views in this paper should be viewed keeping this context in mind.

    [8]. Article 5, LXXII of the Constitution of Brazil, available at https://www.constituteproject.org/constitution/Brazil_2014.pdf

    [9]. Guadamuz A, 'Habeas Data vs the European Data Protection Directive', Refereed article, 2001 (3) The Journal of Information, Law and Technology (JILT).

    [10]. Article 135 of the Constitution of Paraguay, available at https://www.constituteproject.org/constitution/Paraguay_2011.pdf?lang=en

    [11]. The petition for a writ of amparo is a remedy available to any person whose right to life, liberty and security is violated or threatened with violation by an unlawful act or omission of a public official or employee, or of a private individual or entity.

    [12]. Article 43 of the Constitution of Argentina, available at https://www.constituteproject.org/constitution/Argentina_1994.pdf?lang=en

    [13]. https://www2.warwick.ac.uk/fac/soc/law/elj/jilt/2001_3/guadamuz/

    [14]. Article 28 of the Venezuelan Constitution, available at http://www.venezuelaemb.or.kr/english/ConstitutionoftheBolivarianingles.pdf

    [15]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, (2015). Chicago-Kent Law Review, Vol. 90, No. 2, 2015; St. Thomas University School of Law (Florida) Research Paper No. 2015-06. Available at SSRN: http://ssrn.com/abstract=2694803

    [16]. Rule on the Writ of Habeas Data Resolution, available at http://hrlibrary.umn.edu/research/Philippines/Rule%20on%20Habeas%20Data.pdf

    [17]. The characteristics of habeas data culled out in this paper are by no means exhaustive and based only on the analysis of the jurisdictions discussed in this paper. This author does not claim to have done an exhaustive analysis of every jurisdiction where Habeas Data is available and the views in this paper should be viewed in that context.

    [18]. Except in the case of the Philippines and Venezuela. This paper has not analysed the writ of habeas data in every jurisdiction where it is available, and there may be jurisdictions other than these which also give this right against private entities.

    [19]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, (2015). Chicago-Kent Law Review, Vol. 90, No. 2, 2015; St. Thomas University School of Law (Florida) Research Paper No. 2015-06. Available at SSRN: http://ssrn.com/abstract=2694803

    [20]. The case of Ganora v. Estado Nacional, Supreme Court of Argentina, September 16, 1999, cf. http://www.worldlii.org/int/journals/EPICPrivHR/2006/PHR2006-Argentin.html

    [21]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, (2015). Chicago-Kent Law Review, Vol. 90, No. 2, 2015; St. Thomas University School of Law (Florida) Research Paper No. 2015-06. Available at SSRN: http://ssrn.com/abstract=2694803

    [22]. http://www.oas.org/dil/data_protection_privacy_habeas_data.htm

    [23]. Even the scope of the right to privacy is currently under review in the Supreme Court of India. See “Right to Privacy in Peril”, http://cis-india.org/internet-governance/blog/right-to-privacy-in-peril

    [24]. Except in the case of the Philippines. This paper has not done an analysis of the writ of habeas data in every jurisdiction where it is available and there may be jurisdictions other than the Philippines which also give this right against private entities.

    [25]. Section 2(f) of the Right to Information Act, 2005.

    [26]. 2011 (106) AIC 187 (SC), also available at http://judis.nic.in/supremecourt/imgst.aspx?filename=38344

    [27]. The exact words of the Court were: “The definition of `information' in section 2(f) of the RTI Act refers to any material in any form which includes records, documents, opinions, papers among several other enumerated items. The term `record' is defined in section 2(i) of the said Act as including any document, manuscript or file among others. When a candidate participates in an examination and writes his answers in an answer-book and submits it to the examining body for evaluation and declaration of the result, the answer-book is a document or record. When the answer-book is evaluated by an examiner appointed by the examining body, the evaluated answer-book becomes a record containing the `opinion' of the examiner. Therefore the evaluated answer-book is also an `information' under the RTI Act.”

    [28]. Secretary General, Supreme Court of India v. Subhash Chandra Agarwal, AIR 2010 Del 159, available at https://indiankanoon.org/doc/1342199/

    [29]. Ravi Ronchodlal Patel v. Reserve Bank of India, Central Information Commission, dated 6-9-2006.

    [30]. Anurag Mittal v. National Institute of Health and Family Welfare, Central Information Commission, dated 29-6-2006.

    [31]. Sandeep Bansal v. Army Headquarters, Ministry of Defence, Central Information Commission, dated 10-11-2008.

    [32]. M.M. Kalra v. DDA, Central Information Commission, dated 20-11-2008.

    [33]. Nitesh Kumar Tripathi v. CPIO, Central Information Commission, dated 4-5-2012.

    [34]. A similar logic may apply to the exceptions of (i) cabinet papers, and (ii) parliamentary privilege.

    [35]. Section 2 (h) of the Right to Information Act, 2005.

    [36]. M.P. Verghese v. Mahatma Gandhi University, 2007 (58) AIC 663 (Ker), available at https://indiankanoon.org/doc/1189278/

    [37]. Principal, M.D. Sanatan Dharam Girls College, Ambala City v. State Information Commissioner, AIR 2008 P&H 101, available at https://indiankanoon.org/doc/1672120/

    [38]. González, Marc-Tizoc, ‘Habeas Data: Comparative Constitutional Interventions from Latin America Against Neoliberal States of Insecurity and Surveillance’, (2015). Chicago-Kent Law Review, Vol. 90, No. 2, 2015; St. Thomas University School of Law (Florida) Research Paper No. 2015-06. Available at SSRN: http://ssrn.com/abstract=2694803

    Comments on the Draft National Policy on Software Products

    by Anubha Sinha, Rohini Lakshané, and Udbhav Tiwari — last modified Dec 12, 2016 02:45 PM
    The Centre for Internet & Society submitted public comments to the Ministry of Electronics & Information Technology (MeitY), Govt. of India on the draft National Policy on Software Products on December 9, 2016.

     

    I. Preliminary

    1. This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the Draft National Policy on Software Products [1] (“draft policy”), released by the Ministry of Electronics & Information Technology (“MeitY”).

    2. CIS commends MeitY on its initiative to present a draft policy, and is thankful for the opportunity to put forth its views in this public consultation period.

    3. This submission is divided into three main parts. The first part, ‘Preliminary’, introduces the document; the second part, ‘About CIS’, is an overview of the organization; and, the third part contains the comments by CIS on the Draft National Policy on Software Products.

    II. About CIS

    4. CIS is a non-profit organisation [2] that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, freedom of speech and expression, intermediary liability, digital privacy, and cybersecurity.

    5. CIS values the fundamental principles of justice, equality, freedom and economic development. This submission is consistent with CIS' commitment to these values, the safeguarding of general public interest and the protection of India's national interest at the international level. Accordingly, the comments in this submission aim to further these principles.

    III. Comments on the Draft National Policy on Software Products

    General Comments

    6. CIS commends MeitY on its initiative to develop a consolidated National Policy on Software Products. We believe that certain salient points in the draft policy deserve particular appreciation for being in the interest of all stakeholders, especially the public. An indicative list of such points includes:

    1. A focus on aiding digital inclusion via software, especially in the fields of finance, education and healthcare.
    2. The recognition of the need for openness and the application of open data principles in the private and public sectors.
    3. Identifying the need for diversification of the information technology sector into regions beyond India's developed cities.
    4. Identifying the need for innovation and original research in emerging fields such as the Internet of Things and Big Data.

    7. We observe that the draft policy weighs in favour of creating a thriving digital economy, which is indeed a commendable objective. However, certain aspects remain to be addressed by the draft policy to ensure that the growth of our domestic software industry truly achieves the vision set out in Digital India for better delivery of government services and maximisation of the public interest.

    8. We submit that the proposed policy should include certain additional guiding principles to direct the creation of software and its end-utilisation. These principles would help ensure a responsible, inclusive, judicious and secure software product life cycle involving all the relevant stakeholders, including the industry, the government and especially the public. An indicative list of such principles that we believe should be explicitly included in the policy is:

    1. Ensuring that internationally accepted principles of privacy are followed in software development and utilisation, including public awareness.
    2. Requiring basic yet sufficient standards of information security to ensure protection of user data at all stages of the software product life cycle.
    3. Enforcing linguistic diversity in software so that India's diverse population can operate indigenous software in an inclusive manner.
    4. Mandating minimum standards on accessibility in software creation, procurement and implementation to ensure sustainable use by the differently-abled.
    5. Focusing on transparency & accountability in software procurement for all public funded projects.
    6. Implementing the utilisation of Free and Open Source Software (“​FOSS​”) in the execution of public funded projects as per the mandate of the Policy on Adoption of Open Source Software for Government of India; thereby incentivising the creation of FOSS for use in both private and public sector.
    7. For software to be truly inclusive of the goals of Digital India, it is essential to provide support for Indic languages and scripts without yielding an inferior experience or results for end users of non-English interfaces. Software already deployed should be translated and localised.

    9. The inclusion of these principles in substantive clauses of the policy will go a long way in ensuring the sustainable and transparent growth of domestic software product ecosystem.

    Specific Comments

    10. Development of a robust Electronic Payment Infrastructure

    10.1. CIS observes that clauses 5.4 and 6.7 of the draft policy aim to establish a seamless electronic payment infrastructure. We submit that an electronic payment infrastructure should be designed with strong standards of information security, privacy and inclusivity (both accessibility and linguistic).

    10.2. We recommend that the policy mandate minimum standards of information security, privacy and inclusivity in all payment systems across private and public sectors. The policy should, therefore, ideally specify the respective standards for these categories, for instance ISO 27001 and National Policy on Universal Electronics Accessibility [3], alongside other industry standards for Electronic Payment Infrastructure.

    11. Government Procurement

    11.1. CIS observes that clause 6.1 of the draft policy seeks to develop a framework for the inclusion of Indian software in government procurement. It is commendable that the draft policy identifies the need for a better framework. CIS notes that the existing procurement procedure already allows for the usage of Indian software; in fact, the Government e-Marketplace (GeM) has already begun to incorporate some of these principles in general procurement.

    11.2. Indeed, a transparent and accountable government procurement process, which leverages technology and the internet, is key to ensuring a sustainable and fair market. CIS recommends that the policy refer to these guiding principles to enable the development of a viable pool of Indian software products by creating more avenues, including government procurement.

    12. Incentives for Digital India oriented software

    12.1. CIS observes that clause 6.3 of the draft policy incentivises the creation of software addressing the action pillars of the commendable Digital India programme.

    12.2. To develop the superior-quality software needed for the success of the Digital India programme, CIS recommends that incentives be provided contingent on the incorporation of certain minimum standards of software development. Such products and services should, inter alia, adhere to the stipulations under the National Policy on Universal Electronics Accessibility, the Guidelines for Indian Government Websites, the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011, etc. In the process, the software should be subjected to reviews by a neutral entity to gauge compliance with the abovementioned minimum standards.

    13. Increasing adoption of Open APIs and Open Data

    13.1. CIS observes that clause 6.6 of the draft policy promotes the use of open APIs and open data in development of e-government services.

    13.2. We strongly recommend that open API and open data principles be adopted in software used by all government organizations, as well as in non-commercial software. Open data and open APIs can serve a vital role in ensuring transparent, accountable and efficient governance, which the public and civil society can leverage in a major way under the policy.

    14. Creation of Enabling Environment for Innovation, R&D, and IP Creation and Protection

    14.1. CIS observes that clause 8.1 of the draft policy seeks to create an enabling environment for innovation, R&D, and IP creation and protection.

    14.2. CIS submits that the existing TRIPS-compliant Indian intellectual property law regime is adequately designed to incentivise creativity and innovation in the area of software development. The Indian Patents Act, 1970, read with the Guidelines for Examination of Computer Related Inventions, 2016, does not permit the patenting of computer programmes per se. Several Indian software developers, notably small and medium-sized development companies, have previously made evidence-based submissions to the government on the negative impact of software patenting on software innovation [4].

    14.3. CIS recommends that the proposed policy re-affirm the adequacy of the Indian intellectual property regime to protect software development, in compliance with the TRIPS Agreement.

    IV. Conclusion

    15. CIS commends MeitY on the development of the draft policy. We strongly urge MeitY to address the issues highlighted above, especially the incorporation of essential principles such as information security, privacy and accessibility. The adoption of such measures will ensure a fair balance between the commercial growth of the domestic software industry and the maximisation of public interest.


    [1]. National Policy on Software Products (2016, Draft internal v1. 15) available at http://meity.gov.in/sites/upload_files/dit/files/National%20Policy%20on%20Software%20Products.pdf

    [2]. See The Centre for Internet and Society, available at http://cis-india.org, for details of the organization and our work.

    [3]. See http://meity.gov.in/sites/upload_files/dit/files/Accessible-format-National%20Policy%20on%20Universal%20Electronics.pdf

    [4]. See http://economictimes.indiatimes.com/articleshow/52159304.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

     

    Enlarging the Small Print: A Study on Designing Effective Privacy Notices for Mobile Applications

    by Meera Manoj — last modified Dec 14, 2016 04:27 PM
    The world's biggest modern lie is often said to be the sentence "I have read and agreed to the Terms and Conditions." It is a well-known fact, backed by empirical research, that consumers often skip reading cumbersome privacy notices. The reasons range from their length and complicated legal jargon to the inopportune moments at which these notices are displayed. This paper seeks to compile and analyse the different simplified designs of privacy notices that have been proposed for mobile applications to encourage consumers to make informed privacy decisions.

    Introduction: Ideas of Privacy and Consent Linked with Notices

    The Notice and Choice Model

    Most modern laws and data privacy principles focus on individual control. As Alan Westin of Columbia University characterises privacy, "it is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." [1] Or, simply put, personal information privacy is "the ability of the individual to personally control information about himself." [2]

    The preferred mechanism that has emerged for protecting online privacy is that of Notice and Choice. [3] The model, identified as "the most fundamental principle" in online privacy, [4] refers to consumers consenting to privacy policies before availing of an online service. [5]

    The following three standards of expectation of privacy in electronic communications have emerged in United States courts:

    1. KATZ TEST: Katz v. United States, [6] a wiretap case, established the expectation of privacy as one that society is prepared to recognize as "reasonable". [7] This concept is critical to a court's understanding of a new technology because there is no established precedent to guide its analysis. [8]
    2. KYLLO/KATZ HYBRID TEST: Society's reasonable expectation of privacy is higher when dealing with a new technology that is not "generally available to the public". [9] This follows the logic that it is reasonable to expect common data collection practices to be used, but not rare ones. [10] In Kyllo v. United States, [11] law enforcement used a thermal imaging device to observe the relative heat levels inside a house. Though, as per Katz, observation using publicly available thermal radiation technology might appear reasonable, the uncommon means of collection was not. This modification to the Katz standard is extremely important in the context of mobile privacy. Mobile communications may be subdivided into smaller parts: the audio of a phone call, e-mail, and data related to a user's current location. Following an application of the hybrid Katz/Kyllo test, the reasonable expectation of privacy in each of those communications would be determined separately, [12] by evaluating the general accessibility of the technology required to capture each stream. [13]
    3. DOUBLECLICK TEST: DoubleClick [14] illustrates the potential problems of transferring consent to a third party, one to whom the user never provided direct consent or of whom the user is not even aware. The court held that for DoubleClick, an online advertising network, to collect information from a user, it needed only to obtain permission from the website that the user accessed, and not from the user himself. The court reasoned that the information the user disclosed to the website was analogous to information one discloses to another person during a conversation. Just as the other party to the conversation would be free to tell his friends anything that was said, a website should be free to disclose any information it receives from a user's visit once the user has consented to use the website's services.

    These interpretations have weakened the standards of online privacy. While the Katz test vaguely hinges on societal expectations, the Kyllo test to an extent strengthens privacy rights by disallowing uncommon methods of collection; but, as the DoubleClick test illustrates, once the user has consented to such practices he cannot object to them. There have been suggestions to treat personal information as property when it shares the features of property, as location data does: [15] it is fixed when in storage, it has a monetary value, and it is sold and traded on a regular basis. This would create a standard where consent is required for third-party access. [16] Consent would then play a more pivotal role in affixing liability.

    The notice and choice mechanism is designed to put individuals in charge of the collection and use of their personal information. In theory, the regime preserves user autonomy by putting the individual in charge of decisions about the collection and use of personal information. [17] Notice and choice is asserted as a substitute for regulation because it is thought to be more flexible, inexpensive to implement, and easy to enforce. [18] Additionally, notice and choice can legitimize an information practice, whatever it may be, by obtaining an individual's consent, and can accommodate individual privacy preferences. [19]

    However, the notice and choice mechanism is often criticized for leaving users uninformed, or at least misinformed, as people rarely see, read, or understand privacy notices. [20] Moreover, few people opt out of the collection, use, or disclosure of their data when presented with the choice to do so. [21]

    Amber Sinha of the Centre for Internet and Society argues that consent in these scenarios is rarely meaningful, as consumers fail to read or access privacy policies and to understand their consequences, and developers do not give them the choice to opt out of a particular data practice while still being allowed to use their services. [22]

    Of particular concern is the use of software applications (apps) designed to work on mobile devices. Estimates place the current number of apps available for download at more than 1.5 million, and that number is growing daily. [23] A 2011 Google study, "The Mobile Movement", found that mobile devices are viewed as extensions of ourselves with which we share deeply personal relations, raising fundamental questions about how apps and other mobile communications influence our privacy decision-making.

    Recent research indicates that mobile device users have concerns about the privacy implications of using apps. [24] The research finds that almost 60 percent of respondents ages 50 and older decided not to install an app because of privacy concerns (see figure 1).[25]

    Consumer Reactions

    Because no standards currently exist for providing privacy notice disclosure for apps, consumers may find it difficult to understand what data the app is collecting, how those data will be used, and what rights users have in limiting the collection and use of their data. Many apps do not provide users with privacy policy statements, making it impossible for app users to know the privacy implications of using a particular app. [26] Apps can make use of any or all of the device's functions, including contact lists, calendars, phone and messaging logs, locational information, Internet searches and usage, video and photo galleries, and other possibly sensitive information. For example, an app that allows the device to function as a scientific calculator may be accessing contact lists, locational data, and phone records even though such access is unnecessary for the app to function properly. [27]

    Other apps may have privacy policies that are confusing or misleading. For example, an analysis of health and fitness apps found that more than 30 percent of the apps studied shared data with someone not disclosed in the app's privacy policy.[28]

    Types of E-Contracts

    Margaret Radin distinguishes two models of direct e-contracts based on consent: "contract-as-consent" and "contract-as-product". [29]

    The contract-as-consent model is the traditional picture of how binding commitment is arrived at between two humans. It involves a meeting of the minds which implies that terms be understood, alternatives be available, and probably that bargaining be possible.

    In the contract-as-product model, the terms are part of the product, not a conceptually separate bargain; the physical product plus the terms are a package deal. For example, the fact that a chip inside an electronics item will wear out after a year is an unseen term of the package, leaving the buyer only a take-it-or-leave-it choice of whether to buy the package at all.

    The contract-as-product model defies traditional ideas of consent and raises questions of whether consent is meaningful. Modern-day e-contracts such as click-wrap, shrink-wrap, viral and machine-made contracts, which form the privacy policies of several apps, take a contract-as-product approach in which consumers are given a take-it-or-leave-it option.

    Mobile application privacy notices fall into the contract-as-product model. Consumers often have to click "I agree" to all the innumerable Terms and Conditions in order to install the app. For instance, a term stating that a fitness app will collect biometric data is a non-negotiable feature of the product. It is a classic take-it-or-leave-it approach in which consumers compromise on privacy to avail of services.

    Contracts that facilitate these transactions are generally long and complicated and often agreed to by consumers without reading them.

    Craswell strikes a balance in applying the liability rule, pointing out that since explaining the meaning of extensive fine print would be very costly, it could be efficient to affix liability not on the written contract but rather on "reasonable" terms. This means that if a fitness app collects sensitive financial information, which is unreasonable given its core activities, then even if the user has consented to this in the privacy policy's fine print the contract should be capable of being challenged.

    The Concept of Privacy by Design

    Privacy needs to be considered from the very beginning of system development. For this reason, Dr. Ann Cavoukian [30] coined the term "Privacy by Design": privacy should be taken into account throughout the entire engineering process, from the earliest design stages to the operation of the production system. This holistic approach is promising, but it does not come with mechanisms for integrating privacy into the development processes of a system. The privacy-by-design approach, i.e. that data protection safeguards should be built into products and services from the earliest stage of development, has been addressed by the European Commission in its proposal for a General Data Protection Regulation. This proposal uses the terms "privacy by design" and "data protection by design" synonymously.

    The 7 Foundational Principles[31] of Privacy by Design are:

    1. Proactive not Reactive; Preventative not Remedial
    2. Privacy as the Default Setting
    3. Privacy Embedded into Design
    4. Full Functionality - Positive-Sum, not Zero-Sum
    5. End-to-End Security - Full Lifecycle Protection
    6. Visibility and Transparency - Keep it Open
    7. Respect for User Privacy - Keep it User-Centric

    Several terms have been introduced to describe the types of data that need to be protected. A term used very prominently by industry is "personally identifiable information (PII)", i.e., data that can be related to an individual. Similarly, the European data protection framework centres on "personal data". However, some authors argue that this falls short, since data that is not related to a single individual might still have an impact on the privacy of groups; e.g., an entire group might be discriminated against with the help of certain information. For data of this category the term "privacy-relevant data" has been used. [32]

    An essential part of Privacy by Design is that data subjects should be adequately informed whenever personal data is processed. Whenever data subjects use a system, they should be informed about which information is processed, for what purpose, by which means, and with whom it is shared. They should be informed about their data access rights and how to exercise them. [33]

    Whereas system design very often does not consider, or barely considers, the end-users' interests, focusing primarily on the owners and operators of the system, it is essential to account for the privacy and security interests of all parties involved by informing them about the associated advantages (e.g. security gains) and disadvantages (e.g. costs, use of resources, less personalisation). Such a system of "multilateral security" requires that the demands of all parties be realized. [34]

    The Concept of Data Minimization

    The most basic privacy design strategy is MINIMISE, which states that the amount of personal data that is processed should be restricted to the minimal amount possible. By ensuring that no, or no unnecessary, data is collected, the possible privacy impact of a system is limited. Applying the MINIMISE strategy means one has to answer whether the processing of personal data is proportional (with respect to the purpose) and whether no other, less invasive, means exist to achieve the same purpose. The decision to collect personal data can be made at design time and at run time, and can take various forms. For example, one can decide not to collect any information about a particular data subject at all. Alternatively, one can decide to collect only a limited set of attributes.[35]
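    As a toy illustration (not from the paper; the attribute names and allow-list below are hypothetical), the MINIMISE strategy's design-time decision to "collect only a limited set of attributes" can be sketched as a simple allow-list filter applied at the point of collection:

    ```python
    # Hypothetical allow-list fixed at design time: only the attributes
    # needed for the stated purpose are ever stored.
    ALLOWED_ATTRIBUTES = {"user_id", "app_version"}

    def minimise(record):
        """Drop every attribute not explicitly allowed for the stated purpose."""
        return {k: v for k, v in record.items() if k in ALLOWED_ATTRIBUTES}

    raw = {
        "user_id": "u42",
        "app_version": "1.3",
        "location": "12.97,77.59",   # never stored
        "contacts": ["a", "b"],      # never stored
    }
    print(minimise(raw))  # {'user_id': 'u42', 'app_version': '1.3'}
    ```

    Because the filter sits in front of storage, data outside the allow-list never enters the system at all, which is the run-time counterpart of the design-time decision the paragraph above describes.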

    If a company collects and retains large amounts of data, there is an increased risk that the data will be used in a way that departs from consumers' reasonable expectations.[36]

    There are three privacy protection goals[37] that data minimization and privacy by design seek to achieve. These privacy protection goals are:

    • Unlinkability - To prevent data being linked to an identifiable entity
    • Transparency - The information has to be available before, during and after the processing takes place.
    • Intervenability - Those who provide their data must have means of intervention into all ongoing or planned privacy-relevant data processing

    Spiekermann and Cranor raise an intriguing point in their paper: they argue that companies that employ privacy by design and data minimization practices in their applications should be allowed to skip privacy policies and forgo notice and choice features. [38]

    To summarise: the emerging model and legal dialogue regulating online privacy is that of notice and choice, which has been severely criticised for not producing informed choices. E-contracts such as agreements to privacy notices follow the consent-as-product model; where there is extensive fine print, liability must be affixed on the basis of reasonable terms. Privacy notices must incorporate the concepts of Privacy by Design by providing complete information and collecting minimal data.

    Features of Privacy Notices in the Current Mobile Ecosystem

    A privacy notice informs a system's users or a company's customers of data practices involving personal information. Internal practices with regard to the collection, processing, retention, and sharing of personal information should be made transparent.

    Each app a user chooses to install on his smartphone can access different information stored on that device. There is no automatic access to user information. Each application has access only to the data that it pulls into its own 'sandbox'.

    The sandbox is a set of fine-grained controls limiting an application's access to files, preferences, network resources, hardware etc. Applications cannot access each other's sandboxes.[39] The data that makes it into the sandbox is normally defined by user permissions.[40] These are a set of user-defined controls[41] and evidence that a user consents to the application accessing that data. [42]
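    A toy model of this arrangement (not any platform's actual API) shows how a sandbox gates device data behind user-granted permissions:

```python
# Toy model of an app sandbox: data reaches the app only through
# permission-gated accessors, so the set of granted permissions defines
# exactly what can be pulled into the app's sandbox.

class PermissionDenied(Exception):
    pass

class Sandbox:
    def __init__(self, granted_permissions):
        self.granted = set(granted_permissions)
        # Device-held data; names and values are purely illustrative.
        self._device_data = {"location": (48.1, 11.6), "contacts": ["Bob"]}

    def read(self, resource):
        if resource not in self.granted:
            raise PermissionDenied(f"app lacks '{resource}' permission")
        return self._device_data[resource]

app = Sandbox(granted_permissions={"location"})
app.read("location")        # allowed: the user granted this permission
# app.read("contacts")      # would raise PermissionDenied
```

    Because each app gets its own `Sandbox` instance, one app's accessor can never reach another app's data, echoing the isolation described above.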

    To gain permission, mobile apps generally display privacy notices that explicitly seek consent. These can leverage different channels, including a privacy policy document posted on a website or linked to from mobile app stores or mobile apps. For example, Google Maps uses a traditional clickwrap structure that requires the user to agree to a list of terms and conditions when the program is initially launched. [43] Foursquare, on the other hand, embeds its terms in a privacy policy posted on its website, and not within the app. [44]

    This section explains the features of current privacy notices along four parameters: the stage at which the notice is given, its content, its length, and user comprehension. Under each of these parameters, the associated problems are identified and alternatives are suggested.

    (1) Timing and Frequency of Notice:

    This sub-section identifies the various stages at which notices are given, highlights their advantages and disadvantages, and makes recommendations. It concludes with the findings of a study on the ideal stage at which to provide notice. This is supplemented with two critical models that address the common problems of habituation and contextualization.

    Studies indicate that the timing of notices, or the stage at which they are given, impacts how consumers recall and comprehend them and make choices accordingly. [45] Introducing only a 15-second delay between the presentation of privacy notices and privacy-relevant choices can be enough to render notices ineffective at driving user behaviour.[46]

    Google Android and Apple iOS provide notices at different times. At the time of writing, Android users are shown a list of requested permissions while the app is being installed, i.e., after the user has chosen to install the app. In contrast, iOS shows a dialog during app use, the first time a permission is requested by an app. This is also referred to as a "just-in-time" notification. [47]

    The following are the stages in which a notice can be given:

    1) NOTICE AT SETUP: Notice can be provided when a system is used for the first time[48]. For instance, as part of a software installation process users are shown and have to accept the system's terms of use.

    a) Advantages: Users can inspect a system's data practices before using or purchasing it. The system developer benefits in terms of liability and transparency, which gain user trust. Setup notices provide an opportunity to explain unexpected data practices that may have a benign purpose in the context of the system[49]. They can even impact purchase decisions: Egelman et al. found that participants were more likely to pay a premium at a privacy-protective website when they saw privacy information in search results, as opposed to on the website after selecting a search result[50].

    b) Disadvantages: Users have become largely habituated to install-time notices and ignore them[51]. Users may have difficulty making informed decisions because they have not used the system yet and cannot fully assess its utility or weigh privacy trade-offs. They may also be focused on the primary task, namely completing the setup process to be able to use the system, and fail to pay attention to notices [52].

    c) Recommendations: Privacy notices provided at setup time should be concise and focus on data practices immediately relevant to the primary user rather than presenting extensive terms of service. Integrating privacy information into other materials that explain the functionality of the system may further increase the chance that users do not ignore it.[53]

    2) JUST IN TIME NOTICE: A privacy notice can be shown when a data practice is active, for example when information is being collected, used, or shared. Such notices are referred to as "contextualized" or "just-in-time" notices[54].

    a) Advantages: They enhance transparency and enable users to make privacy decisions in context. Users have also been shown to more freely share information if they are given relevant explanations at the time of data collection[55].

    b) Disadvantages: Habituation can occur if these notices are shown too frequently. Moreover, in apps such as games, users generally tend to ignore notices displayed during usage.

    c) Recommendations: Consumers can be given notice the first time a particular type of information, such as email, is accessed, and then be given the option to opt out of further notifications. A consumer may then opt out of notices about email but choose to view all notices about access to health information, depending on his privacy priorities.
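    The per-category opt-out recommendation can be sketched as a small state machine; the class and category names are illustrative only:

```python
# Illustrative sketch of the recommendation above: show a just-in-time
# notice whenever a data category is accessed, unless the user has
# opted out of further notices for that specific category.

class NoticeManager:
    def __init__(self):
        self.muted = set()   # categories the user opted out of

    def on_access(self, category: str) -> bool:
        """Return True if a notice should be shown for this access."""
        return category not in self.muted

    def opt_out(self, category: str):
        """Called when the user dismisses further notices for a category."""
        self.muted.add(category)

nm = NoticeManager()
assert nm.on_access("email")      # first email access: notice shown
nm.opt_out("email")               # user opts out of email notices
assert not nm.on_access("email")  # further email accesses are silent
assert nm.on_access("health")     # health notices are still shown
```

    The opt-out is scoped per category, so muting low-stakes notices does not silence notices about more sensitive data.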

    3) CONTEXT-DEPENDENT NOTICES: The user's and system's context can also be considered to show additional notices or controls if deemed necessary [56]. Relevant context may be determined by a change of location, additional users included in or receiving the data, and other situational parameters. Some locations may be particularly sensitive, therefore users may appreciate being reminded that they are sharing their location when they are in a new place, or when they are sharing other information that may be sensitive in a specific context. Facebook introduced a privacy checkup message in 2014 that is displayed under certain conditions before posting publicly. It acts as a "nudge" [57] to make users aware that the post will be public and to help them manage who can see their posts.

    a) Advantages: It may help users make privacy decisions that are more aligned with their desired level of privacy in the respective situation and thus foster trust in the system.

    b) Disadvantages: Challenges in providing context-dependent notices are detecting relevant situations and context changes. Furthermore, determining whether a context is relevant to an individual's privacy concerns could in itself require access to that person's sensitive data and privacy preferences. [58]

    c) Recommendations: Standards must be evolved to determine a contextual model based on user preferences.

    4) PERIODIC NOTICES: These are shown the first couple of times a data practice occurs, or every time. The sensitivity of the data practice may determine the appropriate frequency.

    a) Advantages: They can further help users maintain awareness of privacy-sensitive information flows, especially when data practices are largely invisible[59], such as in patient monitoring apps. This helps provide better control options.

    b) Disadvantages: Repeating notices can lead to notice fatigue and habituation[60].

    c) Recommendations: The frequency of these notices needs to be balanced with user needs. [61] Data practices that are reasonably expected as part of the system may require only a single notice, whereas practices falling outside the expected context of use, which the user is potentially unaware of, may warrant repeated notices. Periodic notices should be relevant to users so as not to be perceived as annoying. A combined notice can remind users about multiple ongoing data practices. Rotating warnings or changing their look can also further reduce habituation effects. [62]

    5) PERSISTENT NOTICES: A persistent indicator is typically non-blocking and may be shown whenever a data practice is active, for instance when information is being collected continuously or when information is being transmitted[63]. When inactive or not shown, persistent notices also indicate that the respective data practice is currently not active. For instance, Android and iOS display a small icon in the status bar whenever an application accesses the user's location.

    a) Advantages: These are easy to understand and unobtrusive, which increases their usefulness.

    b) Disadvantages: These ambient indicators often go unnoticed.[64] Most systems can only accommodate such indicators for a small number of data practices.

    c) Recommendations: Persistent indicators should be designed to be noticeable when they are active. A system should only provide a small set of persistent indicators to indicate activity of especially critical data practices which the user can also specify.

    6) NOTICE ON DEMAND: Users may also actively seek privacy information and request a privacy notice. A typical example is posting a privacy policy at a persistent location[65] and providing links to it from the app. [66]

    a) Advantages: Privacy sensitive users are given the option to better explore policies and make informed decisions.

    b) Disadvantages: The current model of a link to a long privacy policy on a website discourages users from requesting information that they cannot fully understand and do not have time to read.

    c) Recommendations: Better options are privacy settings interfaces or privacy dashboards within the system that provide information about data practices; controls to manage consent; summary reports of what information has been collected, used, and shared by the system; as well as options to manage or delete collected information. Contact information for a privacy office should be provided to enable users to make written requests.

    Which of these Stages is the Most Ideal?

    In a series of experiments, Rebecca Balebako and others [67] identified the impact of timing on smartphone privacy notices. The following five conditions were imposed on participants, who were later tested on their recall of the notices through questions:

    • Not Shown: The participants installed and used the app without being shown a privacy notice
    • App Store: Notice was shown at the time of installation in the app store
    • App Store Big: A large notice occupying more screen space was shown in the app store
    • App Store Popup: A smaller popup was displayed in the app store
    • During Use: Notice was shown during usage of the app

    The results (Figure) suggest that even if a notice contains information users care about, it is unlikely to be recalled if only shown in the app store, and more effective when shown during app usage.

    Seeing the notice during app usage resulted in better recall. Although participants remembered a notice shown after app use as well as notices shown at other points, they found that after use was not a good point at which to make decisions about the app, because they had already used it; participants preferred the notice to be shown before or during app usage.

    Hence, depending on the app, there are optimal times to show smartphone privacy notices to maximize attention and recall, with preference given to the beginning of or during app use.

    However, several of these stages as outlined above face the disadvantages of habituation and uncertainty about contextualization. The following two models have been proposed to address this:

    Habituation

    When notices are shown too frequently, users may become habituated. Habituation may lead to users disregarding warnings, often without reading or comprehending the notice[68]. To reduce habituation from app permission notices, Felt et al. identified and tested a method to determine which permission requests should be emphasized. [69]

    They categorized actions on the basis of their revertibility, severability, initiation, alterability and need for approval (explained in figure), and applied the following permission-granting mechanisms:

    • Automatic Grant: It must be requested by the developer, but it is granted without user involvement.
    • Trusted UI elements: They appear as part of an application's workflow, but clicking on them imbues the application with a new permission. To ensure that applications cannot trick users, trusted UI elements can be controlled only by the platform. For example, a user who is sending an SMS message from a third-party application will ultimately need to press a button; using trusted UI means the platform provides the button.
    • Confirmation Dialog: Runtime consent dialogs interrupt the user's flow by prompting them to allow or deny a permission and often contain descriptions of the risk or an option to remember the decision.
    • Install-time warning: These integrate permission granting into the installation flow. Installation screens list the application's requested permissions. In some platforms (e.g., Facebook), the user can reject some install-time permissions. In other platforms (e.g., Android and Windows 8 Metro), the user must approve all requested permissions or abort installation.[70]

    Based on these conditions, the following sequential model was proposed for the system to determine the frequency of displaying notices:

    Sequential Model

    Initial tests have been successful in reducing habituation effects, and this is an important step towards better designing and displaying privacy notices.
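    The mechanism selection can be approximated in code. The ordering below is an illustrative reading of the four mechanisms and action properties listed above, not Felt et al.'s exact decision tree:

```python
# Illustrative sketch (not Felt et al.'s actual decision tree) of mapping
# an action's properties to one of the four permission-granting mechanisms.

def choose_mechanism(action: dict) -> str:
    # Low-severity, revertible actions need no user involvement at all.
    if action.get("revertible") and action.get("severity") == "low":
        return "automatic_grant"
    # Actions the user initiates can be granted via a trusted UI element
    # embedded in the workflow (e.g. a platform-provided "Send" button).
    if action.get("user_initiated"):
        return "trusted_ui"
    # One-off sensitive actions warrant an interrupting dialog.
    if action.get("severity") == "high" and not action.get("recurring"):
        return "confirmation_dialog"
    # Everything else is disclosed up front during installation.
    return "install_time_warning"

choose_mechanism({"revertible": True, "severity": "low"})   # automatic_grant
choose_mechanism({"user_initiated": True})                  # trusted_ui
```

    The point of the sequence is that interruptive prompts are reserved for the small set of actions that genuinely need them, which is what keeps habituation down.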

    Contextualization

    Bastian Könings and others, in their paper "Towards Context Adaptive Privacy Decisions in Ubiquitous Computing" [71], propose a system for supporting a user's privacy decisions in situ, i.e., in the context in which they are required, following the notion of contextual integrity. It approximates the user's privacy preferences and adapts them to the current context. The system can then either recommend sharing decisions and actions or autonomously reconfigure privacy settings. It is divided into the following stages:

    Privacy Decision Process

    Context Model: A distinction is made between the decision level and the system level. The system level enables context awareness but also filters context information and maps it to semantic concepts required for decisions. Semantic mappings can be derived from a pre-defined or learnt world model. On the decision level, the context model only contains components relevant for privacy decision making. For example, an activity involves the user and is assigned a type, i.e., a semantic label such as home or work, based on system-level input.

    Privacy Decision Engine: The context model allows reasoning about which context items are affected by a context transition. When a transition occurs, the privacy decision engine (PDE) evaluates which protection-worthy context items are affected. The protection worthiness (or privacy relevance) of context items in a given context is determined by the user's privacy preferences, which are approximated by the system from the knowledge base. This serves as a basis for adapting privacy preferences and is subsequently further adjusted to the user by learning from the user's explicit decisions, behaviour, and reaction to system actions. [72]

    The user's personality type is determined before initial system use to select a basic privacy profile.

    It may also be possible that the privacy preference cannot be realized in the current context. In that case, the privacy policy would suggest terminating the activity. For each privacy policy variant a confidence score is calculated based on how well it fits the adapted privacy preference. Based on the confidence scores, the PDE selects the most appropriate policy candidate or triggers user involvement if the confidence is below a certain threshold determined by the user's personality and previous privacy decisions.
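    The selection step described above might be sketched as follows; the scoring values and threshold are assumed inputs, since the paper does not prescribe a formula:

```python
# Illustrative sketch of the PDE's policy selection: pick the candidate
# privacy policy with the highest confidence score, or fall back to
# asking the user when no candidate is confident enough.

def select_policy(candidates, threshold):
    """candidates: list of (policy_name, confidence) pairs.
    threshold: user-specific value, assumed to be derived from the
    user's personality and previous privacy decisions."""
    policy, confidence = max(candidates, key=lambda c: c[1])
    if confidence < threshold:
        return ("ask_user", None)      # trigger user involvement
    return ("apply", policy)

candidates = [("share_city_only", 0.82), ("share_nothing", 0.41)]
select_policy(candidates, threshold=0.7)   # -> ("apply", "share_city_only")
select_policy(candidates, threshold=0.9)   # -> ("ask_user", None)
```

    Raising the threshold makes the system more conservative: it asks the user more often instead of acting autonomously.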

    Realization and Enforcement: The selected privacy policy must be realized on the system level. This is done by combining territorial privacy and information privacy aspects. The private territory is defined by a territorial privacy boundary that separates desired and undesired entities.

    Granularity adjustments for specific information items are also defined. For example, instead of the user's exact position, only the street address or city can be provided.
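    Such a granularity adjustment might look like the following; the levels and field names are purely illustrative:

```python
# Illustrative granularity ladder for a location item: the enforcement
# layer releases only the level of detail the selected policy allows,
# coarsening from exact coordinates down to nothing at all.

def coarsen(location: dict, level: str):
    if level == "exact":
        return location
    if level == "street":
        return {"street": location["street"], "city": location["city"]}
    if level == "city":
        return {"city": location["city"]}
    return None   # level "none": nothing is released

loc = {"lat": 48.137, "lon": 11.575, "street": "Marienplatz 1", "city": "Munich"}
coarsen(loc, "city")   # -> {"city": "Munich"}
```

    The same ladder idea applies to other information items, e.g. releasing an age range instead of a birthdate.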

    ADVANTAGES: The personalization to a specific user has the advantage of better emulating that user's privacy decision process. It also helps to decide when to involve the user in the decision process by providing recommendations only and when privacy decisions can be realized autonomously.

    DISADVANTAGES: The entire model hinges on the ability of the system to accurately determine the user's profile before the user starts using it, rather than after, when preferences can be determined more accurately. There is no provision for the user to pick his own privacy profile; it is all system-determined, taking away an element of consent at the very beginning. As all further preferences are adapted from this base, it is possible that the system may not deliver. The use of confidence scores is an approximation that can compromise privacy by a small numerical margin of difference.

    However, the paper offers useful insight into techniques of contextualization. Depending on the environment, different strategies for policy realization and varying degrees of enforcement are possible[73].

    Length

    The length of privacy policies is often cited as one reason they are so commonly ignored. Studies show privacy policies are hard to read, read infrequently, and do not support rational decision making. [74] Aleecia M. McDonald and Lorrie Faith Cranor, in their seminal study "The Cost of Reading Privacy Policies", estimated that the average length of privacy policies is 2,500 words. At a reading speed of 250 words per minute, which is typical for those who have completed secondary education, the average policy would take 10 minutes to read.
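    The arithmetic behind the 10-minute figure is straightforward:

```python
# Reproducing the back-of-the-envelope estimate from McDonald and Cranor:
# average policy length divided by a typical reading speed.

avg_policy_words = 2500     # average privacy policy length (words)
reading_speed_wpm = 250     # words per minute, secondary education level

minutes_per_policy = avg_policy_words / reading_speed_wpm
print(minutes_per_policy)   # -> 10.0 minutes per policy
```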

    The researchers also investigated how quickly people could read privacy policies when merely skimming them for pertinent details. They timed 93 people as they skimmed a 934-word privacy policy and answered multiple-choice questions on its content.

    Though some people took under a minute and others up to 42 minutes, the bulk of the subjects of the research took between three and six minutes to skim the policy, which itself was just over a third of the size of the average policy.

    The researchers used their data to estimate how much it would cost if people were charged for the time taken to read the privacy policy of every site they visit once a year, and arrived at a mind-boggling figure of $652 billion.

    Probability Density Function

    Problems

    Though the figure of $652 billion has limited usefulness, because people rarely read whole policies and cannot charge anyone for the time it takes to do this, the researchers concluded that readers who do conduct a cost-benefit analysis might decide not to read any policies.

    "Preliminary work from a small pilot study in our laboratory revealed that some Internet users believe their only serious risk online is they may lose up to $50 if their credit card information is stolen. For people who think that is their primary risk, our point estimates show the value of their time to read policies far exceeds this risk. Even for our lower bound estimates of the value of time, it is not worth reading privacy policies though it may be worth skimming them," said the researchers. That users see credit card fraud as their only serious risk suggests they likely do not understand the risks to their privacy. As an FTC report recently stated, "it is unclear whether consumers even understand that their information is being collected, aggregated, and used to deliver advertising."[75]

    Recommendations

    If the privacy community can find ways to reduce the time cost of reading policies, it may be easier to convince Internet users to do so. For example, if useful headings let consumers skim policies instead of reading them word-for-word, or if a layered format hides all but the relevant information and thus reduces the effective length of the policies, more people may be willing to read them. [76] Apps can also adopt short-form notices that summarize, and link to, the larger, more complete notice displayed elsewhere. These short-form notices need not be legally binding, but must indicate that they do not cover all types of data collection, only the most relevant ones. [77]

    Content

    In an attempt to gain permission, most privacy policies inform users about: (1) the type of information collected; and (2) the purpose for collecting that information.

    Standard privacy notices generally cover the points of:

    • Methods of collection and usage of personal information
    • The cookie policy
    • Sharing of customer information [78]

    Certified Information Privacy Professionals divide notices into the following sequential sections[79]:

    i. Policy Identification Details: Defines the policy name, version and description.

    ii. P3P-Based Components: Defines policy attributes that would apply if the policy is exported to a P3P format. [80] Such attributes would include: policy URLs, organization information, PII access and dispute resolution procedures.

    iii. Policy Statements and Related Elements (Groups, Purposes and PII Types): Policy statements define the individuals able to access certain types of information, for certain pre-defined purposes.
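    A policy statement of this shape could be represented as structured data; the field names below loosely echo the P3P vocabulary but are illustrative, not a real schema:

```python
# Illustrative structured representation of a policy statement: which
# groups may access which PII types, and for which pre-defined purposes.
# Field names loosely echo the P3P vocabulary; they are not a real schema.

policy_statement = {
    "groups": ["customer-service"],            # who may access the data
    "pii_types": ["email", "order-history"],   # what data is covered
    "purposes": ["order-support"],             # why it may be used
    "retention": "until-account-deletion",
}

def may_access(statement, group, pii_type, purpose):
    """Check whether a given access is covered by the statement."""
    return (group in statement["groups"]
            and pii_type in statement["pii_types"]
            and purpose in statement["purposes"])

may_access(policy_statement, "customer-service", "email", "order-support")  # True
may_access(policy_statement, "marketing", "email", "ads")                   # False
```

    Encoding statements this way is what makes a policy exportable to a machine-readable format such as P3P, as mentioned above.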

    Problems

    Applications tend to define the type of data broadly in an attempt to strike a balance between providing enough information that the application may gain consent to access a user's data, and being broad enough to avoid ruling out specific information.[81]

    This leads to usage of vague terms like "information collected may include."[82]

    Similarly, the purpose of the data acquisition is also very broad. For example, a privacy policy may state that user data can be collected for anything related to "improving the content of the Service." As the scope of "improving the content of the Service" is never defined, any usage could conceivably fall within that category.[83]

    Several apps create social profiles of users based on their online preferences to promote targeted marketing, which is cleverly concealed in phrases like "we may also draw upon this Personal Information in order to adapt the Services of our community to your needs". [84] For instance, Bees & Pollen is a "predictive personalization" platform for games and apps that "uses advanced predictive algorithms to detect complex, non-trivial correlations between conversion patterns and users' DNA signatures, thus enabling it to automatically serve each user a personalized best-fit game options, in real-time." In reality, it analyses over 100 user attributes, including activity on Facebook, spending behaviours, marital status, and location.[85]

    Notices also often mislead consumers into believing that their information will not be shared with third parties by using the term "unaffiliated third parties." Other affiliated companies within the corporate structure of the service provider may have access to users' data for marketing and other purposes. [86]

    There are very few choices to opt out of certain practices, such as sharing data for marketing purposes. Thus, users are effectively left with a take-it-or-leave-it choice: give up your privacy or go elsewhere.[87] Users almost always grant consent if it is required to receive the service they want, which raises the question of whether this consent is meaningful[88].

    Recommendations

    The following recommendations have emerged:

    • Notice - Companies should provide consumers with clear, conspicuous notice that accurately describes their information practices.
    • Consumer Choice - Companies should provide consumers with the opportunity to decide (in the form of opting out) whether the company may disclose personal information to unaffiliated third parties.
    • Access and Correction - Companies should provide consumers with the opportunity to access and correct personal information collected about the consumer.
    • Security - Companies must adopt reasonable security measures in order to protect the privacy of personal information. Possible security measures include: administrative security, physical security and technical security.
    • Enforcement - Companies should have systems through which they can enforce the privacy policy. This may be managed by the company, or an independent third party to ensure compliance. Examples of popular third parties include BBBOnLine and TRUSTe.[89]
    • Standardization - Several researchers and organizations have recommended a standardized privacy notice format that covers certain essential points. [90] However, as displaying a privacy notice is itself voluntary, it is unpredictable whether companies would willingly adopt a standardized model. Moreover, with the app market burgeoning with innovations, a standard format may not cover all emergent data practices.

    Comprehension

    The FTC states that "the notice-and-choice model, as implemented, has led to long, incomprehensible privacy policies that consumers typically do not read, let alone understand. The question is not whether consumers should be given a say over unexpected uses of their data; rather, the question is how to provide simplified notice and choice"[91].

    Notably, in a survey conducted by Zogby International, 93% of adults - and 81% of teens - indicated they would take more time to read terms and conditions for websites if they were written in clearer language.[92]

    Most privacy policies are in natural language format: companies explain their practices in prose. One noted disadvantage to current natural language policies is that companies can choose which information to present, which does not necessarily solve the problem of information asymmetry between companies and consumers. Further, companies use what have been termed "weasel words" - legalistic, ambiguous, or slanted phrases - to describe their practices [93].

    In a study by Aleecia M. McDonald and others[94], it was found that the accuracy of what users comprehend spans a wide range. An average of 91% of participants answered correctly when asked about cookies, 61% answered correctly about opt-out links, 60% understood when their email address would be "shared" with a third party, and only 46% answered correctly regarding telemarketing. Participants found questions harder when vague or complicated terms were substituted for practices, such as referring to telemarketing as "the information you provide may be used for marketing services." Overall accuracy was a mere 33%.

    Problems

    Natural language policies are often long and require college-level reading skills. Furthermore, there are no standards for which information is disclosed, no standard place to find particular information, and data practices are not described using consistent language. These policies are "long, complicated, and full of jargon and change frequently."[95]

    Kent Walker lists five problems that privacy notices typically suffer from:

    a) overkill - long and repetitive text in small print,

    b) irrelevance - describing situations of little concern to most consumers,

    c) opacity - broad terms that reflect the truth that it is impossible to track and control all the information collected and stored,

    d) non-comparability - the simplification required to achieve comparability will compromise accuracy, and

    e) inflexibility - failure to keep pace with new business models. [96]

    Recommendations

    Researchers advocate a more succinct and simpler standard for privacy notices,[97] such as representing the information in the form of a table. [98] However, studies show only an insignificant improvement in consumer understanding when privacy policies are represented in graphic formats like tables and labels. [99]

    There are also recommendations to adopt a multi-layered approach where the relevant information is summarized in a short notice.[100] This is backed by studies showing that consumers find layered policies easier to understand. [101] However, they were less accurate with the layered format, especially regarding parts that were not summarized. This suggests that participants did not continue to the full policy when the information they sought was not available in the short notice. Unless it is possible to identify all of the topics users care about and summarize them in one page, the layered notice effectively hides information and reduces transparency. It has also been pointed out that it is impossible to convey complex data policies in simple and clear language. [102]

    Consumers often struggle to map concepts such as third-party access to the terms used in policies. This is also because companies with identical practices often convey different information, and these differences are reflected in consumers' ability to understand the policies. These policies may need an educational component so that readers understand what it means for a site to engage in a given practice[103]. However, readers who do not take the time to read the policy are unlikely to read additional educational components.


    [1] Amber Sinha http://cis-india.org/internet-governance/blog/a-critique-of-consent-in-information-privacy

    [2] Wang et al. (1998); Milberg et al. (1995)

    [3] See e.g., White House, Consumer Privacy Bill of Rights (2012) http://www.whitehouse.gov/the-pressoffice/2012/02/23/we-can-t-wait-obama-administration-unveils-blueprint-privacy-bill-rights; Fed. Trade Comm'n, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Business and Policy Makers (2012) http://www.ftc.gov/sites/default/files/documents/reports/federal-trade-commissionreport-protecting-consumer-privacy-era-rapid-change-recommendations/120326privacyreport.pdf.

    [4] Fed. Trade Comm'n, Privacy Online: A Report to Congress 7 (June 1998), available at www.ftc.gov/reports/privacy3/priv-23a.pdf.

    [6] 389 U.S. 347 (1967).

    [7] Dow Chem. Co. v. United States, 476 U.S. 227, 241 (1986)

    [8] http://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1600&context=iplj

    [9] Dow Chem. Co. v. United States, 476 U.S. 227, 241 (1986)

    [10] Kyllo, 533 U.S. at 34 ("[T]he technology enabling human flight has exposed to public view (and hence, we have said, to official observation) uncovered portions of the house and its curtilage that once were private.").

    [11] Kyllo v. United States, 533 U.S. 27

    [12] See Katz, 389 U.S. at 352 ("But what he sought to exclude when he entered the booth was not the intruding eye - it was the uninvited ear. He did not shed his right to do so simply because he made his calls from a place where he might be seen.").

    [13] See United States v. Ahrndt, No. 08-468-KI, 2010 WL 3773994, at *4 (D. Or. Jan. 8, 2010).

    [14] In re DoubleClick Inc. Privacy Litig., 154 F. Supp. 2d 497 (S.D.N.Y. 2001).

    [15] http://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1600&context=iplj

    [16] See Michael A. Carrier, Against Cyberproperty, 22 BERKELEY TECH. L.J. 1485, 1486 (2007) (arguing against creating a right to exclude users from making electronic contact to their network as one that exceeds traditional property notions).

    [17] See M. Ryan Calo, Against Notice Skepticism in Privacy (and Elsewhere), 87 NOTRE DAME L. REV. 1027, 1049 (2012) (citing Paula J. Dalley, The Use and Misuse of Disclosure as a Regulatory System, 34 FLA. ST. U. L. REV. 1089, 1093 (2007) ("[D]isclosure schemes comport with the prevailing political philosophy in that disclosure preserves individual choice while avoiding direct governmental interference.")).

    [18] See Calo, supra note 10, at 1048; see also Omri Ben-Shahar & Carl E. Schneider, The Failure of Mandated Disclosure, 159 U. PA. L. REV. 647, 682 (noting that notice "looks cheap" and "looks easy").

    [19] Mark MacCarthy, New Directions in Privacy: Disclosure, Unfairness and Externalities, 6 I/S J. L. & POL'Y FOR INFO. SOC'Y 425, 440 (2011) (citing M. Ryan Calo, A Hybrid Conception of Privacy Harm Draft-Privacy Law Scholars Conference 2010, p. 28).

    [20] Daniel J. Solove, Introduction: Privacy Self-Management and the Consent Dilemma, 126 HARV. L. REV. 1879, 1885 (2013) (citing Jon Leibowitz, Fed. Trade Comm'n, So Private, So Public: Individuals, the Internet & the Paradox of Behavioral Marketing, Remarks at the FTC Town Hall Meeting on Behavioral Advertising: Tracking, Targeting, & Technology (Nov. 1, 2007), available at http://www.ftc.gov/speeches/leibowitz/071031ehavior/pdf). Paul Ohm refers to these issues as "information-quality problems." See Paul Ohm, Branding Privacy, 97 MINN. L. REV. 907, 930 (2013). Daniel J. Solove refers to this as "the problem of the uninformed individual." See Solove, supra note 17

    [21] See Edward J. Janger & Paul M. Schwartz, The Gramm-Leach-Bliley Act, Information Privacy, and the Limits of Default Rules, 86 MINN. L. REV. 1219, 1230 (2002) (stating that according to one survey, "only 0.5% of banking customers had exercised their opt-out rights").

    [22] See Amber Sinha A Critique of Consent in Information Privacy http://cis-india.org/internet-governance/blog/a-critique-of-consent-in-information-privacy

    [23] Leigh Shevchik, "Mobile App Industry to Reach Record Revenue in 2013," New Relic (blog), April 1, 2013, http://blog.newrelic.com/2013/04/01/mobile-apps-industry-to-reach-record-revenue-in-2013/.

    [24] Jan Lauren Boyles, Aaron Smith, and Mary Madden, "Privacy and Data Management on Mobile Devices," Pew Internet & American Life Project, Washington, DC, September 5, 2012.

    [25] http://www.aarp.org/content/dam/aarp/research/public_policy_institute/cons_prot/2014/improving-mobile-device-privacy-disclosures-AARP-ppi-cons-prot.pdf

    [26] "Mobile Apps for Kids: Disclosures Still Not Making the Grade," Federal Trade Commission, Washington, DC, December 2012

    [27] http://www.aarp.org/content/dam/aarp/research/public_policy_institute/cons_prot/2014/improving-mobile-device-privacy-disclosures-AARP-ppi-cons-prot.pdf

    [28] Linda Ackerman, "Mobile Health and Fitness Applications and Information Privacy," Privacy Rights Clearinghouse, San Diego, CA, July 15, 2013.

    [29] Margaret Jane Radin, Humans, Computers, and Binding Commitment, 75 IND. L.J. 1125, 1126 (1999). http://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=2199&context=ilj

    [30] William Aiello, Steven M. Bellovin, Matt Blaze, Ran Canetti, John Ioannidis, Angelos D. Keromytis, and Omer Reingold. Just fast keying: Key agreement in a hostile internet. ACM Trans. Inf. Syst. Secur., 7(2):242-273, 2004.

    [31] Privacy By Design The 7 Foundational Principles by Anne Cavoukian https://www.ipc.on.ca/images/resources/7foundationalprinciples.pdf

    [32] G. Danezis, J. Domingo-Ferrer, M. Hansen, J.-H. Hoepman, D. Le Métayer, R. Tirtea, and S. Schiffner. Privacy and Data Protection by Design - from policy to engineering. Report, ENISA, Dec. 2014.

    [33] G. Danezis, J. Domingo-Ferrer, M. Hansen, J.-H. Hoepman, D. Le Métayer, R. Tirtea, and S. Schiffner. Privacy and Data Protection by Design - from policy to engineering. Report, ENISA, Dec. 2014.

    [34] G. Danezis, J. Domingo-Ferrer, M. Hansen, J.-H. Hoepman, D. Le Métayer, R. Tirtea, and S. Schiffner. Privacy and Data Protection by Design - from policy to engineering. Report, ENISA, Dec. 2014.

    [35] John Frank Weaver, We Need to Pass Legislation on Artificial Intelligence Early and Often, SLATE FUTURE TENSE (Sept. 12, 2014),http://www.slate.com/blogs/future_tense/2014/09/12/we_need_to_pass_artificial_intelligence_laws_early_and_often.html

    [36] Margaret Jane Radin, Humans, Computers, and Binding Commitment, 75 IND. L.J. 1125, 1126 (1999).

    [37] Richard Warner & Robert Sloan, Beyond Notice and Choice: Privacy, Norms, and Consent, J. High Tech. L. (2013). Available at: http://scholarship.kentlaw.iit.edu/fac_schol/568

    [39] iOS Application Programming Guide: The Application Runtime Environment, APPLE, http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/RuntimeEnvironment/RuntimeEnvironment.html (last updated Feb. 24, 2011)

    [40] Security and Permissions, ANDROID DEVELOPERS, http://developer.android.com/guide/topics/security/security.html (last updated Sept. 13, 2011).

    [41] iOS Application Programming Guide: The Application Runtime Environment, APPLE, http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/RuntimeEnvironment/RuntimeEnvironment.html (last updated Feb. 24, 2011)

    [42] See Katherine Noyes, Why Android App Security is Better Than for the iPhone, PC WORLD BUS. CTR. (Aug. 6, 2010, 4:20 PM), http://www.pcworld.com/businesscenter/article/202758/why_android_app_security_is_better_than_for_the_iphone.html; see also About Permissions for Third-Party Applications, BLACKBERRY, http://docs.blackberry.com/en/smartphone_users/deliverables/22178/About_permissions_for_third-party_apps_50_778147_11.jsp (last visited Sept. 29, 2011); Security and Permissions, supra note 76.

    [43] Peter S. Vogel, A Worrisome Truth: Internet Privacy is Impossible, TECHNEWSWORLD (June 8, 2011, 5:00 AM), http://www.technewsworld.com/story/72610.html.

    [44] Privacy Policy, FOURSQUARE, http://foursquare.com/legal/privacy (last updated Jan. 12, 2011)

    [45] N. S. Good, J. Grossklags, D. K. Mulligan, and J. A. Konstan. Noticing Notice: A Large-scale Experiment on the Timing of Software License Agreements. In Proc. of CHI. ACM, 2007.

    [46] I. Adjerid, A. Acquisti, L. Brandimarte, and G. Loewenstein. Sleights of Privacy: Framing, Disclosures, and the Limits of Transparency. In Proc. of SOUPS. ACM, 2013.

    [47] http://delivery.acm.org/10.1145/2810000/2808119/p63-balebako.pdf?ip=106.51.36.200&id=2808119&acc=OA&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E35B5BCE80D07AAD9&CFID=801296199&CFTOKEN=33661544&__acm__=1466052980_2f265a2442ea3394aa1ebab7e6449933

    [48] Microsoft. Privacy Guidelines for Developing Software Products and Services. Technical Report version 3.1, 2008.

    [49] Microsoft. Privacy Guidelines for Developing Software Products and Services. Technical Report version 3.1, 2008.

    [50] S. Egelman, J. Tsai, L. F. Cranor, and A. Acquisti. Timing is everything?: the effects of timing and placement of online privacy indicators. In Proc. CHI '09. ACM, 2009.

    [51] R. Böhme and S. Köpsell. Trained to accept?: A field experiment on consent dialogs. In Proc. CHI '10. ACM, 2010

    [52] N. S. Good, J. Grossklags, D. K. Mulligan, and J. A. Konstan. Noticing notice: a large-scale experiment on the timing of software license agreements. In Proc. CHI '07. ACM, 2007.

    [53] N. S. Good, J. Grossklags, D. K. Mulligan, and J. A. Konstan. Noticing notice: a large-scale experiment on the timing of software license agreements. In Proc. CHI '07. ACM, 2007.

    [54] Microsoft. Privacy Guidelines for Developing Software Products and Services. Technical Report version 3.1, 2008.

    [55] A. Kobsa and M. Teltzrow. Contextualized communication of privacy practices and personalization benefits: Impacts on users' data sharing and purchase behavior. In Proc. PETS '05. Springer, 2005.

    [56] F. Schaub, B. Könings, and M. Weber. Context-adaptive privacy: Leveraging context awareness to support privacy decision making. IEEE Pervasive Computing, 14(1):34-43, 2015.

    [57] E. Choe, J. Jung, B. Lee, and K. Fisher. Nudging people away from privacy-invasive mobile apps through visual framing. In Proc. INTERACT '13. Springer, 2013.

    [58] F. Schaub, B. Könings, and M. Weber. Context-adaptive privacy: Leveraging context awareness to support privacy decision making. IEEE Pervasive Computing, 14(1):34-43, 2015.

    [59] Article 29 Data Protection Working Party. Opinion 8/2014 on the Recent Developments on the Internet of Things. WP 223, Sept. 2014.

    [60] B. Anderson, A. Vance, B. Kirwan, E. D., and S. Howard. Users aren't (necessarily) lazy: Using NeuroIS to explain habituation to security warnings. In Proc. ICIS '14, 2014.

    [61] B. Anderson, B. Kirwan, D. Eargle, S. Howard, and A. Vance. How polymorphic warnings reduce habituation in the brain - insights from an fMRI study. In Proc. CHI '15. ACM, 2015.

    [62] M. S. Wogalter, V. C. Conzola, and T. L. Smith-Jackson. Research-based guidelines for warning design and evaluation. Applied Ergonomics, 16 USENIX Association 2015 Symposium on Usable Privacy and Security 17 33(3):219-230, 2002.

    [63] L. F. Cranor, P. Guduru, and M. Arjula. User interfaces for privacy agents. ACM TOCHI, 13(2):135-178, 2006.

    [64] R. S. Portnoff, L. N. Lee, S. Egelman, P. Mishra, D. Leung, and D. Wagner. Somebody's watching me? assessing the effectiveness of webcam indicator lights. In Proc. CHI '15, 2015

    [65] M. Langheinrich. Privacy by design - principles of privacy-aware ubiquitous systems. In Proc. UbiComp '01. Springer, 2001

    [66] Microsoft. Privacy Guidelines for Developing Software Products and Services. Technical Report version 3.1, 2008.

    [67] Rebecca Balebako, Florian Schaub, Idris Adjerid, Alessandro Acquisti, and Lorrie Faith Cranor. The Impact of Timing on the Salience of Smartphone App Privacy Notices.

    [68] R. Böhme and J. Grossklags. The Security Cost of Cheap User Interaction. In Workshop on New Security Paradigms, pages 67-82. ACM, 2011

    [69] A. Felt, S. Egelman, M. Finifter, D. Akhawe, and D. Wagner. How to Ask For Permission. HOTSEC 2012, 2012.

    [70] A. Felt, S. Egelman, M. Finifter, D. Akhawe, and D. Wagner. How to Ask For Permission. HOTSEC 2012, 2012.

    [71] Florian Schaub, Bastian Könings, Michael Weber, and Frank Kargl. Towards Context Adaptive Privacy Decisions in Ubiquitous Computing. Institute of Media Informatics, Ulm University, Germany.

    [72] M. Korzaan and N. Brooks, "Demystifying Personality and Privacy: An Empirical Investigation into Antecedents of Concerns for Information Privacy," Journal of Behavioral Studies in Business, pp. 1-17, 2009.

    [73] B. Könings and F. Schaub, "Territorial Privacy in Ubiquitous Computing," in WONS'11. IEEE, 2011, pp. 104-108.

    [74] The Cost of Reading Privacy Policies Aleecia M. McDonald and Lorrie Faith Cranor

    [75] Federal Trade Commission, "Protecting Consumers in the Next Tech-ade: A Report by the Staff of the Federal Trade Commission," March 2008, 11, http://www.ftc.gov/os/2008/03/P064101tech.pdf.

    [76] Aleecia M. McDonald and Lorrie Faith Cranor. The Cost of Reading Privacy Policies. I/S: A Journal of Law and Policy for the Information Society, 2008 Privacy Year in Review issue, http://www.is-journal.org/

    [77] IS YOUR INSEAM YOUR BIOMETRIC? Evaluating the Understandability of Mobile Privacy Notice Categories Rebecca Balebako, Richard Shay, and Lorrie Faith Cranor July 17, 2013 https://www.cylab.cmu.edu/files/pdfs/tech_reports/CMUCyLab13011.pdf

    [78] https://www.sba.gov/blogs/7-considerations-crafting-online-privacy-policy

    [79] https://www.cippguide.org

    [80] The Platform for Privacy Preferences Project, more commonly known as P3P was designed by the World Wide Web Consortium aka W3C in response to the increased use of the Internet for sales transactions and subsequent collection of personal information. P3P is a special protocol that allows a website's policies to be machine readable, granting web users' greater control over the use and disclosure of their information while browsing the internet.

    [81] Security and Permissions, ANDROID DEVELOPERS, http://developer.android.com/guide/topics/security/security.html (last updated Sept. 13, 2011).

    [82] See Foursquare Privacy Policy

    [83] http://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1600&context=iplj

    [84] Privacy Policy, FOURSQUARE, http://foursquare.com/legal/privacy (last updated Jan. 12, 2011)

    [85] Bees and Pollen, "Bees and Pollen Personalization Platform," http://www.beesandpollen.com/TheProduct.aspx; Bees and Pollen, "Sense6-Social Casino Games Personalization Solution," http://www.beesandpollen.com/sense6.aspx; Bees and Pollen, "About Us," http://www.beesandpollen.com/About.aspx.

    [86] CFA on the NTIA Short Form Notice Code of Conduct to Promote Transparency in Mobile Applications July 26, 2013 | Press Release

    [87] P. M. Schwartz and D. Solove. Notice & Choice. In The Second NPLAN/BMSG Meeting on Digital Media and Marketing to Children, 2009.

    [88] F. Cate. The Limits of Notice and Choice. IEEE Security Privacy, 8(2):59-62, Mar. 2010.

    [89] https://www.cippguide.org/2011/08/09/components-of-a-privacy-policy/

    [90] https://www.ftc.gov/public-statements/2001/07/case-standardization-privacy-policy-formats

    [91] Protecting Consumer Privacy in an Era of Rapid Change. Preliminary FTC Staff Report.December 2010

    [92] See Comment of Common Sense Media, cmt. #00457, at 1.

    [93] Pollach, I. What's wrong with online privacy policies? Communications of the ACM 30, 5 (September 2007), 103-108

    [94] Aleecia M. McDonald, Robert W. Reeder, Patrick Gage Kelley, and Lorrie Faith Cranor. A Comparative Study of Online Privacy Policies and Formats. Carnegie Mellon University, Pittsburgh, PA; Microsoft, Redmond, WA. http://lorrie.cranor.org/pubs/authors-version-PETS-formats.pdf

    [95] Amber Sinha, A Critique of Consent in Information Privacy, http://cis-india.org/internet-governance/blog/a-critique-of-consent-in-information-privacy

    [96] Kent Walker, The Costs of Privacy, 2001 available at https://www.questia.com/library/journal/1G1-84436409/the-costs-of-privacy

    [97] Annie I. Anton et al., Financial Privacy Policies and the Need for Standardization, 2004 available at https://ssl.lu.usi.ch/entityws/Allegati/pdf_pub1430.pdf; Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

    [98] Allen Levy and Manoj Hastak, Consumer Comprehension of Financial Privacy Notices, Interagency Notice Project, available at https://www.sec.gov/comments/s7-09-07/s70907-21-levy.pdf

    [99] Patrick Gage Kelly et al., Standardizing Privacy Notices: An Online Study of the Nutrition Label Approach available at https://www.ftc.gov/sites/default/files/documents/public_comments/privacy-roundtables-comment-project-no.p095416-544506-00037/544506-00037.pdf

    [100] The Center for Information Policy Leadership, Hunton & Williams LLP, "Ten Steps To Develop A Multi-Layered Privacy Notice" available at https://www.informationpolicycentre.com/files/Uploads/Documents/Centre/Ten_Steps_whitepaper.pdf

    [101] Aleecia M. McDonald, Robert W. Reeder, Patrick Gage Kelley, and Lorrie Faith Cranor. A Comparative Study of Online Privacy Policies and Formats. Carnegie Mellon University, Pittsburgh, PA; Microsoft, Redmond, WA.

    [103] Report by Kleimann Communication Group for the FTC. Evolution of a prototype financial privacy notice, 2006. http://www.ftc.gov/privacy/privacyinitiatives/ftcfinalreport060228.pdf Accessed 2 Mar 2007


    Workshop Report - UIDAI and Welfare Services: Exclusion and Countermeasures

    by Vanya Rakesh last modified Mar 16, 2019 04:34 AM
    This report presents summarised notes from a workshop organised by the Centre for Internet and Society (CIS) on Saturday, August 27, 2016, to discuss, raise awareness of, and devise countermeasures to exclusion due to implementation of UID-based verification for and distribution of welfare services.

     

    Introduction

    The Centre for Internet and Society organised a workshop on "UIDAI and Welfare Services: Exclusion and Countermeasures" at the Institution of Agricultural Technologists in Bangalore on August 27, to discuss, raise awareness of, and devise countermeasures to exclusion caused by the implementation of UID-based verification for and distribution of welfare services [1]. This was a follow-up to the workshop "Understanding Aadhaar and its New Challenges" held at the Centre for Studies in Science Policy, JNU, Delhi, on May 26 and 27, 2016 [2]. In this report we summarise the key concerns raised and the case studies presented by the participants at the workshop held on August 27, 2016.

    Implementation of the UID Project

    Question of Consent: The Aadhaar Act [3] states that the consent of the individual must be taken at the time of enrollment and authentication, and that he/she must be informed of the purpose for which the data will be used. However, the Act does not provide an opt-out mechanism: an individual is compelled to give consent in order to continue with the enrollment process or to complete an authentication.

    Lack of Adherence to Court Orders: Despite several Supreme Court orders stating that the use of Aadhaar cannot be made mandatory for availing benefits and services, multiple state governments and departments have made it mandatory for a wide range of purposes, such as booking railway tickets [4], linking below-poverty-line ration cards with Aadhaar [5], school examinations [6], and food security, pension, and scholarship schemes [7], to name a few.

    Misleading Advertisements: A concern was raised that individuals are being misled about the necessity and purpose of enrollment in the project. For example, people have been told to enroll because they might otherwise be excluded from the system and denied services such as passports, bank accounts, NREGA entitlements, salaries for government employees, and even vaccinations. Furthermore, although the Supreme Court has ordered that Aadhaar not be mandatory, people are being told that documentation or record keeping cannot be done without a UID number.

    Hybrid Governance: The participants pointed out that, with the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (hereinafter referred to as the Aadhaar Act, 2016) being partially enforced, the multiple examples of exclusion reported in the news demonstrate how the Aadhaar project is creating a case of hybrid governance, i.e., private corporations playing a significant role in governance. Many private-sector entities, including software and hardware companies, are involved in the project's implementation.

    Lack of Transparency around Sharing of Biometric Data: How and why the Government is relying on biometrics for welfare schemes is unclear. There is no information on how the biometric data collected through the project is being used, or on its reliability as a means of authentication. There is also very little information about the companies that have been enlisted to hold and manage the data and to perform authentication.

    Possibility of Surveillance: Multiple petitions and ongoing cases have raised concerns regarding the possibility of surveillance, tracking, profiling, convergence of data, and the opaque involvement of private companies in the project.

    Denial of Information: One participant filed an RTI request seeking a copy of a key contract for the project, which was refused under section 8(1)(d) of the RTI Act, 2005. The participant contended that the provision was not applicable, since the contract had already been awarded and any information that could be disclosed to Parliament should be disclosed to citizens. The Central Information Commission issued a letter stating that the contractual obligation was over and that a copy of the agreement could be duly shared. However, the participant discovered that certain pages, which contained confidential information, were missing from the copy provided. On appeal, the Information Commissioner ordered the authority in Delhi to comply with the previous order; however, it was communicated that only limited financial information, and not the missing pages, would be given. It was also revealed that the UIDAI was supposed to share biometric data with the NPR (by way of an MoU), but it has refused to give information, since the intention was to discontinue the NPR and have only the UIDAI collect data.

    Concerns Arising from the Report of the Comptroller and Auditor General of India (CAG) on Implementation of PAHAL (DBTL) Scheme

    A presentation on the CAG's compliance audit report of PAHAL on LPG [8] examined the claim that UID, and the collection and use of biometric data, would address the problem of duplication. The report revealed that multiple LPG connections shared the same Aadhaar number or the same bank account number in the consumer database maintained by the OMCs, that consumers' bank account numbers were not accurately recorded, that Aadhaar numbers were improperly captured, and that IFSC codes were incorrectly seeded in the consumer database. The participants felt that this was an example of how schemes introduced for social welfare do not necessarily benefit society and have, on the contrary, led to exclusion by design. For example, in 2011, by way of the Liquefied Petroleum Gas (Regulation of Supply and Distribution) Amendment Order, 2011 [9], the Ministry of Petroleum and Natural Gas made the Unique Identification Number (UID) under the Aadhaar project a must for availing LPG refills. This received substantial public pushback, which led to the non-implementation of the order. In October 2012, despite the UIDAI stating that the number was voluntary, a number of services began requiring an Aadhaar number for accessing benefits. In September 2013, when the first court order on Aadhaar was passed [10], the oil marketing companies and the UIDAI approached the Supreme Court seeking permission to make it mandatory, which the Court refused. Later, in 2014, the use of Aadhaar for subsidies was made mandatory. The participants further noted that the CAG report reveals how linking Aadhaar with welfare schemes has allowed duplication and led to ghost beneficiaries, with no information about who is actually receiving the benefits of the subsidies. For example, in Rajasthan, people are being denied their pensions because they are being declared dead due to the absence of information in the Aadhaar database.

    It was said that the statistics on duplication in the report show that the UIDAI (which claims to ensure de-duplication of beneficiaries) is not required for this purpose, and that de-duplication can be done without Aadhaar. Moreover, due to incorrect seeding of Aadhaar numbers, many people are being denied the subsidy, and there is no information on how many have been denied for this reason. Considering these findings from the audit report, the discussants concluded that the statistics reflect inflated claims by the UIDAI, and that the problems Aadhaar is said to address can be dealt with without it. In this context, it is important to recognise that the data in the Aadhaar database may be wrong, and in e-governance it is the citizens who suffer. Also ignored is the fact that what is lost is not cash but the subsidised use of LPG cylinders for cooking. In addition, there is no data or way to check whether cylinders are being used for commercial purposes, as RTI responses from the oil companies state that no ghost identities have been detected.
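    The claim that de-duplication does not require Aadhaar can be illustrated with a minimal sketch (an assumption for illustration, not the CAG's or the OMCs' actual method): duplicate connections can be flagged from identifiers already present in the consumer database, such as repeated bank account numbers, with no biometrics involved.

```python
# Hypothetical consumer records, mirroring the kind of fields the CAG
# report says the OMC databases already contain.
from collections import Counter

consumers = [
    {"id": "C1", "bank_account": "111"},
    {"id": "C2", "bank_account": "222"},
    {"id": "C3", "bank_account": "111"},  # shares an account with C1
]

# Count how many connections share each bank account number.
counts = Counter(c["bank_account"] for c in consumers)

# Any connection whose account number appears more than once is a
# de-duplication candidate -- no Aadhaar number or biometric needed.
duplicates = [c["id"] for c in consumers if counts[c["bank_account"]] > 1]
print(duplicates)  # ['C1', 'C3']
```

    The same pass over the database would also surface the incorrectly seeded Aadhaar numbers and IFSC codes that the audit flagged.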

    UID-linked Welfare Delivery in Rajasthan

    One speaker presented findings on people's experiences with UID-linked welfare services in Rajasthan, collected during a 100-day trip organised to speak to people across the state about problems related to welfare governance. The visit revealed that the people who most need benefits and access to subsidies are often excluded from the actual services, and that the paperless system is proving to be highly dangerous. One case discussed was that of a disabled labourer who was asked to get an Aadhaar card; during enrollment, he had the person standing next to him provide all five fingerprints for the biometric data collection. Because of this incorrect data, authentication fails every time he tries to avail a subsidy, and he has stopped receiving his entitlements. Though problems were anticipated, the misery of the people revealed the true extent of the harms arising from the project. In another case, an elderly woman living alone had not received the ration she is entitled to for the past eight months, because she could not go for Aadhaar authentication. When the ration shop was approached on her behalf, the dealers said they could not provide her ration without her thumb print for authentication. On persuading the dealer to provide the ration, since Aadhaar is not mandatory, it was discovered that the shop's records showed she was already being given the ration, which was not the case. Dealers are thus exploiting people's lack of awareness that they are entitled to receive benefits irrespective of Aadhaar. This shows how the system has become a barrier for people, who are also unaware of the grievance redressal mechanism.

    Aadhaar and e-KYC

    In this session, the use of Aadhaar for e-KYC verification was discussed. The UID strategy document describes the idea of linking the UIDAI with Aadhaar-enabled Direct Benefit Transfer (DBT) to beneficiaries, without offering any reason or justification for doing so. One participant highlighted that the Reserve Bank of India (RBI) had believed that making Aadhaar compulsory for e-KYC and several other banking services would violate the Prevention of Money Laundering Act as well as its own rules and standards; however, it later relaxed the rules to link Aadhaar with bank accounts and accepted Aadhaar for e-KYC with great reluctance, as the Department of Revenue thought otherwise. Allowing bank accounts to be opened remotely using Aadhaar, without the customer being physically present, had been considered a dangerous idea. However, the restrictions placed by the RBI were suddenly done away with, and opening bank accounts remotely was enabled via e-KYC.

    A speaker emphasised that with emerging FinTech services in India being tied with Aadhaar via India Stack, the following concerns are becoming critical:

    1. With the RBI enabling the creation of bank accounts remotely, it becomes difficult to track who performed the e-KYC and which bank did it, and to hold them accountable.

    2. The Aadhaar Act, 2016 states that the UIDAI will not track the queries made and will only keep a record of the Yes/No authentication responses. For example, the e-KYC to open a bank account can now be done with an Aadhaar number and biometric authentication; however, the request itself does not get recorded, and at the time of authentication the individual is simply told whether the request matched, by way of a Yes/No [11]. Though the UIDAI maintains the authentication record, this may act as an obstacle: if the information in the Aadhaar database does not match, the person cannot open a bank account and receives only a yes/no response, with no indication of what went wrong.

    3. Further, there is a concern that the Aadhaar Enabled Payment System being implemented by the National Payments Corporation of India (NPCI) could effectively hide the source and destination of money flows, enabling money laundering and bribery. This is possible because the NPCI maintains a mapper in which each Aadhaar number is linked to only the most recently seeded bank account, even though an Aadhaar number can be linked with multiple bank accounts of an individual. When a transaction is made, the mapper records it only against that one account; if another transaction takes place through a different bank account, that record is not maintained by the mapper, since it tracks only the latest seeded account. This makes money laundering easier, as money now moves from Aadhaar number to Aadhaar number rather than from bank account to bank account.
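    The mapper behaviour described in point 3 can be sketched as a toy model in Python (a hypothetical illustration, not NPCI's actual implementation; the names `seed` and `transact` are invented): because re-seeding overwrites the previous entry, transactions routed through an earlier account leave no trail in the mapper.

```python
mapper = {}      # aadhaar_number -> most recently seeded bank account
mapper_log = []  # transactions visible through the mapper

def seed(aadhaar: str, account: str) -> None:
    # Seeding overwrites any previously seeded account for this number.
    mapper[aadhaar] = account

def transact(aadhaar: str, account: str, amount: int) -> None:
    # Only transactions through the currently mapped account are recorded.
    if mapper.get(aadhaar) == account:
        mapper_log.append((aadhaar, account, amount))

seed("1234", "SBI-001")
transact("1234", "SBI-001", 500)  # visible in the mapper log
seed("1234", "HDFC-002")          # re-seeding replaces SBI-001
transact("1234", "SBI-001", 900)  # invisible: mapper now points elsewhere
print(len(mapper_log))            # 1
```

    In this sketch the second transaction through SBI-001 never appears in the mapper log once HDFC-002 has been seeded, which is precisely the audit gap the participants flagged.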

    Endnotes

    [1] See: http://cis-india.org/internet-governance/events/uidai-and-welfare-services-exclusion-and-countermeasures-aug-27.

    [2] See: http://cis-india.org/internet-governance/blog/report-on-understanding-aadhaar-and-its-new-challenges.

    [3] See: https://uidai.gov.in/beta/images/the_aadhaar_act_2016.pdf.

    [4] See: http://scroll.in/latest/816343/aadhaar-numbers-may-soon-be-compulsory-to-book-railway-tickets.

    [5] See: http://www.thehindu.com/news/national/karnataka/linking-bpl-ration-card-with-aadhaar-made-mandatory/article9094935.ece.

    [6] See: http://timesofindia.indiatimes.com/india/After-scam-Bihar-to-link-exams-to-Aadhaar/articleshow/54000108.cms.

    [7] See: http://www.dailypioneer.com/state-editions/cs-calls-for-early-steps-to-link-aadhaar-to-ac.html.

    [8] See: http://www.cag.gov.in/sites/default/files/audit_report_files/Union_Commercial_Compliance_Full_Report_25_2016_English.pdf.

    [9] See: http://petroleum.nic.in/docs/lpg/LPG%20Control%20Order%20GSR%20718%20dated%2026.09.2011.pdf.

    [10] See: http://judis.nic.in/temp/494201232392013p.txt.

    [11] Section 8(4) of the Aadhaar Act, 2016 states that "The Authority shall respond to an authentication query with a positive, negative or any other appropriate response sharing such identity information excluding any core biometric information."

     

    Protection of Privacy in Mobile Phone Apps

    by Hitabhilash Mohanty, edited by Leilah Elmokadem — last modified Dec 15, 2016 02:18 PM
    The term “Fintech” refers to technology-based businesses that compete against, enable and/or collaborate with financial institutions. The year 2015 was a critical year for the Indian fintech industry, which saw the rise of numerous fintech start-ups, incubators and investments from the public and private sector.

    According to NASSCOM, the Indian fintech market is worth an estimated USD 1.2 billion, and is predicted to reach USD 2.4 billion by 2020.[1] The services brought forth by fintech, such as digital wallets, lending, and insurance, have transformed the ways in which businesses and institutions execute day-to-day transactions. The rise of fintech in India has rendered the nation’s market a point of attraction for global investment.[2] Fintech in India is perceived both as a catalyst for economic growth and innovation, as well as a means of financial inclusion for the millions of unbanked individuals and businesses. The government of India, along with regulators such as SEBI (Securities and Exchange Board of India) and the RBI (Reserve Bank of India), has consistently supported the digitalization of the nation’s economy and the formation of a strong fintech ecosystem through funding and promotional initiatives.[3]

    The RBI has been pivotal in enabling the development of India's fintech sector while adopting a cautious approach in addressing concerns around consumer protection and law enforcement. Its key objective as a regulator has been to create an environment for unimpeded innovation by fintech, expanding the reach of banking services for unbanked populations, regulating an efficient electronic payment system and providing alternative options for consumers. The RBI's prime focus areas for enabling fintech have been payments, lending, security/biometrics and wealth management. For example, the RBI has introduced the "Unified Payment Interface" with the NPCI (National Payments Corporation of India), which has been critical in revolutionizing digital payments and pushing India closer to the objective of a cashless society. It has also released a consultation paper on regulating the peer-to-peer (P2P) lending market in India, highlighting the advantages and disadvantages of regulating the sector.[4]

    The consultation paper offers a definition of P2P lending as well as a general explanation of the activity and the digital platforms that facilitate transactions between lenders and borrowers. It also provides a set of arguments for and against regulating P2P lending. The arguments against regulating the sector mainly pertain to the risk of stifling the growth of an innovative, efficient and accessible avenue for borrowers who either lack access to formal financial channels or are denied loans by them.[5]

    This is the general consensus around the positive impact of the fintech sector in India: its facilitation of financial inclusion and economic opportunity. However, the paper lists many more arguments for regulation than against. One of the main points made is with regard to P2P lending's potential to disrupt the financial sector by challenging traditional banking channels. There is also the argument that, if properly regulated, P2P lending platforms can more efficiently and effectively exercise their potential of promoting alternative forms of finance.[6]

    The paper concludes that the balance of advantage would lie in developing an appropriate regulatory and supervisory toolkit that facilitates the orderly growth of the P2P lending sector in order to harness its ability to provide an alternative avenue for credit for the right borrowers.[7]

    The RBI’s regulatory framework for P2P lending platforms encompasses the permitted activity, prudential regulations on capital, governance, business continuity plan (BCP) and customer interface, apart from regulatory reporting.[8]

    The Securities and Exchange Board of India (SEBI) is also a prominent regulator of the Indian fintech sector. It issued a consultation paper on "crowdfunding", which is defined as the solicitation of funds (small amounts) from multiple investors through a web-based platform or social networking site for a specific project, business venture or social cause. P2P lending is then a form of crowdfunding, which can be understood as an umbrella term that covers fintech lending practices. SEBI's paper aimed to provide a brief overview of the global scenario of crowdfunding, including the various prevalent models under it, the associated benefits and risks, the regulatory approaches in different jurisdictions, etc. It also discusses the legal and regulatory challenges in implementing a framework for crowdfunding. The paper proposes a framework for ushering in crowdfunding by giving access to capital markets to provide an additional channel of early-stage funding to start-ups and SMEs, and seeks to balance the same with investor protection.[9] Unlike the RBI's consultation paper on P2P lending, SEBI's paper on crowdfunding was intended mainly to invite discussion and not necessarily to implement a framework for regulation.

    Some of the benefits cited in SEBI’s crowdfunding paper pertain to the commonly mentioned advantages of fintech: economic opportunity for the SME sector and start-ups, alternative lending systems to keep SMEs alive when traditional banks crash, new investment avenues for the local economy and increased competition in the financial sector.[10]

    The paper also lists a set of risks that suggest the need for a regulatory framework for crowdfunding. For example, it mentions the "substitution of institutional risk by retail risk", meaning that individual lenders, whose risk tolerance may be low, bear the risk of low or no returns when they lend to SMEs without adequate assessment of creditworthiness. There is also the risk that the digital platform that facilitates lending and processes all the transactions may not conduct proper due diligence. If the platform is temporarily shut down or closed permanently, no recourse is available to the investors.[11]

    The SEBI paper mentions a long list of other risks associated with crowdfunding, mostly associated with systemic failures, loan defaults, fraud practices, and information asymmetry. Information asymmetry refers partially to the chance that lending decisions are made based on incomplete data sets that are based on social networking platforms. There is a lack of transparency and reporting obligations in issuers including with respect to the use of funds raised.[12]

    Similar to the RBI consultation paper, SEBI makes a decent effort to weigh the costs and benefits of crowdfunding practices but only does this from an economic/financial perspective. Most of the cited risks, benefits and concerns tend to overlook information security and risks of privacy breaches of the implicated borrowers.

    India Stack is a paperless and cashless service delivery system that has been supported by the Indian government as part of the fintech sector. It is a new technology paradigm that is designed to handle massive data inflows, and is poised to enable entrepreneurs, citizens and governments to interact with one another transparently. It is intended to be an open system to electronically verify businesses, people and services. It allows the smartphone to become the delivery platform for services such as digital payments, identification and digital lockers. The vision of India Stack is to shift India towards a paperless economy.[13]

    The central government, based on its experience with the Aadhaar project, decided to launch the open-data initiative in 2012 supported by an open API policy, which would pave the way for private technology solutions to build services on top of Aadhaar and to make India a digital cash economy. The Unified Payments Interface (UPI), which will make mobile payments card-less and completely digital, allows consumers to transact directly through their bank account with a unique UPI identity that syncs with Aadhaar's verification and connects to the merchant, the settlement and the issuing bank to close transactions.[14]

    It is expected that India Stack will shift business models in banking from low-volume, high-value, high-cost and high fees to high-volume, low-value, low cost and no fees. This will lead to a drastic increase in accessibility and affordability, and the market force of consumer acquisition and the social purpose of mass inclusion will converge.[15]

    India Stack serves as an example of how the Government of India has supported initiatives that would promote the fintech sector while facilitating economic growth and financial opportunity for unbanked individuals. However, there is continuous discussion around India Stack's attachment to the Aadhaar system, which can lead to the exclusion of unregistered individuals from the benefits that would otherwise be reaped from the open-data initiative. It can also result in many privacy and security breaches when records of individuals' daily transactions are attached to their Aadhaar numbers, which are linked to their biometric information and to other personal data held by the government, such as health records.

    Download the Full Report


    [1]. KPMG: https://assets.kpmg.com/content/dam/kpmg/pdf/2016/06/FinTech-new.pdf

    [2]. Id.

    [3]. Id.

    [4]. Id.

    [5]. RBI P2P Consultation Paper, https://rbidocs.rbi.org.in/rdocs/content/pdfs/CPERR280416.pdf

    [6]. Id.

    [7]. Id.

    [8]. Id.

    [9]. SEBI Crowdfunding consultation paper, http://www.sebi.gov.in/cms/sebi_data/attachdocs/1403005615257.pdf

    [10]. Id.

    [11]. Id.

    [12]. Id.

    [13]. Krishna, https://yourstory.com/2016/07/india-stack/

    [14]. Id.

    [15]. Nilekani, http://indianexpress.com/article/opinion/columns/the-coming-revolution-in-indian-banking-2924534/

    ISIS and Recruitment using Social Media – Roundtable Report

    by Vidushi Marda, Aditya Tejus, Megha Nambiar and Japreet Grewal — last modified Dec 16, 2016 02:19 AM
    The Centre for Internet and Society in collaboration with the Takshashila Institution held a roundtable discussion on “ISIS and Recruitment using Social Media” on 1 September 2016 from 5.00 p.m. to 7.30 p.m. at TERI in Bengaluru.

    The objective of this roundtable was to explore the recruitment process and methods followed by ISIS on social media platforms like Facebook and Twitter and to understand the difficulties faced by law enforcement agencies and platforms in countering the problem while understanding existing counter measures, with a focus on the Indian experience.

    Reviewing Existing Literature

    To provide context to the discussion, a few key pieces of existing literature on online extremism were highlighted. Discussing Charlie Winter's "Documenting the Virtual Caliphate", a participant outlined the multiple stages of the radicalisation process: it begins with a person being exposed to general ISIS releases and entering an online filter bubble of like-minded people, followed by initial contact and persuasion by the contact person to isolate the potential recruit from his/her family and friends, and culminates with the assignment of an ISIS task to that person. The takeaway from the paper was the colossal scale of information and events put out by ISIS on social media. It was pointed out that, contrary to popular belief, ISIS publishes content under six broad themes: mercy, belonging, brutality, victimhood, war and utopia, and the smallest share of its output falls under brutality, which in fact garners the most attention worldwide. It was further elaborated that ISIS employs positive imagery in the form of nature and landscapes, and appeals to the civilian life within its borders. This strategy prioritises quantity, quality, adaptability and differentiation in media production. According to the author, this strategy of producing media that is precise, adaptable and effective must be emulated by governments in their counter-measures, although there is no universal counter-narrative that is effective. This effort, he stressed, cannot be exclusively state-driven.

    JM Berger’s “Making Countering Violent Extremism Work” was also discussed. Here, a slightly different model of radicalisation has been identified with potential recruits going through 4 stages: the first being that of Curiosity where there is exposure to violent extremist ideology, the second stage is Consideration where the potential recruit evaluates the ideology, the third being Identification where the individual begins to self identify with extremist ideology, and the last being that of Self-Critique which is revisited periodically. According to Berger, law enforcement need only be involved in the third stage identified in this taxonomy, through situational awareness programs and investigations. This paper stated that counter-messaging policies need not mimic the ISIS pattern of slick messaging. A data-driven study had found that suspending and suppressing the reach of violent extremist accounts and individuals on online platform was effective in reducing the reach of these ideologies, though not universally so. It also found that generic counter strategies used in the US was more efficient than targeted strategies followed in Europe.

    Lack of Co-ordination, Fragmentation between the States and Centre

    Speaking of the Indian scenario in particular, another participant brought to light the lack of co-ordination and consensus between the State and Central Governments and law enforcement agencies with respect to countering violent extremism, which leads to a break in the chain of action. Another participant added that the underestimation of the problem at the state level, coupled with the theoretical and abstract nature of work done at the Centre, is another pitfall. While the fragmentation of agencies was stated to be ineffective, bringing them under the purview of a single agency was also proposed as an ineffective measure. It was instead suggested that a neutral policy body, and not an implementing body, should coordinate the efforts of the multiple groups involved.

    Unreliable Intelligence Infrastructure

    It was pointed out that countries are presently underequipped due to the lack of intelligence infrastructure and technical expertise. This is primarily because agencies in India tend to use off-the-shelf hardware and software produced by foreign companies, and such heavy dependence on unreliable parts will necessarily be detrimental to building reliable security infrastructure. Emphasis was laid on the significance of collaboration and open-source intelligence in countering online radicalisation. An appeal was made for higher IT proficiency, indigenous production of resources, funding, collaboration, integration of lower-level agencies, and more research in this regard.

    Proactive Counter Narratives

    The importance of proactive counter-narratives to extremist content was stressed on, with the possibility of generating inputs from government agencies and private bodies backing the government being discussed. Another solution identified was the creation and internal circulation of a clear strategy to counter the ISIS narrative and the public dissemination of research on online radicalization in the Indian context.

    Policies of Social Media Platforms

    The conversation moved towards understanding the policies of social media platforms. One participant shed light on a popular platform's strategies against extremism, pointing out that the site's tolerance policy extends not only to directly extremist content but also to content created by people who support violent extremism. The platform's engagement with several countries and other platforms to create anti-extremist messaging, and its intention to expand these initiatives, is in furtherance of its philosophy of preventing any celebration of violence. The participant further explained that research shows that anti-extremist content that made use of humour and a lighter tone was more effective than media which relied on gravitas.

    Having identified the existing literature and current challenges, the roundtable concluded with suggestions for further areas of research:

    1. Understanding the use of encrypted messaging services like Whatsapp and Telegram for extremism, and an analysis of these platforms in the Indian context. A deeper understanding of these services is essential to gauge the dimensions of the problem and identify counter measures.
    2. A lexical analysis of Indian social media accounts to identify ISIS supporters and group them into meta-communities, similar to research done by the RAND Corporation.
    3. Collation of ISIS media packages was also flagged off as an important measure in order to have a dossier to present to the government. This would help policymakers gain context around the issue, and also help them understand the scale of the problem.

    Deep Packet Inspection: How it Works and its Impact on Privacy

    by Amber Sinha — last modified Dec 16, 2016 11:14 PM
    In the last few years, there has been extensive debate and discussion around network neutrality in India. The online campaign in favor of network neutrality in India was led by Savetheinternet.in. The campaign was a spectacular success and facilitated the sending of over a million emails supporting the cause of network neutrality, eventually leading to a ban on differential pricing. Following in the footsteps of the Shreya Singhal judgement, the fact that the issue of net neutrality has managed to attract wide public attention is an encouraging sign for a free and open Internet in India. Since the debate has focused largely on zero rating, other kinds of network practices that affect network neutrality, and their impact on other values, have yet to be comprehensively explored in the Indian context. In this article, the author focuses on network management in general, and deep packet inspection in particular, and how it impacts the privacy of users.

    Background

    In the last few years, there has been extensive debate and discussion around network neutrality in India. The online campaign in favor of network neutrality in India was led by Savetheinternet.in. The campaign, captured in detail by an article in Mint,[1] was a spectacular success and facilitated the sending of over a million emails supporting the cause of network neutrality, eventually leading to a ban on differential pricing. Following in the footsteps of the Shreya Singhal judgement, the fact that the issue of net neutrality has managed to attract wide public attention is an encouraging sign for a free and open Internet in India. Since the debate has focused largely on zero rating, other kinds of network practices that affect network neutrality, and their impact on other values, have yet to be comprehensively explored in the Indian context. In this article, I focus on network management in general, and deep packet inspection in particular, and how it impacts the privacy of users.

    The Architecture of the Internet

    The Internet exists as a network acting as an intermediary between providers of content and its users.[2] Traditionally, the network did not distinguish between those who provided content and those who were recipients of this service; in fact, users often also functioned as content providers. The architectural design of the Internet mandated that all content be broken down into data packets which were transmitted through nodes in the network transparently from the source machine to the destination machine.[3] As discussed in detail later, as per the OSI model, the network consists of seven layers. We will go into each of these layers in detail below; however, it is important to understand that at the base is the physical layer of cables and wires, while at the top is the application layer, which contains all the functions that people want to perform on the Internet and the content associated with them. The layers in the middle can be characterised as the protocol layers for the purpose of this discussion. What makes the architecture of the Internet remarkable is that these layers are completely independent of each other and, in most cases, indifferent to the other layers. The protocol layer is what impacts net neutrality. It is this layer which provides the standards for the manner in which data must flow through the network. The idea was for it to be as simple and feature-free as possible, such that it is only concerned with transmitting data as fast as possible (the 'best efforts principle') while innovations are pushed to the layers above or below it.[4]

    This aspect of the Internet's architectural design, which mandates that network features are implemented at the end points only (destination and source machines), i.e. at the application level, is called the 'end to end principle'.[5] This means that the intermediate nodes do not differentiate between data packets in any way based on source, application or any other feature, and are only concerned with transmitting data as fast as possible, thus creating what has been described as a 'dumb' or neutral network.[6] This feature of the Internet architecture was also considered essential to what Jonathan Zittrain has termed the 'generative' model of the Internet.[7] Since the Internet Protocol remains a simple layer incapable of discrimination of any form, no additional criteria could be established for what kind of application would access the Internet. Thus, the network remained truly open and ensured that the Internet does not privilege or become the preserve of a class of applications, nor does it differentiate between the different kinds of technologies that comprise the physical layer below.

    While the above model speaks of a dumb network not differentiating between the data packets that travel through it, in truth, network operators engage in various kinds of practices that prioritise, throttle or discount certain kinds of data packets. In her thesis at the Oxford Internet Institute, Alissa Cooper[8] states that traffic management involves three different sets of criteria: a) a criterion identifying the subset of traffic to be managed, which can be based on source, destination, application or user; b) a trigger for the traffic management measure, which could be based upon time of day, a usage threshold or a specific network condition; and c) the traffic treatment put into practice when the trigger is met. The traffic treatment can be of three kinds. The first is blocking, in which traffic is prevented from being delivered. The second is prioritization, under which identified traffic is sent sooner or later than it otherwise would be. This is usually done in cases of congestion, when one kind of traffic needs to be prioritized. The third kind of treatment is rate limiting, where identified traffic is limited to a defined sending rate.[9] The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and in this way it treats all information sent over it as equal. In such a network, the content of the packets is not examined, and Internet providers act according to the destination of the data as opposed to any other factor. However, in order to perform traffic management in various circumstances, Deep Packet Inspection technology, which does look at the content of data packets, is commonly used by service providers.
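    Cooper's three criteria can be illustrated with a small sketch. The packet fields, policy and treatment names below are invented for illustration and are not drawn from her thesis:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """A toy packet with just the attributes a classifier might match on."""
    src: str
    dst: str
    app: str
    size: int

def classify(packet: Packet, congested: bool) -> str:
    """Apply a toy traffic-management policy and return a treatment."""
    # (a) match criterion: identify the managed subset by application label
    is_p2p = packet.app == "p2p"
    # (b) trigger: only act on the matched traffic when the network is congested
    if is_p2p and congested:
        # (c) treatment: rate-limit the matched traffic
        return "rate-limit"
    return "best-effort"

print(classify(Packet("10.0.0.1", "10.0.0.2", "p2p", 1500), congested=True))   # rate-limit
print(classify(Packet("10.0.0.1", "10.0.0.2", "web", 1500), congested=True))   # best-effort
```

    Real traffic managers match on many more attributes (ports, protocol signatures, per-subscriber quotas), but the three-part structure of match, trigger and treatment is the same.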

    Deep Packet Inspection

    Deep packet inspection (DPI) enables the examination of the content of data packets being sent over the Internet. Christopher Parsons explains the header and the payload of a data packet with respect to the OSI model. In order to understand this better, it is more useful to speak of the network in terms of the seven layers of the OSI model, as opposed to the three layers discussed above.[10]

    Under the OSI model, the top layer, the Application Layer, is in contact with the software making a data request. For instance, if the activity in question is accessing a webpage, the web browser makes a request to access a page, which is then passed on to the lower layers. The next layer is the Presentation Layer, which deals with the format in which the data is presented. This layer performs encryption and compression of the data. In the above example, this would involve asking for the HTML file. Next comes the Session Layer, which initiates, manages and ends communication between the sender and receiver. In the above example, this would involve transmitting and regulating the data of the webpage, including its text, images or any other media. These three layers are part of the 'payload' of the data packet.[11]

    The next four layers are part of the 'header' of the data packet. It begins with the Transport Layer, which collects data from the payload, creates a connection between the point of origin and the point of receipt, and assembles the packets in the correct order. In terms of accessing a webpage, this involves connecting the requesting computer system with the server hosting the data, and ensuring the data packets are put together in an arrangement which is cohesive when they are received. Below it sits the Network Layer, which handles the addressing and routing of packets between networks; the IP addresses examined during inspection belong to this layer. The next layer is the Data Link Layer. This layer formats the data packets in such a way that they are compatible with the medium being used for their transmission. The final layer is the Physical Layer, which determines the actual media used for transmitting the packets.[12]
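    As a rough illustration of the layering described above, the following sketch (with invented field values) shows how an application payload ends up nested inside transport- and link-level information, which is why an inspector must decide how 'deep' into the nesting it will look:

```python
def encapsulate(app_data: str) -> dict:
    """Wrap application data in toy transport and data-link envelopes."""
    payload = {"application": app_data}                  # payload layers
    transport = {"header": {"src_port": 49152, "dst_port": 80},
                 "payload": payload}                     # transport layer
    frame = {"header": {"medium": "ethernet"},
             "payload": transport}                       # data link layer
    return frame

packet = encapsulate("GET /index.html")
# A shallow inspector reads only packet["header"]; a deep inspector
# descends all the way to the application data.
print(packet["payload"]["payload"]["application"])  # GET /index.html
```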

    The transmission of the data packet occurs between the client and server, and packet inspection occurs through equipment placed between the client and the server. There are various ways in which packet inspection has been classified, along with thresholds for the depth an inspection must reach in order to be categorized as Deep Packet Inspection. We rely on Parsons' classification system in this article. According to him, there are three broad categories of packet inspection: shallow, medium and deep.[13]

    Shallow packet inspection involves the inspection of only the header, usually checking it against a blacklist. The focus in this form of inspection is on the source and destination (IP address and the packet's port number). This form of inspection primarily deals with the Data Link Layer and Network Layer information of the packet. Shallow packet inspection is used by firewalls.[14]
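    A minimal sketch of shallow inspection, assuming an option-free IPv4 header and an invented blacklist (addresses from the documentation ranges): only the 20-byte header is read, and the payload is never touched:

```python
import struct

# Hypothetical blacklist; 203.0.113.0/24 is a reserved documentation range.
BLACKLIST = {"203.0.113.7"}

def inspect_header(raw_packet: bytes) -> str:
    """Check only the IPv4 header's addresses against the blacklist."""
    # In an option-free IPv4 header, bytes 12-15 hold the source address
    # and bytes 16-19 the destination address.
    src = ".".join(str(b) for b in raw_packet[12:16])
    dst = ".".join(str(b) for b in raw_packet[16:20])
    if src in BLACKLIST or dst in BLACKLIST:
        return "drop"
    return "forward"

# Build a minimal 20-byte IPv4 header for demonstration.
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20, 0, 0, 64, 6, 0,
                     bytes([203, 0, 113, 7]),    # source: blacklisted
                     bytes([198, 51, 100, 1]))   # destination
print(inspect_header(header))  # drop
```

    Because the decision depends only on fixed header offsets, this kind of filter is cheap and reveals nothing about what the packet actually carries.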

    Medium packet inspection involves equipment placed between the computers running the applications and the ISP or Internet gateways. It uses application proxies, where the header information is inspected against a loaded parse-list and used to look at specific flows. These kinds of inspection technologies are used to look for specific kinds of traffic flows and take pre-defined actions upon identifying them. In this case, the header and a small part of the payload are examined.[15]

    Finally, Deep Packet Inspection (DPI) enables networks to examine the origin and destination as well as the content of data packets (header and payload). These technologies look for protocol non-compliance, spam, harmful code or any specific kinds of data that the network wants to monitor. The feature of DPI technology that makes it an important subject of study is the variety of uses it can be put to. The use cases vary from real-time analysis of packets to interception, storage and analysis of the contents of packets.[16]
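    Payload-level matching of the kind DPI performs can be sketched as follows. The signature patterns are simplified illustrations (the banner of the BitTorrent handshake and an HTTP request line), not a real DPI rule set:

```python
import re

# Toy signatures matched against the packet payload, not the header.
SIGNATURES = {
    "bittorrent": re.compile(rb"\x13BitTorrent protocol"),
    "http": re.compile(rb"^(GET|POST|HEAD) "),
}

def deep_inspect(payload: bytes) -> str:
    """Label a payload by the first signature it matches."""
    for label, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return label
    return "unclassified"

print(deep_inspect(b"GET /index.html HTTP/1.1\r\n"))  # http
print(deep_inspect(b"\x13BitTorrent protocol"))       # bittorrent
```

    The privacy concern follows directly from this design: classification requires reading the bytes of the communication itself, so anything the signature engine can match, it can also log or store.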

    The different purposes of DPI

    Network Management and QoS

    The primary justification presented for DPI is network management, as a means to guarantee and ensure a certain minimum level of Quality of Service (QoS). QoS, as a value conflicting with the objectives of network neutrality, has emerged as a significant discussion point in this topic. Much like network neutrality, QoS is also a term thrown around in vague, general and non-definitive references. The factors that come into play in QoS are network-imposed delay, jitter, bandwidth and reliability. Delay, as the name suggests, is the time taken for a packet to be passed from the sender to the receiver. Higher levels of delay are characterized by more data packets held 'in transit' in the network.[17] A paper by Paul Ferguson and Geoff Huston described TCP as a 'self-clocking' protocol.[18] This enables the transmission rate of the sender to be adjusted as per the rate of reception by the receiver. As the delay and consequent stress on the protocol increase, this feedback ability begins to lose its sensitivity. This becomes most problematic in cases of VoIP and video applications. The idea of QoS generally entails consistent service quality with low delay, low jitter and high reliability, through a system of preferential treatment provided to some traffic on criteria formulated around the need of such traffic for greater latency sensitivity and low delay and jitter. This is where Deep Packet Inspection comes into play. In 1991, Cisco pioneered the use of a new kind of router that could inspect data packets flowing through the network. DPI is able to look inside packets and their content, enabling it to classify packets according to a formulated policy. DPI, which began as a security tool, is powerful because it allows ISPs to limit or block specific applications, or to improve the performance of applications in telephony, streaming and real-time gaming.
Very few scholars believe in an all-or-nothing approach to network neutrality and QoS, and the debate often comes down to what forms of differentiation are reasonable for service providers to practice.[19]
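    The preferential treatment described above can be sketched as a simple priority queue. The traffic classes and priority values are invented for illustration; real QoS schedulers are far more elaborate (weighted fair queuing, token buckets), but the core idea that latency-sensitive traffic is dequeued first is the same:

```python
import heapq

# Hypothetical priority ordering: lower number = dequeued sooner.
PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk": 3}

class Scheduler:
    """A toy strict-priority scheduler for classified traffic."""
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker: preserves arrival order within a class

    def enqueue(self, kind: str, packet: str) -> None:
        heapq.heappush(self._queue, (PRIORITY[kind], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._queue)[2]

s = Scheduler()
s.enqueue("bulk", "b1")
s.enqueue("voip", "v1")
s.enqueue("web", "w1")
print(s.dequeue())  # v1 — the VoIP packet jumps the queue
```

    Note that the scheduler presupposes that each packet has already been classified, which is exactly the step DPI performs.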

    Security

    Deep Packet Inspection was initially intended as a measure to manage the network and protect it from transmitting malicious programs. As mentioned above, shallow packet inspection was used to secure LANs and keep out certain kinds of unwanted traffic.[20] DPI is used for similar purposes, where it is felt useful to enhance security with a 'deeper' inspection that examines the payload along with the header information.

    Surveillance

    The third purpose of DPI is what concerns privacy theorists the most. The fact that DPI technologies give network operators access to the actual content of data packets puts them in a position of great power, as well as making them susceptible to significant pressure from the state.[21] For instance, in the US, ISPs are required to conform to the provisions of the Communications Assistance for Law Enforcement Act (CALEA), which means they need to have some surveillance capacities designed into their systems. What is more disturbing for privacy theorists than the use of DPI for surveillance under legislation like CALEA are the other alleged uses by organisations like the National Security Agency, through back-end access to the information via the ISPs. Aside from the US government, there have been various reports of the use of DPI by governments in countries like China,[22] Malaysia[23] and Singapore.[24]

    Behavioral targeting

    DPI also enables very granular tracking of the online activities of Internet users. This information is invaluable for the purposes of behavioral targeting of content and advertising. Traditionally, this has been done through cookies and other tracking software. DPI offers ISPs and their advertising partners a new way to do what has so far been exercised only through web-based tools: it enables ISPs to monitor the contents of data packets and use this to create profiles of users, which can later be employed for purposes such as targeted advertising.[25]

    Impact on Privacy

    Each of the above use-cases has significant implications for the privacy of Internet users as the technology in question involves access, tracking or retention of their online communication and usage activity.

    Alissa Cooper compares DPI with other technologies carrying out content inspection, such as caching services and individual users employing firewalls or packet sniffers. She argues that one of the most distinguishing features of DPI is the potential for "mission-creep."[26] Kevin Werbach writes that while networks may deploy DPI for implementation under CALEA or for peer-to-peer traffic shaping, once deployed, DPI techniques can be used for completely different purposes, such as pattern matching of intercepted content and storage of raw data or of conclusions drawn from the data.[27] This scope for mission creep is even more problematic as it is completely invisible. As opposed to other technologies which rely on cookies or other web-based services, the inspection occurs not at the end points, but somewhere in the middle of the network, often without leaving any traces on the user's system, thus rendering it virtually undiscoverable.

    Much like other forms of surveillance, DPI threatens the sense that the web is a space where people can engage freely with a wide range of people and services. For such a space to continue to exist, it is important for people to feel secure about their communications and transactions on the medium. This notion of trust is severely harmed by a sense that users are being surveilled and their communications intercepted. This has an obvious chilling effect on free speech and could also impact electronic commerce.[28]

    Alissa Cooper also points out another way in which DPI differs from other content tracking technologies. As DPI is deployed by ISPs, it creates a greater barrier to opting out and choosing another service, since individuals have only limited options as far as ISPs are concerned. Christopher Parsons, reviewing ISPs using DPI technology in the UK, US and Canada, notes that various ISPs do state in their terms of service that they use DPI for network management purposes. However, this information is often not as easily accessible as the terms and conditions of online services. Also, as opposed to online services, where it is relatively easy to migrate to another service owing to both the presence of more options and the ease of migration, changing one's ISP is a much longer and more difficult process.[29]

    Measures to mitigate risk

    Currently, there are no regulatory frameworks in India that govern DPI technology in any way. The International Telecommunication Union (ITU) prescribes a standard for DPI;[30] however, the standard does not engage with any questions of privacy, and it requires all DPI technologies to be capable of identifying payload data and prescribes classification rules for specific applications, thus conflicting with notions of application agnosticism in network management. More importantly, its requirements to identify, decrypt and analyse tunnelled and encrypted data threaten the reasonable expectation of privacy when sending and receiving encrypted communication. In this final section, I look at some possible principles and practices that may be evolved in order to mitigate the privacy risks posed by DPI technology.

    Limiting 'depth' and breadth

    It has been argued that what DPI technology inherently does is match patterns in the inspected content against a pre-defined list relevant to the purpose for which DPI is employed. Much like the data minimisation principles applicable to data controllers and data processors, it is possible for network operators to minimise the depth of inspection (restricting it to header information only, or to limited payload information) so far as serves the purpose at hand. For instance, where an ISP is looking to identify peer-to-peer traffic, there are protocols which declare their names in the application header itself. Similarly, a network operator looking to generate usage data about email traffic can do so simply by looking at port numbers and checking them against common email ports.[31] However, this mitigation strategy may not work as well for other use-cases, such as blocking malicious software or prohibited content, or monitoring for the sake of behavioural advertising.
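    The port-checking approach described above can be sketched in a few lines of Python. This is a hypothetical illustration of header-only ("shallow") classification, not any operator's actual rule set; the port numbers are the standard IANA assignments for mail protocols, and the payload is never touched.

```python
# Shallow classification: the operator looks only at the transport-layer
# destination port and compares it against well-known email ports.
EMAIL_PORTS = {
    25: "SMTP",              # server-to-server mail transfer
    465: "SMTPS",            # SMTP over TLS
    587: "SMTP submission",  # client mail submission
    110: "POP3", 995: "POP3S",
    143: "IMAP", 993: "IMAPS",
}

def classify_by_port(dst_port: int) -> str:
    """Classify a packet as email traffic using only its destination port."""
    return EMAIL_PORTS.get(dst_port, "other")

print(classify_by_port(587))   # → SMTP submission
print(classify_by_port(8080))  # → other
```

    A classifier like this generates usage statistics without ever inspecting message contents, which is precisely the data minimisation trade-off the paragraph describes.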

    While depth refers to the degree of inspection within data packets, breadth refers to the volume of packets being inspected. Alissa Cooper argues that for many DPI use-cases, it may be possible to rely on pattern matching over only the first few data packets in a flow in order to gather sufficient information for an appropriate response. Cooper uses the same example of peer-to-peer traffic: in some cases, the protocol name appears in the header of only the first packet of a flow between two peers. In such circumstances, the network operator need not look beyond the header of the first packet in a flow, and can apply the network management rule to the entire flow.[32]
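    A minimal sketch of this first-packet strategy: classify a flow once, cache the verdict, and never inspect subsequent packets of the same flow. The signatures below are simplified examples for illustration (the BitTorrent handshake does begin with a length byte followed by the string "BitTorrent protocol"), not a production rule set.

```python
flow_cache = {}  # flow_id -> cached classification

def classify_first_packet(payload: bytes) -> str:
    """Inspect a single packet for a known protocol signature."""
    if payload[1:20] == b"BitTorrent protocol":  # BitTorrent handshake header
        return "bittorrent"
    if payload[:4] in (b"GET ", b"POST", b"HEAD"):
        return "http"
    return "unknown"

def classify(flow_id, payload: bytes) -> str:
    if flow_id not in flow_cache:                # only the first packet is inspected
        flow_cache[flow_id] = classify_first_packet(payload)
    return flow_cache[flow_id]                   # later packets: cache hit, no inspection

first = bytes([19]) + b"BitTorrent protocol" + b"\x00" * 8
print(classify(("10.0.0.1", "10.0.0.2", 51413), first))      # → bittorrent
print(classify(("10.0.0.1", "10.0.0.2", 51413), b"opaque"))  # → bittorrent (cached)
```

    The second call returns the cached label without examining the payload at all, which is the breadth limitation Cooper describes.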

    Data retention

    Aside from the depth and breadth of inspection, another important question is whether, and for how long, data needs to be retained. Not all use-cases require data retention, and even where DPI is used for behavioural advertising, only the conclusions drawn may be retained instead of the payload data itself.

    Transparency

    One of the issues is that DPI technology is developed and deployed outside the purview of standards organisations like ISO. Hence, there has been no open, transparent standards development process in which participants have deliberated the impact of the technology. It is important for DPI to undergo such processes, and for them to be inclusive, with participation by non-engineering stakeholders who can highlight public policy issues such as privacy. Further, aside from the technology itself, the practices of networks need to be more transparent: [33] networks can disclose the presence of DPI, the level of detail being inspected or retained, and the purpose for which DPI is deployed. Some ISPs provide some of these details in their terms of service and website notices. [34] However, as opposed to web-based services, users have limited interaction with their ISP. It would be useful for ISPs to enable greater engagement with their users and make their practices more transparent.

    Conclusion

    The very nature of DPI technology renders some aspects of recognised privacy principles, such as notice and consent, obsolete. The current privacy frameworks under the FIPPs[35] and the OECD guidelines [36] rely on the idea of empowering individuals by providing them with knowledge that enables them to make informed choices. However, for this liberal conception of privacy to function meaningfully, individuals must be presented with real and genuine alternatives. While principles like data minimisation, necessity and proportionality, and purpose limitation can be instrumental in ensuring that DPI technology is used only for legitimate purposes, without effective opt-out mechanisms, and given the limited capacity of individuals to assess the risks, the efficacy of privacy principles may be far from satisfactory.

    The ongoing Aadhaar case and a host of surveillance projects like CMS, NATGRID, NETRA[37] and NMAC [38] have raised concerns about the state conducting mass surveillance, particularly of online content. In this regard, it is all the more important to recognise the potential of Deep Packet Inspection technologies to impact the privacy rights of individuals. Earlier, the Centre for Internet and Society filed Right to Information applications with the Department of Telecommunications, Government of India regarding the use of DPI, and the government responded that there had been no direction/reference to the ISPs to employ DPI technology. [39] Similarly, MTNL responded to the RTI applications and denied using the technology.[40] It is notable, though, that they did not respond to the questions about the traffic management policies they follow. Thus, so far there has been little clarity on the actual usage of DPI technology by ISPs.


    [1] Ashish Mishra, "India's Net Neutrality Crusaders", available at http://mintonsunday.livemint.com/news/indias-net-neutrality-crusaders/2.3.2289565628.html

    [3] Vinton Cerf and Robert Kahn, "A protocol for packet network intercommunication", available at https://www.semanticscholar.org/paper/A-protocol-for-packet-network-intercommunication-Cerf-Kahn/7b2fdcdfeb5ad8a4adf688eb02ce18b2c38fed7a

    [4] Paul Ganley and Ben Algove, "Network Neutrality-A User's Guide", available at http://wiki.commres.org/pds/NetworkNeutrality/NetNeutrality.pdf

    [5] J H Saltzer, D D Clark and D P Reed, "End-to-End arguments in System Design", available at http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf

    [6] Supra Note 4.

    [7] Jonathan Zittrain, The future of Internet - and how to stop it, (Yale University Press and Penguin UK, 2008) available at https://dash.harvard.edu/bitstream/handle/1/4455262/Zittrain_Future%20of%20the%20Internet.pdf?sequence=1

    [8] Alissa Cooper, How Regulation and Competition Influence Discrimination in Broadband Traffic Management: A Comparative Study of Net Neutrality in the United States and the United Kingdom available at http://ora.ox.ac.uk/objects/uuid:757d85af-ec4d-4d8a-86ab-4dec86dab568

    [9] Id .

    [10] Christopher Parsons, "The Politics of Deep Packet Inspection: What Drives Surveillance by Internet Service Providers?", available at https://www.christopher-parsons.com/the-politics-of-deep-packet-inspection-what-drives-surveillance-by-internet-service-providers/ at 15.

    [11] Ibid at 16.

    [12] Id .

    [13] Ibid at 19.

    [14] Id .

    [15] Id .

    [16] Jay Klein, "Digging Deeper Into Deep Packet Inspection (DPI)", available at http://spi.unob.cz/papers/2007/2007-06.pdf

    [17] Tim Wu, "Network Neutrality: Broadband Discrimination", available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=388863

    [18] Paul Ferguson and Geoff Huston, "Quality of Service on the Internet: Fact, Fiction, or Compromise?", available at http://www.potaroo.net/papers/1998-6-qos/qos.pdf

    [19] Barbara van Schewick, "Network Neutrality and Quality of Service: What a non-discrimination Rule should look like", available at http://cyberlaw.stanford.edu/downloads/20120611-NetworkNeutrality.pdf

    [20] Supra Note 14.

    [21] Paul Ohm, "The Rise and Fall of Invasive ISP Surveillance," available at http://paulohm.com/classes/infopriv10/files/ExcerptOhmISPSurveillance.pdf

    [22] Ben Elgin and Bruce Einhorn, "The great firewall of China", available at http://www.bloomberg.com/news/articles/2006-01-22/the-great-firewall-of-china .

    [23] Mike Wheatley, "Malaysia's Web Heavily Censored Before Controversial Elections", available at http://siliconangle.com/blog/2013/05/06/malaysias-web-heavily-censored-before-controversial-elections/

    [24] Fazal Majid, "Deep packet inspection rears its ugly head", available at https://majid.info/blog/telco-snooping/.

    [25] Alissa Cooper, "Doing the DPI Dance: Assessing the Privacy Impact of Deep Packet Inspection," in W. Aspray and P. Doty (Eds.), Privacy in America: Interdisciplinary Perspectives, Plymouth, UK: Scarecrow Press, 2011 at 151.

    [26] Ibid at 148.

    [27] Kevin Werbach, "Breaking the Ice: Rethinking Telecommunications Law for the Digital Age", Journal of Telecommunications and High Technology, available at http://www.jthtl.org/articles.php?volume=4

    [28] Supra Note 25 at 149.

    [29] Supra Note 25 at 147.

    [30] International Telecommunications Union, Recommendation ITU-T.Y.2770, Requirements for Deep Packet Inspection in next generation networks, available at https://www.itu.int/rec/T-REC-Y.2770-201211-I/en.

    [31] Supra Note 25 at 154.

    [32] Ibid at 156.

    [33] Supra Note 10.

    [34] Paul Ohm, "The Rise and Fall of Invasive ISP Surveillance", available at http://paulohm.com/classes/infopriv10/files/ExcerptOhmISPSurveillance.pdf .

    [37] "India's Surveillance State" Software Freedom Law Centre, available at http://sflc.in/indias-surveillance-state-our-report-on-communications-surveillance-in-india/

    [38] Amber Sinha, "Are we losing our right to privacy and freedom on speech on Indian Internet", DNA, available at http://www.dnaindia.com/scitech/column-are-we-losing-the-right-to-privacy-and-freedom-of-speech-on-indian-internet-2187527

    [40] Smita Mujumdar, "Use of DPI Technology by ISPs - Response by the Department of Telecommunications" available at http://cis-india.org/telecom/dot-response-to-rti-on-use-of-dpi-technology-by-isps

    ISO/IEC JTC 1 SC 27 Working Group Meetings - A Summary

    by Vanya Rakesh last modified Dec 16, 2016 11:53 PM
    The Centre for Internet & Society attended the ISO/IEC JTC 1 SC 27 Working Group Meetings from 22 to 27 October 2016 in Abu Dhabi at Abu Dhabi National Exhibition Centre.

    As a member of Working Group 5 (Information technology – Security techniques – Identity management and privacy technologies), we attended the following meetings:

    1. WD 29184 Guidelines for online privacy notices and consent - As technological advancement and the wider availability of communication infrastructure have enabled the collection and analysis of information regarding individuals' activities, and as people have become more aware of the privacy implications of the same, this standard aims to provide a framework for organisations to give consumers clear and easily understandable information about how the organisation will process their PII.
    2. SP PII Protection Considerations for Smartphone App Providers - A 1-year project proposed during the ISO/IEC JTC 1 SC 27 Working Group Meetings in Jaipur in 2015, this group aims to build a privacy framework for mobile applications to guide app developers, along the lines of the ISO/IEC 29100 international standard (which defines a broad privacy framework for information technologies). It responds to excessive data collection by apps in the absence of consent or justification, the lack of comprehensive policies, non-transparent practices, and the lack of adequate choice and consent, with the aim of protecting the rights of individuals, and will work towards a harmonised and standardised privacy structure for mobile application data policies and practices.
    3. WD 20889 Privacy enhancing data de-identification techniques - Data de-identification techniques are important for PII because they enable the benefits of data processing to be exploited while maintaining compliance with regulatory requirements and the relevant ISO/IEC 29100 privacy principles. The selection, design, use and assessment of these techniques need to be performed appropriately in order to effectively address the risks of re-identification in a given context.
    4. SP Privacy in Smart Cities - A 1-year project proposed during the ISO/IEC JTC 1 SC 27 Working Group Meetings in Jaipur, this group saw contributions from Japan, India, and PRIPARE in the EU, to name a few. The group's proposed scope is to produce a framework covering data ownership, communication channels, privacy risk and impact assessment in smart cities, and data lifecycle privacy governance for smart cities; to develop use cases and contexts for privacy controls with respect to the data lifecycle in smart cities; and to document in detail privacy controls for smart cities aligned to the primary controls and associated sub-controls.

    Inputs to the Working Group on Enhanced Cooperation on Public Policy Issues Pertaining to the Internet (WGEC)

    by Sunil Abraham and Vidushi Marda, with inputs from Pranesh Prakash — last modified Dec 17, 2016 12:20 AM
    The Centre for Internet & Society (CIS) submitted inputs to the Working Group on Enhanced Cooperation on Public Policy Issues Pertaining to the Internet (WGEC) on 15 December 2016. The WGEC sought inputs on two questions that will guide the next meeting of the Working Group which is scheduled to take place on the 26-27 January 2017.

    What are the high level characteristics of enhanced cooperation?

    • The Tunis Agenda leaves the term “enhanced cooperation” unclearly defined. What is clear, however, is that enhanced cooperation is distinct from the Internet Governance Forum.
    • According to Paragraph 69 of the Tunis Agenda, enhanced cooperation will enable "governments, on an equal footing, to carry out their roles and responsibilities, in international public policy issues pertaining to the Internet, but not in the day-to-day technical and operational matters, that do not impact on international public policy issues." In other words, enhanced cooperation should result in the development and enforcement of international public policy; only "day-to-day technical and operational matters" with no public policy impact, and national public policy, are exempt from government-to-government enhanced cooperation.
    • According to Paragraph 70, enhanced cooperation includes "development of globally-applicable principles on public policy issues associated with the coordination and management of critical Internet resources." According to the paragraph, organizations responsible for essential tasks associated with the Internet should create an environment that facilitates the development of these principles using "relevant international organizations". In other words, both Internet institutions [ICANN, ISOC and RIRs] and multilateral organisations [WIPO, ITU, UNESCO etc.] should be used to develop principles.

    • Paragraph 71 gives some further clarity. According to this paragraph, the process for enhanced cooperation should 1) be “started by the UN Secretary General” 2) "involve all stakeholders in their respective roles" 3) "proceed as quickly as possible"  4) be "consistent with legal process"  5) "be responsive to innovation".
    • Again according to Paragraph 71, enhanced cooperation should be commenced by "relevant organisations" and should involve "all stakeholders". But only the "relevant organisations shall be requested to provide annual performance reports." Enhanced cooperation as envisioned in the Tunis Agenda, therefore, calls for a multistakeholder model in which each constituency leads the process of developing principles and self-regulatory mechanisms: not one that involves all stakeholders at all stages, but rather one that requires participation from the relevant stakeholders, in accordance with the issue at hand, at the relevant stage.
    • For government-to-government enhanced cooperation, governments need to agree on what is within the exclusive realm of "national public policy", e.g. national security, intellectual property policy, and the protection of children online. Governments also need to agree on what is within the remit of “international public policy”, e.g. cross-border taxation, cross-border criminal investigations, and cross-border hate speech. Once this is done, the governments of the world should pursue the development and enforcement of international law and norms at the appropriate forums if they exist, or alternatively create new forums that are appropriate.
    • For enhanced cooperation with respect to non-government "relevant organisations" [different sub-groups within the private sector, technical community and civil society], we believe that the requirements of Paragraph 71 can be understood to mean that enhanced cooperation is the "development of self-regulatory norms" as a complement to the traditional multilateral norm-setting and international law-making envisioned in Paragraph 69. In other words, the real utility of the multi-stakeholder model is self-regulation by the private sector. Besides the government, it is the private sector that has the greatest capacity for harm and is therefore in urgent need of regulation. The multistakeholder model will best serve its purpose if the end result is that the private sector self-regulates. Most of the harm emerging from large corporations can only be addressed if they agree amongst themselves. A centralised or homogenous model of enhanced cooperation will not suffice; the model of cooperation should be flexible in accordance with the issue being brought to the table.

    Taking into consideration the work of the previous WGEC and the Tunis Agenda, particularly paragraphs 69-71, what kind of recommendations should we consider?

    The previous work of the WGEC is useful as a mapping exercise. However, the working group was unable to agree on a definition of enhanced cooperation. In our previous response we have clearly indicated that enhanced cooperation is 1) the development of international law and norms by governments at appropriate international/multilateral fora; 2) the articulation of principles by "organizations responsible for essential tasks associated with the Internet" and "relevant international organizations"; and 3) the development of self-regulatory norms and enforcement mechanisms by the private sector, technical community and civil society, with priority for the private sector because, after government, it has the greatest potential for harm. To repeat, the Tunis Agenda makes it very clear that enhanced cooperation is distinct from the IGF. If the IGF is only the learning forum, we need a governance forum like ICANN so that different constituencies can develop self-regulatory norms and enforcement mechanisms with inputs from other stakeholder constituencies and the public at large.

    The Curious Case of Poor Security in the Indian Twitterverse

    by Udbhav Tiwari last modified Dec 17, 2016 12:28 AM
    What are the technical, legal and jurisdictional issues around the recent Twitter and email hacks claimed by the ‘Legion Crew’, and what can targeted entities do to better protect themselves?

    The article was originally published in The Wire on December 15, 2016.


    The term legion, an oft-referenced identity in popular culture, has begun to attain notoriety in Indian cyberspace due to the spate of hacks carried out by a group of hackers calling themselves 'Legion Crew'. The group has compromised four Twitter and/or email accounts in the past two weeks, with confirmed hacks of Rahul Gandhi, Vijay Mallya, Barkha Dutt and Ravish Kumar. Lalit Modi, Apollo Hospitals and the parliament (sansad) have been singled out as future targets, with dire warnings of catastrophic data leaks if the group were to be investigated by the authorities. Ethical opinion on the hacks has been divided, with some segments of the public supporting the supposedly hacktivist outlook of the group and others condemning their actions as reckless and invasive. In the meantime, no individuals or entities have been accused of the hacks by the police, with most reports claiming that the foreign origin of the hacks is the biggest impediment to the investigations.

    A technical and legal perspective

    The hacks began with the politician Gandhi, whose Twitter account was hacked almost two weeks ago, with various demeaning tweets being posted for a few hours before access to the account was restored to its rightful owner. The same kind of hack was then carried out on business tycoon Mallya's Twitter account last Friday, but this time his bank details (apparently obtained from his compromised email accounts) were also leaked to the public via Twitter. Similar hacks targeting both the Twitter and email accounts of Dutt and Kumar were carried out over the past weekend. Sensitive details and data dumps (around 1.5 GB in size) belonging to the journalists were released to the public, along with escalating warnings about future attacks. The data dumps released by the hackers indicated that they had obtained far more information than they had disclosed via the Twitter hacks and were willing to leverage this data as ransom. Twitter, via both its Indian policy representatives and its international office, has denied any compromise of its systems and has claimed that all accounts were legitimately accessed with valid credentials at the time of the hacks. This leads to three main questions: How were the Twitter and email accounts hacked? What recourse, especially in terms of investigation, is available to the afflicted parties and the authorities? And what can potential targets do to secure their online presence from such attacks?

    Regarding their technical nature, all of these hacks were sustained compromises that lasted a few hours each (a long time in cyberspace) and seemed to reflect only a fragment of the power the hackers held over each individual's online presence. Considering Twitter's denial that the attacks were due to a security flaw on its end, as well as the fact that legitimate login details were used to gain access to the accounts, a rather simple investigation suggests that the most likely attack vector used by the Legion Crew was DNS hijacking in combination with a man-in-the-middle (MITM) attack. These methods abuse the rather simple and (by default) insecure DNS system that is responsible for directing the world's Internet traffic, including email. While the use of DNS to map websites to the IP addresses of the systems where they are physically hosted (for instance, www.thewire.in mapped to 52.76.81.135 at the time of writing this article) is fairly well known, the DNS system also directs most of the world's email. Similar to the DNS A and AAAA records that concern websites, DNS MX records direct email sent to a domain name to the correct email servers, where it is processed for storage or forwarding as required. If these MX records are compromised, then hackers can easily redirect emails sent to legitimate email addresses on the domain (for instance, [email protected]) to whatever system they want, including other compromised email addresses.
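    The role of the MX record can be reduced to a toy model (the domain and server names below are hypothetical): mail delivery simply follows whatever the record says, so whoever controls the record controls where messages for the domain end up.

```python
# Legitimate configuration: mail for example.in goes to mail.example.in.
mx_records = {"example.in": "mail.example.in"}

def delivery_server(address: str) -> str:
    """Return the mail server a message to `address` would be handed to."""
    domain = address.split("@", 1)[1]
    return mx_records[domain]          # routing blindly follows the MX record

print(delivery_server("editor@example.in"))       # → mail.example.in

mx_records["example.in"] = "mx.attacker.example"  # hijacked MX record
print(delivery_server("editor@example.in"))       # → mx.attacker.example
```

    Nothing about the address itself changes; the sender and recipient see the same email ID throughout, which is why the redirection is so hard to detect.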

    The original operator of the email account is unaware of any email that is redirected in this way and has no way of knowing the account has been hacked until they notice that they are not consistently receiving emails sent to them, which in well-planned hacks can take many weeks or even months. These attacks can be further augmented if the hackers also decide to mount an MITM attack. In an MITM attack, hackers redirect all traffic attempting to reach an email account to a system they operate, by changing the MX records on the domain name server to point to a malicious system. They can access and store all these emails (along with attachments) on the malicious system and also manipulate the information contained in them. Then, either in bulk or selectively, they can re-send the emails from their own servers to the original accounts they were intended for. The owner then receives the emails in their inbox under the impression that they are private and being received for the first time. This entire MITM process can be set up so that emails are rerouted to the compromised servers by the MX record changes, stored for future analysis, and then forwarded to the original recipient account in a matter of seconds.
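    The store-and-forward step described above can be sketched as follows (all names are hypothetical): the attacker's relay archives a copy of each message, then forwards it onward so the intended recipient sees nothing unusual.

```python
captured = []  # the attacker's archive

def malicious_relay(message: dict, forward) -> None:
    captured.append(dict(message))  # store a copy for later analysis
    forward(message)                # re-send to the original recipient

inbox = []  # the victim's real mailbox
malicious_relay({"to": "victim@example.in", "subject": "account statement"},
                forward=inbox.append)

print(len(inbox), len(captured))  # → 1 1  (delivered normally, yet archived)
```

    From the victim's side the message arrives exactly once, apparently untouched; the interception leaves no trace in their inbox.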

    Given the reliance placed by most websites on email IDs as a primary form of identity authentication, compromising an email ID can give anyone who controls it access to most of the owner's social networking, entertainment and even banking logins. This is because of the password reset or 'forgotten password' feature available in most services, which by default uses only the email ID to authenticate account ownership, allowing the user to reset their password via a reset email sent to their registered email account. Once they gain access to the compromised accounts, hackers can perform these resets with impunity, granting them unrestricted access to the owner's online presence. In fact, hackers can use these attacks to perform password resets on the email accounts themselves, giving them unlimited access to past conversations, records and login details that may be stored in the email accounts.

    Keeping this background in mind, the most likely methodology behind the hacks is quite simple to explain. The Legion Crew most likely first compromised the email systems of these celebrities by changing the DNS MX records of the domains behind the email IDs registered with Twitter as login IDs for these accounts. This allowed them to redirect emails sent to these email IDs to an alternative system of their choosing. They then used the password reset feature of Twitter, which is similar to those provided by most social networking services, to reset the passwords of these accounts. However, owing to the compromised MX records, instead of reaching the inboxes of the entities operating the accounts, the password reset emails were sent to the alternative systems set up by the hackers solely for receiving such emails. After receiving such an email, it was a simple matter of resetting the account credentials by clicking on the password reset link and changing the passwords of these accounts to unique passwords known only to the hackers.
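    The whole chain can be simulated end to end in a few lines. This is a hypothetical sketch, not any service's real implementation: the service proves "ownership" only by who can read mail at the registered address, and delivery of that mail follows the (hijacked) MX record.

```python
import secrets

mx = {"example.in": "mail.example.in"}
mailboxes = {}      # server -> reset tokens delivered to it
reset_tokens = {}   # token -> the account it can reset

def request_reset(account_email: str) -> None:
    """The service emails a one-time reset token to the registered address."""
    token = secrets.token_hex(8)
    reset_tokens[token] = account_email
    server = mx[account_email.split("@", 1)[1]]   # delivery follows the MX record
    mailboxes.setdefault(server, []).append(token)

def reset_password(token: str) -> str:
    """Whoever presents the token gets control; returns the account taken over."""
    return reset_tokens.pop(token)

mx["example.in"] = "mx.attacker.example"        # step 1: MX record hijacked
request_reset("victim@example.in")              # step 2: attacker triggers the reset
stolen = mailboxes["mx.attacker.example"][0]    # step 3: token lands with the attacker
print(reset_password(stolen))                   # → victim@example.in
```

    The victim's password was never guessed or cracked; the attacker simply inserted themselves into the delivery path the service trusted.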

    The hackers then would (and did) have complete control of the accounts until the service provider itself intervened, providing an emergency reset and recommending that the MX records be rectified from the malicious ones inserted by the hackers. The only question left unanswered in the hackers' methodology is how they gained access to the MX records, as DNS records can only be changed through the dashboard of the domain name provider, which in turn is protected by a login password. Allegations have arisen that most (if not all) of the compromised accounts used 'Net4india' as their domain name provider. It is therefore very possible that a vulnerability in Net4india's systems, or an internal compromise involving Net4india personnel, led to the login details of the domain name accounts being compromised. Such security and personnel breaches could have provided access to the domain name management dashboards for the hacked celebrities' email domains, after which the attack would have followed the methodology described above of changing the MX records to a malicious system.

    Jurisdictional issues

    The legal avenues available to the affected parties are fairly clear under the Information Technology Act, 2000 and the Indian Penal Code, 1860. Section 66 and Section 66C of the IT Act, which govern hacking and misuse of passwords respectively, would apply, along with the possible application of the IPC provisions concerning mischief (Section 425), cheating (Section 420) and extortion (Section 383). However, recent investigations have already begun to show that the jurisdictional problems that plague cybercrime investigations in general are also hindering the investigation of these hacks. The global nature of the internet ensures that the operating servers, attackers, compromised users and unwitting intermediaries are more often than not all located in different jurisdictions, each with its own set of protections, vulnerabilities and laws. For example, investigations by the Delhi police into the IP addresses that accessed Gandhi's Twitter account during the hack have shown that in the space of a few hours the account was accessed from the US, Sweden, Canada, Thailand and Romania. Of course, given the pervasive availability of IP spoofing tools, none of these countries is indicative of the actual location of the hacker. Gaining information from these different servers, in order to trace the hacker's digital geographical journey, is a bureaucratic and legal nightmare in which long delays, unanswered Mutual Legal Assistance Treaty requests and unresponsive service providers are the norm. As in most cybercrime investigations, if the hackers take certain basic steps to mask their identities and geographical location, their odds of being caught by traditional law enforcement are negligible. Investigations that have successfully caught such hacker groups, such as the FBI's Project Safe Childhood operations against child pornography on the Tor network, take millions of dollars, months of effort and a high level of skill.
Whether these Twitter hacks will generate the sustained, multijurisdictional effort across law enforcement agencies in India required to solve such crimes remains to be seen. Until then, the questions of attribution, liability and justice will remain unanswered, as in a majority of large-scale cyber hacks.

    Possible measures

    Given that various other targets have already been singled out by the hacker group, the need for vigilance and improved security is greater than ever. One basic measure, easily available within Twitter and most other services, is enabling two-factor authentication (2FA) on both email and social media accounts. 2FA requires the user to input a one-time password (OTP) generated on a separate device (such as a mobile phone) when logging in or resetting the password for the account. This means that even if the hackers obtain the password or intercept the emails being sent to an account, they will be unable to log into the account without also being in physical possession of the device running the OTP generation application. Had this option, which is already available within Twitter, been enabled for the four accounts that were hacked, they would have remained protected despite the email compromise. Further, domain name service providers should implement Domain Name System Security Extensions (DNSSEC) and DomainKeys Identified Mail (DKIM) to prevent DNS and email hijacking of the kind carried out on Net4india's servers in these attacks. Using HTTPS on all pages of a website will also go a long way towards preventing spoofing and securing user information in transit. Finally, nothing can replace customer education and awareness as the most effective tools against the growing cyber threats faced by the average netizen. The weakest link in a digital system is often the end user. A core set of security measures percolated into common practice will serve as the first and best line of defence against such attacks in the future, for the common man and celebrities alike.
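    The OTP codes used by most 2FA apps follow the open TOTP standard (RFC 6238), which derives a short-lived code from a shared secret and the current time; a minimal sketch in Python, verified against the published RFC test vector:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the number of 30-second steps since the epoch."""
    if timestamp is None:
        timestamp = int(time.time())
    return hotp(secret, timestamp // step, digits)

# RFC 6238 test vector: with this ASCII secret, at Unix time 59 the
# 8-digit SHA-1 code is 94287082.
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # → 94287082
```

    Because the code changes every 30 seconds and is derived from a secret that never travels over email or DNS, an attacker who has hijacked the MX records still cannot produce a valid OTP.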

    Incident Response Requirements in Indian Law

    by Vipul Kharbanda — last modified Dec 28, 2016 01:19 AM
    Cyber incidents have serious consequences for societies, nations, and those who are victimised by them. The theft, exploitation, exposure or other damage of private, financial, or otherwise sensitive personal or commercial data, and cyber attacks that damage computer systems, are capable of causing lasting harm.

    A recent example of such an attack in India is the data breach involving an alleged 3.2 million debit cards.[1] In the case of this hack, the payment processing networks, such as the National Payments Corporation of India, Visa and Mastercard, informed the banks about the leaks, following which the banks began blocking and then reissuing the compromised cards. It has also been reported that the banks failed to report this incident to the Computer Emergency Response Team of India (CERT-In) even though they are required by law to do so.[2] Such risks are increasingly faced by consumers, businesses, and governments. A victim of a cyber incident usually looks for assistance from the service provider and from government agencies that are prepared to investigate the incident, mitigate its consequences, and help prevent future incidents. For an effective response to cyber incidents, it is essential that the authorities have as much knowledge about the incident as possible, as soon as possible. It is also critical that this information is communicated to the public. This underlines the importance of reporting cyber incidents as a tool for making the internet and digital infrastructure secure. Like any other crime, an internet-based crime should be reported to the law enforcement authorities assigned to tackle it at the local, state, national, or international level, depending on the nature and scope of the criminal act. This is the first in a series of blog posts on the importance of incident reporting in the Indian regulatory context, examining the Indian regulations dealing with incident reporting, with the ultimate objective of fostering a more robust incident reporting environment in India.

    Incident Reporting under CERT Rules

    In India, section 70-B of the Information Technology Act, 2000 (the “IT Act”) gives the Central Government the power to appoint an agency of the government to be called the Indian Computer Emergency Response Team. In pursuance of the said provision, the Central Government issued the Information Technology (The Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules, 2013 (the “CERT Rules”), which provide for the location and manner of functioning of the Indian Computer Emergency Response Team (CERT-In). Rule 12 of the CERT Rules gives every person, company or organisation the option to report cyber security incidents to CERT-In. It also places an obligation on them to mandatorily report the following kinds of incidents as early as possible:

    • Targeted scanning/probing of critical networks/systems;
    • Compromise of critical systems/information;
    • Unauthorized access of IT systems/data;
    • Defacement of website or intrusion into a website and unauthorized changes such as inserting malicious code, links to external websites, etc.;
    • Malicious code attacks such as spreading of virus/worm/Trojan/botnets/spyware;
    • Attacks on servers such as database, mail, and DNS and network devices such as routers;
    • Identity theft, spoofing and phishing attacks;
    • Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks;
    • Attacks on critical infrastructure, SCADA systems and wireless networks;
    • Attacks on applications such as e-governance, e-commerce, etc.

    The CERT Rules also impose an obligation on service providers, intermediaries, data centres and body corporates to report cyber incidents within a reasonable time, so that CERT-In has scope for timely action. This mandatory reporting obligation casts a fairly wide net over private sector entities; however, it is notable that, prima facie, the provision does not impose any obligation on government entities to report cyber incidents unless they fall under one of the expressions “service providers”, “data centres”, “intermediaries” or “body corporate”. This would mean that if the data kept with the Registrar General & Census Commissioner of India were hacked in a cyber incident, there would be no statutory obligation under the CERT Rules to report the incident. It is pertinent to mention here that although there is no obligation under law on a government department to report such an incident, such an obligation may be contained in its internal rules and guidelines, which are not readily available.

    It is pertinent to note that although the CERT Rules provide for a mandatory obligation to report the cyber incidents listed therein, the Rules themselves do not provide any penalty for non-compliance. This does not mean that there are no consequences for non-compliance; it merely means that we have to look to the parent legislation, i.e. the IT Act, for the appropriate penalties. Section 70B(6) gives CERT-In the power to call for information and give directions for the purpose of carrying out its functions. Section 70B(7) provides that any service provider, intermediary, data centre, body corporate or person who fails to provide the information called for, or to comply with a direction under sub-section (6), shall be liable to imprisonment for a period of up to 1 (one) year, or a fine of up to Rs. 1 (one) lakh, or both.

    It is possible to argue here that sub-section (6) only speaks of calls for information by CERT-In, and that the obligation under Rule 12 of the CERT Rules is an obligation placed by the central government and not by CERT-In. It can also be argued that sub-section (6) is only meant for specific requests made by CERT-In for information, and that sub-section (7) only penalises those who do not respond to such specific requests. However, even if these arguments were accepted and we were to conclude that a violation of the obligation imposed under Rule 12 would not attract the penalty stipulated under sub-section (7) of section 70B, that would not leave Rule 12 toothless. Section 44(b) of the IT Act provides that where any person is required under any of the Rules or Regulations under the IT Act to furnish any information within a particular time and fails to do so, s/he may be liable to pay a penalty of up to Rs. 5,000/- for every day such failure continues. Further, section 45 provides for a penalty of Rs. 25,000/- for any contravention of the rules or regulations under the Act for which no other penalty has been provided.
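    For a rough sense of the exposure these residual provisions create, the arithmetic is straightforward (the figures below are simply the statutory maxima quoted above; this is an illustration, not legal advice):

```python
def max_penalty_44b(days_of_failure, per_day_max=5_000):
    """Maximum exposure under section 44(b) of the IT Act: up to
    Rs. 5,000 for every day a required furnishing of information
    is delayed."""
    return days_of_failure * per_day_max

# Flat residual penalty under section 45 of the IT Act, for
# contraventions with no other prescribed penalty.
SECTION_45_PENALTY = 25_000

# A breach left unreported for 30 days could attract, at most:
print(max_penalty_44b(30))   # 150000
print(SECTION_45_PENALTY)    # 25000
```

Even at the daily maximum, a month's delay tops out at Rs. 1.5 lakh, which is small relative to the potential damage from a large breach.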

    Incident Reporting under Intermediary Guidelines

    Section 2(1)(w) of the IT Act defines the term “intermediary” in the following manner:

    “intermediary” with respect to any particular electronic record, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web hosting service providers, search engines, online payment sites, online-auction sites, online market places and cyber cafes.

    Rule 3(9) of the Information Technology (Intermediaries Guidelines) Rules, 2011 (the “Intermediary Guidelines”) also imposes an obligation on intermediaries to report any cyber incident and share information related to cyber security incidents with CERT-In. Since neither the Intermediary Guidelines nor the IT Act specifically provides a penalty for non-conformity with Rule 3(9), any enforcement action against an intermediary failing to report a cyber security incident would have to be taken under section 45 of the IT Act, which carries a penalty of Rs. 25,000/-.

    Incident Reporting under the Unified License

    Clause 39.10(i) of the Unified License Agreement obliges the telecom company to create facilities for the monitoring of all intrusions, attacks and frauds on its technical facilities and provide reports on the same to the Department of Telecom (DoT). Further clause 39.11(ii) provides that for any breach or inadequate compliance with the terms of the license, the telecom company shall be liable to pay a penalty amount of Rs. 50 crores (Rs. 50,00,00,000) per breach.

    Conclusion

    It is clear from the above discussion that there is a legal obligation on service providers to report cyber incidents to CERT-In. Presently, however, except in the case of telecom companies under the Unified License Agreement, the penalties prescribed under Indian law may not be enough to incentivise companies to adopt comprehensive and consistent incident response programmes. A fine of Rs. 25,000/- appears inconsequential when compared to the possible dangers and damages that may be caused by a security breach of data containing, for example, credit card details. Further, it is imperative that apart from the obligation to report a cyber incident to the appropriate authority (CERT-In), there should also be a legal obligation to report it to the data subjects whose data is stolen or put at risk by the breach. A provision requiring notice to data subjects could go a long way in ensuring that service providers, intermediaries, data centres and body corporates implement the best data security practices, since a breach would then become known to general consumers, leading to bad publicity that could negatively impact the business of the data controller; for a business entity, such an economic incentive may be an effective way to ensure compliance.

    As we continue to research incident response, the questions and areas we are exploring include the ecosystem of incident response (what is reported, how, and when), appropriate incentives for companies and governments to report incidents, various forms of penalties, the role of cross-border sharing of information and jurisdiction, and best practices for incident reporting and citizen awareness.

    Published under Creative Commons License CC BY-SA. Anyone can distribute, remix, tweak, and build upon this document, even for commercial purposes, as long as they credit the creator of this document and license their new creations under terms identical to the license governing this document.


    [1] http://www.huffingtonpost.in/2016/10/21/atm-card-hack-what-banks-are-saying-about-india-s-biggest-data/

    [2] http://tech.economictimes.indiatimes.com/news/internet/cert-in-had-warned-banks-on-oct-7-about-expected-targeted-attacks-from-pakistan/54991025

    Mapping of India’s Cyber Security-Related Bilateral Agreements

    by Leilah Elmokadem and Saumyaa Naidu — last modified Apr 27, 2017 03:14 PM
    With the rapid spread of cloud computing and the growth of cyber spaces, large masses of information are now easily transmittable transnationally, necessitating the ratification of new agreements and cooperation efforts amongst states in order to secure cyber spaces and regulate exchanges of information. In an attempt to understand the nature and extent of current international collaborative efforts in cyber security, we have compiled the following data regarding India’s cyber security-related bilateral agreements. The intention of this exercise is to offer a dynamic visualization that demonstrates which countries India has collaborated with on cyber security efforts and initiatives. This is an ongoing map that we will be updating as our research continues.

    Download: Infographic (PDF) and data (XLSX)



    The data used for the infographic consists of India’s MLATs, cyber security-related MoUs and Joint Statements, and Cyber Frameworks. An MLAT is an agreement between two or more countries, drafted for the purpose of gathering and exchanging information in an effort to enforce public or criminal laws. An MoU (Memorandum of Understanding) is a non-binding agreement between two or more states outlining the terms and details of an understanding, including each party’s requirements and responsibilities; it is often the first stage in the formation of a formal contract. For the purpose of this research, we have grouped Joint Statements with MoUs, as both generally entail an informal agreement between two states to strengthen cooperation on certain issues. Lastly, a Cyber Framework consists of standards, guidelines and practices to promote the protection of critical infrastructure. The data accounts for agreements centred on cyber security as well as any agreements mentioning cooperation efforts in cyber security, information security or cybercrime.

    MLAT Agreement


    The mapping of India’s cyber security-related bilateral agreements was updated on April 12, 2017 with the following changes:

    1. A new MoU was signed between Australia and India in April 2017, focusing on combating terrorism and civil aviation security. Cybersecurity cooperation is mentioned in the MoU[1].
    2. A new MoU was signed between Bangladesh and India in April 2017. The Indian Computer Emergency Response Team (CERT-In), Indian Ministry of Electronics and Information Technology and the ICT Division of Bangladesh are the signing parties of the MoU. The agreement focuses on Cooperation in the area of Cyber Security[2].
    3. A preexisting MoU between France and India was added to the mapping, signed in January of 2016. Officials of both countries agreed to intensify cooperation between the Indian and French security forces in the fields of homeland security, cyber security, Special Forces and intelligence sharing to fight against criminal networks and tackle the common threat of terrorism[3].
    4. A new MoU was signed between Indonesia and India in March 2017. It focuses on enhancing cooperation in cyber security and intelligence sharing[4].
    5. A new MoU was signed between Kenya and India in January 2017, with “cyber security” mentioned as one of the key areas of cooperation[5].
    6. A preexisting MoU between Malaysia and India was added to the mapping, signed in November of 2015. Both sides agreed to promote cooperation and the exchange of information regarding cyber security incident management, technology cooperation and cyber attacks, prevalent policies and best practices and mutual response to cyber security incidents[6].
    7. A preexisting MoU between Mauritius and India, signed July 2016, was added to the mapping. This is a non-governmental MoU. Leading bourse BSE signed an agreement with Stock Exchange of Mauritius (SEM) for collaboration in areas including cyber security[7].
    8. A new joint statement between India and Portugal was signed in March 2017. The two countries agreed to set up an institutional mechanism to collaborate in the areas of electronic manufacturing, ITeS, startups, cyber security and e-governance.[8]
    9. A preexisting MoU, signed between Qatar and India in December of 2016, was added to the mapping. The agreement was regarding a protocol on technical cooperation in cyberspace and combatting cybercrime[9].
    10. A new MoU was signed between Serbia and India in January 2017, focusing on cooperation in the field of IT, Electronics. The MoU itself does not explicitly mention cybersecurity. However, the MoU calls for cooperation and exchanges in capacity building institutions, which should entail cyber security strengthening[10].
    11. A preexisting MoU between Singapore and India was added to the mapping. The MoU was signed in January 2016, focusing on the establishment of a formal framework for professional dialogue, CERT-CERT related cooperation for operational readiness and response, collaboration on cyber security technology and research related to smart technologies, exchange of best practices, and professional exchanges of human resource development[11].
    12. A new joint statement was signed between UAE and India in January 2017, following up on their previous Technical Cooperation MoU signed in February 2016. To further deepen cooperation in this area, they agreed to set up joint Research & Development Centres of Excellence[12].
    13. A preexisting MoU between the UK and India, signed in May 2016, has been included in the mapping. CERT-In agreed with the UK Ministry of Cabinet Office to promote close cooperation between both countries in the exchange of knowledge and experience in the detection, resolution and prevention of security-related incidents[13].
    14. A new MoU between India and the US was signed in March 2017. CERT-In and CERT-US signed a MoU agreeing to promote closer co-operation and exchange of information pertaining to cyber security in accordance with relevant laws, rules and regulations and on the basis of equality, reciprocity and mutual benefit[14].
    15. A new MoU was signed between Vietnam and India in January 2017, agreeing to promote closer cooperation for exchange of knowledge and experience in detection, resolution and prevention of cyber security incidents between both countries[15].

    NOTE: Some preexisting MoUs were added because we had initially included only the most recent agreements in the mapping. Upon adding newly signed MoUs, we decided to also keep the preexisting ones, and to revisit the other entries to include any preexisting MoUs that were initially excluded for not being the most recent. In this respect, the visualization will be adjusted to indicate the number of MoUs per country.


    [1]http://www.dnaindia.com/india/report-india-australia-sign-mous-on-combating-terrorism-civil-aviation-security-2393843

    [2]http://www.theindependentbd.com/arcprint/details/89237/2017-04-09

    [3]http://www.thehindu.com/news/resources/Full-text-of-Joint-Statement-issued-by-India-France/article14019524.ece

    [4]http://indianexpress.com/article/india/indianhome-ministry-indonesian-ministry-of-security-and-coordination/

    [5]https://telanganatoday.news/india-kenya-focus-defence-security-cooperation-pm

    [6]http://economictimes.indiatimes.com/news/economy/foreign-trade/india-and-malaysia-sign-3-mous-including-cyber-security/articleshow/49891897.cms

    [7]http://indiatoday.intoday.in/story/bse-mauritius-stock-exchange-tie-up-to-promote-financial-mkts/1/723635.html

    [8]http://www.tribuneindia.com/news/business/india-portugal-to-collaborate-in-ites-cyber-security/373666.html

    [9]http://naradanews.com/2016/12/india-qatar-sign-agreements-on-visa-cybersecurity-investments/

    [10]http://ehub.newsforce.in/cabinet-approves-mou-india-serbia-cooperation-field-electronics/

    [11]http://www.businesstimes.com.sg/government-economy/singapore-and-india-strengthen-cooperation-on-cyber-security

    [12]http://mea.gov.in/bilateral-documents.htm?dtl/27969/India++UAE+Joint+Statement+during+State+visit+of+Crown+Prince+of+Abu+Dhabi+to+India+January+2426+2017

    [13]http://www.bestcurrentaffairs.com/india-uk-mou-cyber-security/

    [14]http://www.dqindia.com/india-cert-signs-an-mou-with-us-cert/

    [15]http://pib.nic.in/newsite/PrintRelease.aspx?relid=157458


     

    Mapping of Sections in India’s MLAT Agreements

    by Leilah Elmokadem and Saumyaa Naidu — last modified Dec 31, 2016 06:52 AM
    This set of infographics by Leilah Elmokadem and Saumyaa Naidu maps out and compares the various sections that exist in the 39 MLATs (mutual legal assistance treaty) between India and other countries. An MLAT is an agreement between two or more countries, drafted for the purpose of gathering and exchanging information in an effort to enforce public or criminal laws.

     

    Download: Infographic (PDF) and data (XLSX)


    We have found that India’s 39 MLAT documents are worded, formatted and sectioned differently. At the same time, many of the same sections exist across several MLATs. This diagram lists the sections found in the MLAT documents and indicates the treaties in which they were included or not included. To keep the list of sections concise and to more easily pinpoint the key differences between the agreements, we have merged sections that are synonymous in meaning but were worded slightly differently. For example: we would combine “Entry into force and termination” with “Ratification and termination” or “Expenses” with “Costs”.

    At the same time, some sections that seemed quite similar and possible to merge were kept separate due to potential key differences that could be overlooked as a result. For example: “Limitation on use” vs. “Limitation on compliance” or “Serving of documents” vs. “Provision of (publicly available) documents/records/objects” remained separate for further analysis and comparison.

    These differences in sectioning can be analysed to facilitate a thorough comparison between the effectiveness, efficiency, applicability and enforceability of the various provisions across the MLATs. The purpose of this initial mapping is to provide an overall picture of which sections exist in which MLAT documents. There will be further analysis of these sections to produce a more holistic content-based comparison of the MLATs.

     

    Aggregated Analysis of Sections of MLAT Agreements

    Aggregated analysis of sections of MLAT agreements by India

     

    Comments on the Report of the Committee on Digital Payments (December 2016)

    by Sumandro Chattapadhyay and Amber Sinha — last modified Jan 12, 2017 12:32 PM
    The Committee on Digital Payments constituted by the Ministry of Finance and chaired by Ratan P. Watal, Principal Advisor, NITI Aayog, submitted its report on the "Medium Term Recommendations to Strengthen Digital Payments Ecosystem" on December 09, 2016. The report was made public on December 27, and comments were sought from the general public. Here are the comments submitted by the Centre for Internet and Society.

     

    1. Preliminary

    1.1. This submission presents comments by the Centre for Internet and Society (“CIS”) [1] in response to the report of the Committee on Digital Payments, chaired by Mr. Ratan P. Watal, Principal Advisor, NITI Aayog, and constituted by the Ministry of Finance, Government of India (“the report”) [2].

    2. The Centre for Internet and Society

    2.1. The Centre for Internet and Society, CIS, is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, and open access), internet governance, telecommunication reform, digital privacy, and cyber-security.

    2.2. CIS is not an expert organisation in the domain of banking in general and payments in particular. Our expertise is in matters of internet and communication governance, data privacy and security, and technology regulation. We deeply appreciate, and are most inspired by, the Ministry of Finance’s decision to invite entities from both the finance and the information technology sectors. This submission is consistent with CIS’ commitment to safeguarding general public interest and the interests and rights of the various stakeholders involved, especially citizens and users. CIS is thankful to the Ministry of Finance for this opportunity to provide a general response to the report.

    3. Comments

    3.1. CIS observes that the decision by the Government of India to withdraw the legal tender character of the old high denomination banknotes (that is, Rs. 500 and Rs. 1,000 notes), declared on November 08, 2016 [3], has generated unprecedented data about the user base and transaction patterns of digital payments systems in India, as these systems were pushed to extreme use by the circumstances. The majority of this data is available with the National Payments Corporation of India and the Reserve Bank of India. CIS requests the authorities concerned to consider opening up this data for analysis and discussion by the public at large and by experts in particular, before any specific policy and regulatory decisions are taken towards advancing the proliferation of digital payments in India. This is a crucial opportunity for the Ministry of Finance to embrace (open) data-driven regulation and policy-making.

    3.2. While the report makes a reference to the European General Data Protection Directive, it does not refer to any substantive provisions in the Directive that may be relevant to digital payments. Aside from the recommendation that privacy protections around the purpose limitation principle be relaxed so that payment service providers are allowed to process data to improve fraud monitoring and anti-money laundering services, the report is silent on the significant privacy and data protection concerns posed by digital payments services. CIS strongly warns that the existing data protection and security regulations under the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules are woefully inadequate in their scope and application to effectively deal with the potential privacy concerns posed by digital payments applications and services. Some key privacy issues that must be addressed, either under a comprehensive data protection legislation or a sector-specific financial regulation, are listed below:

    • Consent must be specific, informed and unambiguous, obtained through a clear affirmative action by the data subject, based on a genuine choice, and accompanied by an option to opt out at any stage.
    • Data subjects should have a clear and easily enforceable right to access and correct their data.
    • Data subjects should have the right to restrict the usage of their data in circumstances such as inaccuracy of the data, use for an unlawful purpose, or the data no longer being required to fulfil the original purpose.

    3.3. The initial recommendation of the report is to “[m]ake regulation of payments independent from the function of central banking” (page 22). This involves a fundamental transformation of the payment and settlement system in India and its regulation. We submit that a decision regarding a transformation of such scale and implications should be taken only after a more comprehensive policy discussion, especially one involving a wider range of stakeholders. The report itself notes that “[d]igital payments also have the potential of becoming a gateway to other financial services such as credit facilities for small businesses and low-income households” (page 32). Thus, a clear functional, and hence regulatory, separation between the (digital) payments industry and the lending/borrowing industry may be neither effective nor desirable. Global experience tells us that digital transaction data, along with other alternative data, are fast becoming the basis for the provision of financial and other services, by both banking and non-banking (payments) companies. We appeal to the Ministry of Finance to adopt a comprehensive and concerted approach to regulating, enabling competition in, and upholding consumers’ rights in the banking sector at large.

    3.4. The report recognises that “banking as an activity is separate from payments, which is more of a technology business” (page 154). Contemporary banking and payment businesses are, however, both primarily technology businesses, in which information technology is deployed intimately to extract and process financial transaction data and to drive asset management decisions. Further, with payment businesses (such as pre-paid instruments) offering returns on deposited money via other means (such as cashbacks), and potentially competing and/or collaborating with established banks to use financial transaction data to drive lending decisions, including but not limited to micro-loans, it appears unproductive to create a separation between banking as an activity and payments as an activity merely in terms of the respective technology intensity of these sectors. CIS firmly recommends that regulation of these financial services and activities be undertaken in a technology-agnostic manner, and that similar regulatory regimes be applied to entities offering similar services, irrespective of their technology intensity or choice.

    3.5. The report highlights two major shortcomings of the current regulatory regime for payments. Firstly, “the law does not impose any obligation on the regulator to promote competition and innovation in the payments market” (page 153). It appears to us that the regulator’s role should not be to promote market expansion and innovation but to ensure and oversee competition. We believe that the current regulator should focus on regulating the existing market, and that the work of expanding the digital payments market in particular, and the digital financial services market in general, should be carried out by another government agency, as it otherwise creates a conflict of interest for the regulator. Secondly, the report mentions that the Payment and Settlement Systems Act does not “focus the regulatory attention on the need for consumer protection in digital payments”, and then notes that a “provision was inserted to protect funds collected from customers” in 2015 (page 153). This indicates that the regulator already has the responsibility to ensure consumer protection in digital payments. The purview and modalities of this function will, of course, need discussion and change as digital payments grow.

    3.6. The report identifies the high cost of cash as a key reason for the government’s policy push towards digital payments. Further, it mentions that a “sample survey conducted in 2014 across urban and rural neighbourhoods in Delhi and Meerut, shows that despite being keenly aware of the costs associated with transacting in cash, most consumers see three main benefits of cash, viz. freedom of negotiations, faster settlements, and ensuring exact payments” (page 30). It further notes that “[d]igital payments have significant dependencies upon power and telecommunications infrastructure. Therefore, the roll out of robust and user friendly digital payments solutions to unelectrified areas/areas without telecommunications network coverage, remains a challenge.” CIS much appreciates the discussion of the barriers to universal adoption and rollout of digital payments in the report, and appeals to the Ministry of Finance to undertake a more comprehensive study of the key investments required by the Government of India to ensure that digital payments become ubiquitously viable as well as satisfy the demands of a vast range of consumers that India has. The estimates about investment required to create a robust digital payment infrastructure, cited in the report, provide a great basis for undertaking studies such as these.

    3.7. CIS is very encouraged to see the report highlighting that “[w]ith the rising number of users of digital payment services, it is absolutely necessary to develop consumer confidence on digital payments. Therefore, it is essential to have legislative safeguards to protect such consumers in-built into the primary law.” We second this recommendation and would add that financial transaction data should be governed under a common data protection and privacy regime, without making any distinction between data collected by banking and non-banking entities.

    3.8. We are, however, very discouraged to see the plainly incorrect use of the term “Open Access” in this report in the context of a payment system disallowing service when the client wants to transact money with a specific entity [4]. This is not an uncommon anti-competitive measure adopted by various platform players and service providers to prevent users from using competing products (such as not allowing competing apps in the app store controlled by one software company). Not only is “Open Access” an inappropriate term to describe the negation of such anti-competitive behaviour, its usage in this context also undermines its accepted meaning and creates confusion regarding the recommendation being proposed by the report. The closest analogy to the report’s recommendation would perhaps be the principle of “network neutrality”, under which the network provider does not discriminate between the data packets it processes, either in terms of price or speed.

    3.9. A major recommendation by the report involves creation of “a fund from savings generated from cash-less transactions … by the Central Government,” which will use “the trinity of JAM (Jan Dhan, Adhaar, Mobile) [to] link financial inclusion with social protection, contributing to improved Social and Financial Security and Inclusion of vulnerable groups/ communities” (pages 160-161). This amounts to making Aadhaar a mandatory ID for financial inclusion of citizens, especially marginalised and vulnerable ones, and is in direct contradiction to the government’s statements regarding the optional nature of the Aadhaar ID, as well as the orders of the Supreme Court on this topic.

    3.10. The report recommends that “Aadhaar should be made the primary identification for KYC with the option of using other IDs for people who have not yet obtained Aadhaar” (page 163) and further that “Aadhaar eKYC and eSign should be a replacement for paper based, costly, and shared central KYC registries” (page 162). Not only would these measures imply making Aadhaar a mandatory ID for undertaking any legal activity in the country, they also assume that the UIDAI has verified and audited the personal documents submitted by Aadhaar number holders during enrollment. A mandate for replacement of the paper-based central KYC agencies will only remove a much needed redundancy in the identity verification infrastructure of the government.

    3.11. The report suggests that “[t]ransactions which are permitted in cash without KYC should also be permitted on prepaid wallets without KYC” (pages 164-165). This seems to negate the reality that physical verification remains one of the most authoritative identity verification processes for a natural person, apart perhaps from DNA testing. Establishing full equivalency of procedure between a presence-less transaction and one involving a physically present person making the payment would thus only remove the relatively greater security precautions applying to the former, and open up possibilities of fraud.

    3.12. Continuing from the previous point, the report recommends promotion of “Aadhaar based KYC where PAN has not been obtained” and making “quoting Aadhaar compulsory in income tax return for natural persons” (page 163). Both these measures imply a replacement of the PAN by Aadhaar in the long term, and a sharp reduction in the growth of new PAN holders in the short term. We appeal for this recommendation to be reconsidered, as integration of functionally separate national critical information infrastructures (such as PAN and Aadhaar) into a single unified and centralised system (such as Aadhaar) engenders massive national and personal security threats.

    3.13. The report suggests the establishment of “a ranking and reward framework” to recognise and encourage the best-performing state/district/agency in the proliferation of digital payments. It appears to us that such a framework will create an environment of competition among the entities concerned, which apart from its benefits may also have its costs. For example, incentivising the quick rollout of digital payment avenues by state governments and various government agencies may lead to implementation without sufficient planning, coordination with stakeholders, and precautions regarding data security and privacy. The provision of central support for digital payments should be carried out in an environment of cooperation, not competition.

    3.14. CIS welcomes the recommendation by the report to generate greater awareness about the cost of cash, including by ensuring that “large merchants including government agencies should account and disclose the cost of cash collection and cash payments incurred by them periodically” (page 164). It is, however, not clear to whom such periodic disclosures should be made. We would like to add that this awareness building must simultaneously focus on making public how different entities shoulder these costs. Further, for reasons of comparison and evidence-driven policy making, it is necessary that data for equivalent variables are also made open for digital payments: the total and disaggregated costs, and what proportion of these costs is shouldered by which entities.

    3.15. The report acknowledges that “[t]oday, most merchants do not accept digital payments” and goes on to recommend “that the Government should seize the initiative and require all government agencies and merchants where contracts are awarded by the government to provide at-least one suitable digital payment option to its consumers and vendors” (page 165). This requirement to offer a digital payment option will only introduce an additional economic barrier for merchants bidding for government contracts. We appeal to the Ministry of Finance to reconsider this approach of raising the costs of non-digital payments to incentivise the proliferation of digital payments, and instead lower the existing economic and other barriers to digital payments that keep merchants away. The adoption of digital payments must decrease, not increase, costs for merchants and end-users.

    3.16. As the report was submitted on December 09, 2016, and was made public only on December 27, 2016, it would have been much appreciated if at least a month-long window had been provided to study and comment on the report, instead of fifteen days. This is especially crucial as the recently implemented demonetisation and the subsequent banking and fiscal policy decisions taken by the government have rapidly transformed the state and dynamics of the payments system landscape in India in general, and digital payments in particular.

    Endnotes

    [1] See: http://cis-india.org/.

    [2] See: http://finmin.nic.in/reports/Note-watal-report.pdf and http://finmin.nic.in/reports/watal_report271216.pdf.

    [3] See: http://finmin.nic.in/cancellation_high_denomination_notes.pdf.

    [4] Open Access refers to “free and unrestricted online availability” of scientific and non-scientific literature. See: http://www.budapestopenaccessinitiative.org/read.

     

    Comments on the Proposed ICANN Community Anti-Harassment Policy

    by Padma Venkataraman, Rohini Lakshané, Sampada Nayak and Vidushi Marda — last modified Jan 13, 2017 03:56 PM
    ICANN sought community input on the Proposed ICANN Community Anti-Harassment Policy on 7 November 2016. In response to this the Centre for Internet & Society (CIS) submitted its comments.

    We at CIS are grateful for the opportunity to comment on the proposed ICANN Community Anti-Harassment Policy (“Policy”). We provide our specific comments to the Policy below, in three sections. The first section addresses the Terms of Participation, the second deals with the Reporting and Complaint Procedure, and the third places on record our observations on questions and issues for further consideration which have not been covered by the Policy.

    Besides various other observations, CIS broadly submitted:

    • The attempt to provide an exhaustive definition of “Specified Characteristics” results in its meaning being unclear and exclusionary.
    • CIS strongly supports the phrase “including, but not limited to” that is followed by a bulleted list of inappropriate conduct.
    • The word “consent” is entirely missing from the draft policy even though the deciding factor in the “appropriateness” of an act or conduct is active and explicit consent to the act by both/ all individuals involved.
    • There is a need for clarity on the communication platforms covered. The current Policy fails to specify instances of face-to-face and online communications.
    • The policy fails to account for a body of persons (as is provided for in the IETF policy) for the redressal of harassment complaints.
    • The provision for an informal resolution of a harassment issue is problematic as it could potentially lead to negative consequences for the complainant.
    • The Ombudsperson’s discretion in the determination of remedial action is detrimental to transparency and accountability.
    • The Policy in its current form lacks provisions for ensuring privacy and confidentiality of the complainant, as well as interim relief while the Ombudsperson is looking into the complaint.

    Read the Complete Submission here

    Social Media Monitoring

    by Amber Sinha — last modified Jan 16, 2017 02:23 PM
    We see a trend of social media and communication monitoring and surveillance initiatives in India which have the potential to create a chilling effect on free speech online and raise questions about the privacy of individuals. In this paper, Amber Sinha looks at social media monitoring as a tool for surveillance and the current state of social media surveillance in India, and evaluates how the existing regulatory framework in India may deal with such practices in future.

     

    Social Media Monitoring: Download (PDF)


    Introduction

    In 2014, the Government of India launched the much-lauded and popular citizen outreach website called MyGov.in. A press release by the government announced that it had roped in the global consulting firm PwC to assist in a data mining exercise to process and filter key points emerging from debates on MyGov.in. While this was a welcome move, the release also mentioned that the government intended to monitor social media sites in order to gauge popular opinion. Further, earlier this year, the government set up the National Media Analytics Centre (NMAC) to monitor blogs, media channels, news outlets and social media platforms. The tracking software used by NMAC will generate tags to classify posts and comments on social media into negative, positive and neutral categories, paying special attention to “belligerent” comments, and will also look at past patterns of posts. A project called NETRA, which would intercept and analyse internet traffic using pre-defined filters, was reported in the media a few years ago. Alongside, we see other initiatives which intend to use social media data for predictive policing purposes, such as CCTNS and Social Media Labs.
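    The negative/positive/neutral tagging attributed to NMAC can be illustrated with a toy lexicon-based classifier. The word lists and scoring rule below are entirely hypothetical; nothing is publicly known about the actual software.

```python
# Illustrative lexicon-based sentiment tagger of the kind described above.
# The word lists and the simple voting rule are hypothetical.
NEGATIVE = {"corrupt", "failure", "scam", "protest"}
POSITIVE = {"welcome", "progress", "success", "support"}

def tag_comment(text: str) -> str:
    """Classify a comment as 'positive', 'negative' or 'neutral'."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(tag_comment("A welcome step towards progress"))  # positive
print(tag_comment("Another scam and policy failure"))  # negative
```

    A real deployment would rely on trained language models rather than fixed word lists, but the output categories would be the same three tags.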

    Thus, we see a trend of social media and communication monitoring and surveillance initiatives announced by the government which have the potential to create a chilling effect on free speech online and raise questions about the privacy of individuals. Various commentators have raised concerns about the legal validity of such programmes and whether they violate the fundamental rights to privacy and free expression, as well as the existing surveillance laws in India. The lack of legislation governing these programmes often translates into an absence of transparency and due procedure. Further, a lot of personal communication now exists in the public domain, which renders futile the traditional principles that govern interception and monitoring of personal communications. In the last few years, the blogosphere and social media websites in India have also changed and become platforms for greater dissemination of political content, often accompanied by significant vitriol, ‘trolling’ and abuse. Thus, we see greater policing of public or semi-public spaces online. In this paper, we look at social media monitoring as a tool for surveillance and the current state of social media surveillance in India, and evaluate how the existing regulatory framework in India may deal with such practices in future.

     

    The Design & Technology behind India’s Surveillance Programmes

    by Udbhav Tiwari last modified Jan 20, 2017 03:56 PM
    There has been an exponential growth in the pervasive presence of technology in the daily life of the average Indian citizen over the past few years. While leading to a manifold increase in convenience and connectivity, these technologies also create far greater potential for surveillance by state actors.

    While the legal and policy avenues of state surveillance in India have been analysed by various organisations, there is very little available information about the technology and infrastructure used to carry out this surveillance. This appears to be largely, according to the government, due to reasons of national security and sovereignty.[1] This blog post will attempt to paint a picture of the technological infrastructure being used to carry out state surveillance in India.

    Background
    The revelations by Edward Snowden about mass surveillance in mid-2013 led to an explosion of journalistic interest in surveillance and user privacy in India.[2] The reports and coverage from this period, leading up to early 2015, serve as the main authority for the information presented in this blog post. The lack of information from official government sources, together with the diminished public spotlight on surveillance since then, has meant that little or no new information has emerged about India’s surveillance regime since this period. However, given the long-term nature of these programmes and the vast amounts of time it takes to set them up, it is fairly certain that the programmes detailed below remain the primary bedrock of state surveillance in the country, albeit having become operational and inter-connected only in the past two years.

    The technology being used to carry out surveillance in India over the past five years is largely an upgraded, centralised and substantially more powerful version of the surveillance techniques followed in India since the advent of telegraph and telephone lines: the tapping and recording of information in transit.[3] The fact that none of the modern surveillance programmes detailed below has required any new legislation, law, amendment or policy that was not already in force prior to 2008 is the most telling example of this. The legal and policy implications of these programmes have been covered in previous articles by the Centre for Internet & Society, which can be found here,[4] here[5] and here.[6] Therefore, this post will concentrate solely on the technological design and infrastructure being used to carry out surveillance, along with any new developments in this field that the three sources mentioned do not cover from a technological perspective.

    The Technology Infrastructure behind State Surveillance in India

    The programmes of the Indian Government (in public knowledge) that are being used to carry out state surveillance are broadly six in number. These exclude specific surveillance technology being used by independent arms of the government. Many of the programmes listed below have overlapping jurisdictions and in some instances are cross-linked with each other to provide greater coverage:

    1. Central Monitoring System (CMS)
    2. National Intelligence Grid (NAT-GRID)
    3. Lawful Intercept And Monitoring Project (LIM)
    4. Crime and Criminal Tracking Network & Systems (CCTNS)
    5. Network Traffic Analysis System (NETRA)
    6. New Media Wing (Bureau of New and Concurrent Media)

    The post will look at the technological underpinning of each of these programmes and their operational capabilities, both in theory and practice.

    Central Monitoring System (CMS)

    The Central Monitoring System (CMS) is the premier mass surveillance programme of the Indian Government, and has been in the planning stages since 2008.[7] Its primary goal is to replace the current on-demand availability of analog and digital data from service providers with “central and direct” access, involving no third party between the captured information and the government authorities.[8] While the system is currently operated by the Centre for Development of Telematics, the unreleased three-stage plan envisages a centralised location (physically and legally) to govern the programme. The CMS is primarily operated by the Telecom Enforcement and Resource Monitoring Cell (TERM) within the Department of Telecom, which also has a larger mandate of ensuring radiation safety and spectrum compliance.

    The technological infrastructure behind the CMS largely consists of Telecom Service Providers (TSPs) and Internet Service Providers (ISPs) in India being mandated to integrate Interception Store & Forward (ISF) servers with the Lawful Interception Systems required by their licences. Once installed, these ISF servers are connected to the Regional Monitoring Centres (RMCs) of the CMS, set up according to geographical location and population. Finally, each Regional Monitoring Centre is connected to the Central Monitoring System itself, essentially allowing the collection, storage, access and analysis of data from all across the country in a centralised manner. The data collected by the CMS includes voice calls, SMS, MMS, fax communications on landlines, CDMA, video calls, GSM, and even general, unencrypted data travelling across the internet using the standard TCP/IP protocol.[9]
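    The three-tier topology described above (ISF servers at providers feeding regional RMCs, which feed the central CMS) can be sketched as a minimal data-flow model. All class names, field names and sample records below are illustrative, not drawn from any official specification.

```python
# A toy model of the reported CMS hierarchy: provider-side ISF servers
# forward intercepted records to a regional centre, which forwards them,
# tagged with its region, to a single central store.
class CentralStore:
    def __init__(self):
        self.records = []

    def ingest(self, record):
        self.records.append(record)

class RegionalCentre:
    def __init__(self, region, central):
        self.region = region
        self.central = central

    def forward(self, record):
        # Tag the record with the region before centralising it.
        self.central.ingest({**record, "region": self.region})

class ISFServer:
    def __init__(self, provider, rmc):
        self.provider = provider
        self.rmc = rmc

    def intercept(self, kind, payload):
        self.rmc.forward({"provider": self.provider, "kind": kind, "payload": payload})

cms = CentralStore()
rmc_north = RegionalCentre("north", cms)
isf = ISFServer("example-tsp", rmc_north)
isf.intercept("sms", "hello")
print(cms.records[0]["region"])  # north
```

    The point of the sketch is the direction of flow: once the ISF layer is in place at every provider, all interception data reaches one central store without any provider-side gatekeeping.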

    With regard to the analysis of this data, Call Detail Record (CDR) analysis, data mining, machine learning and predictive algorithms have allegedly been implemented in various degrees across this network.[10] This allows state actors to pre-emptively gather a vast amount of information from across the country, perform analysis on it, and then possibly even take action on the basis of this information by directly approaching the entity (currently TERM under C-DOT) operating the system.[11] The system reached full functionality in mid-2016, with over 22 Regional Monitoring Centres functional and the system itself being ‘switched on’ in gradual phases following trials.[12]

    National Intelligence Grid (NATGRID)

    The National Intelligence Grid (NATGRID) is a semi-functional[13] integrated intelligence grid that links the stored records and databases of several government entities in order to collect data, decipher trends and provide real-time (sometimes even predictive) analysis of data gathered across law enforcement, espionage and military agencies. The programme intends to provide 11 security agencies real-time access to 21 citizen data sources to track terror activities across the country. The citizen data sources include bank account details, telephone records, passport data and vehicle registration details, the National Population Register (NPR) and the Immigration, Visa, Foreigners Registration and Tracking System (IVFRT), among other types of data, all of which are already present in various government records across the country.[14]

    Data mining and analytics are used to process the huge volumes of data generated from the 21 data sources so as to analyse events, match patterns and track suspects, with big data analytics[15] being the primary tool for effectively utilising the project, which was founded to prevent a recurrence of the November 2008 terrorist attacks in Mumbai. The agencies that will have access to this data collection and analytics platform are the Central Board of Direct Taxes (CBDT), Central Bureau of Investigation (CBI), Defence Intelligence Agency (DIA), Directorate of Revenue Intelligence (DRI), Enforcement Directorate (ED), Intelligence Bureau (IB), Narcotics Control Bureau (NCB), National Investigation Agency (NIA), Research and Analysis Wing (RAW), the Military Intelligence of the Assam and Jammu and Kashmir regions, and finally the Home Ministry itself.[16]

    As of late 2015, the project had remained stuck in bureaucratic red tape, with not even the first phase of the four-stage project complete. The primary reason for this was the change of government in 2014, along with apprehensions about breaches of security and misuse of information from agencies such as the IB, R&AW, CBI and CBDT.[17] However, the office of the NATGRID is now under construction in South Delhi, and while the agency claims an exemption under the RTI Act as a Schedule II organisation, its scope and operational reach have only increased with each passing year.

    Lawful Intercept And Monitoring Project

    Lawful Intercept and Monitoring (LIM) is a secret mass electronic surveillance programme operated by the Government of India for monitoring internet traffic, communications, web browsing and all other forms of internet data. It has been run primarily by the Centre for Development of Telematics (C-DoT) under the Ministry of Telecom since 2011.[18]

    The LIM programme consists of installing interception, monitoring and storage systems at international gateways and internet exchange hubs, as well as at ISP nodes across the country. This is done independently of ISPs, with the entire hardware and software apparatus being operated by the government. The hardware is installed between the provider edge (PE) router and the core network, allowing direct access to all traffic flowing through the ISP. It is the primary programme for internet traffic surveillance in India, allowing indiscriminate monitoring of all traffic passing through the ISP for as long as the government desires, without any oversight by courts and sometimes without the knowledge of ISPs.[19] One of the most potent capabilities of the LIM project is live, automated keyword search, which allows the government to scan all the information passing through the surveilled internet pipe for certain key phrases, in both text and audio. Once these key phrases are matched to the data travelling through the pipe, using search algorithms developed uniquely for the project, the system triggers automatic routines which range from targeted surveillance of the source of the data to raising an alarm with the appropriate authorities.
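    The kind of live keyword matching described above can be sketched as a simple stream scanner. The watchlist below reuses the publicly reported “hot words”; the matching logic is purely illustrative, since a production system would compile the phrases into a single-pass automaton (e.g. Aho-Corasick) over raw traffic rather than use a regular expression per chunk.

```python
import re

# Hypothetical watchlist, using hot words reported in the public domain.
WATCHLIST = ["attack", "bomb", "blast"]
PATTERN = re.compile("|".join(map(re.escape, WATCHLIST)), re.IGNORECASE)

def scan_stream(chunks):
    """Yield (chunk_index, matched_phrase) for each hit in a traffic stream."""
    for i, chunk in enumerate(chunks):
        for match in PATTERN.finditer(chunk):
            yield i, match.group(0).lower()

hits = list(scan_stream(["weather is fine", "planning a BLAST tonight"]))
print(hits)  # [(1, 'blast')]
```

    In the system described by the reports, a hit like this would trigger an automatic routine: flagging the source for targeted surveillance or raising an alarm with the authorities.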

    LIM systems are often also operated by the ISPs themselves, on behalf of the government. In such cases the ISP operates the equipment, including hardware upkeep, and provides direct access to government agencies upon request. Reports have stated that the legal procedures laid down in law (including nodal officers and formal requests for information) are rarely followed[20] in both these cases, allowing unfettered access to petabytes of user data on a daily basis through these programmes.

    Crime and Criminal Tracking Network & Systems (CCTNS)

    The Crime and Criminal Tracking Network & Systems (CCTNS) is a planned network that allows for the digital collection, storage, retrieval, analysis, transfer and sharing of information relating to crimes and criminals across India.[21] It is supposed to operate primarily at two levels: between police stations, and between the various governance structures involved in crime detection and resolution across the country, with access also being provided to intelligence and national security agencies.[22]

    CCTNS aims to integrate all the necessary data and records surrounding a crime (including past records) into a Core Application Software (CAS) that has been developed by Wipro.[23] The software includes the ability to digitise FIR registration, investigations and charge sheets, along with the ability to set up a centralised citizen portal for interacting with relevant information. The project aims to use the CAS interface across 15,000 police stations in the country, with up to 5,000 additional deployments. The project has been planned since 2009, with the first complete statewide implementation going live only in August 2016, in Maharashtra.[24]

    While seemingly harmless at face value, the project’s true power lies in two main possible uses. The first is its ability to profile individuals using their past conduct, which can now include all stages of an investigation and not just a conviction by a court of law, a possibility that raises massive privacy concerns. The second is that the CCTNS database will not be an isolated one but will be connected to the NATGRID and other such databases operated by organisations such as the National Crime Records Bureau, allowing the information in the CCTNS to be leveraged into carrying out more invasive surveillance of the public at large.[25]

    Network Traffic Analysis System (NETRA)

    NETRA (NEtwork TRaffic Analysis) is a real-time surveillance software developed by the Centre for Artificial Intelligence and Robotics (CAIR) at the Defence Research and Development Organisation (DRDO). The software has apparently been fully functional since early 2014 and is primarily used by India’s intelligence agencies, the Intelligence Bureau (IB) and the Research and Analysis Wing (RAW), with some capacity being reserved for domestic agencies under the Home Ministry.

    The software is meant to monitor internet traffic in real time, covering both voice and textual forms of data communication, especially social media, communication services and web browsing. Each agency was initially allocated 1000 nodes running NETRA, with each node having the capacity to analyse 300 GB of information per second, giving each agency a capacity of around 300 TB of information processing per second.[26] This capacity is largely available only to agencies dealing with external threats, with domestic agencies being allocated far lower capacities, depending on demand. The software itself is portable, and given sufficient hardware capacity nothing prevents it from being used in the CMS, the NATGRID or LIM operations.
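    The per-agency figure quoted above follows from simple arithmetic (decimal units assumed, i.e. 1 TB = 1000 GB):

```python
# Back-of-envelope check of the per-agency NETRA capacity quoted above.
nodes = 1000
per_node_gb_per_sec = 300                      # reported per-node throughput
total_tb_per_sec = nodes * per_node_gb_per_sec / 1000
print(total_tb_per_sec)  # 300.0
```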

    There has been a sharp and sudden absence of public domain information regarding the software since 2014, making any statements about its current form or evolution mere conjecture.

    Analysis of the Collected Data

    Independent of the capacity of such programmes, their real-world operations work in a largely similar manner to mass surveillance programmes in the rest of the world, with the majority of capacity being focused on decryption and storage of data, accompanied by rudimentary data analytics.[27] Real-time keyword searches for hot words like 'attack', 'bomb', 'blast' or 'kill' in the various communication streams are the only real capabilities of the system that have been discussed in the public domain,[28] which, along with the limited capacity of such programmes[29] (300 TB), is indicative of the basic level of analysis carried out on captured data. Any additional technical details about how India’s surveillance programmes use their captured data are absent from the public domain, but they can be presumed, at best, to operate to similar standards as global practices.[30]

    A Global Comparison of Capacities

    As can be seen from the post so far, remarkably little information about India’s surveillance programmes exists in the public domain from a technical operations or infrastructure perspective. In fact, post late 2014, there is a stark lack of information about any developments in the mass surveillance field. All of the information that is available about the technical capabilities of the CMS, NATGRID or LIM is either antiquated (pre-2014) or concerns (comparatively) mundane details like headquarters construction clearances.[31] Whether this is a result of the general reduction in public and media attention towards mass surveillance[32] or of actions taken by the government on “national security” grounds under the Official Secrets Act, 1923[33] can only be conjecture.

    However, given the information available (mentioned previously in this article), a comparison points to a rather lopsided position relative to international mass surveillance programmes. While the legal provisions in India governing surveillance are among the most wide-ranging, discretionary and opaque in the world,[34] the technical capabilities of its programmes seem archaic by modern standards. The only real comparison that can be made draws on public reporting surrounding the DRDO NETRA project around 2012 and 2013. The government held a competition between the DRDO’s internally developed software “Netra” and NTRO’s “Vishwarupal”, which was developed in collaboration with Paladion Networks.[35] The winning software, NETRA, was said to have a capacity of 300 GB per node, with a total of 1000 sanctioned nodes.[36] This capacity of 300 TB for the entire system, while seemingly powerful, is a minuscule fragment of the 83 petabytes of traffic predicted to be generated in India per day.[37] In comparison, the PRISM programme run by the National Security Agency in 2013 (the same time that NETRA was tested) had a capacity of over 5 trillion gigabytes of storage,[38] many magnitudes greater than the capacity of the DRDO software. Similar statistics can be seen for the various other programmes of the NSA and the Five Eyes alliance,[39] all of which operated at far greater capacities[40] and were held to be minimally effective.[41] The questions this poses about the effectiveness, reliability and proportionality of the Indian surveillance programmes can never truly be answered, due to the lack of information about their capacity and technology, as highlighted in this article.
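    A back-of-envelope calculation makes the mismatch concrete: the quoted 300 TB system capacity set against the cited 83 petabytes of daily traffic (decimal units assumed, i.e. 1 PB = 1000 TB):

```python
# Comparing NETRA's reported 300 TB capacity with predicted daily traffic.
capacity_tb = 300
daily_traffic_tb = 83 * 1000      # 83 PB expressed in TB
share = capacity_tb / daily_traffic_tb
print(f"{share:.2%}")  # 0.36%
```

    On these figures the system could cover well under one percent of a single day's traffic, which supports the article's characterisation of the capability gap.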
With regard to criminal databases used in surveillance, such as the NATGRID, comparisons with equivalent systems both domestically (especially in the USA) and internationally (such as the one run by Interpol)[42] are impossible, because the NATGRID is not yet fully operational.[43]

    Conclusion

    Even if we were to ignore the issues in principle with mass surveillance, the pervasive, largely unregulated and mass-scale surveillance being carried out in India using the tools and technologies detailed above has various technical and policy failings. It is imperative that transparency, accountability and legal scrutiny be made an integral part of the security apparatus in India. The risks of security breaches, politically motivated actions and foreign state hacking only increase in the absence of public accountability mechanisms. Further, opening up the technologies used for these operations to regular security audits would also improve their resilience to such attacks.


    [1] http://cis-india.org/internet-governance/blog/the-constitutionality-of-indian-surveillance-law

    [2] http://india.blogs.nytimes.com/2013/07/10/how-surveillance-works-in-india/

    [3] https://www.privacyinternational.org/node/818

    [4] http://cis-india.org/internet-governance/blog/state-of-cyber-security-and-surveillance-in-india.pdf

    [5] http://cis-india.org/internet-governance/blog/security-surveillance-and-data-sharing.pdf

    [6] http://cis-india.org/internet-governance/blog/paper-thin-safeguards.pdf

    [7] http://pib.nic.in/newsite/PrintRelease.aspx?relid=54679 & http://www.dot.gov.in/sites/default/files/English%20annual%20report%202007-08_0.pdf

    [8] http://ijlt.in/wp-content/uploads/2015/08/IJLT-Volume-10.41-62.pdf

    [9] http://www.thehindu.com/scitech/technology/in-the-dark-about-indias-prism/article4817903.ece

    [10] http://cis-india.org/internet-governance/blog/india-centralmonitoring-system-something-to-worry-about

    [11] https://www.justice.gov/sites/default/files/pages/attachments/2016/07/08/ind195494.e.pdf

    [12] http://www.datacenterdynamics.com/content-tracks/security-risk/indian-lawful-interception-data-centers-are-complete/94053.fullarticle

    [13] http://natgrid.attendance.gov.in/ [Attendance records at the NATGRID Office!]

    [14] http://articles.economictimes.indiatimes.com/2013-09-10/news/41938113_1_executive-order-nationalintelligence-grid-databases

    [15] http://www.business-standard.com/article/current-affairs/natgrid-to-use-big-data-analytics-to-track-suspects-1

    [16] http://sflc.in/wp-content/uploads/2014/09/SFLC-FINAL-SURVEILLANCE-REPORT.pdf

    [17] http://indiatoday.intoday.in/story/natgrid-gets-green-nod-but-hurdles-remain/1/543087.html

    [18] http://www.thehindu.com/news/national/govt-violates-privacy-safeguards-to-secretly-monitor-internet-traffic/article5107682.ece

    [19] ibid

    [20] http://www.thehoot.org/story_popup/no-escaping-the-surveillance-state-8742

    [21] http://ncrb.gov.in/BureauDivisions/CCTNS/cctns.htm

    [22] ibid

    [23] http://economictimes.indiatimes.com/news/politics-and-nation/ncrb-to-connect-police-stations-and-crime-data-across-country-in-6-months/articleshow/45029398.cms

    [24] http://indiatoday.intoday.in/education/story/crime-criminal-tracking-network-system/1/744164.html

    [25] http://www.dailypioneer.com/nation/govt-cctns-to-be-operational-by-2017.html

    [26] http://articles.economictimes.indiatimes.com/2012-03-10/news/31143069_1_scanning-internet-monitoring-system-internet-data

    [27] Surveillance, Snowden, and Big Data: Capacities, consequences, critique: http://journals.sagepub.com/doi/pdf/10.1177/2053951714541861

    [28] http://www.thehindubusinessline.com/industry-and-economy/info-tech/article2978636.ece

    [29] See previous section in the article “NTRO”

    [30] Van Dijck, José. "Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology." Surveillance & Society 12.2 (2014): 197.

    [31] http://www.dailymail.co.uk/indiahome/indianews/article-3353230/Nat-Grid-knots-India-s-delayed-counter-terror-programme-gets-approval-green-body-red-tape-stall-further.html

    [32] http://cacm.acm.org/magazines/2015/5/186025-privacy-behaviors-after-snowden/fulltext

    [33] https://freedomhouse.org/report/freedom-press/2015/india

    [34] http://blogs.wsj.com/indiarealtime/2014/06/05/indias-snooping-and-snowden/

    [35] http://articles.economictimes.indiatimes.com/2012-03-10/news/31143069_1_scanning-internet-monitoring-system-internet-data

    [36] http://economictimes.indiatimes.com/tech/internet/government-to-launch-netra-for-internet-surveillance/articleshow/27438893.cms

    [37] http://trak.in/internet/indian-internet-traffic-8tbps-2017/

    [38] http://www.economist.com/news/briefing/21579473-americas-national-security-agency-collects-more-information-most-people-thought-will

    [39] http://www.washingtonsblog.com/2013/07/the-fact-that-mass-surveillance-doesnt-keep-us-safe-goes-mainstream.html

    [40] http://www.washingtonpost.com/wp-srv/special/politics/prism-collection-documents/

    [41] Supra Note 35

    [42] http://www.papillonfoundation.org/information/global-crime-database/

    [43] http://www.thehindu.com/opinion/editorial/Revive-NATGRID-with-safeguards/article13975243.ece

    Privacy after Big Data - Workshop Report

    by Amber Sinha — last modified Jan 27, 2017 01:09 AM
    The Centre for Internet and Society (CIS) and the Sarai programme, CSDS, organised a workshop on 'Privacy after Big Data: What Changes? What should Change?' on Saturday, November 12, 2016 at Centre for the Study of Developing Societies in New Delhi.

    This workshop aimed to build a dialogue around some of the key government-led big data initiatives in India and elsewhere that are contributing significant new challenges and concerns to the ongoing debates on the right to privacy. It was an open event.

    In this age of big data, discussions about privacy are intertwined with the use of technology and the data deluge. Though big data holds enormous value for driving innovation and contributing to productivity and efficiency, privacy concerns have gained significance in the dialogue around the regulated use of data and the means by which individual privacy might be compromised, through means such as surveillance, or protected. Big data creates tremendous opportunities in varied sectors, from financial technology, governance, education, health and welfare schemes to smart cities. With the UID project re-animating the right to privacy debate in India, and the financial technology ecosystem growing rapidly, striking a balance between the benefits of big data and privacy concerns is a critical policy question that demands public dialogue and research to inform evidence-based decisions. Moreover, with the advent of potential big data initiatives like the ambitious Smart Cities Mission under the Digital India Scheme, which would rely on harvesting large data sets and using analytics in city subsystems to make public utilities and services efficient, the tasks of ensuring data security on the one hand and protecting individual privacy on the other become harder.

    This workshop sought to discuss some of the emerging problems arising from the advent of big data and possible ways to address them. The workshop began with Amber Sinha of CIS and Sandeep Mertia of Sarai introducing the topic of big data and its implications for privacy. Both speakers attempted to define big data, gave a brief history of the evolution of the term, and raised questions about how we understand it. Dr. Usha Ramanathan spoke on the right to privacy in the context of the ongoing Aadhaar case, and Vipul Kharbanda introduced the concept of habeas data as a possible solution to the privacy problems posed by big data. Amelia Andersdotter discussed national centralised digital ID systems and their evolution in Europe, often operating at a cross-functional scale, and highlighted their implications for discussions on data protection, welfare governance, and exclusion from public and private services. Srikanth Lakshmanan spoke of the issues with technology and privacy, and possible technological solutions. Dr. Anupam Saraph discussed the rise of digital banking and Aadhaar-based payments and their potential use for corrupt practices. Astha Kapoor of Microsave spoke about her experience of implementing digital money solutions in rural India.

    Post lunch, Dr. Anja Kovacs and Mathew Rice spoke on the rise of mass communication surveillance across the world and the evolving challenges of regulating surveillance by government agencies. Mathew also spoke of privacy movements by citizens and civil society in various regions. In the final speaking session, Apar Gupta and Kritika Bhardwaj traced the history of jurisprudence on the right to privacy and the existing regulations and procedures. In the final session, the participants discussed various possible solutions to privacy threats from big data and identity projects, including better regulation, new approaches such as harms-based regulation and privacy risk assessments, and conceiving privacy as a horizontal right. The workshop ended with a vote of thanks from the organizers.

    The agenda for the event can be accessed here, and the transcript is available here.

    Comparison of General Data Protection Regulation and Data Protection Directive

    by Aditi Chaturvedi and Edited by Leilah Elmokadem — last modified Feb 07, 2017 02:08 PM
    Recently, the General Data Protection Regulation (REGULATION (EU) 2016/679) was passed. It will replace the present Data Protection Directive (DPD 95/46/EC), a step that is likely to impact the workings of many organizations. This document offers a clear comparison between the General Data Protection Regulation (GDPR) and the Data Protection Directive (DPD).

    Download the file here


    1. INTRODUCTION

    The GDPR, i.e. the General Data Protection Regulation (REGULATION (EU) 2016/679), was adopted on 27 April 2016. It will come into force after a two-year transition period, on 25 May 2018, and will replace the Data Protection Directive (DPD 95/46/EC). The Regulation intends to empower data subjects in the European Union by giving them control over the processing of their personal data. It is not an enabling legislation. Unlike the previous regime under the DPD, wherein different member states legislated their own data protection laws, the new Regulation intends uniformity in application, with some room for individual member states to legislate on procedural mechanisms. While this will ensure a predictable environment for doing business, organizations will have to take on a number of obligations, which might initially burden them financially and administratively.

    2. SUMMARY

    The Regulation contains a number of new provisions, modifies provisions that existed under the DPD, and removes certain requirements of the DPD. Some significant changes are summarized in this section. These changes suggest that the GDPR is a comprehensive law with detailed substantive and procedural provisions. Yet some ambiguities remain with respect to its workability and interpretation, and clarifications will be required.

    2.1 Provisions from the DPD that were retained but altered in the GDPR include:

    2.1.1 Scope:

    The GDPR has an expanded territorial scope and is applicable under two scenarios: 1) when the processor or controller is established in the Union, and 2) when the processor or controller is not established in the Union. The conditions for applicability of the GDPR under the two scenarios are much wider than those provided under the DPD. The criteria under the GDPR are also more specific and clearer, making it easier to demonstrate whether the Regulation applies.

    2.1.2 Definitions:

    Six definitions have remained the same while those of personal data and consent have been expanded.

    2.1.3 Consent:

    The GDPR requires "unambiguous" consent and spells out in detail what constitutes valid consent. Demonstration of valid consent is an important obligation of the controller. Further, the GDPR also explains the situations in which a child's consent will be valid. Such provisions are absent from the DPD.

    2.1.4 Special categories of data:

    Two new categories, biometric data and genetic data, have been added under the GDPR.

    2.1.5 Rights:

    The GDPR strengthens certain rights granted under the DPD. These include:

    a. Right to restrict processing: Under the DPD, the data subject can block the processing of data on the grounds that the data is inaccurate or incomplete. The GDPR, on the other hand, is more elaborate and better defined in this respect: many more grounds are listed, together with the consequences of enforcing this right and the obligations it places on the controller.

    b. Right to erasure: This is known as the "right to be forgotten". The DPD merely mentions that the data subject has the right to request erasure of data on the grounds that the data is inaccurate or incomplete, or in case of unlawful processing. The GDPR has strengthened this right by laying out seven conditions for enforcing it, including five grounds on which a request for erasure shall not be processed; the "right to erasure" is therefore not an absolute right. The GDPR also provides that if the data has been made public, the controller is under an obligation to inform other controllers processing the data about the request.

    c. Right to rectification: This right is similar under GDPR and DPD.

    d. Right to access: The GDPR has broadened the amount of information a data subject can obtain regarding his or her own data. For example, under the DPD the data subject could learn the purpose of processing, the categories of processing, the recipients or categories of recipients to whom data are disclosed, and the extent of automated decision-making involved. Under the GDPR, the data subject can also learn the retention period, the existence of certain rights, the source of the data and the consequences of processing. The GDPR specifically states the controller's obligations in this regard.

    e. Automated individual decision-making, including profiling: This is an interesting provision that applies solely to automated decision-making. It covers profiling, a process by which personal data is evaluated solely by automated means for the purpose of analyzing a person's personal aspects such as performance at work, health or location. The intent is that data subjects should have the right to obtain human intervention in such decisions: the data subject gets an opportunity to express his or her point of view, obtain an explanation and challenge the decision. Under the GDPR, such decision-making excludes data concerning a child.

    2.1.6 Code of conduct:

    A voluntary self-regulating mechanism has been provided under both GDPR and DPD.

    2.1.7 Supervisory Authority:

    As compared to the DPD, the GDPR lays down detailed and elaborate provisions on Supervisory Authority.

    2.1.8 Compensation and Liability:

    Although the compensation and liability provisions under the GDPR and DPD are similar, the GDPR specifically frames this as a right with a wider scope. While the Directive imposes liability on the controller only, under the GDPR compensation can be claimed from both the processor and the controller.

    2.1.9 Effective judicial remedies:

    Provisions in this area are also quite similar between the DPD and the GDPR. The difference is that the GDPR specifically frames this as a "right" while the Directive does not; the use of such words is bound to bring legal clarity. It is interesting to note that in the DPD, recourse to a remedy is mentioned in the Recitals, and it is the national law of individual member states that regulates enforceability. The GDPR, on the other hand, addresses this in its Articles, together with the jurisdiction of courts and the exceptions to this right.

    2.1.10 Right to lodge complaint with supervisory authority:

    The right conferred on the data subject to seek a remedy against unlawful processing has been strengthened under the GDPR. Again, as mentioned above, the GDPR specifically words this as a "right" while the DPD does not.

    2.2 New provisions added to the GDPR include:

    2.2.1 Data Transfer to third countries:

    Provisions under Chapter V of the GDPR regulate data transfers from the EU to third countries and international organizations, as well as onward transfers. The DPD only provides for data transfers to third countries, without reference to international organizations.

    The mechanism of adequacy decisions for such transfers remains the same under both laws. However, for situations where the Commission has not taken an adequacy decision, alternate and elaborate provisions on "Effective Safeguards" and "Binding Corporate Rules" have been laid down in the GDPR. Certain other situations have been envisaged under both the GDPR and the DPD for data transfers in the absence of an adequacy decision; these are more or less similar, with only a few modifications.

    Significantly, the GDPR brings clarity with respect to the enforceability of judgments and orders of courts and authorities outside the EU concerning such data transfers. Additionally, it provides for international cooperation for the protection of personal data. Neither is mentioned in the DPD.

    2.2.2 Certification mechanism:

    Just like the code of conduct, this is a voluntary mechanism, which can aid in demonstrating compliance with the Regulation.

    2.2.3 Records of processing activities:

    This is a mandatory "compliance demonstration" mechanism under GDPR, which is not mentioned under DPD. Organizations are likely to face initial administrative and financial burdens in order to maintain records of processing activities.

    2.2.4 Obligations of processor:

    DPD fixes liability on controllers but leaves out processors. GDPR includes both. Consequently, GDPR specifies obligations of the processor, the kinds of processors the controller can use and what will govern processing.

    2.2.5 Data Protection officer:

    This finds no mention in the DPD. Under the GDPR, a data protection officer must be appointed where the core business activity of the organization consists of processing that requires regular and systematic monitoring of data subjects on a large scale, large-scale processing of special categories of data or data relating to offences, or where processing is carried out by a public authority or public body.

    2.2.6 Data protection impact assessment:

    This is a privacy impact assessment for ensuring and demonstrating compliance with the Regulation. Such an assessment can identify and minimize risks. The GDPR mandates that an assessment be carried out when processing is likely to result in high risk. The relevant Article specifies when to carry out the assessment, the type of information it must contain, and a requirement of prior consultation with the supervisory authority before processing if the assessment indicates high risk.

    2.2.7 Data Breach:

    Under this provision, the controller is responsible for two things: 1) reporting a personal data breach to the supervisory authority no later than 72 hours after becoming aware of it, with any delay in notifying the authority accompanied by reasons for the delay; and 2) communicating the breach to the data subject where the breach is likely to pose a high risk to the rights and freedoms of the person. As far as the processor is concerned, in the event of a data breach the processor must notify the controller. This provision is likely to push major changes in the workings of various organizations: a number of detection and reporting mechanisms will have to be implemented, and, given the time limit, these mechanisms will have to be extremely efficient.
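The 72-hour reporting window can be expressed as a simple deadline check. Below is a minimal sketch in Python; the function names and timestamps are illustrative, not terms from the Regulation.

```python
from datetime import datetime, timedelta, timezone

# Article 33 GDPR: notify the supervisory authority without undue delay and,
# where feasible, not later than 72 hours after becoming aware of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(awareness_time: datetime) -> datetime:
    """Latest time by which the supervisory authority should be notified."""
    return awareness_time + NOTIFICATION_WINDOW

def is_notification_late(awareness_time: datetime, notified_at: datetime) -> bool:
    """A late notification must be accompanied by reasons for the delay."""
    return notified_at > notification_deadline(awareness_time)

aware = datetime(2018, 6, 1, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))                              # 2018-06-04 09:00:00+00:00
print(is_notification_late(aware, aware + timedelta(hours=80)))  # True
```

Timezone-aware timestamps are used deliberately: a breach detected in one jurisdiction and reported in another must be measured against a single clock.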

    2.2.8 Data Protection by design and default:

    This entails a general obligation upon the controller to incorporate effective data protection in internal policies and implementation measures.

    2.2.9 Rights:

    Under the GDPR, a new right called the "right to data portability" has been conferred upon data subjects. This right empowers the data subject to receive personal data from one controller and transfer it to another.

    2.2.10 New Definitions:

    Out of the GDPR's 26 definitions, 18 are new. "Pseudonymisation" is one such new concept that can aid data privacy. Under this technique, personal data is processed in such a way that it can no longer be attributed to a specific data subject without the use of additional information. This additional information is to be stored separately, subject to measures ensuring that the data is not attributed to an identified or identifiable natural person.
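As a rough illustration of the idea behind pseudonymisation, the sketch below replaces a direct identifier with a keyed hash; the key plays the role of the "additional information" that must be stored separately. The key, names and record fields are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret key: the "additional information" that must be kept
# separately, under its own technical and organisational safeguards.
SECRET_KEY = b"stored-separately-under-access-control"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).

    Without the key, the pseudonym cannot be linked back to the person.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Alice Example", "purchase": "book"}
# The working dataset now carries only the pseudonym, not the name.
safe_record = {"subject_id": pseudonymise(record["name"]), "purchase": record["purchase"]}
```

The same identifier always yields the same pseudonym, so records about one person can still be linked for analysis, while re-identification requires access to the separately stored key.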

    2.2.11 Administrative fines:

    Much of the concern about the GDPR stems from its provisions on high fines for non-compliance with certain provisions. Organizations simply cannot afford to ignore them. Non-compliance can lead to the imposition of very heavy fines of up to EUR 20,000,000 or 4% of total worldwide annual turnover.
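The fine ceiling can be illustrated with a small calculation. This sketch reads the cap as the higher of the two amounts, in line with Article 83(5) of the GDPR; the function name is an illustration, not a term from the Regulation.

```python
def max_administrative_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the highest fine tier: EUR 20,000,000 or 4% of total
    worldwide annual turnover, whichever is higher (Article 83(5) GDPR)."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

print(max_administrative_fine(100_000_000))    # 4% is 4m, so the 20m figure applies
print(max_administrative_fine(1_000_000_000))  # 4% is 40m, which exceeds 20m
```

For small organizations the fixed EUR 20 million figure dominates; for large multinationals the 4% turnover figure quickly becomes the binding ceiling.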

    2.3 Deleted provisions under DPD include:

    2.3.1 Working Party:

    The Working Party under the DPD has been replaced by the European Data Protection Board provided for by the GDPR. The purpose of the Board is to ensure consistent application of the Regulation.

    2.3.2 Notification Requirement:

    The general obligation to notify supervisory authorities of processing has been removed. It was observed that this requirement imposed unnecessary financial and administrative burdens on organizations and was not successful in achieving its real purpose, that is, the protection of personal data. Instead, the GDPR now focuses on procedures and mechanisms like the privacy impact assessment to ensure compliance.

    3. BRIEF OVERVIEW

    The GDPR is the new uniform law, which will now replace older laws. A brief overview has been given below:

    Topic | GDPR (General Data Protection Regulation) | DPD (Data Protection Directive)
    Name | REGULATION (EU) 2016/679 | DPD 95/46/EC
    Enforcement | Adopted on 27 April 2016; to be enforced from 25 May 2018 | Adopted on 24 October 1995
    Effect of legislation | A Regulation: directly applicable to all EU member states without requiring separate national legislation | An enabling legislation: countries have to pass their own separate legislation
    Objective | To protect "natural persons" with regard to the processing of personal data and the free movement of such data; repeals DPD 95/46/EC | To protect "individuals" with regard to the processing of personal data and the free movement of such data
    Number of Chapters | XI | VII
    Number of Articles | 99 | 34
    Number of Recitals | 173 | 72
    Applicability | To processors and controllers | Same

    4. COMPARATIVE ANALYSIS OF GDPR AND DPD

    This section offers a comparative analysis through a set of tables and text comparing the provisions of the General Data Protection Regulation (GDPR) with those of the Data Protection Directive (DPD). Spaces left blank in the tables imply the lack of a similar provision under the respective data regime.

    4.1 Territorial Scope

    The GDPR has an expanded territorial scope: the application of the Regulation is independent of the place where the processing of personal data takes place, under certain conditions. The focus is the data subject, not the location. The DPD, by contrast, made the application of national law a criterion for determining the applicability of the Directive. Under the GDPR, the following conditions need to be satisfied for the Regulation to apply.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article: GDPR 3; DPD 4

    When processor or controller is established in the Union, the Regulation/ Directive will apply if:

    (DPD is silent on location of processors )

    1. Processing is of personal data

    2. Processing is in "context of activities" of the establishment

    3. Processing may or may not take place in the Union

    Processing is of personal data.

    When processor or controller is not established in Union, the Regulation/Directive will apply if:

    (DPD is silent on location of processors )

    1. Data subjects are in the Union; and

    2. Processing activity is related to:

    I. Offering of goods or services; or

    II. Monitoring their behavior within Union

    3. Will apply when Member State law is applicable to that place by the virtue of public international law

    1. Like GDPR the DPD mentions that national law should be applicable to that place by virtue of public international law;

    Or

    2. If the equipment for processing is situated on Member state territory unless it is used only for purpose of transit.
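The two scenarios in the table above can be summarised as decision logic. This sketch covers the main branches only and omits the public-international-law condition; the function and parameter names are illustrative.

```python
def gdpr_applies(established_in_union: bool,
                 data_subjects_in_union: bool,
                 offering_goods_or_services: bool,
                 monitoring_behaviour_in_union: bool) -> bool:
    """Sketch of the Article 3 territorial-scope test (main branches only)."""
    if established_in_union:
        # Scenario 1: processing in the context of the establishment's
        # activities, whether or not the processing happens in the Union.
        return True
    # Scenario 2: not established in the Union, but the data subjects are,
    # and the activity targets them (goods/services) or monitors them.
    return data_subjects_in_union and (
        offering_goods_or_services or monitoring_behaviour_in_union
    )

# A non-EU firm monitoring the behaviour of users in the Union is covered.
print(gdpr_applies(False, True, False, True))  # True
```

The sketch makes visible what the table states: the location of the processing itself never decides applicability; the establishment and the data subjects do.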

    4.2 Material Scope

    The Recitals to the GDPR explain that data protection is not an absolute right. The principle of proportionality has been adopted to respect other fundamental rights.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article: GDPR 2; DPD 3

    Applies to

    Processing of personal data

    Processing is by automated means, wholly or partially

    When processing is not by automated means, the personal data should form, or be intended to form, part of a filing system

    Same

    Does not apply to

    Processing of personal data:

    1. For activities which lie outside scope of Union law

    2. By Member State under Chapter 2 Title V of TEU

    3. By natural person in course of purely personal or household activity

    4. By competent authorities in relation to criminal offences and penalties and threats to public security

    5. Under Regulation (EC) No 45/2001. This needs to be adapted for consistency with GDPR

    6. Which should not prejudice the E commerce Directive 2000/31/EC especially the liability rules of intermediary service providers

    The provisions in DPD are similar to GDPR.

    In addition to Title V, the DPD did not apply to Title VI of TEU.

    DPD doesn't mention Regulation (EC) No 45/2001 or the E commerce Directive 2000/31/EC.

    4.3 Definitions

    GDPR incorporates 26 definitions as compared to 8 definitions under DPD. There are 18 new definitions in GDPR. Some definitions have been expanded.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article: GDPR 4; DPD 2

    New Definitions under GDPR

    1. Restriction of processing

    2. Profiling

    3. Pseudonymisation

    4. Personal data breach

    5. Genetic data

    6. Biometric data

    7. Data concerning health

    8. Main establishment

    9. Representative

    10. Enterprise

    11. Group of undertakings

    12. Binding corporate rules

    13. Supervisory authority

    14. Supervisory authority concerned

    15. Cross border processing

    16. Relevant and reasoned objection

    17. Information society service

    18. International organizations

    2 definitions that have been expanded under GDPR

    1. Personal data

    2. Consent

    6 Definitions which have remained the same in GDPR and DPD

    1. Processing of personal data

    2. Personal data filing system

    3. Controller

    4. Processor

    5. Third party

    6. Recipient

    4.3.1 Expanded definition of personal data

    Both the DPD and the GDPR apply to 'personal data'. The GDPR gives an expanded definition of 'personal data'. Recital 30 gives the example of online identifiers such as IP addresses.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article: GDPR 4(1); DPD 2(a)

    New term added in the definition

    A new term, "online identifier", has been added.

    An example of an online identifier is given under Recital 30; an IP address is one such example.

    4.3.2 Expanded definition of consent

    Valid consent must be given by the data subject. A definition of valid consent has been added under the GDPR. Recital 32 further explains that consent can be given by "means of a written statement including electronic means or an oral statement". For example, ticking a box on a website signifies acceptance of processing, while "pre-ticked boxes, silence or inactivity" do not constitute consent.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article: GDPR 4(11); DPD 2(h)

    Term added in GDPR

    Consent must be unambiguous, freely given, specific and informed.

    The word "unambiguous" is not contained in DPD.

    Means of signifying assent to processing own data

    Assent can be given by a statement or by clear affirmative action signifying assent to processing.

    DPD merely mentions that freely given, specific and informed consent signifies assent.
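The elements of valid consent discussed in this subsection can be pictured as a simple checklist: consent counts only if every element holds, and a pre-ticked box supplies no affirmative action. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Hypothetical record of the elements of valid consent under Art. 4(11)."""
    freely_given: bool
    specific: bool
    informed: bool
    affirmative_action: bool  # pre-ticked boxes, silence or inactivity leave this False

def is_valid_consent(c: ConsentRecord) -> bool:
    """Consent is valid only when every element holds."""
    return all([c.freely_given, c.specific, c.informed, c.affirmative_action])

# A pre-ticked box supplies no affirmative action, so consent fails.
print(is_valid_consent(ConsentRecord(True, True, True, False)))  # False
```

Recording the elements separately, rather than a single yes/no flag, also helps the controller meet the obligation to demonstrate that valid consent was obtained.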

    4.4 Conditions for consent

    GDPR lays down detailed provisions for valid consent. Such provisions are not given in DPD.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article: GDPR 7; DPD none

    Obligation of controller

    Must demonstrate consent has been given

    Presentation of written declaration of consent

    It should be in a clearly distinguishable, intelligible and easily accessible form.

    Language should be clear and plain.

    If the declaration or any part of it infringes the Regulation

    Declaration will be non-binding.

    Right of data subject

    To withdraw consent at any time.

    If consent is withdrawn, it will not make processing done earlier unlawful.

    For assessing whether consent is freely given

    Must consider whether performance of contract or provision of service is made conditional on consent to processing of data not necessary for performance of contract.

    4.5 Conditions applicable to child's consent in relation to information society services

    This Article prescribes an age limit for making processing lawful when information society services (direct online services) are offered directly to a child.

    Sub Topics in the Section

    GDPR

    DPD

    Given in Article: GDPR 8; DPD none

    Conditions for valid consent in this case

    If the child is at least 16 years old, his or her consent is valid.

    If the child is below 16 years, consent must be obtained from the holder of parental responsibility over the child.

    Age relaxation can be given when

    A Member State provides by law for a lower age.

    The age cannot be lowered below 13 years.

    Controller's responsibility

    Verify who has given the consent

    Exceptions

    This law will not affect:

    General contract law of member states;

    Effect of contract law on a child;
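The age conditions above reduce to a small rule: the child's own consent works at or above the applicable threshold, which a Member State may lower but not below 13; below the threshold, the holder of parental responsibility must consent. A hypothetical sketch:

```python
DEFAULT_THRESHOLD = 16   # a child's own consent is valid from this age
MINIMUM_THRESHOLD = 13   # Member States may lower the age, but not below 13

def child_consent_is_valid(age: int,
                           member_state_threshold: int = DEFAULT_THRESHOLD,
                           parental_consent: bool = False) -> bool:
    """Article 8 sketch: below the applicable threshold, the holder of
    parental responsibility must consent on the child's behalf."""
    threshold = max(member_state_threshold, MINIMUM_THRESHOLD)
    return age >= threshold or parental_consent

print(child_consent_is_valid(16))                             # True
print(child_consent_is_valid(14))                             # False
print(child_consent_is_valid(14, member_state_threshold=13))  # True
print(child_consent_is_valid(12, parental_consent=True))      # True
```

The `max()` call encodes the floor: even a Member State law setting the age below 13 would be clamped to 13 under this reading.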

    4.6 Processing of special categories of personal data

    Like the DPD, the GDPR spells out the data that is considered sensitive and the conditions under which this data can be processed. Two new categories of special data, "genetic data" and "biometric data", have been added to the list in the GDPR.

    Sub Topics in the Section

    GDPR

    DPD

    Given in Article: GDPR 9; DPD 8

    Categories of data considered sensitive

    Racial or ethnic origin

    Same

    Political opinions

    Same

    Religious or philosophical beliefs

    Same

    Trade union membership

    Same

    Health or sex life or sexual orientation

    Same

    Genetic data or

    Biometric data uniquely identifying natural person

    Circumstances in which processing of personal data may take place

    If there is explicit consent of data subject provided Member State laws do not prohibit such processing

    Necessary for carrying out specific rights of controller or data subject

    Under DPD these rights can be for employment.

    The GDPR adds social security and social protection to this list.

    These rights are to be authorized by Member state or Union. The GDPR adds "Collective agreements" to this.

    In the vital interest of data subject who cannot give consent due to physical or legal causes.

    Same

    In the vital interest of a Natural person physically or legally incapable of giving consent

    Same

    For legitimate activities carried on by not-for profit-bodies for political, philosophical or trade union aims subject to certain conditions.

    Same

    When personal data is made public by data subject

    Same

    For establishment, exercise of defense of legal claims or for courts

    Same

    For substantial public interest in accordance with Member State or Union law

    Is necessary for:

    Preventive or occupational medicine

    Assessing working capacity of employee

    Medical diagnosis

    Healthcare or social care services

    Contract with health professional

    Is necessary in Public interest in the area of public health

    For public interest, scientific or historical research or statistical purpose

    Data for preventive or occupational medicine, medical diagnosis etc. can be processed when:

    Data is processed by or under the responsibility of a professional under an obligation of professional secrecy as stated in law

    Here the processing is done by health professional under obligation of professional secrecy

    4.7 Principles relating to processing of personal data

    The principles set out in GDPR are similar to the ones under DPD. Some changes have been introduced. Accountability of the controller has been specifically given under GDPR.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article: GDPR 5; DPD 6

    Lawfulness, fairness, transparency

    Processing must be Lawful, fair and transparent

    Does not mention transparent

    Purpose limitation

    Data must be specified, explicit and legitimate.

    Same

    Processing for achieving public interest, scientific or historical research or statistical purpose is not to be considered incompatible with initial purpose.

    Same

    Data minimization

    Processing is adequate, relevant and limited to what is necessary

    Same

    Accuracy

    Data is accurate, up to date, erased or rectified without delay

    Same

    Storage limitation

    Data is to be stored in a way that data subject can be identified for no longer than is necessary for purpose of processing

    Same

    Data can be stored for longer periods when it is processed solely in public interest, scientific or historical research or statistical purpose

    Same

    However, public interest is not mentioned.

    There must be appropriate technical and organizational measures to safeguard rights and freedoms

    Same

    Additionally, it specifically states that Member States must lay down appropriate safeguards

    Integrity and confidentiality

    Manner of processing must:

    Ensure security of personal data,

    Protection against unlawful processing and accidental loss, destruction or damage

    Not mentioned

    Accountability

    Controller is responsible for and must demonstrate compliance with all of the above.

    DPD states it is for the controller to ensure compliance with this Article.

    Unlike the GDPR, the DPD does not specifically state the controller's responsibility for demonstrating compliance.

    4.8 Lawfulness of processing

    The conditions for "lawfulness of processing" under the DPD have been retained in the GDPR with certain modifications allowing flexibility for Member States to introduce specific provisions in the public interest or under a legal obligation. It should be noted that the protection given to a child's data and the rights and freedoms of the data subject must not be prejudiced. Additionally, a non-exhaustive list has been laid down in the GDPR for determining whether processing is permissible in situations where the new purpose of processing differs from the original purpose.

    Sub Topics in the Section

    GDPR

    DPD

    Given in Article

    6

    7

    Processing is lawful when:

    If at least one of the principles applies:

    Data subject has given consent to processing for specific purpose(s).

    Same

    However, it mentions "unambiguous" consent.

    Processing is necessary for performance of contract to which data subject is party or at request of data subject before entering into a contract

    Same

    Processing is necessary for controller's compliance with legal obligation.

    Same

    Processing is necessary for legitimate interests pursued by the controller or by a third party, subject to exceptions (such interests must not override the rights and freedoms of the data subject, or the protections given to a child's data)

    Same

    It is necessary for performance of task carried out in public interest or for exercise of official authority vested in controller

    Same

    It additionally mentions third party:

    "…exercise of official authority vested in controller or in a third party to whom data are disclosed"

    For protection of the vital interests of the data subject or another natural person

    Same

    Does not mention natural person.

    Member States may introduce specific provisions when:

    When processing is necessary for compliance with a legal obligation or to protect public interest

    Basis for processing shall be laid down by Union law or Member State law

    If processing is done for purpose other than for which data is collected and is without data subject's consent or is not collected under law:

    To determine if processing for another purpose is compatible with the original purpose

    Controller shall take into account following factors:

    Link between purposes for which data was collected and the other purpose

    Context in which personal data have been collected

    Nature of personal data

    Possible consequences of other purpose

    Existence of appropriate safeguards

    4.9 Processing which does not require identification:

    This article lays down the conditions under which the controller is exempted from gathering additional data in order to identify a data subject for the purpose of complying with this Regulation. If the controller is able to demonstrate that identification is not possible, the data subject is to be informed if possible.

    Sub Topics in the Section

    GDPR

    DPD

    Given in Article

    11

    Conditions under which the controller is not obliged to maintain, process or acquire additional information to identify the data subject

    If the purpose of processing does not require identification of the data subject by the controller

    Consequence of not maintaining the data

    Articles 15 to 20 shall not apply, provided the controller is able to demonstrate its inability to identify the data subject

    Exception to above consequence will apply when:

    Data subject provides additional information enabling identification

    4.10 Rights of the data subject

    The General Data Protection Regulation (GDPR) confers eight rights upon the data subject. These rights are to be honored by the controller:

    1. Right to be informed

    2. Right of access

    3. Right to rectification

    4. Right to erasure

    5. Right to restrict processing

    6. Right to data portability

    7. Right to object

    8. Rights in relation to automated decision making and profiling

    4.10.1 Right to be informed

    The controller must provide information to the data subject in cases where personal data has not been obtained from the data subject. A number of exemptions have been listed. Additionally, GDPR lays down the time period within which the information has to be provided.

    Sub Topics in the Section

    GDPR

    DPD

    Given in Article

    14

    10

    Type of information to be provided

    Identity and contact details of the controller or controller's representative

    Same

    Contact details of the data protection officer

    Purpose and legal basis for processing

    Purpose of processing

    Recipients or categories of recipients of personal data

    Same

    Intention to transfer data to third country or international organization and Information regarding adequacy decision or suitable safeguards or Binding Corporate Rules or derogations. This includes means to obtain a copy of these as well as information on place of availability.

    Additional information to be provided by controller to ensure fair and transparent processing

    Storage period of personal data and criteria for determining the period

    Legitimate interests pursued by controller or third party

    Existence of data subject's rights with regard to access or rectification or erasure of personal data, automated decision making

    Where applicable, existence of right to withdraw consent

    Time period within which information is to be provided

    Information to be given within a reasonable period, at the latest within one month.

    To be provided latest at the time of first communication to data subject, if personal data are to be used for communication with data subject

    In case of intended disclosure to another recipient, at the latest when personal data are first disclosed.

    If processing is intended for a new purpose other than original purpose, information to be provided prior to processing on new purpose.

    Situations in which exceptions are applicable

    Data subject already has information

    Same

    Provision of information involves disproportionate effort, is impossible, or would render impossible or seriously impair the achievement of the objectives of the processing.

    This is particularly with respect to processing for archiving purposes in the public interest, or for scientific or historical research or statistical purposes.

    However, the controller must take measures to protect the data subject's rights, freedoms and legitimate interests, including making the information public.

    Provision of information proves impossible or involves disproportionate effort, in particular where processing is for historical or scientific research.

    However, appropriate safeguards must be provided by Member States.

    Obtaining or disclosure is mandatory under Union or Member State law, which provides protection for the data subject's legitimate interests

    Where law expressly lays down recording or disclosure provided appropriate safeguards are provided by Member States.

    This is particularly applicable to processing for scientific or historical research.

    Confidentiality of data mandated by professional secrecy under Union or Member State law

    4.10.2 Right to access

    Both the Data Protection Directive (DPD) and the General Data Protection Regulation (GDPR) confer upon the data subject the right to access information regarding their personal data.

    The CJEU in YS v. Minister voor Immigratie, Integratie en Asiel stated that it is the data subject's right "to be aware of and verify the lawfulness of the processing".

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    15

    12

    Data subject has the right to know about:

    Purpose of processing

    Same

    Categories of personal data concerned

    Same

    Recipients or categories to whom data are disclosed

    Same

    Retention period of the data and criteria for this

    Existence of right to request erasure, rectification or restriction of processing

    Right to lodge complaint with supervisory authority

    Knowledge about source of data

    To know about any significant and envisaged consequences of processing for the data subject

    Existence of automated decision making and logic involved

    Same

    In case of data transfer to third country

    Right to be informed about the safeguards

    Controller's obligation

    To provide a copy of the data undergoing processing. A reasonable fee based on administrative costs can be charged for further copies.

    4.10.3 Right to rectification

    The GDPR and DPD both give the data subject the right to have their personal data rectified. Under the GDPR, the data subject can have incomplete data completed, including by providing a supplementary statement.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    16

    12(b)

    Right can be exercised when:

    Processing does not comply with the Directive i.e. damage is caused due to unlawful processing (Recital 55)

    OR

    When data is incomplete

    When data is incomplete or inaccurate

    Obligations of controller

    To enforce the right without undue delay

    Obligation of controller to give notification when data is disclosed to third party

    Given under Art 19

    Any rectification of personal data to be communicated to each recipient of such data

    Given under Article 12(c)

    Request must be communicated to third parties

    It should not involve an impossible or disproportionate effort

    Same

    4.10.4 Right to erasure

    This is also referred to as the "right to be forgotten". It empowers the individual to have personal data erased under certain circumstances. To this end, the data subject can request that the controller erase the data.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    17

    12(b)

    Obligation of the controller

    To erase the data without undue delay

    Conditions under which the right can be exercised

    When processing does not comply with the Directive i.e. damage is caused due to unlawful processing (Recital 55)

    OR

    When data is incomplete or inaccurate

    Personal data is no longer necessary for the purpose for which it was collected or processed

    Data Subject withdraws consent for processing

    Data subject objects to processing and there are no overriding legitimate grounds for processing

    Data subject objects to processing for direct marketing purpose

    Personal data has been unlawfully processed

    When personal data has to be erased under a legal obligation of Union or member State law

    When personal data has been collected in offer of information society services to a child

    Condition of processing under which request to erasure shall not be granted

    For exercising right of freedom of expression and information

    Processing is done under Union or Member State law in public interest or exercise of official authority vested in controller

    Done for public interest in public health

    For public interest, scientific or historical research or statistical purpose.

    For establishment, exercise or defense of legal claims.

    Controller's obligations when personal data has been made public

    Controller to take reasonable steps to inform other controllers who are processing the data of the request for erasure.

    All links, copy or replication of personal data to be erased.

    Technology available and cost of implementation to be taken into account.

    Notification when data is disclosed to third party

    Given under obligation of controller under Art 19:

    Request of erasure of personal data to be communicated to each recipient of such data

    Given under obligation of controller under 12(c) :

    Request must be communicated to third parties

    It should not involve an impossible or disproportionate effort

    Same

    4.10.5 Right to restrict processing

    While the DPD provided for "blocking", the GDPR strengthened this right by specifically conferring the "Right to Restrict Processing" upon the data subject. This Article gives the data subject the right to restrict processing under certain conditions. Recital 67 explains that methods of restriction could include removing published data from a website or temporarily moving the data to another processing system.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    18

    12(b)

    About this right

    Data subject can restrict processing of data

    Data subject is allowed to erase, rectify or block processing of personal data.

    Conditions under which the right can be exercised

    When accuracy of personal data is contested

    Besides accuracy, the DPD also mentions "incomplete nature of data" as grounds for exercising this right.

    When processing is unlawful and data subject opposes erasure and requests restriction of data use

    When data is no longer needed by controller but is required by data subject for establishment, exercise or defense of legal claims.

    Data subject objects to processing and the verification by controller of compelling legitimate grounds for processing is ongoing

    Consequences of enforcement of this right

    Controller can store data but not process it

    Processing can be done only with the data subject's consent; or

    Processing can be done for establishment, exercise or defense of legal claims; or

    Processing can be done for protecting rights of another natural or legal person; or

    It can be done in public interest of Union or Member State.

    Obligations of controller under Art 18

    The controller must inform the data subject before the restrictions are lifted.

    Obligations of controller under Art 19

    Inform each recipient of personal data about the restriction.

    This obligation need not be performed if it is impossible to do so or involves disproportionate effort.

    Inform data subject about the recipients when requested by the data subject.

    4.10.6 Right to data portability

    This right empowers the data subject to receive personal data from one controller and transfer it to another. This gives the data subject more control over his or her own data. The controller cannot hinder this right when the following conditions are met.

    Sub-topics in the section

    GDPR

    DPD

    Given in article

    20

    Conditions for data transmission

    The data must have been provided to the controller by data subject himself; and

    Processing is based on:

    Consent; or

    For performance of contract; and is carried out by automated means

    Data transfer must be technically feasible

    Format of personal data

    It should be in a:

    Structured

    Commonly-used

    Machine readable format

    Time and cost for data transfer

    Given in Art 12(3)

    Should be free of charge

    Information to be provided within one month. Further extension by two months permissible under certain circumstances.

    Circumstance under which this Right cannot be exercised

    When the exercise of the right prejudices the rights and freedoms of another individual

    When processing is necessary for the performance of a task carried out in the public interest

    When processing is necessary for the exercise of official authority vested in the controller

    When this Right adversely affects the "Right to be forgotten"
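
The "structured, commonly-used, machine readable format" requirement above is, in practice, often met with formats such as JSON or CSV. A minimal sketch, assuming hypothetical record fields:

```python
import csv
import io
import json

# Hypothetical subset of a data subject's records held by a controller;
# the field names and values are illustrative assumptions.
records = [
    {"date": "2016-01-01", "item": "subscription", "amount": 9.99},
    {"date": "2016-02-01", "item": "subscription", "amount": 9.99},
]

# JSON export: structured, commonly used and machine readable.
json_export = json.dumps(records, indent=2)

# CSV export: an equally common portable format.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "item", "amount"])
writer.writeheader()
writer.writerows(records)
csv_export = buf.getvalue()
```

Either export could be handed to the data subject or transmitted directly to another controller, which is the practical point of the portability right.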

    4.10.7 Right to Object

    Both the DPD and the GDPR confer upon the data subject the right to object to processing on a number of grounds. The GDPR strengthens this right: there is a visible shift of the burden of showing "compelling legitimate grounds" from the data subject to the controller. Under the DPD, when processing is undertaken in the public interest, in exercise of official authority, or in the legitimate interests of the controller or a third party, the data subject not only has to show the existence of compelling legitimate grounds but also that the objection is justified. The GDPR spares the data subject this exercise and instead places the onus on the controller to demonstrate that "compelling legitimate grounds" exist which override the interests, rights and freedoms of the data subject.

    GDPR also provides a new ground for objecting to processing. The data subject can object to processing when it is for scientific or historical research or statistical purpose unless such processing is necessary in public interest.

    Under the GDPR, the data subject must be informed of this right "clearly and separately" and "at the time of first communication with the data subject" when processing is done in the public interest, in exercise of official authority, in the legitimate interest of the controller or a third party, or for direct marketing purposes. This right can be exercised by automated means in the case of information society services.

    The DPD also provides that the data subject must be informed of this right if the controller anticipates processing for direct marketing or disclosure of data to third party. It specifically states that this right is to be offered "free of charge". Additionally, it places responsibility upon the Member States to ensure that data subjects are aware of this right.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    21

    14

    Conditions under which the right can be exercised during processing

    When performance of task is carried out in public interest or in exercise of official authority vested in controller. (Art 6(1)(e))

    Exception:

    If controller demonstrates processing is for compelling legitimate grounds which override interests of data subject

    For establishment, exercise or defense of legal claims.

    Grounds are same but the data subject also has to show existence of compelling legitimate grounds. Processing will cease if objection is justified.

    Exceptions:

    Unless otherwise provided by national legislation, the data subject can object on this ground.

    For legitimate interests of controller or third party (Art 6(1)(f))

    Exception:

    1. If controller demonstrates processing is for compelling legitimate grounds that override interests of data subject.

    2. For establishment, exercise or defense of legal claims.

    Same as above

    When data is processed for scientific/historical research/ statistical purpose under Art 89(1)

    Exception:

    If processing is necessary for public interest

    When personal data is used for direct marketing purposes.

    Can object at any time.

    No exceptions

    Same

    4.10.8 Rights in relation to automated individual decision making including profiling

    This Article empowers the data subject to challenge automated decisions under certain conditions. This is to protect individuals from decisions taken without human intervention.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    22

    15

    This right can be exercised when decisions are based:

    Only on automated processing

    Including profiling; and

    Same

    Produce legal effects or have similarly significant effects on data subject

    Same

    Conditions under which this right will not be guaranteed

    For entering into or performance of contract;

    Same

    If Member State or Union law authorizes the decision provided it lays down suitable measures for safeguarding data subject's rights, freedoms and legitimate interests; Or

    Same

    When decision is based on data subject's explicit consent.

    Controller's obligation

    Implement measures to safeguard the data subject's rights, freedoms and legitimate interests

    Ensure data subject can obtain human intervention, express his point of view and challenge the decision

    Automated decision making will not apply when:

    "Special categories of personal data" are to be processed

    However, if the data subject gives his explicit consent or such processing serves a substantial public interest, the restriction can be waived.

    Concerns a child

    4.11 Security and Accountability

    4.11.1 Data protection by design and default

    This is another new concept under the GDPR. It is a general obligation on the controller to incorporate effective data protection into internal policies and implementation measures. Measures include minimization of processing, pseudonymisation, transparency while processing, allowing data subjects to monitor data processing, etc. The implementation of organizational and technical measures is essential to demonstrate compliance with the Regulation.

    Sub-topics in the section

    GDPR

    DPD

    Article

    25

    Responsibility of controller when determining means of processing and at the time of processing

    Implementation of appropriate technical and organizational measures for data protection

    Ensure that by default only personal data necessary for purpose of processing is processed

    Means of demonstrating compliance with this Article

    Approved certification mechanism may be used.

    Data minimization

    Transparency etc.
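
The "data protection by default" obligation above (processing, by default, only the personal data necessary for each purpose) can be sketched in code. The purposes and field names below are illustrative assumptions, not drawn from the Regulation:

```python
# Illustrative sketch of "data protection by default": the controller keeps,
# per processing purpose, only the fields that purpose actually needs.
# The purposes and field names are hypothetical examples.

FIELDS_NEEDED = {
    "newsletter": {"email"},
    "shipping": {"name", "address", "email"},
}

def minimize(record, purpose):
    """Return a copy of the record containing only the fields
    necessary for the stated purpose; everything else is dropped."""
    needed = FIELDS_NEEDED[purpose]
    return {k: v for k, v in record.items() if k in needed}

full_record = {"name": "A", "address": "B", "email": "a@example.org",
               "date_of_birth": "1990-01-01"}

# Only the email survives for the newsletter purpose; date of birth is
# never processed for either purpose.
newsletter_view = minimize(full_record, "newsletter")
```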

    4.11.2 Security of personal data

    Security of processing is covered in the GDPR under Article 32. The controller and processor must implement technical and organizational measures to ensure data security. These may include pseudonymisation, encryption, ensuring confidentiality, restoring availability of and access to personal data, regular testing, etc. Compliance may be demonstrated by adherence to an approved code of conduct or certification mechanism. Further, any natural person acting under the authority of the controller or processor may process personal data only on instructions from the controller.
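
Pseudonymisation, one of the measures listed above, replaces direct identifiers with tokens while the re-identification key is kept separately. A minimal sketch, assuming a keyed-hash scheme and illustrative field names (the Regulation does not prescribe any particular technique):

```python
import hashlib
import hmac

# The key would be stored separately from the pseudonymised records,
# under the controller's own security measures; this value is illustrative.
SECRET_KEY = b"stored-separately-under-access-control"

def pseudonymise(identifier: str) -> str:
    """Deterministic token for an identifier: records can still be linked
    by token, but without the key the original value cannot be recovered."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "a@example.org", "visits": 12}

# The stored record carries a token instead of the direct identifier.
pseudonymised = {"subject_token": pseudonymise(record["email"]),
                 "visits": record["visits"]}
```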

    4.11.3 Notification of personal data breach

    This Article provides the procedure for communicating the personal data breach to supervisory authority. If the breach is not likely to result in risk to rights and freedoms of natural persons, then the controller is not required to notify the supervisory authority.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    33

    Responsibility of controller

    Report personal data breach to supervisory authority after becoming aware of it

    Time limit for reporting data breach

    Must be reported no later than 72 hours after becoming aware of the breach

    In case of delay in reporting

    Reasons to be stated

    Responsibility of processor

    Notify the controller without undue delay after becoming aware of the breach

    Description of notification

    Describe nature of the personal data breach

    Name and contact details of the data protection officer

    Likely consequences of personal data breach

    Measures to be taken or proposed to be taken by controller to address the breach or mitigate its possible effect

    When information cannot be provided at same time

    Provide it in phases without further undue delay

    For verification of compliance

    Controller has to document any personal data breach. The record must contain the facts, effects and remedial action taken.
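
The 72-hour reporting window and the documentation duty described above can be sketched as a simple timeline and record structure; all names and values here are illustrative assumptions:

```python
from datetime import datetime, timedelta

# The clock starts when the controller becomes aware of the breach.
REPORTING_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest time by which the supervisory authority must be notified."""
    return aware_at + REPORTING_WINDOW

def breach_record(aware_at, facts, effects, remedial_action):
    """Internal documentation of a breach (facts, effects, remedial action),
    kept so that the supervisory authority can verify compliance."""
    return {
        "aware_at": aware_at.isoformat(),
        "notify_by": notification_deadline(aware_at).isoformat(),
        "facts": facts,
        "effects": effects,
        "remedial_action": remedial_action,
    }

aware = datetime(2016, 5, 25, 9, 0)
record = breach_record(aware, "laptop stolen", "names exposed",
                       "credentials revoked")
```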

    4.11.4 Communication of personal data breach to the data subject

    Not only is the supervisory authority to be notified, but data subjects are also to be informed about personal data breaches without undue delay under certain conditions.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    34

    Conditions under which controller is to communicate the breach to data subject

    When breach is likely to cause high risk to rights and freedoms of natural persons

    Nature of communication

    Must be in a clear and plain language.

    Must describe the nature of breach.

    Must Contain at least:

    Name and contact details of the data protection officer

    Likely consequences of personal data breach

    Measures to be taken or proposed to be taken by controller to address the breach or mitigate its possible effect

    Condition under which communication will not be required

    If controller has implemented appropriate technical and organizational measures and these were applied to the affected data.

    E.g.: encryption

    Subsequent measures have been taken by controller to ensure there is no high risk

    If communication involves disproportionate effort.

    Public communication or similar measures can be undertaken under such circumstances.

    Role of supervisory authority

    In case of likelihood of high risk, the authority may require the controller to communicate the breach if the controller has not already done so.

    4.11.5 Data protection impact assessment

    This is also known as Privacy Impact Assessment. While the DPD provides a general obligation to notify processing to supervisory authorities, the GDPR, taking into account the need for greater protection of personal data, has replaced the notification process with a different set of mechanisms.

    To serve the above purpose, the data protection impact assessment (DPIA) has been provided under this Article.

    Sub-topics in the section

    GDPR

    DPD

    Given in Article

    35

    When to carry out assessment

    When new technology is used; and

    Processing is likely to result in high risk to rights and freedoms of natural persons

    Automated processing including profiling involving systematic and extensive evaluation of personal aspects of natural persons;

    and

    When decisions based on such processing produce legal effects

    Large scale processing of special categories of data or personal data relating to criminal convictions and offences

    Large scale systematic monitoring of publicly accessible area

    Type of information contained in assessment

    Description of processing operations and purpose

    Assessment of necessity and proportionality of processing operations

    Assessment of risks to individuals

    Measures to address risks and demonstration of compliance with Regulation

    Sub-topics in the section

    GDPR

    DPD

    Topic

    Prior Consultation

    Given in Article

    36

    When should controller consult supervisory authority

    Prior to processing; and

    DPIA indicates high risk; and

    In absence of risk mitigation measures by controller

    Data protection officer

    GDPR mandates that a person with expert knowledge of data protection law and practice is appointed to help the controller or processor comply with data protection laws. A single data protection officer (DPO) may be appointed by a group of undertakings, or where the controller or processor is a public authority or body. The DPO must be accessible from each establishment.

    Sub Topics in the Section

    GDPR

    DPD

    Article

    37

    Situations in which DPO must be appointed

    When processing is carried out by public authority or body.

    Note: Courts acting in judicial capacity are excluded.

    Core activity involves processing which requires regular and systematic monitoring of data subjects on large scale; or

    Core activity involves processing of large scale special categories of data and criminal convictions and offences

    Position of Data Protection Officer

    The DPO must directly report to the highest management level of the controller or processor. Data subjects may contact the DPO in case of problems related to processing and exercise of rights.

    Sub Topics in the Section

    GDPR

    DPD

    Article

    38

    Responsibility of controller and processor

    Ensure DPO is involved properly and in timely manner

    Provide DPO with support, resources and access to personal data and processing operations

    Not dismiss or penalize DPO for performing his task.

    Ensure independence of working and not give instruction to DPO

    Tasks of Data Protection officer

    The DPO must be involved in all matters concerning data protection. He is expected to act independently and advise the controllers and processors in order to facilitate the establishment's compliance with the Regulation.

    Sub Topics in the Section

    GDPR

    DPD

    Article

    39

    Tasks

    Inform and advise the controller or processor and employees on data protection laws

    Monitor compliance with data protection laws. Includes assigning responsibilities, awareness-raising, staff training and audits

    Advise on the data protection impact assessment and monitor its performance

    Cooperate with supervisory authority

    Act as point of contact for the supervisory authority on processing, prior consultation and consultation on other matters

    4.11.6 European Data Protection Board

    For consistent application of the Regulation, the GDPR envisages a Board that would replace the Working Party on Protection of Individuals With Regard to Processing of Personal Data established under the DPD. This Regulation confers legal personality on the Board.

    Sub Topics in the Section

    GDPR

    DPD

    Article

    68

    Represented by

    Chair

    Composition of the Board

    Head of one supervisory authority of each Member State and the European Data Protection Supervisor, or their respective representatives.

    Joint representative can be appointed where Member State has more than one supervisory authority.

    Role of Commission

    Right to participate in activities and meetings of the Board without voting rights.

    Commission to designate a representative for this.

    Functions of the Board

    Consistent application of Regulation

    Advise the Commission on the level of protection in third countries or international organizations

    Promote cooperation of supervisory authorities

    Board is to act independently

    4.11.7 Supervisory Authority

    GDPR lays down detailed provisions on supervisory authorities, defining their functions, independence, appointment of members, establishment rules, competence, competence of lead supervisory authority, tasks, powers and activity reports. Such elaborate provisions are absent in DPD.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    Chapter VI, Article 51 -59

    28

    4.12 Processor

    The Article spells out the obligations of a processor and conditions under which other processors can be involved.

    Sub Topics in the Section

    GDPR

    DPD

    Article

    28

    What kind of processors can be used by controller

    ● Those which provide sufficient guarantees to implement appropriate technical and organizational measures

    ● Those which comply with Regulation and Rights

    Obligations of processor in case of addition or replacement of processor

    ● Not engage another processor without controller's authorization

    ● In case of general written authorization inform the controller

    Processing shall be governed by

    Contract or legal act under Union or Member State law.

    Elements of Contract

    ● Is binding on processor

    ● Sets out subject matter and duration of processing

    ● Nature of processing

    ● Type of personal data

    ● Categories of data subjects

    ● Obligations and Rights of the controller

    Obligations of processor under contract or legal act

    Processor shall process under instructions from controller unless permitted under law itself.

    Controller is to be informed in the latter case.

    Ensures that persons authorized to process have committed themselves to confidentiality

    Processor to undertake all data security measures (mentioned under Art 32)

    Respects the conditions for engaging another processor

    Assists the controller by appropriate technical and organizational measures

    Assists controller in compliance with Art 32 to 36

    Delete or return all personal data to the controller, at the controller's choice, at the end of processing

    Make information available to controller for demonstrating compliance with obligations.

    Contribute to audits, inspections etc.

    Inform the controller if it believes that an instruction infringes the regulation or law.

    Conditions under which a processor can engage another processor

    ● Same data protection obligations will be applicable to other processor.

    ● If other processor fails to fulfill data protection obligations, initial processor shall remain fully liable to controller for such performance.

    4.13 Records of processing activities

    The controller or processor must maintain records of processing activities to demonstrate compliance with the Regulation. They are obliged to cooperate with, and make the records available to, the supervisory authority upon request. The DPD does not contain similar obligations.

    Sub-topics in this section

    GDPR

    DPD

    Article

    30

    Obligation of controller or controller's representative

    Maintain a record of processing activities

    Information to be contained in the record

    Name and contact details of:

    ● Controller /joint controller / controller's representatives

    ● Data protection officer

    Purpose of processing

    Categories of data subjects and categories of personal data

    Categories of recipients to whom data has been or will be disclosed

    Transfers of personal data to a third country or international organization, identification of that third country or organization, and documentation of suitable safeguards

    Expected time duration for erasure of different categories of data

    Technical and organizational security measures

    Obligation of processor

    Maintain a record of processing activities carried out on behalf of controller

    Record maintained by processor shall contain information such as:

    Name and contact details of:

    ● Processor /processor's representative

    ● Controller /controller's representative

    ● Data protection officer

    Categories of processing

    Data transfer to third party

    Identification of third party

    Documentation of safeguards

    Technical and organizational security measures

    Form in which record is to be maintained

    In writing and electronic form

    Conditions under which exemption will apply

    ● Organizations employing fewer than 250 employees are exempted;

    ● Processing should not cause risk to rights and freedoms of data subjects

    ● Processing should be occasional

    ● Processing should not include special categories of data
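    Under Art 30(5) GDPR, the record-keeping exemption is conjunctive: the headcount threshold and all three processing conditions must hold together. A minimal sketch of that logic (function and parameter names are illustrative, and this is a simplified reading of the provision):

    ```python
    def records_exempt(employees: int, risky: bool, occasional: bool,
                       special_categories: bool) -> bool:
        """Simplified reading of GDPR Art 30(5): the record-keeping duty
        does not apply to an organization with fewer than 250 employees,
        unless the processing is likely to risk data subjects' rights,
        is more than occasional, or involves special categories of data."""
        if employees >= 250:
            return False
        return (not risky) and occasional and (not special_categories)

    # A small firm doing occasional, low-risk, ordinary processing is exempt.
    print(records_exempt(40, False, True, False))   # True
    # The same firm processing health data (a special category) is not.
    print(records_exempt(40, False, True, True))    # False
    ```

    Note that any single disqualifying condition removes the exemption, which mirrors the "unless" structure of the Article.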

    4.14 Code of Conduct

    These mechanisms have been provided under the GDPR to demonstrate compliance with the Regulation. This is important because the GDPR (under Art 83) provides that adherence to an approved code of conduct is one of the factors taken into account in calculating administrative fines. This is not an obligatory provision.

    Sub-topics in this section

    GDPR

    DPD

    Article

    40

    27

    Who will encourage drawing up of code of conduct

    ● Member States

    ● Supervisory Authorities

    ● Commission.

    Specific needs of micro, small and medium enterprises to be taken into account.

    ● Member States

    ● Commission

    Does not mention the rest

    Who may prepare, amend or extend the code of conduct

    Associations and other bodies representing categories of controllers or processors

    Information contained in the code

    Fair and transparent processing

    Legitimate interests of controller

    Collection of personal data

    Pseudonymisation

    Information to public and data subjects

    Exercise of rights of data subject

    Information provided to and protection of children and manner in which consent of holders of parental responsibility is obtained

    Measures under:

    ● Data protection by design and default

    ● Controller responsibilities

    ● Security of processing

    Notification of data breach to authorities and communication of same to data subjects

    Data transfer to third party

    Dispute resolution procedures between controllers and data subjects

    Mechanisms for mandatory monitoring

    Mandatory monitoring

    A code of conduct containing the above information enables mandatory monitoring of compliance by a body accredited by the supervisory authority (Art 41).

    4.15 Certification

    Like the code of conduct, certification is a voluntary mechanism that demonstrates compliance with the Regulation. The establishment of data protection certification mechanisms and of data protection seals and marks is to be encouraged by Member States, supervisory authorities, the Board and the Commission. As in the case of codes of conduct, the specific needs of micro, small and medium-sized enterprises are to be taken into account. The DPD does not mention such mechanisms.

    Sub-topics in this section

    GDPR

    DPD

    Article

    42

    Who will issue the certificate

    Certification bodies or the competent supervisory authority, on the basis of approved criteria.

    Period for which certification is valid

    Maximum period of three years.

    Can be renewed under the same conditions.

    Who accredits certification bodies

    Competent supervisory authority or national accreditation body.

    When can accreditation be revoked

    When the conditions for accreditation are not, or are no longer, met.

    OR

    Where actions taken by the certification body infringe the Regulation.

    Who can revoke

    Competent supervisory authority or national accreditation body

    4.16 Data Transfer

    4.16.1 Transfers of personal data to third countries or international organizations

    Chapter V lays down the conditions with which the data controller must comply in order to transfer personal data outside the EU to third countries or international organizations for processing. The chapter also stipulates conditions that must be complied with for onward transfers from the third country or international organization.

    4.16.2 Transfer on the basis of an adequacy decision

    Under the GDPR, transfer of data can take place after the Commission decides whether the third country, a territory or specified sector within that third country, or the international organization ensures an adequate level of data protection. This is called an adequacy decision. A list of countries and international organizations which ensure adequate data protection is to be published by the Commission in the Official Journal of the European Union and on its website. Once data transfer conditions are found to be compliant with the Regulation, no specific authorization from the supervisory authorities is required for data transfer. The Commission decides this by means of an "Implementing Act" specifying a mechanism for periodic review, its territorial and sectoral application and the identification of supervisory authorities. Decisions taken by the Commission under Art 25(6) of the DPD remain in force. The DPD also provides parameters for the same.

    Sub-topics in this section

    GDPR

    DPD

    Given in article

    45

    25

    Conditions apply when transfers take place to

    Third country or international organization

    International organization not mentioned.

    Functions of the commission

    Take adequacy decisions

    Same

    Review the decision periodically, at least every four years

    Monitor developments on ongoing basis

    Repeal, amend or suspend decision

    Inform Member States if a third country doesn't ensure an adequate level of protection.

    Similarly, the Member State has to inform the Commission.

    Functions of Member State

    Inform Commission if third country doesn't ensure adequate level of protection.

    Take measures to comply with Commission's decisions

    Prevent data transfer if Commission finds absence of adequate level of protection.

    Factors, with respect to third country or international organization, to be considered while deciding adequacy of safeguards

    Rule of law; human rights and fundamental freedoms; access of public authorities to personal data; data protection rules; rules for onward transfer of personal data to another third country or international organization; etc.

    Circumstances surrounding data transfer operations: nature of the data; purpose and duration of the processing operation; country of origin and country of final destination; rules of law in force in the third country; professional rules and security measures complied with in that country.

    Functioning of independent supervisory authorities, their powers of enforcing compliance with data protection rules and powers to assist and advise data subject to exercise their rights.

    International commitments entered into.

    Obligations under legally binding conventions.

    Same

    When an adequate level of protection is no longer ensured

    The Commission shall, to the extent necessary, repeal, amend or suspend the decision.

    This is to be done by means of an implementing act.

    The repeal, amendment or suspension has no retroactive effect.

    The member state will have to suspend data transfer if Commission finds absence of adequate level of protection.

    Commission to enter into consultation with the third country or international organization to remedy the situation

    Same

    4.16.3 Transfers subject to appropriate safeguards

    This Article provides for the situation where the Commission has taken no adequacy decision (see "Transfer on the basis of an adequacy decision" above). In this case, the controller or processor can transfer data to a third country or international organization subject to certain conditions. Specific authorization from supervisory authorities is not required in this context. The procedure is set out below.

    Sub-topics in this section

    GDPR

    DPD

    Given in article

    46

    When can data transfer take place

    When appropriate safeguards are provided by the controller or processor;

    AND

    On condition that enforceable data subject rights and effective legal remedies are available to the data subject.

    Conditions to be fulfilled for providing appropriate safeguards without specific authorization from supervisory authority

    Existence of legally binding and enforceable instrument between public bodies or authorities

    Existence of Binding Corporate Rules

    Standard data protection clauses adopted by the Commission

    Standard data protection clauses adopted by a supervisory authority and approved by the Commission.

    Approved code of conduct along with binding and enforceable commitments of controller or processor in third country to apply appropriate safeguards and data subject's rights

    OR

    Approved certification mechanism along with binding and enforceable commitments of controller or processor in third country to apply appropriate safeguards and data subject's rights.

    Conditions to be fulfilled for providing appropriate safeguards subject to authorization from competent authority

    Existence of contractual clauses between:

    Controller or Processor and

    Controller, Processor or recipient of personal data (third party)

    Provisions inserted in administrative arrangements between public authorities or bodies. Provisions to contain enforceable and effective data subject rights.

    Consistency mechanism to be applied by supervisory authority

    Unless amended, replaced or repealed, authorizations to transfer given under the DPD remain valid when:

    Third country doesn't ensure adequate level of protection but controller adduces adequate safeguards;

    or

    Commission decides that standard contractual clauses offer sufficient safeguards

    4.16.4 Binding Corporate Rules

    These are agreements that govern transfers between organizations within a corporate group.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    47

    Elements of Binding Corporate Rules

    Legally binding

    Apply to and are enforced by every member of group of undertakings or group of enterprises engaged in joint economic activity. Includes employees

    Expressly confer enforceable rights on data subject over processing of personal data

    What do they specify

    Structure and contact details of group of undertakings

    Data transfers or sets of transfers, including categories of personal data, type of processing, type of data subjects affected, and identification of third countries

    Legally binding nature

    Application of general data protection principles

    Rights of data subjects

    Means to exercise those rights

    How the information on BCR is provided to data subjects

    Tasks of data protection officer etc.

    Complaint procedure

    Mechanisms within the group of undertakings, group of enterprises for ensuring verification of compliance with BCR.

    E.g. data protection audits

    Results of verification to be available to person in charge of monitoring compliance with BCR and to board of undertaking or Group of enterprises.

    Should be available upon request to competent supervisory authority

    Mechanism for reporting and recording changes to rules and reporting changes to supervisory authority

    Cooperation mechanism with supervisory authority

    Data protection training to personnel having access to personal data

    Role of Commission

    May specify format and procedures for exchange of information between controllers, processors and supervisory authorities for BCR

    4.16.5 Transfers or disclosures not authorized by Union law

    This Article governs the enforceability of decisions of judicial and administrative authorities in third countries requiring the transfer or disclosure of personal data.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    48

    Article concerns

    Transfer of personal data under judgments of courts or tribunals, and decisions of administrative authorities, in third countries.

    When can data be transferred or disclosed

    An international agreement between the requesting third country and the Member State or the Union.

    E.g. a mutual legal assistance treaty

    4.16.6 Derogations for specific situations

    This Article comes into play in the absence of an adequacy decision, appropriate safeguards or binding corporate rules. It lays down the conditions for data transfer to a third country or international organization in such situations.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    49

    26

    Conditions under which data transfer can take place

    On obtaining explicit consent of the data subject after the subject has been informed of the possible risks

    On obtaining unambiguous consent of data subject to the proposed transfer

    Transfer is necessary for conclusion or performance of contract.

    The contract should be in the interest of the data subject.

    The contract is between the controller and another natural or legal person.

    Contractual conditions are the same.

    The DPD also includes implementation of pre-contractual measures taken at the data subject's request.

    Transfer is necessary in public interest

    Same

    Is necessary for establishment, exercise or defense of legal claims

    Same

    To protect vital interest of data subject or of other persons where data subject is physically or legally incapable of giving consent

    Includes the vital interest of the data subject but doesn't include "other persons". The condition on incapacity to consent is also not included.

    Transfer made from register under Union or Member State law to provide information to public and is open to consultation by public or person demonstrating legitimate interest.

    Same

    Conditions for transfer when even the above specific situations are not applicable

    Transfer is not repetitive

    Concerns limited number of data subjects

    Necessary for compelling legitimate interests pursued by controller

    Legitimate interests are not overridden by interests or rights and freedoms of data subject

    Controller has provided suitable safeguards after assessing all circumstances surrounding data transfer

    Controller to inform supervisory authority about the transfer

    Controller to inform data subject of transfer and compelling legitimate interests pursued

    A Member State may authorize a transfer of personal data to a third country where the controller adduces adequate safeguards for the protection of privacy and the fundamental rights and freedoms of individuals

    4.17 International cooperation for protection of personal data

    This Article lays down certain steps to be taken by the Commission and supervisory authorities for the protection of personal data.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    50

    Steps will include

    Development of international cooperation mechanisms to facilitate enforcement of legislation for protection of personal data

    Provide international mutual assistance in enforcement of legislation for protection of personal data

    Engage relevant stakeholders for furthering international cooperation

    Promote exchange and documentation of personal data protection legislation and practice

    4.18 Remedies, Liability and Compensation

    4.18.1 Right to lodge complaint with a supervisory authority

    This Article gives the data subject the right to seek a remedy against unlawful processing of data. The GDPR strengthens this right compared to the one provided under the DPD.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    77

    28(4)

    Right given

    Right to lodge complaint

    Under the GDPR the data subject has been specifically conferred the "right". This is not so in the DPD.

    The DPD merely obliges the supervisory authority to hear claims concerning rights and freedoms.

    Who can lodge complaint

    Data subject

    Any person or association representing that person

    Complaint to be lodged before

    Supervisory authority in the Member State of habitual residence, place of work or place of infringement

    Supervisory authority

    When can the complaint be lodged

    When processing of personal data relating to the data subject allegedly infringes the Regulation

    When rights and freedoms are to be protected in processing.

    When national legislative measures restricting the scope of the Directive have been adopted and the processing is alleged to be unlawful.

    Accountability

    Complainant to be informed by the supervisory authority of the progress and outcome of the complaint, including the possibility of a judicial remedy

    Complainant to be informed on outcome of claim or if check on unlawfulness has taken place

    4.18.2 Right to an effective judicial remedy against supervisory authority

    The concerned Article seeks to make supervisory authorities accountable by allowing proceedings to be brought against them before the courts. The GDPR gives the individual a specific right; the DPD, under Article 28(3), merely provides that decisions of the supervisory authority may be appealed against in the courts.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    78 (1)

    Who has the right

    Every natural or legal person

    When can the right be exercised

    Against legally binding decision of supervisory authorities concerning the complainant

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    78(2)

    Who has the right

    Data subject

    When can the right be exercised

    When the competent supervisory authority doesn't handle the complaint

    Or

    Doesn't inform the data subject of the progress or outcome of the complaint within 3 months

    Proceedings against a supervisory authority are to be brought before the courts of the Member State in which the authority is established (GDPR Art 78(3)). Where the decision was preceded by an opinion or decision of the Board in the consistency mechanism, the supervisory authority shall forward that opinion or decision to the court (GDPR Art 78(4)).

    4.18.3 Right to effective judicial remedy against a controller or processor

    The data subject has been conferred the right to approach the courts under certain circumstances. The GDPR confers this as a specific right, while the DPD provides for a judicial remedy without using the word "right".

    Sub-topics in this section

    GDPR

    DPD

    Given in

    Art 79

    Recital 55

    Right can be exercised when:

    1. Data has been processed; and

    2. Processing results in infringement of rights; and

    3. Infringement is due to non-compliance with the Regulation

    Similar provisions provided under DPD:

    When controller fails to respect the rights of data subjects and national legislation provides a judicial remedy.

    Processors are not mentioned.

    Jurisdiction of the courts

    Proceedings can be brought before the courts of Member States wherein:

    1. Controller or processor has an establishment

    Or

    2. Data Subject has habitual residence

    Right cannot be exercised when

    1. The controller or processor is a public authority of Member State

    And

    2. Is exercising its public powers

    4.18.4 Right to compensation and liability

    The GDPR enables a person who has suffered damage to claim compensation as a specific right; the DPD merely entitles the person to receive compensation. Although the liability provisions under the GDPR and the DPD are similar, liability under the GDPR is stricter: the DPD imposes liability on controllers only, while the GDPR extends it to processors as well.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    82

    23

    Who can claim compensation

    Any person who has suffered material or non-material damage

    Similar provisions.

    But DPD doesn't mention "material or non-material damage" specifically.

    Right arises due to

    Infringement of Regulation

    Same

    Right granted

    Right to receive compensation

    Same

    Compensation has to be given by

    Controller or processor

    Compensation can be claimed only from controller

    Liability of controller arises when

    Damage is caused by processing due to infringement of regulation

    Same

    Liability of processor arises when

    1. Processor has not complied with directions given to it under Regulation

    OR

    2. Processor has acted outside or contrary to lawful instructions of controller

    Exemptions to controller or processor from liability

    If they prove that they are not in any way responsible for the event giving rise to the damage

    Exemption for controller is same

    Liability when more than one controller or processor cause damage

    Each controller or processor to be held liable for entire damage

    4.19 General conditions for imposing administrative fines

    The GDPR makes provision for the imposition of administrative fines by supervisory authorities in case of infringement of the Regulation. Such fines must be effective, proportionate and dissuasive. In case of a minor infringement, a "reprimand may be issued instead of a fine" [1]. Means of checking the supervisory authority's power to impose fines have been provided. If Member State law does not provide for administrative fines, the fine can be initiated by the supervisory authority and imposed by the courts. In any case, Member States have to adopt laws that comply with this Article by 25 May 2018.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    83

    Who can impose fines

    Supervisory Authority

    Fines to be issued against

    Controllers or Processors

    Parameters to be taken into account while determining administrative fines

    Nature, gravity and duration of infringement

    and

    Nature, scope or purpose of processing

    and

    Number of data subjects affected

    and

    Level of damage suffered

    Intentional or negligent character of infringement

    Action taken by controller or processor to mitigate damage suffered by data subjects

    Degree of responsibility of the controller or processor, taking into account the technical and organizational measures implemented.

    Relevant previous infringement

    Degree of cooperation with supervisory authority

    Categories of personal data affected

    Manner in which supervisory authorities came to know of the infringement and

    Extent to which the controller or processor notified the infringement

    Whether corrective orders of the supervisory authority under Art 58(2) have been issued before and complied with

    Adherence to approved code of conduct under Art 40 or approved certification mechanisms under Art 42

    Other aggravating or mitigating factors, like financial benefits gained, losses avoided, etc.

    If infringement is intentional or due to negligence of processor or controller

    Total amount of the administrative fine not to exceed the amount specified for the gravest infringement

    Means of checking the supervisory authority's power to impose fines

    Procedural safeguards under Member State or Union law.

    Including judicial remedy and due process

    Article 83 grades administrative fines according to the obligations infringed by controllers, processors or undertakings. The first set of infringements may lead to fines of up to EUR 10,000,000 or, in the case of an undertaking, 2% of total worldwide annual turnover, whichever is higher.

    Sub-topics in this section

    GDPR

    DPD

    Article

    83(4)

    Fine imposed

    Up to 10,000,000 EUR

    or

    in case of undertaking,

    2% of total worldwide turnover of preceding financial year, whichever is higher

    Provisions whose infringement attracts the fine

    Obligations of controller and processor under:

    Art 8

    Conditions applicable to child's consent in relation to information society services

    Art 11

    Processing which does not require identification

    Art 25 to 39

    General obligations, security of personal data, data protection impact assessment and prior consultation

    Art 42

    Certification

    Art 43

    Certification bodies

    Obligations of certification body under:

    Art 42

    Art 43

    Obligations of monitoring body under:

    Art 41(4)

    The second set of infringements may lead to higher fines of up to EUR 20,000,000 or, in the case of an undertaking, 4% of total worldwide annual turnover, whichever is higher.
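    For an undertaking, the applicable ceiling in each tier is "whichever is higher" of the fixed amount and the turnover percentage. A minimal sketch of that arithmetic (function name and tier encoding are illustrative):

    ```python
    def max_fine_eur(tier: int, worldwide_turnover_eur: float) -> float:
        """Upper limit of a GDPR administrative fine for an undertaking.
        Art 83(4): up to EUR 10m or 2% of worldwide annual turnover;
        Art 83(5): up to EUR 20m or 4% -- whichever is higher."""
        caps = {1: (10_000_000, 0.02), 2: (20_000_000, 0.04)}
        fixed, pct = caps[tier]
        return max(fixed, pct * worldwide_turnover_eur)

    # Undertaking with EUR 2bn turnover: the tier-2 cap is 4%, i.e. EUR 80m.
    print(max_fine_eur(2, 2_000_000_000))  # 80000000.0
    # Undertaking with EUR 50m turnover: 4% is only EUR 2m, so the
    # fixed EUR 20m amount is the higher ceiling.
    print(max_fine_eur(2, 50_000_000))     # 20000000
    ```

    Note that this computes only the statutory ceiling; the actual fine is set by the supervisory authority using the Art 83(2) parameters listed earlier.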

    Sub-topics in this section

    GDPR

    DPD

    Article

    83(5)

    Fine imposed

    Up to 20,000,000 EUR

    or

    in case of undertaking,

    4% of total worldwide turnover of preceding financial year, whichever is higher

    Provisions whose infringement attracts the fine

    Basic principles for processing and conditions for consent under:

    Art 5

    Principles relating to processing of personal data

    Art 6

    Lawfulness of processing

    Art 7

    Conditions for consent

    Art 9

    Processing of special categories of personal data

    Data subject's rights under:

    Art 12 to 22

    Transfer of personal data to third country or international organization under:

    Art 44 to 49

    Obligations under Member State law adopted under Chapter IX

    Non-compliance with the supervisory authority's powers under the provisions of Art 58:

    Imposition of temporary or definitive limitation including ban on processing

    (Art 58 (2)(f))

    Suspension of data flows to third countries or international organization

    (Art 58(2) (j))

    Provide access to premises or data processing equipment and means (Art 58 (1) (f))

    4.20 Penalties

    Article 84 makes provision for penalties in case of infringement of Regulation.

    The penalties must be effective, proportionate and dissuasive.

    Sub-topics in this section

    GDPR

    DPD

    Given in Article

    84

    When will penalty be imposed

    In case of infringements that are not subject to administrative fines

    Who imposes them

    Member State

    Responsibility of Member State

    To lay down the law and ensure implementation.

    To notify the Commission of the law adopted, by 25 May 2018



    [1] Recital 148, GDPR

    Survey on Data Protection Regime

    by Aditi Chaturvedi and Elonnai Hickok — last modified Feb 10, 2017 10:47 AM
    We request you to take part in this survey aimed at understanding how various organisations view the changes in the Data Protection Regime in the European Union. Recently the General Data Protection Regulation (EU) 2016/679 was passed, which shall replace the present Data Protection Directive DPD 95/46/EC. This step is likely to impact the way of working for many organisations. We are grateful for your voluntary contribution to our research, and all information shared by you will be used for the purpose of research only. Questions that personally identify you are not mandatory and will be kept strictly confidential.





    Ranking Digital Rights in India

    by Divij Joshi and Aditya Chawla — last modified Feb 12, 2017 07:22 AM
    This report is a study of five Indian telecommunication companies (Tata Communications Ltd., Reliance Communications Limited, Aircel Limited, Vodafone India Private Limited and Reliance Jio Infocomm Limited) and three Indian online service providers (Hike Messenger, Shaadi.com and Rediff.com). The report is an attempt to evaluate the practices and policies of companies which provide internet infrastructure or internet services, and are integral intermediaries to the everyday experience of the internet in India.

    Download the PDF


    The report draws upon the methodology of the Ranking Digital Rights project, which analysed 16 of the world's major internet companies, including internet services and telecommunications providers, based on their commitment to upholding human rights through their services – in particular users' freedom of expression and privacy. That project comprehensively assessed the performance of companies on various indicators related to these human rights, as per information made publicly available by the companies or otherwise in the public domain. This report follows the methodology of the proposed 2017 Ranking Digital Rights index, updated as of October 2016.

    This report studied Indian companies which have, or have had, a major impact on the use and experience of the Internet in India. The companies range from online social media and micro-blogging platforms to major telecommunications companies providing critical national communications infrastructure. While some of the companies have operations outside of India as well, our study was aimed at how these companies have impacted users in India. This allowed us to study the impact of the specific legal and social context in India upon the behaviour of these firms, and conversely also the impact of these companies on the Indian internet and its users.

    VSNL, the company later acquired by and merged into Tata Communications, was the first company to provide public Internet connections in India, in 1996. In 2015, India surpassed the United States of America to become the jurisdiction with the world's second-largest internet user base, with an estimated 338 million users. With the diminishing cost of wireless broadband internet and the proliferation of cheaper internet-enabled mobile devices, India is expected to house a significant share of the next billion internet users.

    Concomitantly, the internet service industry in India has grown by leaps and bounds, particularly the telecommunications sector, a large part of whose growth can be attributed to the rising use of wireless internet across India. The telecom/ISP industry in India remains concentrated among a few firms. As of early 2016, just three of the last-mile ISPs studied in this report were responsible for providing end-user connectivity to close to 40% of mobile internet subscribers in India. However, the market seems to be highly responsive to new entrants, as can be seen from the example of Reliance Jio, a new telecom provider that has built its brand specifically around affordable broadband services and is also one of the companies analysed in this report. As the gateway service providers of the internet to millions of Indian users, these corporations remain the focal point of most regulatory concerns around the Internet in India, as well as the intermediaries whose policies and actions have the largest impact on internet freedoms and user experiences.

    Besides the telecommunications companies, India has a thriving internet services industry – by some estimates, the Indian e-commerce industry will be worth USD 119 billion by 2020. While the major players in the e-commerce industry are shipping and food aggregation services, other companies have emerged which provide social networking services or mass-communication platforms, including micro-blogging platforms, matrimonial websites, messaging applications and social video streaming services. While localised services, including major e-commerce websites (Flipkart, Snapdeal), payment gateways (Paytm, Freecharge) and taxi aggregators (Ola), remain the most widely utilized internet services among Indians, the services analysed in this report have been chosen for the potential impact they may have upon the user rights examined here – namely, freedom of speech and privacy – based on the information they collect and the communications they make possible. These services provide important alternative spaces of localised social media and communication, as alternatives to the currently dominant services such as Facebook, Twitter and Google, as well as specialised services used mostly within the Indian social context, such as Shaadi.com, a matrimonial match-making website which is widely used in India.

    Legal and regulatory framework

    Corporate Accountability in India

    In the last decade, there has been a major policy push towards corporate social responsibility (“CSR”). In 2009, the Securities and Exchange Board of India mandated all listed public companies to publish ‘Business Responsibility Reports’ disclosing efforts taken towards, among other things, human rights compliance by the company. The new Indian Companies Act, 2013 introduced a ‘mandatory’ CSR policy which enjoins certain classes of corporations to maintain a CSR policy and to spend a minimum percentage of their net profits on activities mentioned in the Act. However, these provisions do not do much in terms of assessing the impact of corporate activities upon human rights or enforcing human rights compliance.

    Privacy and Data Protection in India

    There is no explicit right to privacy under the Constitution of India. However, such a right has been judicially recognized as a component of the fundamental right to life and liberty under Article 21 of the Constitution, although there have been varying interpretations of the scope of such a right, including who and what it is meant to protect. The precise scope of the right to privacy, or whether a general right to privacy exists at all under the Indian Constitution, is currently being adjudicated by the Supreme Court. Although the Supreme Court has had the opportunity to adjudicate upon telephonic surveillance conducted by the government, there has been no determination of the constitutionality of government interception of online communications, or of bulk surveillance.

    As per Section 69 of the Information Technology Act, the primary legislation dealing with online communications in India, the government is empowered to monitor, surveil and decrypt information “in the interest of the sovereignty or integrity of India, defense of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above or for investigation of any offence.” Moreover, intermediaries, as defined under the Act, are required to provide facilities to enable the government to carry out such monitoring. The specific procedure to be followed during lawful interception of information is given under the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 (“Interception Rules”), which provide a detailed procedure for government agencies to issue monitoring directions, as well as the obligations of intermediaries to facilitate the same. The Interception Rules require intermediaries enlisted for facilitating monitoring of information to maintain strict confidentiality regarding such directions for lawful interception or decryption, as well as to destroy any records of such directions every six months. Intermediaries are required to designate specific authorities (the ‘designated authority’) to receive and handle any of the above government directions, and also to maintain records and provide proper facilities to the government agencies. The designated authority is also responsible for maintaining the security and confidentiality of all information which ‘affects the privacy’ of individuals. Further, the rules prescribe that no person may intercept any online communication or information, except the intermediary for the limited purposes specified in the rules, which include tracing persons who may have contravened any provision of the IT Act or rules.

    With respect to decryption, besides the government’s power to order decryption of content as described above, the statutory license between telecommunications providers and the Department of Telecommunications (“DoT”) prescribes, among other things, that only encryption “up to 40 bit key length in the symmetric algorithms or its equivalent in others” may be utilized by any person, including an intermediary. Where any person utilizes encryption stronger than what is prescribed, the decryption key must be deposited with the DoT. At the same time, the license prescribes that ISPs must not utilize any hardware or software which makes the network vulnerable to security breaches, placing intermediaries in a difficult position regarding communications privacy. Moreover, the license (as well as the Unified Access Service License) prohibits the use of bulk encryption by the ISP for its network, effectively proscribing efforts towards user privacy on the ISP’s own initiative.
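    To put the licence's 40-bit ceiling in perspective, the brute-force search space of a 40-bit key is minuscule by modern standards. The following back-of-the-envelope sketch is our own illustration of general cryptographic arithmetic, not something drawn from the licence text:

```python
# Illustrative key-space arithmetic for the 40-bit encryption ceiling.
# General cryptographic facts only; nothing here is taken from the licence text.

def keyspace(bits: int) -> int:
    """Number of possible keys for a symmetric cipher with the given key length."""
    return 2 ** bits

weak = keyspace(40)     # ~1.1 trillion keys: exhaustively searchable on commodity hardware
modern = keyspace(128)  # the common modern baseline (e.g. AES-128)

# Every additional bit doubles the search space, so the gap is a factor of 2**88.
print(f"40-bit keyspace:  {weak:,}")
print(f"128-bit keyspace: 2**128 (~{modern:.3e})")
print(f"ratio: 2**{modern.bit_length() - weak.bit_length()}")
```

    A 40-bit cipher was brute-forceable even with late-1990s hardware, which is why the licence clause effectively rules out any meaningful communications confidentiality unless keys are escrowed with the DoT.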

    There is no statute in India generally governing data protection or the protection of privacy. However, statutory rules address privacy concerns across different sectors, such as banking and healthcare. A more general regulation for data protection was enacted under Section 43A of the Information Technology Act, 2000 (“IT Act”) and the rules made thereunder, in particular the Information Technology (Reasonable Security Practices and Procedures and sensitive personal data or information) Rules, 2011 (“Rules”). Section 43A requires body corporates (defined as any company) handling sensitive personal information (as defined under the IT Act and Rules) to maintain reasonable security practices regarding the handling of such information, and penalises failure to maintain such practices where it causes ‘wrongful loss or wrongful gain to any person.’ The Rules prescribed under Section 43A detail more comprehensively the general obligations of body corporates that handle sensitive personal information.

    The Rules specify that all body corporates which “collects, receives, possess, stores, deals or handle information”, directly from the holder of such information through a lawful contract, shall provide a privacy policy, which must – (a) be clearly accessible; (b) specify the data collected; (c) specify the purpose for collection and the disclosure of such information; and (d) specify the reasonable security practices for the protection of such data. There are also specific requirements for body corporates which handle sensitive personal information, which include obtaining consent from the data subject, and permitting data collection only for a specified and limited purpose as well as a limited time. The body corporate is also required to ensure the data subject is aware of: (a) the fact that the information is being collected; (b) the purpose for which the information is being collected; (c) the intended recipients of the information; and (d) the name and address of the agency that is collecting the information as well as the agency that will retain it. The Rules also require the body corporate to provide an explicit option for users to opt out of having their personal information collected, and this permission may be withdrawn at any time.

    Apart from the above, the IT (Intermediary Guidelines) Rules, 2011 (“Guidelines”) also contain a prescription for providing information to government agencies, even though the rules have been enacted under the safe-harbour provisions of the IT Act. Rule 3(7) of the Guidelines states that “…When required by lawful order, the intermediary shall provide information or any such assistance to Government Agencies who are lawfully authorised for investigative, protective, cyber security activity. The information or any such assistance shall be provided for the purpose of verification of identity, or for prevention, detection, investigation, prosecution, cyber security incidents and punishment of offences under any law for the time being in force, on a request in writing stating clearly the purpose of seeking such information or any such assistance.” While this regulation is arguably outside the scope of the rule-making power under Section 79 of the IT Act, it continues to remain in force, although the extent to which it is utilized to obtain information is unknown.

    Content Restriction, Website blocking and Intermediary Liability in India

    Section 79 of the IT Act contains the safe harbour provision for intermediaries, sheltering them, under specific circumstances, from liability for information, data, or communication links made available by any third party. For the safe harbour to apply, the role of the intermediary must be limited to (a) providing access to a communication system over which information made available by third parties is transmitted or temporarily stored or hosted; or (b) a platform which does not initiate the transmission, modify it or select the receiver of the transmission. Moreover, the safe harbour does not apply when the intermediary has received actual knowledge, or been notified by the appropriate government agency, about potentially unlawful material which the intermediary has control over, and fails to act on such knowledge by disabling access to the material.

    The Central Government has further prescribed guidelines under Section 79 of the IT Act, with which intermediaries must comply to enjoy the shelter of the safe harbour provisions. The guidelines require all intermediaries to inform their users, through terms of service and user agreements, of information and content which is restricted, including vague prescriptions against content which is “…grossly harmful, harassing, blasphemous defamatory, obscene, pornographic, paedophilic, libellous, invasive of another's privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful in any manner whatever;” or that infringes any proprietary rights (including intellectual property rights).

    Rule 3(4) is particularly important, as it provides the procedure to be followed for content removal by intermediaries. The rule provides that any intermediary who hosts, publishes or stores information belonging to the above specified categories shall remove such information within 36 hours of receiving ‘actual knowledge’ about such information from any ‘affected person’. Further, any such flagged content must be retained by the intermediary itself for a period of 90 days. The broad scope of this rule led to frequent misuse of the provision for removal of content: as non-compliance would make intermediaries liable for potentially illegal conduct, intermediaries were found to be eager to remove any content flagged as objectionable by any individual. However, the scope of the rule received some clarification from the Supreme Court judgement in Shreya Singhal v Union of India. While the Supreme Court upheld the validity of Section 79 and the Guidelines framed under that section, it interpreted the requirement of ‘actual knowledge’ to mean knowledge obtained through the order of a court asking the intermediary to remove specific content. Further, the Supreme Court held that any such court order for removal or restriction of content must conform to Article 19(2) of the Constitution of India, which details the permissible restrictions to the freedom of speech and expression.

    For the enforcement of the above rules, Rule 11 directs intermediaries to appoint a Grievance Officer to redress any complaints of violations of Rule 3, which must be resolved within one month. However, there is no specific mention of any remedies against wrongful removal of content or mechanisms to address such concerns.

    Apart from the above, there is a parallel mechanism for imposing liability on intermediaries under the Copyright Act, 1957. According to various High Courts in India, online intermediaries fall under the definition of Section 51(a)(ii),  which includes as an infringer, “…any person who permits for profit any place to be used for the communication of the work to the public where such communication constitutes an infringement of the copyright in the work, unless he was not aware and had no reasonable ground for believing that such communication to the public would be an infringement of copyright.”

    Section 52(1) provides for exemptions from liability for infringement. The relevant part of S.52 states –

    “(1) The following acts shall not constitute an infringement of copyright, namely:

    (b) the transient or incidental storage of a work or performance purely in the technical process of electronic transmission or communication to the public;

    (c) transient or incidental storage of a work or performance for the purpose of providing electronic links, access or integration, where such links, access or integration has not been expressly prohibited by the right holder, unless the person responsible is aware or has reasonable grounds for believing that such storage is of an infringing copy:

    Provided that if the person responsible for the storage of the copy has received a written complaint from the owner of copyright in the work, complaining that such transient or incidental storage is an infringement, such person responsible for the storage shall refrain from facilitating such access for a period of twenty-one days or till he receives an order from the competent court refraining from facilitating access and in case no such order is received before the expiry of such period of twenty-one days, he may continue to provide the facility of such access;”

    While Section 52 of the Act provides safe harbour for certain kinds of online intermediaries, this does not apply where the intermediary has ‘reasonable grounds for believing’ that the stored copy is an infringing copy – language similar to that used in Section 51(a)(ii), which has been broadly interpreted by the high courts. The procedure for notifying the intermediary to take down infringing content is given in the Rules prescribed under the Copyright Act, which require that the holder of the copyright give written notice to the intermediary, including a description of the work sufficient for identification, proof of ownership of the original work, proof of infringement by the work sought to be removed, the location of the work, and details of the person responsible for uploading the potentially infringing work. Upon receipt of such a notice, the intermediary must disable access to the content within 36 hours. Further, intermediaries are required to display the reasons for disabling access to anyone trying to access the content. The intermediary may restore the content after 21 days if no court order endorsing its removal is received, although restoration is not a requirement. After this notice period, the intermediary may choose not to respond to further notices from the same complainant about the same content at the same location.
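    The 36-hour and 21-day timeline described above can be sketched as a small decision function. This is our own simplified model of the Rules' mechanics (the function name and status strings are invented for illustration), not a legal tool:

```python
from datetime import datetime, timedelta

# Timelines from the Copyright Rules' notice-and-takedown procedure as described above.
DISABLE_DEADLINE = timedelta(hours=36)  # access must be disabled within 36 hours of notice
RESTORE_WINDOW = timedelta(days=21)     # content may be restored if no court order arrives

def takedown_status(notice_at: datetime, now: datetime, court_order_received: bool) -> str:
    """Simplified state of a copyright takedown at time `now`."""
    elapsed = now - notice_at
    if elapsed <= DISABLE_DEADLINE:
        return "disable access (36-hour window running)"
    if court_order_received:
        return "keep disabled (court order endorses removal)"
    if elapsed >= RESTORE_WINDOW:
        return "may restore (no court order within 21 days)"
    return "disabled, awaiting a possible court order"
```

    For instance, content flagged five days ago with no court order remains disabled, while the same content at day 22 may be restored at the intermediary's discretion.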

    Besides the safe harbour provisions, which require intermediaries to meet certain conditions to avoid liability for content hosted by them, intermediaries are also required to comply with government blocking orders for removal of content, as per Section 69A of the IT Act. This section specifies that the government may, according to the prescribed procedure, order any intermediary to block access to any information “in the interest of sovereignty and integrity of India, defense of India, security of the State, friendly relations with foreign states or public order or for preventing incitement to the commission of any cognizable offence relating to above.” Failure by the intermediary to comply results in criminal penalties for the personnel of the intermediary.

    The procedure for blocking has been prescribed in the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. The Rules allow any Central Government or State Government ministry or department to issue blocking requests: any person may make a complaint to specific departmental representatives known as ‘nodal officers’, who may then request the blocking of access to content by any intermediary. The nodal officers forward such requests to the ‘designated officer’, an officer of the Central Government not below the rank of joint secretary, as nominated by the Central Government. The blocking request is then considered by a committee which recommends whether the designated officer should approve the request. Once approved, the request is forwarded to the intermediary, which must nominate at least one person to handle all such requests. In case of non-compliance, the designated officer may initiate action under Section 69A against the intermediary.

    The rules contain some safeguards to ensure due process before blocking orders are made. The designated officer is required to make ‘reasonable efforts’ to locate the user or intermediary who has hosted the content, and to allow such person or intermediary to appear before the committee to submit their reply and clarifications. Rule 9 lays down an emergency procedure for blocking, in which case the procedural safeguards detailed above, such as committee deliberation and the provision of a hearing, are dispensed with. However, Rule 16 requires the confidentiality of all requests and actions taken under the rules, which undermines any attempt at transparency or fairness in the process.
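    The chain of actors in the blocking procedure, including the Rule 9 emergency shortcut, can be summarised in a short sketch. The role names follow the Rules as described above; the modelling itself is our own simplification:

```python
def blocking_path(emergency: bool) -> list:
    """Hops a Section 69A blocking request takes under the 2009 Blocking Rules.

    Under the ordinary procedure, the request passes through committee review
    (with a hearing where the originator or intermediary can be located);
    under the Rule 9 emergency procedure, that safeguard is skipped.
    """
    path = ["complainant", "nodal officer", "designated officer"]
    if not emergency:
        path.append("committee review and hearing")
    path.append("intermediary (must disable access)")
    return path

print(blocking_path(emergency=False))
print(blocking_path(emergency=True))
```

    The sketch makes the asymmetry visible: the only step that gives the affected party a voice is precisely the one the emergency path removes.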

    Finally, the ISP and Unified Services License (USL) issued by the DoT prescribe further obligations to block content. Under Clause 38 of the USL, for example, ISPs must take measures to prevent the “flow of obscene, objectionable, unauthorised or any other content infringing copy-rights, intellectual property right and international & domestic Cyber laws in any form” over their networks. Moreover, as per Clause 7 of the USL, the licensee is obliged to block subscribers as well as content, as identified by the Licensor (DoT). Failure to comply with license conditions can lead to the cancellation of the telecommunications operator’s license with the DoT, without which it is not permitted to operate in India.

    Findings and Recommendations

    General

    • Most companies’ policies are only tailored towards minimum compliance with national regulations;
    1. As detailed in the above sections, companies are mandated by law to comply with certain procedures including data protection and content restriction policies. While compliance with these regulations also varies from company to company, there are barely any instances of companies taking initiative to ensure better privacy procedures than mandated by law, or to go beyond human rights reporting requirements as detailed in corporate social responsibility regulations. For example, Vodafone was the only company in this index to disclose (even in a limited manner) government requests for user information or for content restriction.
    2. While compliance with regulations is an understandable threshold for companies to maintain, companies should make efforts to at least explain the import of the regulations to their users and explain how their policies are likely to affect their users’ rights.
    • Company policies are usually tailored towards the regulations of specific jurisdictions;
    1. Jurisdiction is a major issue in regulating internet services. Internet service providers may operate and have users in several jurisdictions, but their policies do not always meet the requirements of each jurisdiction in which they operate or where their services are accessed. Even in the case of large ISPs which operate across jurisdictions, the policies may be tailored to specific jurisdictions. Tata Communications Ltd., for example, specifically references the law of the United States of America in its policies, though the same policies may operate for users in other jurisdictions. This is problematic since most company policies make accession to the terms a condition of service, which means that restrictions (or protections, as the case may be) on user rights placed in one jurisdiction can be responsible for similar restrictions across the board in several jurisdictions.
    • Companies do not seek meaningful consent from their users before subjecting them to their policies;
    1. The study highlights the importance of company policies to users’ rights. These policies define the relationship between the service provider and the user, including delimiting the rights available to users and their control over the information collected from them (often automatically). However, most companies make very little effort to obtain meaningful user consent to their policies, including efforts towards educating users about the import of those policies. In many cases, mere use of the service is deemed a sufficient condition for making the policies binding upon users. Even in other cases, where notice of policies is more prominent, few efforts are made to ensure that users fully understand the scope and effect of the policies.
    2. Further, while most companies have committed to informing users of changes to their policies in some form, only Reliance Jio disclosed that it directly informs users of changes to policies, subject to its discretion, while others did not maintain any clear standard for notice of changes to policies. None of the companies provided access to any archive where changes to the company policies could be reviewed.
    3. It is apparent that most companies make little effort to maintain robust or meaningful terms and conditions or privacy policies which include an explanation of how the service could potentially affect a user’s privacy or freedom of expression. Nor do most companies attempt to adopt safeguards for protecting such freedoms beyond complying with regulations. Only Shaadi.com commits to informing users about data protection and how to take reasonable steps for ensuring their online privacy, above and beyond the regulations.
    4. Finally, a study of TCL’s policy indicates that in some cases, the actions or policies of upstream providers (backbone internet providers such as TCL), can affect users’ experience of the internet without their consent or even notice, since these terms must be complied with by the last-mile provider to whom the users may connect.
    5. The formalistic manner in which these policies are framed and worded effectively prevents many users from understanding their import for online freedoms. Companies which are serious about committing to human rights should make their policies easily accessible, and clearly explain their scope and their impact on users’ online human rights in an understandable manner, instead of as a formalistic legal statement inaccessible to lay users. Companies should also take steps towards educating users about how to protect their online freedoms while utilizing the services of the company.
    • Indian regulations hinder transparency and prevent companies from being accountable to their users;
    1. The regulations outlined in Part I of this report are telling in the broad restrictions they place on company transparency, in particular on disclosing any information about government requests for user information, or government or third-party requests for content restriction. These regulations are vaguely worded and broad in their confidentiality requirements, which potentially causes a chilling effect around the release of even aggregate or depersonalized information by companies.
    2. Government regulations often provide the framework around which company policies operate. Regulators must include principles for safeguarding online freedom of expression and privacy as a fundamental part of their regulations. This includes clearly specifying the scope of confidentiality requirements as a response to government requests and to enable some form of transparency and oversight.

    Commitment

    • Most companies do not adequately disclose efforts towards assessing their impact on online freedoms or compliance with the same;
    1. Except Vodafone India (through Vodafone plc, its parent company), none of the companies surveyed in this report have disclosed any assessments of the impact of their services on online freedom of speech or privacy. The lack of such disclosures indicates companies’ lack of concern over ensuring transparency in such issues.
    2. Although no legal framework exists for such assessment, companies must independently assess the impact of their services upon basic online freedoms as the first step towards committing to protecting those freedoms, possibly through a third party such as the Global Network Initiative. The findings from these assessments should, to the extent possible, be made public.
    • Some companies have implemented internal policies for training on and to monitor compliance with online freedoms;
    1. Some companies have disclosed internal mechanisms which emphasise protecting online freedoms, for example through employee training on such issues. These internal policies are an important aspect of accountability for company processes which are generally outside public oversight. Four of the eight companies surveyed, for example, have whistle-blower policies protecting the internal reporting of violations of ‘ethical conduct’. In addition, some companies, for example Tata Communications and Aircel, disclose an internal code of ethics and measures for ensuring compliance with it. Similarly, Vodafone discloses the existence of a Privacy Management System for training employees on the importance of customer privacy.
    2. While some companies have robust internal processes for accountability, companies should also specify that these processes explicitly deal with concerns about user privacy or censorship, above and beyond general requirements for ethical conduct.
    • Companies do not disclose direct efforts to lobby against regulatory policies which negatively impact online freedoms;
    1. None of the companies disclosed efforts towards directly lobbying for clearer regulations on government censorship or online privacy. However, the lack of transparency could possibly be attributed to the nature of the public consultation process run by Indian regulators. Indeed, where the consultation process is made public and transparent, companies have shown willingness to engage with regulators. For example, several of the companies studied in this report, including TCL, Airtel, Aircel and Vodafone India, responded to the TRAI’s call for public comments on the network neutrality framework for the Indian internet.
    2. The obvious implication for regulators is to improve the public consultation process and attempt to engage stakeholders in a more transparent manner. Companies should also push back against regulations which stifle free speech or user privacy – if not through legal challenges, then through public statements against regulatory overreach in these areas.

    • However, companies are making efforts towards better regulation through industry groups, particularly for privacy and data protection;
    1. Most telecommunication companies surveyed in this report are members of some industry body which advocates in favour of protecting online freedoms. In particular, the companies are members of associations such as the Data Security Council of India (DSCI) or the Internet Service Providers Association of India, which commit to protecting different aspects of users’ rights. The DSCI, for example, is an influential industry association which lobbies for better regulations for data protection. However, there are few such associations actively committed to tackling private or governmental censorship online.
    2. While industry bodies are a growing voice in lobbying efforts towards better regulation, companies should also participate in civil society forums which advocate for protecting online freedoms.
    • All companies disclose some forum for grievance redressal; however, none of these specifically addresses freedom of speech and privacy issues;
    1. All the companies surveyed have disclosed some forum for grievance redressal. As indicated above, this forum is also a statutory requirement under both the Reasonable Security Practices Rules and the Intermediaries Guidelines Rules under the IT Act. In most cases, however, these policies do not specify whether and to what extent the grievance redressal forum addresses issues of online censorship or privacy concerns, although some companies, such as Vodafone, have specifically designated Privacy Officers. Only Aircel, TCL and RCL disclosed an appellate process or timelines for resolution of complaints. Further, Aircel is the only company in this report which disclosed aggregate data of complaints received and dealt with.
    2. Companies must take steps towards improving customer protection, particularly in cases involving violations of online freedoms. Grievance redressal by the company is generally the first step towards addressing rights violations and can also prevent future legal problems which the company may face. Further, companies should be transparent in their approach to resolving customer grievances, and should publish aggregate data on complaints received and resolved, classifying, to the extent possible, the nature of the complaints received.

    Freedom of Speech

    • Most companies do not disclose processes or safeguards in case of content restriction requests by private third parties or by the government;
    1. Few of the companies surveyed have any mechanism for checking misuse, by government or third parties, of the blocking procedures prescribed under their terms and conditions. Some policies, such as TCL’s acceptable use policy, specify that the company shall attempt to contact the owner of the content upon notice of a private request for content restriction; however, this requirement is entirely discretionary.
    2. Some companies, such as Rediff, have a well-defined procedure for content restriction in respect of intellectual property claims, but not for general content restriction measures.
    3. However, there is evidence that at least some of the companies do provide some notice to users when the information they attempt to access has been removed or blocked by court order. TCL, for example, redirects users to a notice stating that the information has been blocked as per the provisions of a specific law. However, this practice is not reflected in its policies.
    4. Companies must have internal procedural safeguards to ensure the authenticity of content restriction claims and their compliance with regulations. Companies must commit to objecting to overbroad requests for restriction. One important step in this regard is to clarify the scope of companies’ liability as intermediaries for actions taken in good faith.
    5. Companies must also provide clear and detailed notice to both users attempting to access blocked content as well as to the person whose content has been restricted. Such notice must specify whether the removal was due to a judicial, executive or privacy order, and to the extent possible, should specify the law, regulation or company policy under which the content has been restricted.
    • Companies do not disclose internal processes on content restriction or termination of services taken independently of third party requests;

    1. None of the companies disclosed their process for removal of content independently of third party requests, for the enforcement of their terms. None of the company policies disclose processes for identification or investigation of any violation of their terms. In fact, many companies, including Rediff, Hike Messenger and Vodafone expressly state that services may be terminated without notice and entirely at the discretion of the service provider.
    2. Further, none of the companies surveyed disclose their network management principles or make any public commitments against throttling or blocking of specific content or differential pricing, although some of the telecommunications companies did vouch for some form of network neutrality in their responses to the TRAI’s public consultation on network neutrality regulations. As an outcome of those consultations, regulations now effectively prevent telecom operators from charging discriminatory tariffs based on the nature of content.
    3. Company processes for enforcement of their terms of use must be disclosed. Further, companies should commit to transparency in the enforcement of the terms of use, to the extent possible.

    Privacy

    • Company practices on data protection vary widely – most companies show some commitment towards users’ privacy, but fall short on many grounds
    1. Despite the existence of a privacy regulation (the Reasonable Security Practices Rules), company practices on data collection vary. Some companies, such as TCL, have robust commitments towards important privacy principles including user consent and collection limitation, while, at the other end of the spectrum, RCL does not have a publicly available privacy policy governing the use of its internet services. In fact, none of the companies have data collection policies which contain the minimum safeguards expected of such policies, such as compliance with the OECD Privacy Principles, or the National Privacy Principles laid out in the A.P. Shah Committee Report on Privacy.
    2. Most of the companies surveyed make some form of commitment to notifying users of the collection and use of their data, including specifying the purposes for which information would be used, specifying the third parties with whom such information may be shared, and providing the option to opt out of sharing their data with third parties. However, none of the policies explicitly commit to limiting collection of data to that which is necessary for the service. Further, while companies generally specify that data may be shared with ‘third parties’, usually for commercial purposes, these parties are usually not explicitly named in the policies.
    3. Some of the companies, including TCL and Reliance Jio, also explicitly allow individuals to access, amend or delete the information the companies have stored about them. However, in other cases, users can only delete specific information upon account termination. Moreover, other companies do not specify whether they continue to hold user information beyond the period for which services are provided. In fact, none of the companies except Hike Messenger disclose that they limit the storage of information to a specified time period.
    4. Companies must follow acceptable standards for data protection and user privacy, which, at the very least, require them to commit to collection and use limitations, specify time periods for retaining the data, allowing users to access, amend and delete data and to ensure that data stored is not out-dated or wrong. These policies must clearly specify the third parties with whom information may be shared, and should specify whether and how user consent is to be obtained before sharing of this information.
    • Companies’ processes for sharing of user information upon request by private third parties or governments are not transparent

    1. With the exception of the Vodafone Transparency Report (undertaken by Vodafone India’s holding company), none of the companies studied attempt to disclose any information about their processes for sharing user information with governments. Even in the case of private third parties, only some companies expressly commit to user notification before sharing of information.
    2. Companies should be more transparent about third-party requests for user data. While regulations regarding confidentiality could be clearer, companies should at least indicate that governments have requested user data and present this information in aggregate form.
    • Some companies disclose specific measures taken to secure information collected through the use of their services, including the use of encryption

    1. While all companies collecting sensitive personal information are required to comply with the reasonable security standards laid down under the Rules, companies’ disclosures about measures taken to secure data are generally vague. Rediff, for example, merely specifies that it uses the SSL encryption standard for securing financial data and ‘accepted industry standards’ for securing other data; Vodafone discloses that it takes ‘reasonable steps’ to secure data.
    2. None of the companies surveyed disclose the existence of security audits by independent professionals, or the procedure followed in case of a breach of security. Further, none of the companies commit to encrypting communications with or between users end-to-end.
    3. Companies should specify the safety standards utilized for the handling, transmission and storage of personal information. They must specify that the security used is in compliance with acceptable industry standards or legally prescribed standards. Further, they should ensure, wherever possible, that end-to-end encryption is used to secure the information of their users.

    RDR Company Reports

    Tata Communications Limited
    www.tatacommunications.com

    Industry: Telecommunications
    Services evaluated: Tier-1 Internet Backbone Services, VSNL Mail
    Market Capitalization: INR 194 Billion

    TATA Communications Ltd. (TCL) is a global telecommunications company, headquartered in Mumbai and Singapore. A part of the TATA group of companies, TCL was founded as Videsh Sanchar Nigam Limited (VSNL), which was the first public-access gateway internet provider in India. VSNL was later acquired by the TATA group, and entirely merged with TATA Communications in 2008. TATA continues to retain the VSNL domain for its personal and enterprise email service.

    According to its latest annual report, TCL provides backbone connectivity to over 240 countries and territories and carries close to 24% of the world’s Internet routes. TCL also owns three of the ten submarine cable landing stations in India, responsible for India’s connectivity to the global internet.

    Commitment
    TCL receives average scores on disclosure of its commitment to human rights on the internet, including on disclosures relating to freedom of expression and privacy. Although TCL maintains a corporate social responsibility policy as well as a business responsibility report, which include policy commitments to protecting human rights (as mandated by Indian law), none of its publicly available policies refer to its commitment to the freedom of expression of its users.

    The TATA group maintains a code of conduct, applicable to all of its group companies, including TCL. The code makes an explicit reference to data security and privacy of TATA’s customers. As per that code, the Managing Director and Group CEO is the Chief Ethics Officer, responsible for the implementation of the Code of Conduct.

    TCL’s internal policies concerning internal implementation of human rights, as well as grievance redressal, are more robust than its public policy commitments to the same. As per the TATA group code of conduct, which is applicable to its group companies, TCL provides employee training and conducts ethics awareness workshops at frequent intervals, and also takes other initiatives to ensure compliance with the code of conduct, which includes a commitment to customer privacy and data protection. Further, TCL has a well-articulated whistleblower policy which sets out the processes to be followed in case any employee observes unethical conduct within the company, including violations of the TATA code of conduct. The whistleblower policy commits to protecting any employee who reports unethical conduct under the policy, but contains no explicit references to freedom of speech or censorship issues, or to issues of user privacy.

    Concerning stakeholder engagement, TCL seems to be somewhat involved in engaging with issues of privacy, but makes no commitments on issues of freedom of expression. TCL is a member of the Data Security Council of India, an industry body which makes public commitments towards user privacy and data security, which includes guiding the Indian IT industry on self-regulation on issues of privacy and data security.

    TCL maintains various grievance redressal forums, evidenced through different policies. For example, its consumer charter provides a general forum for addressing grievances, including complaints regarding service outages. However, this does not refer specifically to complaints about censorship or privacy-related concerns. TCL’s Acceptable Use Policy and privacy policy also guide users to specific grievance redressal forums for complaints under those policies. Besides this, there are recorded instances where TCL has advertised grievance redressal mechanisms relating to cases of private or judicial requests for blocking of content. However, TCL does not make any public disclosures about the inputs to or outcomes of its grievance redressal mechanisms.

    Freedom of Expression
    General
    TCL’s Acceptable Use Policy (“AUP”) governs the use of TCL services by its customers, which include downstream providers with which TCL, as a backbone internet provider, interconnects. VSNL mail maintains its own terms and conditions for users, which are available on its website. Both TCL’s AUP and VSNL’s terms and conditions are easily locatable through their websites, are presented in a clear and understandable manner and are available in English.

    TCL does not commit to notifying users of important changes to its terms of use, stating that it may choose to notify its customers of changes to the AUP, either directly or by posting such modifications on its website. VSNL’s policy states that the terms and conditions of use of the webmail service may change without any notice to users. Although TCL is an Indian company and its terms are applicable to its customers worldwide, the AUP contains several references to laws and procedures of the United States of America, such as the US PATRIOT Act, ostensibly due to TATA’s heavy presence in the US market coupled with stricter disclosure requirements in that jurisdiction.

    Content Restrictions and Termination of Services
    The AUP does not place any obligations on TCL to ensure a fair judgement before sanctions such as removal of content, or termination or suspension of services, for violations of its terms of use. Although the AUP identifies categories of content which are prohibited on the service, it also states that TCL may suspend or terminate a user’s account for any action it deems inappropriate or abusive, whether or not stated in its policies. The AUP clearly states that TCL may remove or edit content in violation of the AUP, or content which is harmful or offensive. Although it states that TCL shall attempt to first contact a user who is suspected of violations, TCL may suspend or terminate the services of the customer at its sole discretion. There is evidence, although not stated explicitly in its policies, that TCL provides general notice when content is taken down on its network through judicial order. However, there is no disclosure of any requirement to contact the relevant user in case of takedown of user-generated content in compliance with a judicial order.

    Although TCL has voiced its opinion on network neutrality, for example by issuing public comments to the Telecom Regulatory Authority of India, it does not disclose its policies regarding throttling or degrading of content over its network, or its network management principles.

    As a backbone connection provider, TCL’s major customers include downstream ISPs who connect through TCL’s network. The AUP therefore states that the downstream provider shall ensure that its customers comply with the AUP, failing which TCL may terminate the services of the downstream provider. Importantly, TCL treats violations of the AUP by an end-user as violations by the downstream ISP, making the ISP directly liable for the violations and subject to any actions TCL may take in that regard.
The AUP further expressly states that TCL shall co-operate with appropriate law enforcement agencies and other parties investigating claims of illegal or inappropriate conduct, but does not mention whether this involves taking down content or disconnecting users.

    Technical observations on TCL’s blocking practices in 2015 showed that TCL appeared to be using a proxy server to inspect and modify traffic to certain IP addresses.

    Privacy
    General
    TCL has one privacy policy which covers all services provided by the company, with the exception of VSNL mail, which has its own privacy policy. The policy is easily accessible and available in English. The policy partially discloses that users are notified of changes to the policy; however, any notification of changes is only posted on the website and not provided directly. In addition to the above, TCL also has a separate cookie policy, which contains information about its use of cookies for the collection of user information on its websites. Use of TCL’s services entails acceptance of its privacy policy.

    Disclosure of Collection, Use and Sharing of Personal Information
    TCL, as well as VSNL mail, discloses that it collects users’ personal information, based on the service utilized by them, both as solicited information and as information collected automatically through the use of technologies such as cookies, or through third parties. TCL’s privacy policy states the various purposes for which such personal information might be used, including for the investigation of fraud or unlawful activity, and for the provision of services, including marketing. TCL discloses that it may combine this information prior to use. VSNL does not clearly state the purpose for which information may be collected, nor how it is shared.

    TCL discloses that it may share personal information with affiliates, marketing partners, and service providers, as well as in response to legal processes including court orders or subpoenas, or in any other case which TCL deems necessary or appropriate. Where personal information is shared with third parties, TCL commits to ensuring that third parties (which include third-party downstream carriers) also have appropriate data protection policies. TCL does not disclose its process for responding to orders for interception or to requests for user information from private parties or governmental agencies, nor does it provide any specific or aggregate data regarding the same.

    User control over information
    The policy discloses that TCL explicitly seeks user consent before it transfers data across legal jurisdictions. Although the policy states that TCL may share user information with law enforcement agencies in compliance with legal requests, it does not disclose any process for vetting such requests, nor does it disclose any data (specific or aggregate) about such requests received. With the exception of users in California, USA, TCL does not permit users to access data about requests for their personal information which may have been received from or granted to private third parties. Further, in contrast to most companies studied in this index, TCL discloses that it permits users to access, amend or delete information which the company stores about them. VSNL does not disclose that it allows users to access, amend or delete their personal information collected by VSNL.

    Security
    TCL does not disclose that it uses or permits the use of encryption for communications transmitted through its network, nor does it provide consumers any guidance or disclaimers on data protection.


    Rediff.com India Ltd.
    www.rediff.com
    Industry: Internet Software Services and Media
    Services evaluated: Rediff.com, Rediff Mail, Rediff iShare, Rediff Shopping
    Market Capitalization: USD 6.07 Million

    Rediff.com is a company operating several internet services, including personal and enterprise email services, news services, a media-sharing platform and a shopping platform. It has its headquarters in Mumbai, India. According to the Alexa Index, Rediff.com is the 47th most visited website in India, and the 407th overall. Approximately 87% of its traffic originates from Indian users.

    Commitment
    Of the companies studied in this survey, Rediff.com (“Rediff”) received the lowest scores on commitment indicators. None of Rediff’s publicly available policies, including government-mandated filings, disclose efforts towards protecting online freedoms. Rediff also does not disclose that it maintains a whistleblower policy or a company ethics policy. As a major online media and internet services provider in India, Rediff makes no public commitment towards freedom of speech and user privacy, and has not disclosed any efforts at engaging with stakeholders in this regard. Although the terms of use for various services provided by Rediff disclose the existence of a grievance redressal mechanism, it is only within the bounds of Rule 3 of the Intermediary Guidelines Rules, 2011. The terms of use do not explicitly mention grievances related to online freedoms, nor does the company release any specific or aggregate data about the complaints mechanism. Rediff does not disclose that it undertakes any impact assessment of how its services may affect online freedoms.

    Freedom of expression
    General
    Rediff has an umbrella policy covering the use of all services offered by Rediff.com, as well as separate policies governing the use of its video-sharing platform, its blogging platform and its messaging boards. The use of any Rediff service is construed as acceptance of its terms of use. Rediff discloses that it may change any of its terms of use without prior notification to its users. Rediff’s services are accessible through a Rediffmail account, which does not require verification through any government-issued identification linking online users to their offline identity. The existence of various disparate policies, and their manner and format, somewhat decreases their accessibility.

    Content Restriction and Termination of Services
    Rediff’s General Terms of Use specify content which is prohibited on its various services, which is materially similar to the content prohibited under the guidelines issued under the Information Technology Act. Further, Rediff’s messaging board policy lists a number of vague and broad categories which are prohibited and may be restricted on the forums, including “negatively affecting other participants, disrupt the normal flow of the posting.”

    As per the General Terms of Use, Rediff reserves the right to remove any content posted by users, solely at its own discretion. Rediff’s General Terms of Use do not disclose any process for responding to requests by law enforcement, judicial or other government bodies for the takedown of content. However, the terms of Rediff’s video-sharing platform specify that written substantiation of any complaint from the complaining party is required. Rediff’s process for responding to complaints regarding intellectual property infringement is well detailed in this policy, although the policy does not substantiate the process for responding to other requests for restriction of content from private parties or law enforcement agencies.

    Rediff further reserves the right to terminate the services offered to its users, with or without cause and without notice of the same. Similar to most companies surveyed, Rediff does not disclose its process for responding to requests for restriction of content or services by private parties or by government agencies, nor does it publish specific or aggregate data about restriction of content, the number of requests for takedown received or the number complied with.

    Privacy
    General
    Rediff’s performance on privacy indicators is marginally better than those on freedom of expression. A single privacy policy is applicable to all of Rediff’s services, which is easily accessible through its various websites, including on its homepage. Rediff discloses that any material changes of its privacy policy will be notified prominently. Use of Rediff’s services entails acceptance of its privacy policy.

    Disclosure of Collection, Use and Sharing of Personal Information
    Rediff specifies that it collects both anonymous and personally identifiable information, automatically as well as solicited through its services, including financial information and ‘user preferences and interests’. Rediff does not disclose whether any information so collected is combined for any purpose. It also specifies the purposes for which such information may be used, which include its use ‘to preserve social history as governed by existing law or policy’, or to investigate violations of Rediff’s terms of use. The policy further specifies that Rediff may share information with third parties including law enforcement agencies, or in compliance with court orders or legal process. Rediff discloses that it notifies users in case any personal information is being used for commercial purposes, and gives users the option to opt out of such use. Rediff does not disclose its process for responding to orders for interception or to requests for user information from private parties or governmental agencies, nor does it provide any specific or aggregate data regarding the same.

    User Control over Information
    Rediff discloses that its users may choose to correct, update or delete their information stored with Rediff if they choose to discontinue the use of its services. However, unless users specifically choose to do so, Rediff continues to store user information even after termination of their account.

    Security
    Rediff discloses that it encrypts sensitive information (including financial information) through SSL encryption, and uses ‘accepted industry standards’ to protect other personal information submitted by users, although it does not define what these standards are.


    Vodafone India Limited
    www.vodafone.in
    Industry: Telecommunications
    Services evaluated: Broadband and Narrowband mobile internet services

    Vodafone India Limited is a wholly owned subsidiary of the Vodafone Group Plc., the world’s second largest telecommunications provider. As of March 2016, Vodafone India was the second largest telecommunications provider in India, with a market share of 19.71% of internet subscribers (broadband and narrowband). Vodafone entered the Indian market after acquiring Hutchison Telecom in 2007.

    This survey has only examined the policies of Vodafone India and those policies of Vodafone plc. which may be applicable specifically to Vodafone India.

    Commitment
    Vodafone India Limited (“Vodafone”) scores the highest on the commitment indicators of the companies examined in this survey. While the Vodafone Group (the holding company), examined as part of the global Ranking Digital Rights Index, discloses its compliance with the UN Guiding Principles on Business and Human Rights, Vodafone India does not independently make any such disclosures. The company’s annual report, corporate responsibility policies and business responsibility reports do not disclose any commitments towards online freedoms. However, Vodafone India does disclose the existence of a Privacy Management Framework, under which employees are provided training regarding the data privacy of users. Moreover, Vodafone’s public statements disclose the existence of a privacy impact assessment procedure to ensure ‘data minimisation’ and reduce the risk of breach of privacy. Vodafone is also a member of the Data Security Council of India, an industry body which makes public commitments towards user privacy and data security, including guiding the Indian IT industry on self-regulation on issues of privacy and data security, as well as of the Cellular Operators Association of India, another industry organization which commits to protecting consumer rights, including consumers’ right to privacy.

    Vodafone also discloses a multi-tiered grievance redressal mechanism, which includes an appellate authority as well as a timeline of 39 days for the resolution of complaints. However, the mechanism does not specify whether grievances related to online freedoms may be reported or resolved. In addition, Vodafone has designated a Privacy Officer for redressing concerns under its privacy policy.

    Freedom of Expression
    General
    Of the companies surveyed, Vodafone scored the lowest on disclosures under this head. The terms of use for Vodafone India’s services are not available on its homepage or site-map, nor are they presented in a clear or easily accessible manner. They may be accessed through the Vodafone Telecom Consumers Charter, with different terms of use for pre-paid and post-paid customers. There is no policy specific to the use of internet services over the Vodafone network, nor do these policies make reference to the use of internet services by Vodafone users. Vodafone does not disclose that it provides its users any notification of changes to the policies.

    Content Restriction and Termination of Services
    While the Terms of Use do not specifically refer to online content, Vodafone’s Terms of Use prohibit users from “sending messages” under various categories, which include messages which infringe upon or affect “national or social interest”. Vodafone reserves the right to terminate, suspend or limit the service upon any breach of its Terms of Use or for any reason which Vodafone believes warrants such termination, suspension or limitation. Vodafone does not disclose its process for responding to violations of its terms of use.

    Vodafone does not disclose its process for responding to requests for restriction of content or services by private parties or by government agencies, nor does it publish specific or aggregate data about restriction of content, the number of requests for takedown received or the number complied with. Although the Vodafone group internationally publishes a comprehensive law enforcement disclosure report (making it one of few major internet companies to do so), the report does not contain information on orders for blocking or restricting services or content.

    Vodafone has made public statements of its commitment to network neutrality and against any kind of blocking or throttling of traffic, although it does not have any policies in place for the same.

    As with all telecommunications companies in India, users must be authenticated by a valid government issued identification in order to use Vodafone’s telecommunication services.

    Privacy
    General
    Vodafone India’s privacy policy, which is applicable to all users of its services, is not as comprehensive as some other policies surveyed. It is accessible through the Vodafone India website, and available in English. Vodafone merely discloses that the policy may change from time to time, and does not disclose that it provides users any notice of these changes. Use of Vodafone’s services entails acceptance of its privacy policy.

    Collection, Use and Sharing of Personal Information
    Vodafone’s policy discloses the personal information collected, as well as the purpose and use of such information, and the purpose for which such information may be shared with third parties, including law enforcement agencies. However, Vodafone does not disclose how such information may be collected or for what duration.

    Vodafone India’s privacy policy does not disclose its process for responding to government requests for user information, including for monitoring or surveillance. However, the Vodafone law enforcement disclosure report elaborates upon the same, including the principles followed by Vodafone upon requests for user information or for monitoring their network in compliance with legal orders. However, as per the applicable laws in India, Vodafone does not publish any aggregate or specific data about such requests, although it states that the Indian government has made such requests.

    User Control over Personal Information
    Vodafone does not disclose that it allows users to access, amend, correct or delete any information it stores about its users. It does not disclose if user information is automatically deleted after account termination.

    Security
    Vodafone only discloses that it takes ‘reasonable steps’ to secure user information. Vodafone does not disclose that it employs encryption over its network, or if it allows users to encrypt communications over their network. Vodafone also does not disclose that it provides any guidance to users on securing their communications over their network.


    Reliance Communications Limited

    www.rcom.co.in

    Industry: Telecommunications

    Services evaluated: Broadband and Narrowband mobile internet services

    Market Capitalization: INR 118.35 Billion

    Reliance Communications Limited (“RCL”) is an Indian telecommunication services provider, and a part of the Reliance Anil Dhirubhai Ambani group of companies. RCL is the fourth largest telecommunications provider in India, with a market share of 11.20% of Indian internet subscribers. Reliance also owns one of the ten submarine cable landing stations in India, responsible for India’s connectivity to the global internet.

    Commitment
    RCL does not disclose any policy commitment towards the protection of online freedoms. Although RCL has filed business responsibility reports which include a report on the company’s commitment towards human rights, these do not make any reference to the privacy or freedom of expression of its users either. RCL does not disclose that it undertakes any impact assessment of how its services may affect online freedoms.

    While RCL does maintain a whistle-blower policy for reporting unethical conduct within the company, the policy does not expressly mention that it covers conduct in violation of user privacy or freedom of expression. RCL is a member of at least three industry bodies which work towards stakeholder engagement on issues of privacy and consumer protection and welfare, namely, the Data Security Council of India, the Internet Service Providers Association of India and the Association of Unified Telecom Service Providers of India (although none of these bodies expressly mention that they advocate for freedom of expression).

    RCL maintains a comprehensive manual of practice for redressing consumer complaints. The manual of practice specifies the procedure for grievance redressal, the timelines within which grievances should be resolved, and the appellate authorities that can be approached; however, it does not specify whether complaints regarding privacy or freedom of expression are covered under this policy.

    Freedom of Expression
    General
    RCL’s terms of use for its internet services are spread across its Telecom Consumer’s Charter, its Acceptable Use Policy (“AUP”) and its Consumer Application Form (“CAF”), which are not easily accessible through the RCL website. The charter contains the terms for its post-paid and pre-paid services as well as the terms for broadband internet access. RCL discloses that it may change the terms of use of its services without any prior notification to its users.

    Content Restriction and Termination of Services
    RCL’s AUP lists certain categories of content that are not permitted, including vague and undefined categories such as ‘offensive’, ‘abusive’ or ‘indecent’. In the event that a user fails to comply with its terms of use, RCL discloses that their services may be terminated or suspended. Further, as per the CAF, RCL reserves the right to terminate, suspend or vary its services at its sole discretion and without notice to users. The terms of use also require the subscriber to indemnify RCL against any costs or damages arising out of a breach of the terms by any person, with or without the consent of the subscriber.

    RCL discloses that upon receiving complaints or any intimation of a violation of its terms of use, it shall investigate the matter, which may also entail suspension of the user’s services. RCL does not disclose that it provides users any notice of such investigation or reasons for suspension or termination of services. RCL does not disclose specific or aggregate data regarding restriction of content upon requests by private parties or governmental authorities.

    RCL does not disclose its network practices relating to throttling or prioritization of content or services on its network, although in its comments to the Telecom Regulatory Authority of India (TRAI) it supported regulation prohibiting throttling and prioritization of traffic. At the same time, RCL was the network partner for Facebook’s Free Basics platform, which was to provide certain services free of cost through the RCL network. The Free Basics initiative was abandoned after TRAI prescribed regulations prohibiting price discrimination by ISPs.

    Privacy
    RCL scores the lowest on this indicator among the companies surveyed. RCL does not disclose that it has a privacy policy governing the use of its internet services. RCL’s AUP only discloses that it may access and use personal information collected through its services in connection with any investigation of a violation of its AUP, and may share such information with third parties for this purpose, as it deems fit. Further, RCL’s terms of use disclose that it may provide user information to third parties, including security agencies, subject to statutory or regulatory factors, without any intimation to the user.

    Security
    RCL does not disclose any information on the security mechanisms in place in its network, including whether communications over the network are encrypted or whether end-to-end encrypted communications are allowed.

     

    Shaadi.Com

    www.shaadi.com

    Industry: Internet Marriage Arrangement

    Services evaluated: Online Wedding Service

    Shaadi.com, a subsidiary of the People group, is an online marriage arrangement service launched in 1996. While India is its primary market, the service also operates in the USA, UK, Canada, Singapore, Australia and the UAE. As of 2017, it was reported to have a user base of 35 million.

    Governance
    Shaadi.com makes no explicit commitment to freedom of expression and privacy, and does not disclose whether it has any oversight mechanisms in place. The company also does not disclose whether it has any internal mechanisms such as employee training on freedom of expression and privacy issues, or a whistleblower policy. Further, there are no disclosures as to any process of impact assessment for privacy and freedom of expression related concerns. The company does not disclose if it is part of any multi-stakeholder initiatives, or other organizations that engage with freedom of expression and privacy issues, or groups that are impacted by the company’s business. While details of a Grievance Officer are provided in the company’s Privacy Policy, it is not clearly disclosed if the mechanism may be used for freedom of expression or privacy related complaints. The company makes no public report of the complaints that it receives, and provides no clear evidence that it responds to them.

    Freedom of Expression
    General
    The Terms of Service are easily locatable on the website, and are available in English. The Terms are presented in an understandable manner, with section headers, but provide no additional guidance such as summaries, tips or graphics to explain the terms. Shaadi.com makes no disclosure about whether it notifies users of changes to the Terms, or how it may do so. Shaadi.com also does not maintain any public archives or change log.

    Content Restriction and Termination of Services
    Shaadi.com discloses an indicative list of prohibited activities and content, but states that it may terminate services for any reason. Shaadi.com makes no disclosures about the process it uses to identify violations and enforce rules, or whether any government or private entity receives priority consideration in flagging content. Shaadi.com does not disclose data about the volume and nature of content and accounts it restricts. Shaadi.com makes no disclosures about its process for responding to requests from any third parties to restrict any content or users. The Terms do not disclose the basis on which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to such requests. Shaadi.com makes no commitment to push back on inappropriate or overbroad requests from the government or private entities. Shaadi.com discloses that it notifies users via email when restricting their accounts.

    Shaadi.com also does not publish any data about the requests it receives and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts or URLs affected, the types of subject matter associated with the requests, etc. Registration for the service requires a mobile number, which may be tied to offline identity.

    Privacy
    General
    The Privacy Policy is easily locatable on the website, and is available in English. The Policy is presented in an understandable manner, with section headers, but provides no additional guidance such as summaries, tips or graphics to explain the terms.

    Shaadi.com discloses that material changes to the Privacy Policy will be notified by posting a prominent link on the Homepage. Further, if personally identified information is used in a materially different manner from that stated at the time of collection, Shaadi.com commits to notify users by email. However, Shaadi.com does not disclose a time frame within which it notifies users prior to the changes coming into effect. Shaadi.com also does not maintain any public archives or change log.

    Collection, Use and Sharing of Personal Information
    Shaadi.com clearly discloses the types of personal and non-personal information it may collect, but does not explicitly disclose how it collects the information. There is no commitment to limit collection only to information that is relevant and necessary to accomplish the purpose of the service.

    While the Privacy Policy states the terms of sharing information, it makes no type-specific disclosures about how different types of user information may be shared, or the purposes for which they may be shared. Shaadi.com also does not disclose the types of third parties with which information may be shared. Shaadi.com clearly discloses that it may share user information with government or legal authorities.

    The Privacy Policy discloses the purposes for which the information is collected, but does not disclose if user information is combined from different services. Shaadi.com makes no commitment to limit the use of information to the purpose for which it was collected. Shaadi.com makes no disclosures about how long it retains user information. It does not disclose whether it retains de-identified information, or its process for de-identification.

    Shaadi.com does not disclose whether it collects information from third parties through technical means, how it does so, or its policies about use, sharing, retention, etc. Shaadi.com does not make any disclosures about its processes for responding to third party requests for user information. The Privacy Policy does not disclose the basis on which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to such requests. Shaadi.com makes no commitment to push back on inappropriate or overbroad requests from the government or private entities.

    Shaadi.com also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, the type of authority or legal process through which the request was made, etc.

    User Control over Information

    Shaadi.com does not disclose the time frame within which it may delete user information, if at all, after users terminate their account. Shaadi.com does not disclose whether users can control the collection of information by Shaadi.com. The Policy states that users are allowed to remove both public and private information from the database. However, certain (unspecified) financial information and account-related information submitted at the time of registration may not be removed or changed.

    Shaadi.com does not disclose if users are provided options to control how their information is used for targeted advertising, or if targeted advertising is off by default.

    Shaadi.com does not disclose whether users may access a copy of their information, or what information may be available. Shaadi.com does not disclose whether it notifies users when their information is sought by government entities or private parties.

    Security
    Shaadi.com discloses that it follows generally accepted industry standards to protect personal information. Employees are granted access on a need-to-know basis. Shaadi.com does not disclose whether it has a security team that audits the service for security risks, or whether it commissions third party audits.

    Shaadi.com does not disclose whether it has any process, policy or mechanism in place for researchers to submit security vulnerabilities, and how it would respond to them. Shaadi.com does not explicitly commit to notify the relevant authorities without undue delay in case of a data breach. Shaadi.com does not disclose whether it notifies affected users about breaches, and any steps it may take to minimize impact.

    Shaadi.com discloses that sensitive information, such as card numbers, is transmitted using the Secure Socket Layer (SSL) protocol, but not whether all user communications are encrypted by default. Shaadi.com does not disclose whether it uses advanced authentication methods to prevent unlawful access. Shaadi.com does not disclose whether users can view their recent account activity, or if it notifies users about unusual activity and possibly unauthorized access.
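    A brief technical note on such “Secure Socket Layer” disclosures: SSL proper (versions 2 and 3) has long been deprecated, and what these policies describe in practice is its successor, TLS. A minimal Python sketch (illustrative only, unrelated to any company’s actual server configuration) showing that a modern client stack refuses legacy SSLv3 by default:

```python
import ssl

# A default client-side context, as created by current Python builds.
ctx = ssl.create_default_context()

# The legacy SSLv3 protocol is disabled out of the box, so only TLS
# versions can be negotiated with a server.
sslv3_disabled = bool(ctx.options & ssl.OP_NO_SSLv3)
print(sslv3_disabled)
```

    In other words, a policy that promises “Secure Socket Layer” transmission today is, in practice, describing encryption in transit via TLS.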

    Shaadi.com publishes privacy and security tips on its website which provide guidance about risks associated with the service, and how they may be avoided.

    Hike Messenger
    www.get.hike.in
    Industry: Internet Instant Messaging
    Services evaluated: Instant Messaging and VoIP application

    Hike Messenger is an Indian cross-platform messaging application for smartphones. Users can exchange text messages, communicate over voice and video calls, and exchange pictures, audio, video and other files. Hike launched in November 2012, and as of January 2016 it became the first Indian internet company to cross 100 million users in India. It logs a monthly messaging volume of 40 billion messages. Hike’s parent, Bharti SoftBank, is a joint venture between Bharti Enterprises and SoftBank, a Japanese telecom firm. As of August 2016, Hike was valued at $1.4 billion.

    Governance

    Hike makes no explicit commitment to freedom of expression and privacy, and does not disclose whether it has any oversight mechanisms in place. Hike also does not disclose whether it has any internal mechanisms such as employee training on freedom of expression and privacy issues, or a whistleblower policy. Further, there are no disclosures as to any process of impact assessment for privacy and freedom of expression related concerns. Hike does not disclose if it is part of any multi-stakeholder initiatives, or other organizations that engage with freedom of expression and privacy issues, or groups that are impacted by Hike’s business.

    Hike’s Terms of Use provide contact details for submitting queries and complaints about the usage of the application. It notes that the complaints will be addressed in the manner prescribed by the Information Technology Act, 2000 and rules framed thereunder. The Terms do not disclose if the mechanism may be used for freedom of expression or privacy related issues. Hike makes no public report of the complaints that it receives, and provides no clear evidence that it responds to them.

    Freedom of Expression

    General

    The Terms of Service are easily locatable on the website, and are available in English. The terms are presented in an understandable manner, with section headers, and often provide examples to explain the terms. Hike may make changes to the Terms at its discretion without any prior notice to the users. Hike does not disclose whether users are notified after changes have been made, or whether it maintains a public archive or change log.

    Though the Terms disclose a range of content and activities prohibited by the service, Hike may delete content for any reason, at its sole discretion. Further, Hike may terminate or suspend the use of the Application at any time without notice to the user.

    Content Restriction and Termination of Services
    Hike makes no disclosures about the process it uses to identify violations and enforce its rules, or whether any government or private entity receives priority consideration in flagging content. Hike does not disclose data about the volume and nature of content and accounts it restricts.

    Hike makes no disclosures about its process for responding to requests from any third parties to restrict any content or users. The Terms do not disclose the basis on which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to such requests. Hike makes no commitment to push back on inappropriate or overbroad requests from the government or private entities.

    Hike also does not publish any data about the requests it receives and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, etc.

    Identity Policy

    A mobile number is required to sign up for the service, which could potentially be connected to offline identity.

    Privacy

    General
    The Privacy Policy is easily locatable on the website, and is available in English. The Policy is presented in an understandable manner, with section headers, and often provides examples to explain the terms.

    Hike discloses that changes to the Privacy Policy will be posted on Hike website, and does not commit to directly notifying users of changes. Users are advised to review the website from time to time to remain aware of the terms. Hike does not disclose a time frame within which it may notify changes prior to them coming into effect. Hike also does not disclose whether it maintains a public archive or change log.

    Collection, Use and Sharing of Information
    Hike clearly discloses the types of user information it collects. However, Hike makes no explicit commitment to limit collection only to information that is relevant and necessary to accomplish the purpose of the service.

    Hike discloses that user information may be shared for a variety of purposes, but does not disclose the type, or names of third parties that may be given access to the information. Hike discloses that it may share user information with government entities and legal authorities.

    The Privacy Policy states the purposes for which user information is collected and shared, but makes no commitment to limit the use of information to the purpose for which it was collected.

    Hike discloses that undelivered messages are stored with Hike’s servers till they are delivered, or for 30 days, whichever is earlier. Messages or files sent through the service also reside on Hike’s servers for a short (unspecified) period of time till the delivery of the messages or files is complete. Hike does not disclose the duration for which it retains information such as profile pictures and status updates. Hike does not disclose whether it retains de-identified information, or its process for de-identification. Hike discloses that, subject to any applicable data retention laws, it does not retain user information beyond 30 days from deletion of the account.

    Hike does not disclose whether it collects information from third parties through technical means, and how it does so, or its policies about use, sharing, retention etc.

    Hike does not make any disclosures about its processes for responding to third party requests for user information. The Privacy Policy does not disclose the basis on which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to such requests. Hike makes no commitment to push back on inappropriate or overbroad requests from the government or private entities.

    Hike also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, the type of authority or legal process through which the request was made, etc.

    Hike does not disclose whether it notifies users when their information is sought by government entities or private parties.

    User Control over Information

    Hike discloses that the user may choose not to submit certain user information, but also notes that this may hinder use of the application. Hike makes no disclosure about whether users may request deletion of their user information.

    Hike discloses that users may opt out or opt in for specific services or products which may allow user information to be used for marketing or advertising purposes. Hike does not disclose if targeted advertising is on by default.

    Hike does not disclose whether users may obtain a copy of their user information.

    Security

    Hike discloses that it has security practices and procedures to limit employee access to user information on a need-to-know basis only. Hike does not disclose whether it has a security team that audits the service for security risks, or whether it commissions third party audits. Hike does not disclose whether it has any process, policy or mechanism in place for researchers to submit security vulnerabilities, and how it would respond to them.

    Hike does not explicitly commit to notify the relevant authorities without undue delay in case of a data breach, but discloses that it may attempt to notify the user electronically. However, the company does not disclose the steps it would take to minimize the impact of a data breach.

    Hike does not disclose if transmission of user information is encrypted by default, or whether it uses advanced authentication methods to prevent unlawful access. Hike does not disclose whether users can view their recent account activity, or if it notifies users about unusual activity and possibly unauthorized access.

    Hike does not publish any materials that educate users about cyber risks relevant to its service.

    Aircel
    www.aircel.com
    Industry: Telecommunications
    Services evaluated: Broadband and Narrowband Mobile Internet Services

    The Aircel group is a joint venture between Maxis Communications Berhad of Malaysia and Sindya Securities & Investments Private Limited. It is a GSM mobile service provider with a subscriber base of 65.1 million users. The company commenced operations in 1999 and has since become a pan India operator providing a host of mobile voice and data telecommunications services.

    Governance

    Aircel’s Terms and Conditions state that it is a duty of all service providers to assure that the privacy of their subscribers (not affecting national security) shall be scrupulously guarded. However, the company makes no similar commitment to freedom of expression.

    Aircel also does not disclose whether it has any oversight mechanisms in place. However, Aircel does disclose that it has established a Whistleblower Policy and an Ethics Hotline. Further, the Privacy Policy states that employees are expected to follow a Code of Conduct and Confidentiality Policies in their handling of user information. There are no disclosures as to any process of impact assessment for privacy and freedom of expression related concerns. Aircel does not disclose if it is part of any multi-stakeholder initiatives, or any other organizations that engage with freedom of expression and privacy issues, or groups that are impacted by Aircel’s business.

    Aircel has a process for receiving complaints on its website, under the Customer Grievance section. However, it is not clearly disclosed whether this process applies to freedom of expression and privacy related issues. Though Aircel does disclose information such as the number of complaints received and redressed and the number of appeals filed, it makes no disclosure of whether any complaints were specifically related to freedom of expression or privacy.

    Freedom of Expression
    General
    The Terms and Conditions are not easily locatable: they are found as part of a larger document titled Telecom Consumers Charter, which is itself posted as an inconspicuous link on the Customer Grievance page. The Terms are provided only in English, even though it is likely that Aircel has a large Hindi-speaking user base. The Terms are presented in an understandable manner, with section headers, but provide no additional guidance such as summaries, tips or graphics to explain the terms.

    Aircel discloses that it may make changes to the Terms without notice to users, or with written notice addressed to the last provided address, at its sole discretion. Aircel does not disclose if it maintains a public archive or change log.

    Content Restriction and Termination of Services
    The Terms prohibit certain activities, but Aircel discloses that it may terminate services for a user at its sole discretion for any reason, including a violation of its Terms.

    Aircel makes no disclosures about the process it uses to identify violations and enforce its rules, or whether any government or private entity receives priority consideration in flagging content. Aircel does not disclose data about the volume and nature of content and accounts it restricts.

    Aircel makes no disclosures about its process for responding to requests from third parties to restrict content or users. The Terms do not disclose the basis on which Aircel may comply with government or private party requests, nor whether any due diligence is conducted before responding to such requests. Aircel makes no commitment to push back on inappropriate or overbroad requests from the government or private entities. Aircel does not disclose if it notifies users when they try to access content that has been restricted, and the terms expressly waive users’ right to notice if their services are suspended or terminated.

    Aircel does not disclose its policy on network management, or whether it prioritizes, blocks, or delays certain types of traffic, applications, protocols, or content for reasons beyond assuring quality of service and reliability. Notably, in its comments to the Telecom Regulatory Authority of India on the issue of regulation of Over-The-Top (OTT) services, it argued for the right of Telecom Service Providers to negotiate commercial agreements with OTT providers, as well as the right to employ non-price differentiation and network management practices.

    Aircel discloses that it may terminate its services in whole or in part, at its sole discretion, and for any reason, including directions from the government. Aircel does not disclose its process for responding to requests for network shutdowns, or the legal authority that makes such requests, nor does it commit to push back on them. The terms waive the users’ right to notice when services are suspended. Aircel also provides no data about the number of requests received or complied with.

    Aircel discloses that it requires government approved identification in order to perform verifications.

    Privacy
    General

    The Privacy Policy is easily locatable on the website, and is available in English. It is likely that Aircel has a large Hindi- and vernacular-speaking user base; however, the website does not provide any other language versions of the Privacy Policy. The Policy is presented in an understandable manner, with section headers, but provides no additional guidance such as summaries, tips or graphics to explain the terms.

    The Privacy Policy states that changes will be reflected on the website, and makes no disclosure about whether it will directly notify users. Aircel does not disclose a time frame within which it may notify users prior to the changes coming into effect. Aircel also does not maintain any public archives or change log.

    Collection, Use and Sharing of Information

    Though Aircel discloses the types of user information it may collect, it does not explicitly disclose how it collects the information. Aircel makes no commitment to limit collection only to information that is relevant and necessary to accomplish the purpose of the service.

    While the Privacy Policy states the terms of sharing information, it makes no type-specific disclosures about how different types of user information may be shared. Further, while Aircel broadly discloses the type of third parties with which it may share information, it does not provide a specific list of names. Aircel clearly discloses that it may share user information with government or legal authorities.

    The Privacy Policy broadly states the purposes for which the information is collected, but does not disclose in more specific terms the purposes for which various types of user information may be collected. Aircel also does not disclose if user information is combined from different services. Aircel makes no commitment to limit the use of information to the purpose for which it was collected.

    Aircel makes no disclosures about how long it retains user information, and the Privacy Policy states that it may retain information for as long as it requires. Aircel does not disclose whether it retains de-identified information, or its process for de-identification. Aircel does not disclose the time frame within which it may delete user information, if at all, after users terminate their account.

    Aircel does not disclose whether it collects information from third parties through technical means, how it does so, or its policies about use, sharing, retention, etc. Aircel does not make any disclosures about its processes for responding to third party requests for user information. The Privacy Policy does not disclose the basis on which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to such requests. Aircel makes no commitment to push back on inappropriate or overbroad requests from the government or private entities.

    Aircel also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, the type of authority or legal process through which the request was made, etc.

    Aircel does not disclose whether it notifies users when their information is sought by government entities or private parties.

    User Control over Information

    Aircel does not disclose whether users can control the collection of information by Aircel. The Privacy Policy discloses that if information is not provided, or consent for usage is withdrawn, Aircel reserves the right to discontinue the service for which the information is sought. Aircel does not disclose if users can request the deletion of information.

    Aircel discloses that users may opt in to or opt out of receiving telemarketing communications, and that such communications require a specific opt-in. However, Aircel does not disclose any options with respect to the use of user information for such purposes: users may only choose whether to receive commercial communications, and have no control over whether their information is used in the first place.

    Aircel does not disclose whether users may access a copy of their information, or what information may be available.

    Security

    Aircel discloses that it has adopted measures to protect information from unauthorized access and to ensure that personal information is accessible to its employees or its partners’ employees strictly on a need-to-know basis. Aircel discloses that its employees are bound by a Code of Conduct and Confidentiality Policies. Aircel does not disclose whether it has a security team that audits the service for security risks, or whether it commissions third party audits.

    Aircel does not disclose whether it has any process, policy or mechanism in place for researchers to submit security vulnerabilities, or how it would respond to them.

    Aircel does not explicitly commit to notify the relevant authorities without undue delay in case of a data breach. Aircel does not disclose whether it notifies affected users about breaches, or any steps it may take to minimize impact.

    Aircel discloses that highly confidential information such as passwords and credit card numbers are transmitted using the Secure Socket Layer protocol. However, Aircel does not disclose if all user communications are encrypted by default. Aircel also does not disclose whether it uses advanced authentication methods to prevent unlawful access. Aircel does not disclose whether users can view their recent account activity, or if it notifies users about unusual activity and possibly unauthorized access.

    Aircel publishes information about Security Awareness and Alerts that details various threats on the internet, and how they may be countered.

    Reliance Jio
    www.jio.com
    Industry: Telecommunications
    Services evaluated: Broadband and Narrowband mobile internet services

    Reliance Jio Infocomm Ltd. is a wholly owned subsidiary of Reliance Industries Ltd., and provides a wireless 4G LTE network across all 22 telecom circles in India. It does not offer 2G/3G based services, making it India’s only 100% VoLTE network. Jio began a massive rollout of its service in September 2016, and was reported to have reached 5 million subscribers in its first week. As of October 25, 2016, Jio is reported to have reached 24 million subscribers.

    Governance
    Jio does not score well in the Governance metrics. It makes no explicit commitment to freedom of expression and privacy, and does not disclose whether it has any oversight mechanisms in place. The company also does not disclose whether it has any internal mechanisms in place such as employee training on freedom of expression and privacy issues, or a whistleblower policy. Further, there are no disclosures as to any process of impact assessment for privacy and freedom of expression related concerns. The company does not disclose if it is part of any multi-stakeholder initiatives, or other organizations that engage with freedom of expression and privacy issues, or groups that are impacted by the company’s business.

    Jio’s website discloses a process for grievance redressal, along with the contact details of its Grievance Officer. The Regulatory Policy also lays down a Web Based Complaint Monitoring System for customer care. However, neither mechanism clearly discloses that the process may be used for freedom of expression and privacy issues. In fact, the grievance redressal process under the Terms and Conditions seems primarily meant for copyright owners alleging infringement. Jio makes no public report of the complaints it receives, and provides no clear evidence that it responds to them.

    Freedom Of Expression

    General
    The Terms of Service are easily locatable on the website, and are available in English. It is likely that Jio has a large Hindi and vernacular speaking user base. However, the website does not have any other language versions of the Terms of Service.

    The Terms are presented in an understandable manner, with section headers, but provide no additional guidance such as summaries, tips or graphics to explain the terms.

    Jio discloses that changes to the Terms of Service may be communicated through a written notice to the last address given by the Customer, or through a public notice in print media. However, this may be at Jio’s sole discretion. Further, Jio does not disclose a time frame within which it notifies users prior to the changes coming into effect. Jio also does not maintain any public archives or change log.

    The Terms of Service disclose a range of proscribed activities, and state that any violation of the Terms may be grounds to suspend or terminate services. However, Jio makes no disclosures about its process for identifying violations and enforcing rules, or whether any government or private entity receives priority consideration in flagging content. There are no clear examples provided to help users understand the provisions.

    Jio does not disclose data about the volume and nature of content and accounts it restricts.

    Content Restriction and Termination of Services
    Jio makes no disclosures about its process for responding to requests from third parties to restrict content or users. The Terms do not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to requests. Jio makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities. Jio does not disclose if it notifies users when they try to access content that has been restricted, or if it notifies users when their account has been restricted.

    Jio also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts or URLs affected, the types of subject matter associated with the requests, etc.

    Jio does not disclose its policy on network management, or whether it prioritizes, blocks, or delays certain types of traffic, applications, protocols, or content for reasons beyond assuring quality of service and reliability.

    Jio makes no disclosures about its policy on network shutdowns, or why it may shut down service to a particular area or group of users. Jio does not disclose its process for responding to such requests, or the legal authority that makes the requests, or whether it notifies users directly when it restricts access to the service. It also provides no data about the number of requests received or complied with.

    Jio requires that users verify their identity with government issued identification such as Passport, Driver’s License or Aadhaar.

    Privacy

    General
    The Privacy Policy is easily locatable on the website, and is available in English. It is likely that Jio has a large Hindi and vernacular speaking user base. However, the website does not have any other language versions of the Privacy Policy.

    The Policy is presented in an understandable manner, with section headers, but provides no additional guidance such as summaries, tips or graphics to explain the terms.

    Jio commits to make all efforts to communicate significant changes to the policy, but does not disclose its process for doing so. The policy recommends that users periodically review the website for any changes. Jio does not disclose a time frame within which it notifies users prior to the changes coming into effect. Jio also does not maintain any public archives or change log.

    Collection, Use and Sharing of Information
    Jio clearly discloses the types of personal and non personal information it may collect, but does not explicitly disclose how it collects the information. There is no commitment to limit collection only to information that is relevant and necessary to accomplish the purpose of the service.

    Jio commits to not sell or rent user information to third parties, but discloses that it may use and share non personal information at its discretion.

    Jio discloses the broad circumstances in which it may share personal information with third parties and the types of entities it may disclose such information to. The policy states that such partners operate under contract and strict confidentiality and security restrictions. However, it does not specifically disclose the names of third parties it shares information with. Jio clearly discloses that it may share user information with government or legal authorities.

    Jio discloses that it may share user information with third party websites or applications at the behest of the user (for instance, when logging into services with a Jio account). It discloses that Jio will provide notice to the user, and obtain consent regarding the details of the information that will be shared. In such a situation, the third party’s privacy policy would be applicable to the information shared.

    The Privacy Policy broadly states the purposes for which the information is collected, but does not disclose if user information is combined from different services. In detailing the types of third parties that Jio may share user information with, Jio also discloses the respective purposes for sharing. However, Jio makes no commitment to limit the use of information to the purpose for which it was collected.

    Jio does not disclose whether it collects information from third parties through technical means, how it does so, or its policies about use, sharing, retention, etc.

    Jio does not make any disclosures about its processes for responding to third party requests for user information. The Privacy Policy does not disclose the basis under which it may comply with government or private party requests, nor whether any due diligence is conducted before responding to the requests. Jio makes no commitment to pushback on inappropriate or overbroad requests from the government, or private entities.

    Jio also does not publish any data about the requests it receives, and how it responds to them. This could include, for instance, the number of requests received, the number of requests complied with, the number of accounts affected, the type of authority or legal process through which the request was made, etc.

    Jio does not disclose whether it notifies users when their information is sought by government entities or private parties.

    User Control over Information

    Jio makes no disclosures about how long it retains user information. It does not disclose whether it retains de-identified information, or its process for de-identification. Jio does not disclose the time frame within which it may delete user information, if at all, after users terminate their account.

    Jio does not disclose whether users can control the collection of information by Jio. The Privacy Policy does allow requests for access, correction or deletion of user information, but also notes that deletion of certain (unspecified) information may lead to termination of the service. However, deletion of information would be subject to any applicable data retention laws, law enforcement requests, or judicial proceedings. Further, a request may be rejected if it is extremely difficult to implement technically, or if it may risk the privacy of others.

    Though the Privacy Policy allows for access requests, it does not disclose what user information may be obtained, or whether it may be made available in a structured data format. Jio does not disclose if targeted advertising is on by default, or whether users can control how their information is used for these purposes.

    Jio discloses that it has adopted measures to protect information from unauthorized access and to ensure that personal information is accessible to employees or partners’ employees strictly on a need-to-know basis. Jio does not disclose whether it has a security team that audits the service for security risks, or whether it commissions third party audits.

    Jio discloses that it has reasonable security practices and procedures in place, in line with the international standard IS/ISO/IEC 27001, to protect data and information. Jio does not disclose whether it has any process, policy or mechanism in place for researchers to submit security vulnerabilities, or how it would respond to them. Jio does not explicitly commit to notify the relevant authorities without undue delay in case of a data breach. Jio does not disclose whether it notifies affected users about breaches, or any steps it may take to minimize impact.

    Jio does not disclose if transmission of user information is encrypted by default, or whether it uses advanced authentication methods to prevent unlawful access. Jio does not disclose whether users can view their recent account activity, or if it notifies users about unusual activity and possibly unauthorized access.

    Jio does not publish any materials that educate users about cyber risks relevant to its service.


    For more information about the detailed methodology followed, please see - https://rankingdigitalrights.org/wp-content/uploads/2016/07/RDR-revised-methodology-clean-version.pdf.

    Internet Users Per 100 People, World Bank, available at http://data.worldbank.org/indicator/IT.NET.USER.P2.

    Telecommunications Indicator Report, Telecom Regulatory Authority of India, available at  http://www.trai.gov.in/WriteReadData/PIRReport/Documents/Indicator_Reports.pdf.

    The upstaging of extant telcos did, however, lead to allegations of anti-competitive practices by both Jio as well as existing telcos such as Vodafone and Airtel. See http://thewire.in/64966/telecom-regulator-calls-time-out-as-reliance-jio-coai-battle-turns-anti-consumer/.

    Get Ready for India’s Internet Boom, Morgan Stanley, available at http://www.morganstanley.com/ideas/rise-of-internet-in-india.

    Circular on Business Responsibility Reports, Securities and Exchange Board of India, (August 13, 2012), available at http://www.sebi.gov.in/cms/sebi_data/attachdocs/1344915990072.pdf.

    FAQ on Corporate Social Responsibility, Ministry of Corporate Affairs, available at https://www.mca.gov.in/Ministry/pdf/FAQ_CSR.pdf.

    Govind vs. State of Madhya Pradesh, (1975) 2 SCC 148; R. Rajagopal vs. State of Tamil Nadu, (1994) 6 S.C.C. 632; PUCL v. Union of India, AIR 1997 SC 568; Distt. Registrar & Collector vs. Canara Bank, AIR 2005 SC 186.

    Justice K.S. Puttaswamy (Retd.) & Another vs. Union of India & Others, available at http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841.

    PUCL v Union of India, AIR 1997 SC 568.

    According to Section 2(w) of the IT Act, “Intermediary” with respect to any particular electronic records, means “…any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web hosting service providers, search engines, online payment sites, online-auction sites, online market places and cyber cafes.”

    See http://cis-india.org/internet-governance/resources/it-procedure-and-safeguards-for-interception-monitoring-and-decryption-of-information-rules-2009

    Rule 23, Interception Rules.

    Rule 19 & 20, Interception Rules.

    Rule 24, Interception Rules.

    See http://tikona.in/sites/default/files/pdf_using_mpdf/1-ISP%20Agreement%20Document.pdf.

    Pranesh Prakash and Jarpreet Grewal, How India Regulates Encryption, Centre for Internet and Society, (October 30, 2015) available at  http://cis-india.org/internet-governance/blog/how-india-regulates-encryption.

    See http://www.wipo.int/edocs/lexdocs/laws/en/in/in098en.pdf.

    As clarified in a Central Government Press Note, this does not apply to corporates collecting data from other corporations, but only to those handling data directly from natural persons. See http://meity.gov.in/sites/upload_files/dit/files/PressNote_25811.pdf.

    Section 79 – ‘Exemption from liability of intermediary in certain cases - (1) Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-sections (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link hosted by him.

    (2) The provisions of sub-section (1) shall apply if-

    (a) the function of the intermediary is limited to providing access to a communication system over which information made available by third parties is transmitted or temporarily stored; or

    (b) the intermediary does not- (i) initiate the transmission, (ii) select the receiver of the transmission, and (iii) select or modify the information contained in the transmission;

    (c) the intermediary observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe in this behalf.

    (3) The provisions of sub-section (1) shall not apply if-

    (a) the intermediary has conspired or abetted or aided or induced whether by threats or promise or otherwise in the commission of the unlawful act (ITAA 2008);

    (b) upon receiving actual knowledge, or on being notified by the appropriate Government or its agency that any information, data or communication link residing in or connected to a computer resource controlled by the intermediary is being used to commit the unlawful act, the intermediary fails to expeditiously remove or disable access to that material on that resource without vitiating the evidence in any manner.

    Explanation:- For the purpose of this section, the expression "third party information" means any information dealt with by an intermediary in his capacity as an intermediary.’

    Information Technology (Intermediaries guidelines) Rules, 2011, available at http://dispur.nic.in/itact/it-intermediaries-guidelines-rules-2011.pdf.

     

    AIR 2015 SC 1523.

    See http://cis-india.org/internet-governance/resources/information-technology-procedure-and-safeguards-for-blocking-for-access-of-information-by-public-rules-2009.

    License Agreement For Unified License, available at  http://www.dot.gov.in/sites/default/files/Amended%20UL%20Agreement_0_1.pdf?download=1.

    http://www.trai.gov.in/WriteReadData/WhatsNew/Documents/Regulation_Data_Service.pdf.

    OECD Privacy Principles, available at  http://oecdprivacy.org/; Report of the Group of Experts on Privacy, Planning Commission of India, available at http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf.

    TATA Communications Annual Report 2016, available at https://www.tatacommunications.com/sites/default/files/FIN-AnnualReport2015-16-AR-20160711.pdf.

    Submarine Cable Networks Data, available at http://www.submarinenetworks.com/stations/asia/india.

    National Voluntary Guidelines on Social, Environmental and Economic Responsibilities of Business, Ministry of Corporate Affairs, Government of India; SEBI Amendment to Listing Agreement, (August 13, 2012) available at http://www.sebi.gov.in/cms/sebi_data/attachdocs/1344915990072.pdf.

    Employee Code of Conduct, TATA Group, available at http://www.tata.com/pdf/tcoc-booklet-2015.pdf.

    TATA Communications Business Responsibility Policies, available at http://www.tatacommunications.com/sites/default/files/Business_Responsibility_Policies.pdf.

    Supra Note 4 , at page 18.

    TATA Communications Whistleblower Policy, available at https://www.tatacommunications.com/sites/default/files/Whistleblower%20Policy%20-%20Designed%20Version.pdf.

     

    Kamlesh Bajaj, DSCI: A self-regulatory organization, available at https://www.dsci.in/sites/default/files/DSCI%20Privacy%20SRO.pdf.

    Customer Charter, TATA Communications, available at https://www.tatacommunications.com/legal/customer-charter.

    AUP Violations Grievances Portal, available at http://www.tatacommunications.com/reporting-aup-violations; Privacy Policy, TATA Communications, available at https://www.tatacommunications.com/policies/privacy-policy.

    Shamnad Basheer, Busting a Baloney: Merely Viewing Blocked Websites Will Not Land You in Jail, Spicy IP, (August 23, 2016), available at http://spicyip.com/2016/08/busting-a-baloney-merely-viewing-blocked-websites-will-not-land-you-in-jail.html.

    Acceptable Use Policy, TATA Communications, available at https://www.tatacommunications.com/policies.

    See http://login.vsnl.com/terms_n_conditions.html.

    This includes inappropriate content, which may be threatening, hateful or abusive content; content that infringes any intellectual property right; transfer of viruses or harmful content, fraudulent content (such as credit card fraud) and spam or unsolicited email.

    Basheer, Supra note 11.

    Response to Consultation Paper on Regulatory Framework for Over-the-top (OTT) Services, TATA Communications, available at  http://trai.gov.in/Comments/Service-Providers/TCL.pdf.

    Kaustabh Srikanth, Technical Observations about Recent Internet Censorship in India, Huffington Post, (January 6, 2015) available at  http://www.huffingtonpost.in/kaustubh-srikanth/technical-observations-about-recent-internet-censorship-in-india/

    See https://www.tatacommunications.com/policies/privacy-policy; http://login.vsnl.com/privacy_policy.html (VSNL); However, there are other documents available on the TCL website purporting to be the Privacy Policy. Since the policies are not dated, it is not entirely clear which is applicable. (See http://www.tatacommunications.com/downloads/Privacy-Policy-for-TCL-and-Indian-Subs.pdf).

    The disclosure of governmental requests may be affected by laws which require such information to remain confidential, as explained in detail in Section I of this report.

    See  http://www.alexa.com/siteinfo/rediff.com.

    See  http://www.rediff.com/terms.html.

    Id.

    See  http://ishare.rediff.com/templates/tc.html.

    See  http://blogs.rediff.com/terms/.

    See  http://www.rediff.com/news/disclaim.htm.

    See  http://blogs.rediff.com/terms/.

    Performance Indicator Report, Telecom Regulatory Authority of India, (August, 2016) available at http://www.trai.gov.in/WriteReadData/PIRReport/Documents/Indicator_Report_05_August_2016.pdf.

    See  https://www.vodafone.com/content/sustainabilityreport/2015/index/operating-responsibly/human-rights.html.

    Vodafone Sustainability Report, See http://static.globalreporting.org/report-pdfs/2015/ffaa6e1f645aa009c2af71ab9505b6b0.pdf.

     

    Amit Pradhan, CISO, on Data Privacy at Vodafone, DSCI Blog, (July 15, 2015), available at https://blogs.dsci.in/interview-amit-pradhan-vodafone-india-on-privacy/.

    See http://www.coai.com/about-us/members/core-members.

    Process for registration of a complaint, Vodafone India Telecom Consumers’ Charter, available at https://www.vodafone.in/documents/pdfs/IndiaCitizensCharter.pdf.

    Vodafone India: We are Pro Net Neutrality, Gadgets Now, (May 20, 2015), available at http://www.gadgetsnow.com/tech-news/vodafone-wont-toe-zero-rating-plan-of-airtel/articleshow/47349710.cms; Vodafone Response to TRAI Consultation Paper on Regulatory Framework for Over-the-Top (OTT) services, Vodafone India, (March 27, 2015) available at http://trai.gov.in/Comments/Service-Providers/Vodafone.pdf.

    See http://www.vodafone.in/privacy-policy.

    Vodafone Law Enforcement Disclosure Report, available at  https://www.vodafone.com/content/sustainabilityreport/2014/index/operating_responsibly/privacy_and_security/law_enforcement.html.

    Performance Indicator Report, Telecom Regulatory Authority of India, (August, 2016) available at http://www.trai.gov.in/WriteReadData/PIRReport/Documents/Indicator_Report_05_August_2016.pdf.

    Business Responsibility Reports, Reliance Communications Ltd., available at  http://www.rcom.co.in/Rcom/aboutus/ir/pdf/Business-Responsibility-Report-2015-16.pdf.

    Manual of Practice, Reliance Communications Ltd., available at http://www.rcom.co.in/Rcom/personal/customercare/pdf/Manual_of_Practice.pdf.

    See  http://www.rcom.co.in/Rcom/personal/home/pdf/1716-Telecom-Consumer-Charter_TRAI-180412.pdf.

    See  http://www.rcom.co.in/Rcom/personal/pdf/AUP.pdf.

    See  http://myservices.relianceada.com/ImplNewServiceAction.do#.

    Prohibition Of Discriminatory Tariffs For Data Services Regulations, Telecom Regulatory Authority of India, (February 8, 2016), available at http://www.trai.gov.in/WriteReadData/WhatsNew/Documents/Regulation_Data_Service.pdf.

    Shaadi.com Terms of Use/Service Agreement, available at http://www.shaadi.com/shaadi-info/index/terms (Last visited on November 10, 2016).

    Shaadi.com Privacy Policy, available at http://www.shaadi.com/shaadi-info/index/privacy (Last visited on November 10, 2016).

    Shaadi.com Privacy Tips, available at http://www.shaadi.com/customer-relations/faq/privacy-tips (Last visited on November 10, 2016).

    https://blog.hike.in/hike-unveils-its-incredible-new-workplace-3068f070af08#.zagtgq5lk

    http://economictimes.indiatimes.com/small-biz/money/hike-messaging-app-raises-175-million-from-tencent-foxconn-and-others-joins-unicorn-club/articleshow/53730336.cms

    https://medium.com/@kavinbm/175-million-tencent-foxconn-d9cc8686821f#.7w6yljaii

    Hike Terms of Use, available at http://get.hike.in/terms.html (Last visited on November 10, 2016).

     

    Hike Privacy Policy, available at http://get.hike.in/terms.html (Last visited on November 10, 2016).

    Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).

    Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).

    Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).

    Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).

    Aircel Whistle Blower Policy, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=P35400442051324996434644 (Last visited on November 10, 2016).

    Aircel National Customer Preference Registry, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=customercare_ndnc_page (Last visited on November 10, 2016).

    Aircel National Customer Preference Registry, available at http://www.aircel.com/AircelWar/appmanager/aircel/karnataka?_nfpb=true&_pageLabel=customercare_ndnc_page (Last visited on November 10, 2016).

    http://www.counterpointresearch.com/reliancejio/

    http://economictimes.indiatimes.com/tech/internet/gujarat-andhra-top-circles-for-jio-subscribers-cross-24mn-mark/articleshow/55040351.cms

    Jio Terms and Conditions, available at https://www.jio.com/en-in/terms-conditions (Last visited on November 10, 2016).

    Jio Terms and Conditions, available at https://www.jio.com/en-in/terms-conditions (Last visited on November 10, 2016).

    Jio Terms and Conditions, available at https://www.jio.com/en-in/terms-conditions (Last visited on November 10, 2016).

    Big Data in Governance in India: Case Studies

    by Amber Sinha, Vanya Rakesh and Vidushi Marda and Edited by Elonnai Hickok, Sumandro Chattapadhyay and Sunil Abraham — last modified Feb 26, 2017 04:24 PM
    This research seeks to understand the most effective way of researching Big Data in the Global South. Towards this goal, the research planned for the development of a Global South big data Research Network that identifies the potential opportunities and harms of big data in the Global South and possible policy solutions and interventions.

    This work has been made possible by a grant from the John D. and Catherine T. MacArthur Foundation. The conclusions, opinions, or points of view expressed in the report are those of the authors and do not necessarily represent the views of the John D. and Catherine T. MacArthur Foundation.


    Introduction

    The research ran for a duration of 12 months and took the form of an exploratory study which sought to understand the potential opportunities and harms of big data, as well as to identify best practices and relevant policy recommendations. Each case study has been chosen based on the use of big data in that area and the opportunity present for policy recommendation and reform. Each case study seeks to answer a similar set of questions to allow for analysis across case studies.

    What is Big Data

    Big data has been ascribed a number of definitions and characteristics. Any study of big data must begin by defining what big data is. Over the past few years, the term has become a buzzword, used to refer to any number of characteristics of a dataset, ranging from size to rate of accumulation to the technology in use.[1]

    Many commentators have critiqued the term big data as a misnomer, misleading in its emphasis on size. We have surveyed various definitions and understandings of big data, and document the significant ones below.

    Computational Challenges

    Data sets large enough to tax the capacities of main memory, local disk, and remote disk have been seen as the problem that big data technologies address. While this understanding of big data focusses only on one of its features, namely size, other characteristics posing a computational challenge to existing technologies have also been examined. The (US) National Institute of Standards and Technology has defined big data as data which “exceed(s) the capacity or capability of current or conventional methods and systems.”[2]

    These challenges are not merely a function of its size. Thomas Davenport provides a cohesive definition of big data in this context. According to him, big data is “data that is too big to fit on a single server, too unstructured to fit into a row-and-column database, or too continuously flowing to fit into a static data warehouse.” [3]

    Data Characteristics

    The most popular definition of big data was put forth in a 2001 report by Meta (now Gartner), which looks at it in terms of the three Vs: volume,[4] velocity and variety. Big data is high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.[5]

    Aside from volume, velocity and variety, other defining characteristics of big data articulated by different commentators are exhaustiveness,[6] granularity (fine-grained and uniquely indexical),[7] scalability,[8] veracity,[9] value[10] and variability.[11] It is highly unlikely that any dataset satisfies all of the above characteristics. Therefore, it is important to determine what permutations and combinations of this gamut of attributes lead us to classify something as big data.

    Qualitative Attributes

    Prof. Rob Kitchin has argued that big data is qualitatively different from traditional, small data. Small data has used sampling techniques for collection, has been limited in scope, temporality and size, and is “inflexible in their administration and generation.”[12]

    In this respect there are two qualitative attributes of big data which distinguish them from traditional data. First, the ability of big data technologies to accommodate unstructured and diverse datasets which hitherto were of no use to data processors is a defining feature. This allows the inclusion of many new forms of data from new and data heavy sources such as social media and digital footprints. The second attribute is the relationality of big data.[13]

    This relies on the presence of common fields across datasets which allow for conjoining of different databases. This attribute is usually a feature of not the size but the complexity of data enabling high degree of permutations and interactions within and across data sets.

    Patterns and Inferences

    Instead of focussing on the ontological attributes or computational challenges of big data, Kenneth Cukier and Viktor Mayer-Schönberger define big data in terms of what it can achieve.[14]

    They define big data as the ability to harness information in novel ways to produce useful insights or goods and services of significant value. Building on this definition, Rohan Samarajiva has categorised big data into non-behavioral and behavioral big data; the latter leads to insights about human behavior.[15]

    Samarajiva believes that transaction-generated data (commercial as well as non-commercial) in a networked infrastructure is what constitutes behavioral big data.

    Scope of Research

    The initial scope arrived at for this case study on the role of big data in governance in India focussed on the UID Project, the Digital India Programme and the Smart Cities Mission. Digital India is a programme launched by the Government of India to ensure that Government services are made available to citizens electronically by improving online infrastructure and by increasing Internet connectivity, or by making the country digitally empowered in the field of technology.[16]

    The Programme has nine components, two of which focus on e-governance schemes. Read More [PDF, 1948 Kb]


    [1]. Thomas Davenport, Big Data at Work: Dispelling the Myths, Uncovering the opportunities, Harvard Business Review Press, Boston, 2014.

    [2]. MIT Technology Review, The Big Data Conundrum: How to Define It?, available at https://www.technologyreview.com/s/519851/the-big-data-conundrum-how-to-define-it/.

    [3]. Supra note 1.

    [4]. What constitutes high volume remains an unresolved matter. Intel has defined big data as emerging in organizations that generate a median of 300 terabytes of data a week.

    [5]. http://www.gartner.com/it-glossary/big-data/

    [6]. Viktor Mayer-Schönberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work and Think, John Murray, London, 2013.

    [7]. Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures and their consequences, Sage, London, 2014.

    [8]. Nathan Marz and James Warren, Big Data: Principles and best practices of scalable realtime data systems, Manning Publication, New York, 2015.

    [9]. Bernard Marr, Big Data: the 5 Vs everyone should know, available at https://www.linkedin.com/pulse/20140306073407-64875646-big-data-the-5-vs-everyone-must-know.

    [10]. Id.

    [11]. Eileen McNulty, Understanding Big Data: the 7 Vs, available at http://dataconomy.com/sevenvs-big-data/.

    [12]. Supra Note 7.

    [13]. Danah Boyd and Kate Crawford, Critical questions for big data, Information, Communication and Society 15(5): 662–679, available at https://www.researchgate.net/publication/281748849_Critical_questions_for_big_data_Provocations_for_a_cultural_technological_and_scholarly_phenomenon.

    [14]. Supra Note 6.

    [15]. Rohan Samarajiva, What is Big Data, available at http://lirneasia.net/2015/11/what-is-bigdata/.

    [16]. http://www.digitalindia.gov.in/content/about-programme

    Mapping MAG: A study in Institutional Isomorphism

    by Jyoti Panday — last modified Mar 03, 2017 12:59 AM
    The paper is an update to a shorter piece of MAG analysis that had been conducted in July 2015. At that time our analysis was limited by the MAG membership data that was made available by the Secretariat. Subsequently we wrote to the Secretariat and this paper is based on the data shared by them including for the years for which membership details were previously not available.

    This paper delves into the history of the formation of the Multi-Stakeholder Advisory Group (MAG) and the Internet Governance Forum (IGF), including the lessons from the past that should be applied in strengthening its present structure. The paper covers three broad areas:

    • History of the formation of the MAG, its role within the IGF structure, the influences that have impinged on its scope of work, and the manner in which its evolution has deviated from its conceptualization
    • Analysis of MAG membership (2006-2015): Trends in the selection and rotation of the MAG membership
    • Recommendations to reform MAG/IGF

    Jyoti Panday[1]


    The recent renewal of the Internet Governance Forum[2] (IGF) mandate at the World Summit on the Information Society (WSIS)+10 High-Level Meeting[3] was something of a missed opportunity. The discussions focused narrowly on the periphery of the problem - the renewal of the mandate - leaving aside questions of vital importance such as strengthening and improving the structures and processes associated with the IGF. The creation of the IGF as a forum for governments and other stakeholders to discuss policy and governance issues related to the Internet was a watershed moment in the history of the Internet.

    In the first decade of its existence the IGF has proven to be a valuable platform for policy debates, a space that fosters cooperation by allowing stakeholders to self-organise to address common areas of concern. But the IGF remains merely a platform for multistakeholder dialogue and is yet to realise its potential as per its mandate to “find solutions to the issues arising from the use and misuse of the Internet” as well as “identify emerging issues […] and, where appropriate, make recommendations”.[4]

    From the information available in the public domain, it is evident that the IGF is not crafting solutions and recommendations or setting the agenda on emerging issues. Even if unintended, this raises the disturbing possibility that alternative processes and forums are filling the vacuum created by the unrealised IGF mandate and helming policy development and agenda setting on Internet use and access worldwide. This sits uneasily with the fact that currently there is no global arrangement that serves or could be developed as an institutional home for global internet governance issues.

    Moreover, the economic importance of the internet as well as its impact on national security, human rights and global politics has created a wide range of actors who seek to exert their influence over its governance. Given the lack of a global centralized body with authority to enforce norms and standards across political and functional boundaries, control of the internet is an important challenge for both developed and emerging economies. As the infrastructure over which the internet runs is governed by nation states and their laws, national governments continue to seek to exert their influence on global issues.

    Divergence of approaches to regulation and differences in capacity to engage in processes have led to fragmentation of approaches to common challenges.[5] Importantly, not all governments are democratic, and some may impose restrictions on content and access that conflict with the open and global nature of the internet. Alongside national governments, transnational private corporations play a critical role in the security and stability of the internet. Much like the state, they too raise the niggling question of how to guard against the guardians.

    Corporations’ control of sensitive information, their institutional identity, and the secrecy of their operations are all essential to their functioning, but could also erode the practice of democratic governance and the rights and liberties of users online. Additionally, as issues of human rights, access and local content have become interlinked with public policy issues, civil society and academia have become relevant to traditionally closed policy spaces. Considering the variety of stakeholders and their competing interests, concerns about ensuring the stability and security of the Internet have led the international community to pursue a range of governance initiatives.

    Implementing a Multistakeholder Approach

    At the broadest level, debates about the appropriate way forward have evolved as a contestation between two models. On the one hand is the state-centric ‘multilateral’ model of participation, and on the other a ‘multistakeholder’ approach that aims for bottom-up participation by all affected stakeholders. The multistakeholder approach sees resonance across several quarters[6] including a high-level endorsement from the Indian government last year.[7] An innovative concept, a multistakeholder approach fits well within the wider debate about rethinking governance in a globalized world.

    Proponents of the multistakeholder approach see it as a democratic process that allows for a variety of views to be included in decision making.[8] Nevertheless, the intertwining of the Internet and society pitches actors and interests at opposing ends. While a multistakeholder approach broadens the scope for participation, it also raises serious issues of representation and accountability. Since multistakeholder processes fall outside the traditional paradigm of governance, establishing legitimacy of processes and structures becomes all the more important.

    The multistakeholder concept is only beginning to be critically studied or evaluated. There have been growing concerns, particularly from emerging economies,[9] about a lack of representation in policy development bodies and about issues affecting marginalised communities being overlooked in the policy development process. From this view, the multistakeholder model has created ‘transnational and semi privatized’ structures and ‘transnational elites’.[10] Such critics define emerging and existing platforms derived from the multistakeholder concept as ‘an embryonic form of transnational democracy’ that are occupied by elite actors.[11]

    Elite actors may include the state, private and civil society organisations, technical and academic communities, and intergovernmental institutions. In the context thus sketched out, the key question that the WSIS+10 Review should have addressed is whether the IGF provides the space for the development of institutions and solutions that are capable of responding to the challenges of applying the multistakeholder concept to internet governance. The existing body of work on the role of the IGF has yet to identify, let alone come to terms with, this problem.

    Applying critical perspectives to the essential structures and processes associated with the IGF becomes even more relevant given its recently renewed mandate. However, the forum’s first planning meeting, scheduled to take place in Geneva this week, is already mired in controversy[12] after a new Chair was named by the UN Secretary General.

    The decision to appoint a new Chair was made without any form of public process or any indication of the selection criteria. Moreover, the membership of the "multistakeholder advisory group" (MAG), which decides the content and substance of the forum, was also renewed recently. Problematically, most of the nominations put forth by the different constituent groups to represent them were rejected, and individuals were instead appointed through a parallel top-down and secretive UN process. Of the 55 MAG members, 21 are new but only eight were officially selected by their respective groups.[13]

    This paper focuses on the structure and functioning of the MAG and highlights issues and challenges in its working, so as to pave the way for strategic thinking on its improvement. A tentative beginning towards identifying the levers for change can be made by sifting through the eddies of history to uncover how the MAG has evolved and become politicised.

    The paper makes two separate but interrelated claims: first, it argues that, as the de facto bureau essential to the functioning of the IGF, the MAG urgently needs transparency and accountability in its member selection procedure. Striking an optimum balance between expertise and legitimacy in the MAG composition is essential to ensure that workshops and sessions are not dominated by certain groups or interests and that the IGF remains an open, well-functioning circuit of information and robust debate.

    Second, it argues for an immediate evaluation of the MAG’s operations given the calls for the production of tangible outcomes. There has been ongoing discussion within the broader community about the role of the IGF, with divisions between those who prefer a narrow interpretation of its mandate and those who want to broaden its scope to provide policy recommendations and solutions.[14]

    The interpretation of the IGF mandate and whether the IGF should make recommendations has been a sticking point and is closely linked to the question of IGF’s legitimacy and relevance. Be that as it may, the intersessional work, best practices forum and dynamic coalitions over the last ten years have led to the creation of a vast repository of information that should feed into the pursuit of policy options and identification of best practices.

    The true test of the multistakeholder model is not only to bring together a wide range of views but also to ensure that accumulated knowledge is applied to address common problems. Implementing a multistakeholder approach and developing solutions necessitate enhanced coordination amongst stakeholder groups and, in the context of the IGF, are contingent on the strength and stability of the MAG to facilitate such cooperation.

    The paper is organised in three parts: in the first section I delve into the history of the formation of the MAG. To understand the MAG’s role within the IGF structure it is essential to revisit the influences that shaped its conceptualisation and subsequent evolution over the decade. A critical historical perspective provides the context of the multiple considerations that have impinged on MAG’s scope of work, of the manner in which MAG’s evolution has deviated from intentions, and the lessons from the past that should be applied in strengthening its present structure.

    The second section analyses trends in the selection and rotation of the MAG membership and traces out the elite elements in the composition of the MAG. The analysis reveals two distinct stages in the evolution of the MAG membership which has remained significantly homogeneous across stakeholder representation. The final section of the paper focuses on a set of recommendations to ensure that the MAG is strengthened, becomes sustainable and provides the impetus for IGF reform in the future.

    Origins of the IGF

    The WSIS process was divided into two phases; the Geneva phase focused on principles of internet governance. The outcome documents of the first phase included a Declaration of Principles and a Plan of Action, adopted by 175 countries. Throughout the process, developing countries such as China, Brazil and Pakistan opposed the prevailing regime that allowed US dominance and control of ‘critical infrastructure’. As the first phase of the WSIS could not resolve these differences, the Working Group on Internet Governance (WGIG) was set up by the UN Secretary General to deliberate and report on the issues.

    The establishment of the WGIG is an important development in the WSIS process not only because of the recommendations it developed to feed into the second phase of the negotiations, but also because of the procedural legitimacy the WGIG established through its working. The WGIG embodied the multistakeholder principle in its membership and open consultation processes. WGIG members were selected and appointed in their personal capacity through an open and consultative process. As a result the membership demonstrated diversity in the geography, stakeholder groups represented and gender demographics.

    The consultations were open, transparent and allowed for a diverse range of views in the form of oral and written submissions from the public to feed into the policy process. At its final meeting the WGIG membership divided into smaller working groups to focus on specific issues, and reassembled at the plenary to review, discuss and consolidate sections which were then approved in a public forum. As the WGIG background paper notes “The WGIG agreed that transparency was another key ingredient to ensure ownership of the process among all stakeholders.”[15]

    The WGIG final report[16] identified a vacuum within the context of existing structures and called for the establishment of a forum linked to the UN. The forum was to be modelled on the best practices and open format of the WGIG consultative processes allowing for the participation of diverse stakeholders to engage on an equal footing. It was in this context that the IGF was first conceptualised as a space for global multistakeholder ‘dialogue’ which would interface with intergovernmental bodies and other institutions on matters relevant to Internet governance.

    The forum was conceived as a body that would connect different stakeholders involved in the management of the internet, as well as contribute to capacity-building in governance for developing countries, drawing on local sources of knowledge and expertise. Importantly, the forum was to promote and assess, on an ongoing basis, the embodiment of WSIS principles in Internet governance processes, and to make ‘recommendations’ and ‘proposals for action’ addressing emerging and existing issues not being dealt with elsewhere. However, as things turned out, the exercise of power between states and institutional arrangements ultimately led to the development of a subtly altered version of the original IGF mandate.

    Aftermath of the WGIG Report

    The WGIG report garnered much attention and was welcomed by most stakeholders with the exception of the US government which along with private sector representatives such as Coordinating Committee of Business Interlocutors (CCBI) disagreed with the recommendations.[17] Pre-empting the publication of the report, the National Telecommunications and Information Administration (NTIA) issued a statement in June 2005 affirming its resolve to “maintain its historic role in authorizing changes or modifications to the authoritative root zone file.”[18]

    The statement reiterated the US government’s intention to fight for the preservation of the status quo, effectively ruling out the four alternative models for internet governance put forward in the WGIG report. The statement even referenced the WGIG report, stating, “Dialogue related to Internet governance should continue in relevant multiple fora. Given the breadth of topics potentially encompassed under the rubric of Internet governance there is no one venue to appropriately address the subject in its entirety.”[19]

    The final report was presented to PrepCom 3 of the second phase in July 2005, and the subsequent negotiations were by far the most significant in the context of the role and structure that the IGF would take in the future. The US stance on its role with regard to the root zone garnered pushback from both civil society and other governments, including Russia, Brazil, Iran and China. However, the most significant reaction came from the European Union, which issued a statement after the commencement of PrepCom 3 in September.

    The EU’s position recognised that adjustments were needed in the institutional arrangements for internet governance and called for a new model of international cooperation which would include “the development and application of globally applicable public policy principles.”[20] The US had not anticipated this “shocking and profound change” and, now isolated in its position on the international governance of the internet, sent forth a strongly worded letter[21] invoking its long-standing relationship with the EU and urging it to reconsider its stance.

    The pressure worked, since the US was in a strong position to stymie any resolution of the WSIS process. Moreover, introducing reforms to the internet naming and numbering arrangements was not possible without US cooperation. The letter resulted in the EU retreating from its aggressive stance, and with it, the push for the establishment of global policy oversight over domain names and numbers lost its momentum.

    The letter significantly impacted the WSIS negotiations and shaped the role of the IGF. By creating a deadlock and applying pressure, the US was able to negotiate a favourable outcome for itself. The last-minute negotiations led to the status quo continuing; in exchange, the US provided an undertaking that it would not interfere with other countries’ ccTLDs. The weakened mandate meant that even though the creation of the IGF under the WSIS process moved forward, its direction changed from its conceptualisation and origins in the WGIG report.

    Institutionalizing the IGF

    In 2006, the UN Secretary General appointed Markus Kummer to assist with the establishment of the IGF. The newly formed IGF Secretariat initiated an open consultation to be held in Geneva in February 2006 and issued an open call to stakeholders seeking written submissions as inputs into the consultation.[22] Notably, neither the US government nor the EU sent in a response to the consultation, and the submissions made by other stakeholders were largely a repetition of the views expressed at WSIS.

    The division on the mandate of the IGF was evident in this very first consultation. Private sector representatives such as the CCBI and ICC-Basis, government representatives from OECD countries like Canada, and the technical community, represented by the likes of Nominet and ISOC,[23] opposed the development of the IGF as a platform for policy development. On the other hand, civil society representatives such as APC called for the IGF to produce specific recommendations on issues where there is sufficient consensus.[24]

    With reference to the MAG structure, there was again division on whether the “effective and cost-efficient bureau” referred to in the Tunis Agenda should have a narrow mandate limited to setting the agenda for plenary meetings or a more substantial role. Civil society stakeholders envisioned assigning the bureau a more substantial role, notably in the Internet Governance Project (IGP) discussion paper released in advance of the February 2006 Geneva consultations.[25]

    The paper offered design criteria for the Forum, including specific organizational structures and processes, proposing “a small, quasi-representational decision making structure” for the IGF Bureau.[26] The paper recommended the formation of a twelve-member bureau with five representatives from governments (one from each UN geographic region) and two each from the private sector, civil society, and the academic and technical communities. The bureau would set the agenda for the plenary meeting not arbitrarily through private discussions but driven by working group proposals, and it would also have the power to approve or reject applications for forming working groups.

    The proposed structure in the IGP paper, had it been implemented, would have developed the bureau along the lines of the IETF, where the working groups would develop recommendations which would feed into the deliberation process. However, there was a clear divide on the proposed structure, with many stakeholders opposing the establishment of sub-groups or committees under the IGF.[27]

    Following the written submissions, the first open consultations on the establishment of the IGF were held in Geneva on 16 and 17 February 2006, chaired by Nitin Desai.[28] The consultation was well attended, with more than 300 participants including 40 representatives from governments, and the proceedings were webcast. Further, the two-day consultation was structured as a moderated roundtable event at which most interventions were read from prepared statements, many of which were also tabled as documents and later made available from the IGF Web site. This, of course, meant that there was a repetition of the views expressed in response to the questionnaire or the WGIG report, and as a consequence there was little opportunity for consensus-building.

    Once again there was conflict over whether the IGF should be conceptualised as an annual ‘event’ that would provide space for policy dialogue or as a ‘process’ of engaging with policy issues that would culminate in an annual event. The CCBI reiterated that “[t]he Tunis Agenda is clear that the IGF does not have decision-making or policy-making authority,” and the NRO emphasised that the “IGF must be a multi-stakeholder forum without decision-making attributions.”[29]

    William Drake argued for the IGF “as a process, not as a series of one-off meetings, but as a process that would promote collective dialogue, learning, and mutual understanding on an ongoing basis.”[30] Government representatives were split: El Salvador, for example, stated “that the Internet Governance Forum will come up with recommendations built on consensus on specific issues,” and Brazil even characterised the first meeting as “an excellent opportunity to initiate negotiations on a framework treaty to deal with international Internet public policy issues.”[31]

    Although a broad consensus was declared on the need for a lightweight multi-stakeholder bureau, there was no consensus on its size, composition or mandate. Nitin Desai held the issue over for further written input, and the subsequent consultation received twelve submissions, with most respondents recommending a body of between ten and twenty-five members. The notable exceptions were the submissions from the Group of 77 and China, which sought a combined total of forty members, half of which would be governmental representatives.

    The discussions during the February consultations and the input received from the written submissions paved the way for what eventually became the MAG. The IGF Secretariat announced the formation of a bureau with forty members and, while not expressly stated, half of these would be governmental representatives. It has been speculated that the decision on the large membership was a result of political wrangling among governments, especially the G77 governments insisting on a large group that would accommodate all the political and regional differences among their members.[32]

    IGF Secretariat - Set to Fail?

    The unwieldy size of the MAG meant that it would have to rely on the newly constituted Secretariat for organization, agenda-setting, and results. This structure empowered the Secretariat while limiting the scope of the MAG, a group that was already divided in its interests and agenda. However, the Secretariat was restrained in its services to stakeholders as it had limited resources since it was not funded by the United Nations and relied upon voluntary donations to a trust fund.[33]

    Early donors included the Swiss Agency for Development and Cooperation (SWADC), ICANN and Nominet.[34] Due to disjointed sources of funding, the Secretariat was vulnerable to the influence of its donors. For example, the decision to base the Secretariat in Geneva was made to meet a condition attached to the SWADC contribution. Distressingly, of the 20 non-governmental positions in the MAG, most were directly associated with the ICANN regime.

    The over-representation of ICANN representatives in the MAG selection was problematic, since the IGF was conceptualised to address the lack of acceptance of ICANN’s legitimacy in the WSIS process. The lack of independent funding led to a deficit of accountability, demonstrated in instances where it was possible for one of the MAG members to quietly insinuate that private sector support for the IGF and its Secretariat would be withdrawn if reforms unacceptable to that stakeholder group went ahead.[35]

    As might perhaps be expected from a Secretariat with such limited resources, its services to stakeholders were confined to maintaining a rudimentary website and responding to queries and requests. The transparency of the Secretariat’s activities was also very limited, most clearly exemplified by the process by which the Advisory Group was appointed.

    Constituting the MAG

    Following the announcement of the establishment of the MAG, a call for membership of the advisory group was made in March 2006. From the beginning the nomination process was riddled with a lack of transparency: the nominations received from stakeholders were not acknowledged by the IGF Secretariat, nor were the selection criteria made available. The legitimacy of the exercise was also marred by a top-down approach, where the first that nominees heard of the outcome was the Secretariat’s announcement of the selected nominees. The lack of transparency and accountability resulted in a selection and appointment procedure driven by patronage and lobbying.

    The political wrangling was evident in the composition of the first MAG, which was expanded to accommodate six regional coordinators personally appointed by Chair Nitin Desai to the Special Advisory Group (SAG). Of the twenty non-governmental positions, most were associated with the naming and numbering regime, including sitting and former Board members and ICANN staff.[36] Participation from civil society was limited, as the composition did not recognise[37] the technical community as a distinct group, including it, along with the academic community, as part of civil society.

    The political struggles at play were visible in the appointment of Michael D. Gallagher, the former head of the US Commerce Department's NTIA. This appointment was all the more relevant since it was Gallagher who had, only a few months earlier, stated that the US government owns the DNS root and has no intention of giving it up. His presence signalled that the US government took the forum seriously enough to ensure its interests were voiced and received attention on the MAG.

    Beyond issues of representation, the working of the MAG suffered from a serious lack of transparency, as meetings of the Advisory Group were closed and no reports or minutes were released. The Advisory Group met in May and September in Geneva before the inaugural IGF meeting in Athens. Coordination between members in preparation for Athens was done using a closed mailing list that was not publicly archived. Consequently, the details of the operations of the Advisory Group ahead of the first IGF meeting were known only to its members.

    Whatever little has been reported suggests that the Advisory Group possessed little formal authority, operating like a forum where members expressed views and debated issues without the object of taking formal decisions. Decisions were settled by rough consensus as declared by the Chair, and on all matters where there was no agreement, the issues were summarised by the Chair in a report to the UN Secretary-General. The Secretary-General would take the report summary into consideration but retained the ultimate authority to make a formal decision.[38]

    The UN’s clear deciding role was not so obvious in the early years of the MAG’s existence because of the relatively novel nature of the IGF. Moreover, Nitin Desai (MAG Chair) and Markus Kummer (head of the IGF Secretariat) had been appointed by the UN Secretary General and were on good terms with then-Secretary-General Kofi Annan; working together, they acted as de facto selectors of the members of the MAG. Most of the MAG’s core membership in the first five years of its existence was made up of leaders from across the different stakeholder groups, and self-selection within those groups was encouraged to lend broader stability.

    Over the last decade, changes in institutional arrangements led the IGF to be moved as a ‘project’ under the UNDESA umbrella, where it is not a core mission but simply one of many conferences that the agency handles across the world every year. The core personnel who shepherded the MAG and the IGF from its early days retired, allowing for the creation of a new core membership. The new group of leaders in the MAG membership has emerged partly as the result of the selection and rotation process instituted by UNDESA in appointing a ‘program committee’.

    The history presented above helps explain how the MAG was established under the UN umbrella and highlights the key developments that shaped its scope and working. Importantly, the weakened IGF mandate created divergences on whether the MAG should function as a ‘program committee’ limited to selecting proposals and planning the IGF, or as an ‘advisory committee’ with a more substantial role in developing the forum as an innovative governance mechanism. In its conception the IGF was a novel idea, and empowering the MAG and introducing transparency in the selection of members and in their workings could perhaps have led to a more democratic and accountable IGF. However, this possibility was stemmed early on.

    The opacity of the appointment processes meant that patronage and lobbying became key to being selected as a member of the MAG. It established the worrying trend of diversity and representation taking precedence over the necessity of ensuring that representatives were appointed through a bottom-up multistakeholder process. Further, distributing the composition to ensure geographic representation severely limited the participation of the technical, academic and civil society communities. In the next section, I focus on the rotation of members of the MAG over the last ten years to identify and highlight trends that have emerged in its composition.

    Analysis of MAG Composition (2006 - 2015)

    The primary data for the analysis of the MAG membership has been collected from the membership lists for 2010-2015 available on the IGF website. The membership lists for 2005, 2006, 2007 and 2008 were provided by the UN IGF Secretariat during the course of this research. To the best of my knowledge, this data is yet to be made publicly available and may be accessed here.[39] The Secretariat notes that the MAG membership did not change between 2008 and 2009, and this confirmation is the only account of the list of members for both years, as the records were poorly maintained and are therefore unavailable in the public domain.

    It is also worth noting that, to the best of my knowledge, no data has been made available by the IGF Secretariat regarding the nomination process or the criteria on which a particular member has been re-appointed to the MAG. The stakeholder groups identified for this analysis are government, civil society, industry, the technical community and academia. Any overlap between two or more of these groups, and any movement of individuals between stakeholder groups and affiliations, has been taken into account.

    Over the decade of its existence, the MAG has had 196 unique members from various stakeholder groups. As per the Terms of Reference[40] (ToR) of the MAG, it is the prerogative of the UN Secretary-General to select MAG members. There also exists a policy of rotating one-third of the MAG’s members every year for diversity and to take new viewpoints into consideration. Diversity within the UN is an ingrained practice, where every group is expected to be evenly balanced in geographic and gender representation. However, ensuring a diverse membership often comes at the cost of legitimate expertise. Further, it may often lead to top-down decision making, where individuals are appointed based on their characteristics rather than their qualifications.

    The complexity of the selection process is further compounded by the fact that the IGF Secretariat provides an initial set of recommendations identifying which members should be appointed to the MAG, but the selection and appointment are undertaken by UNDESA civil servants based in New York. Notably, while the IGF Secretariat staff are familiar with and interact with stakeholder representatives at the internet governance meetings and forums regularly held in Geneva, the New York-based UN officials do not share such relationships with constituent groups.

    Consequently, they end up selecting members who meet all the diversity requirements and have put themselves forward through the standard UN open nomination process. The practice of ensuring that UN diversity criteria are met creates tension within the MAG membership, as representatives nominated by the different stakeholder groups, who have more legitimacy within their respective constituencies, are not appointed to the MAG.

    The stress on maintaining diversity is evident in the MAG membership’s gradual expansion from an initial group of 46 members in 2006 to a total of 56 members as of 2015. However, the increase in membership has not improved the representation of the technical, academic and civil society constituencies, with only 56 members having been appointed from the three groups over the last decade.

    This is problematic considering that, at the time of the MAG’s constitution, the composition did not recognise[41] the technical community as a distinct group, including it, along with the academic community, as part of civil society. Consequently, the three stakeholder groups have been represented collectively in the MAG and yet account for only 24.77% of the total membership, compared to the government’s share of 39.3% and industry’s share of 35.7%. At the regional level too, membership across the three groups has ranged between 20-25% of the total membership.
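    The stakeholder-share figures quoted in this analysis are simple proportions tallied over the membership roster. A minimal sketch of that computation follows; the records below are hypothetical placeholders, not the actual MAG roster (which is in the spreadsheet cited in note [39]):

    ```python
    from collections import Counter

    # Hypothetical (member, stakeholder group) records for illustration only.
    members = [
        ("A", "government"), ("B", "industry"), ("C", "civil society"),
        ("D", "technical"), ("E", "academia"), ("F", "government"),
        ("G", "industry"), ("H", "government"),
    ]

    # Tally members per stakeholder group, then convert to percentage shares.
    counts = Counter(group for _, group in members)
    total = sum(counts.values())
    shares = {group: round(100 * n / total, 2) for group, n in counts.items()}
    print(shares)  # e.g. government 37.5, industry 25.0 for this toy roster
    ```

    The same tally, grouped additionally by region, yields the regional breakdowns discussed below.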

    Stakeholder share in MAG

    The technical community is the least represented constituency, accounting for only 5% of the total membership, with only 10 members having been appointed over ten years. Of the 10, six were appointed from the WEOG region and none from the GRULAC region. Representatives from academia accounted for only 6% of the total membership, with 13 representatives from the group having been appointed to the MAG. Technical community representation was also low from the US, with only two members appointed to the MAG, each serving for a period of three years.

    Civil society accounted for only 17% of the total membership, with a total of 33 members, and representation from the constituency was abysmally low across all regions. Civil society representation from the US comprised a total of five members, of whom one served for one year, three served for two years each and only one continued for more than three years. Notably, there have been no academics from the US, which is surprising given that most of the scholarship on internet governance is dominated by US scholars.

    Stakeholder representation across regions

    Industry was the second largest represented group, with a total of 64 members appointed to the MAG, of whom a whopping 30 were appointed from the WEOG region. Representation was highest across WEOG countries, at 39.47% of the total membership, and the group accounted for 32.4% and 32.5% of the total members from Africa and Asia Pacific respectively. Across Eastern European and GRULAC countries, industry representation was very low, accounting for merely 11.53% and 18.18% of the total membership respectively. Industry representatives from the US included two members serving one year each, five members serving two years each, two members continuing for three years each, one member appointed for five years, and one member who completed the maximum MAG term of eight years.

    It is also interesting to note that the industry membership base expanded steadily, spiking in 2012 with a total of 40 industry representatives on the MAG. When assessed against the trend of the core leadership trickling out in 2012, the sudden increase in industry representation may point to attempts at capture by the stakeholder group that year. Industry representation from the US in the MAG was by far the most consistent over the years and had the most evenly distributed appointment terms for members within a group.

    Industry Representation across Regions

    Government has been the most dominant group within the MAG, averaging a consistent 40% of the total membership over the last 10 years. At the regional level, representation on the MAG was highest from Eastern Europe, with more than 61% of its total membership comprising individuals from the government constituency. GRULAC countries’ appointments to the MAG also demonstrate a preference for government representation, with almost 58% of the total members appointed from within this group. The share of government representation in the total membership was 47.5% from Asia Pacific and 32.43% across Africa.

    Government representation across regions

    Participation from industry and government

    Another general policy followed in the selection procedure is that members are appointed for a period of one year, automatically extendable for two more consecutive years depending on their engagement in MAG activities. Some members serving one-year terms is inevitable under the rotation policy, as new members replace existing members, and it may often be a matter of filling slots to ensure stakeholder group, geographic and gender diversity. Given the limited resources made available for coordination between MAG members, one-year appointments may not allow sufficient time for integrating new members into the procedures and workings of UN institutions.

    Over the last decade, 24.36% of all appointed MAG members have served a term of only one year. Of the total 55 one-year appointments, 26 individuals served their first term in 2015 alone. This includes all nine representatives of civil society, and it could be argued that for a stakeholder group with only 11% of the total membership share, such an overhaul severely weakens the ability of members to develop linkages, limiting their ability to exert influence on decision making within the MAG.

    Interestingly, the analysis reveals that one-year terms were a trend in the early years of the MAG, when a core group took on the leadership role and continued guiding activities for newcomers, including negotiating often conflicting agendas. The pattern of one-year appointments was hardly visible from 2008-2012 but picked up again in 2013 and has continued ever since. The trend is perhaps indicative of the movement in the core MAG leadership, as many of the original members retired or moved on to other engagements from 2010.

    Importantly, the MAG ToR note that where there is a lack of candidates fitting the desired area, or under exceptional circumstances, a member may continue beyond three years. However, in the formative years of the MAG this exception was the norm, with most members continuing for more than three years. An analysis of the membership reveals that between 2006-2012 an elite core emerged which guided and was responsible for shaping the MAG and the IGF into their present-day form. No doubt some of these members were exceptional talents and difficult to replace; however, the lack of transparency in the nomination system makes it difficult to determine the basis on which these individuals continued beyond the stipulated term.

    The analysis also suggests a shift in the leadership core over the last three years and points to a new leadership group emerging, distinguishable in that most of its members have served on the MAG for three or four years. Members serving for one, two or three years make up more than 75% of the total membership, and 111 individual members have served more than two years on the MAG. This could be the result of the depletion in membership of those familiar with the internal workings and power structures of the UN, and of the selection and rotation criteria and procedures that have weakened the original composition over the last decade.

    Rotating membership might be necessary to prevent capture by any particular constituency or group; on the other hand, more than half of the total members have spent less than three years on the MAG, which makes the composition a shifting structure that limits long-term engagement. Regular rotation of members can also lead to power struggles, as continuing members exercise their influence to ensure that more members from within their constituency groups are appointed. Only seven individuals have completed the maximum term of eight years on the MAG, while 23 individuals have completed five years or more.
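    The term-length observations in this section can be derived from the member timeline (note [46]) as a distribution of years served per member. A minimal sketch, again using hypothetical placeholder data rather than the real timeline:

    ```python
    from collections import Counter

    # Hypothetical years-served counts per unique member, for illustration only.
    years_served = [1, 1, 2, 3, 1, 8, 5, 2, 3, 4, 1, 2]

    # Distribution of term lengths, and the share of members serving <= 3 years.
    dist = Counter(years_served)
    short_terms = sum(n for term, n in dist.items() if term <= 3)
    share_short = 100 * short_terms / len(years_served)
    print(dict(dist), f"{share_short:.1f}% served three years or fewer")
    ```

    Run against the actual roster, this kind of tally is what produces figures such as the 24.36% one-year-term share and the more-than-75% one-to-three-year share reported above.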

    Finally, in terms of gender diversity, the ratio of male to female members is approximately 13:7 of the total membership, or approximately 65% and 35% respectively. Female representatives from WEOG countries dominate, with a total of 29 women having been appointed from the region. Participation of women was lowest across Asia Pacific and Eastern Europe, with only nine and five representatives appointed respectively. There was a better gender balance across countries from Africa and GRULAC, with 12 and 14 women appointed from those regions respectively.

    Further analysis and visualisations derived from the MAG composition, identifying trends in the appointment of individual members, are available on the CIS website. The visualisations include MAG membership distribution across regions[42] and stakeholder groups[43], the evolution of stakeholder groups over the years[44], stakeholder group distribution across countries[45] and a timeline of the total number of years served by individual members[46]. The visualisations also include a comparison of stakeholder group representatives appointed from India and the USA.[47]

    Recommendations: Reforming MAG & the IGF

    Between April 4-6, 2016 the MAG convened in Geneva for the IGF’s first planning meeting of the year[48]. The meeting marks the beginning of the MAG’s work in planning and delivering the forum, the first under its recently renewed and now extended mandate. This report is a much needed documentation of the MAG’s working and processes, undertaken as an attempt to scrutinize whether it is truly a multistakeholder institution or whether it has evolved into a closed group of elite members cloaked in a multistakeholder name.

    There is very little literature on the evolution of, or critiquing, the MAG structure, partly because it is a relatively new structure and partly because its workings are shrouded in secrecy. The above analysis has been conducted with the aim of understanding the MAG’s functioning and the selection of its membership. The paper explores the history of the formation of the IGF and the MAG to identify the geo-political influences that have contributed to the MAG’s evolution and its role in shaping the IGF over the last decade.

    In this section I apply the theory of institutional isomorphism developed by DiMaggio and Powell in their seminal paper[49] on organizational theory and social change. The paper posits that as organizations emerge as a field, a paradox arises: rational actors make their organizations increasingly similar as they try to change them. A focus on institutional isomorphism can add a much needed perspective on the political struggle for organizational power and survival that is missing from much of the discourse and literature around the IGF and the MAG.

    A consideration of isomorphic processes also leads to a bifocal view of power and its application in modern politics. I believe that there is much to be gained by attending to similarity as well as to variation between organisations within the same field and, in particular, to change in the degree of homogeneity or variation over time. In this paper I have attempted to study the incremental change in the IGF mandate as well as in the selection of the MAG members.

    Applying the theoretical framework proposed by DiMaggio and Powell, I identify possible areas of concern and offer recommendations for the improvement of the IGF and the reform of the MAG. I frame these recommendations through the impact of resource centralization and dependency, goal ambiguity, professionalization and structuration on isomorphic change. There is variability in the extent to which, and the rate at which, organizations in a field change to become more like their peers. Some organizations respond to external pressures quickly; others change only after a long period of resistance.

    DiMaggio and Powell hypothesize that the greater the extent to which an organizational field is dependent upon a single (or several similar) source of support for vital resources, the higher the level of isomorphism. Their organizational theory also posits that the greater the extent to which the organizations in a field transact with agencies of the state, the greater the extent of isomorphism in the field as a whole. As my analysis reveals, both hypotheses hold true for the IGF, which is currently defined as a ‘project’ of UNDESA. Since the IGF and the MAG are dependent on the UN for their existence, it is not surprising that both structures emulate UN principles of diversity and governmental representation.

    It is also worth noting that UN projects are normally not permanent and require regular renewal of their mandate and reallocation of resources and budgets. When budget cuts take place, as during the global economic crisis, project funding is jeopardized, as happened when the IGF was left without an executive coordinator or a secretariat due to UN budget cuts.

    This led to constituent groups coming together to directly fund the IGF secretariat through a special IGF Trust Fund created under an agreement with the United Nations and administered by UNDESA.[50] The fund was drawn up to expire on 31 December 2015, and efforts to renew contributions to the fund for 2016 are being opposed, with questions being raised about the legality of the arrangement.[51]

    It is widely rumoured that the third party opposing the contribution is UNDESA itself. Securing guaranteed, stable and predictable funding for the IGF, including through a broadened donor base, is essential for the forum’s long-term stability and its ability to realize its underutilized potential. There have been several suggestions from the community in this regard, including IT for Change’s suggestion that part of the domain name tax collected by ICANN should be dedicated to IGF funding through statutory or constitutional arrangements. Centralisation of resources may lead to power structures being created, and therefore any attempt at IGF and MAG reform in the future must consider the choice between incorporating the IGF as a permanent body with institutional funding under the UN and the implications of that choice for the forum’s structure.

    There are four other hypotheses in DiMaggio and Powell’s framework that may be helpful in identifying levers for improvement of the IGF and the MAG. The first states that the greater the extent to which goals are ambiguous within a field, the greater the rate of isomorphic change. As my analysis suggests, there is an urgent need to address the decade-long debate over whether the MAG’s scope is that of a programme committee limited to planning an annual forum.

    The question is linked to the broader need to clarify whether the IGF will continue to evolve as an annual policy-dialogue forum or whether it can take on a more substantive role that includes offering recommendations and assisting with the development of policy on critical issues related to internet governance. Even the MAG is divided in its interpretation of its roles and responsibilities. A resurgence of the IGF necessitates that the global community reassess the need for the forum not only against the mandate assigned to it at the time of its conceptualisation, but also in light of the newer and more complex challenges that have emerged over the decade.

    The second hypothesis holds that the greater the extent of professionalization in a field, the greater the amount of institutional isomorphic change. DiMaggio and Powell measure professionalization by the universality of credential requirements, the robustness of training programs, and the vitality of professional associations. As the MAG composition analysis reveals, the structure has evolved in a manner that gives preference to participation from government and industry over participation from the civil society, technical and academic communities.

    Since the effect of institutional isomorphism is homogenization, the best indicator of isomorphic change is a decrease in variation and diversity, which could be measured by lower standard deviations of the values of selected indicators across a set of organizations. Such professionalization is evident in the functioning of the MAG, which has taken on a bureaucratic structure akin to other UN bodies, where governmental approval weighs down an otherwise light-weight structure. Further, the high level of industry representation creates distrust amongst other stakeholders and may be one reason the forum lacks legitimacy as a mechanism for governance, as it could be perceived as being susceptible to capture.

    The third hypothesis states that the fewer the number of visible alternative organizational models in a field, the faster the rate of isomorphism in that field. The IGF occupies a special place in the UN pantheon of semi-autonomous groups and is often held up as a shining example of the ‘multistakeholder model’, where all groups have an equal say in decisions. Currently, there is no global definition of the multistakeholder model, which at best remains a consensus framework for legitimizing Internet institutions.

    It is worth noting that the system of sovereignty, where authority is imposed, is at odds with the earned authority within Internet institutions. Given the various interpretations of the approach, if multistakeholderism is to survive as a concept then it needs to be understood as a legitimizing principle that is strictly at odds with state sovereignty-based conceptions of legitimacy.[52] Under a true multistakeholder system, states can have roles in Internet governance but they cannot unilaterally declare authority, or collectively assert it, without the consent of the rest of the Internet.

    Unfortunately, as the MAG membership reveals, the composition is dominated by governmental representatives who seek to enforce territorial authority over issues of global significance. Further, while alternative approaches to its application exist within the ecosystem, they are context specific and have evolved within unique environments.[53] As critics note, emerging and existing platforms derived from the multistakeholder concept create ‘an embryonic form of transnational democracy’. It is therefore important to recognise that the IGF is a physical manifestation of a much larger ideal, one in which individuals and organizations have the ability to help shape the Internet and the information society to which it is intrinsically connected. This points to the need to study and develop alternative models of multistakeholder governance while continuing to strengthen existing practices and platforms.

    As such, the IGF and its related local, national and regional initiatives represent a critical channel for expression, especially for countries where such conversations are not pursued adequately, and they keep discussions of the internet in the public space. However, interaction between the global IGF and national IGFs is yet to be established. The MAG can play a critical role in developing and establishing mechanisms to improve the global IGF’s coordination with regional and national initiatives. A strengthened IGF could better serve national initiatives by providing formal backing and support for them to develop as platforms for engaging with long-standing and emerging issues and identifying possible ways to address them.

    DiMaggio and Powell’s final hypothesis holds that the greater the extent of structuration of a field, the greater the degree of isomorphism. As calls for creating structures to govern cyberspace pick up pace, and given the extension of the IGF mandate, its structure and working are in need of an overhaul. More research and analysis are needed to understand whether a preferred approach to multistakeholder participation and engagement is emerging within both the IGF and the MAG.

    For example, if a portion or category of stakeholder groups, countries and regions is not engaging in common dialogue, does the MAG have the mandate to promote and encourage participation? Has a process been established for ensuring the right balance when engaging different stakeholders, and if so, how is such a process initiated and promoted? The data shared by the IGF Secretariat confirmed that there were no records of the nomination procedure, that the membership list was missing for a year, and that in some cases there was confusion over whom the nominees were actually representing.

    This opens up glaring questions about the legitimacy of the MAG: on what criteria were MAG members selected and rotated? Was this evaluation undertaken using objective criteria, or were representatives handpicked by the UN? Moreover, it is important to assess whether selection took place following an open call for nominations. Such analysis will help determine whether there is scope within the current selection procedure to reach out to the wider multistakeholder community or whether all MAG activities and discussions are restricted to its constituent membership. Clarifying the role of the IGF in the internet governance and policy space is inextricably linked to reforms in the MAG structure and processes, and the questions raised above need urgent attention.

    While these issues have been well known and documented for a number of years, there has been no progress in resolving them. Currently there is no website or document that lists the activities conducted by the MAG in furtherance of its ToR, nor does it produce an annual report or maintain a publicly archived mailing list. Important recommendations for strengthening the IGF were made by the UN CSTD working group on IGF improvements.

    The group took two years to produce its report identifying problems and offering recommendations that were to be implemented by the end of 2015, and yet many of the problems identified within it have still to be addressed. Worryingly, an internal MAG proposal to set up a working group to dig into the delays is being bogged down with discussions over scope and membership, and a similar effort six months ago was also shot down.[54]

    The ineffectiveness of the MAG in instituting reform has led to calls for a new oversight body with established bylaws, as the MAG in its present form does not seem up to the task. Further, the opaque decision-making process and lack of clarity on the scope of the MAG mean that each time it undertakes efforts at improvement, these are thwarted as being outside its mandate. There remains a lot of work to be done in strengthening the MAG structure, as the group that undertakes the day-to-day work of the IGF, and in addressing the many issues that plague the role and function of the IGF. A tentative beginning can be made by introducing transparency and accountability in MAG member selection.


    [1] This paper has been authored as part of a series on internet governance and has been made possible through a grant from the MacArthur Foundation.

    [2] The Internet Governance Forum See: http://www.intgovforum.org/cms/

    [3] World Summit on the Information Society (WSIS)+10 High-Level Meeting See: https://publicadministration.un.org/wsis10/

    [4] The mandate and terms of reference of the IGF are set out in paragraphs 72 to 80 of the Tunis Agenda for the Information Society (the Tunis Agenda). See: http://www.itu.int/net/wsis/docs2/tunis/off/6rev1.html

    [5] Samantha Bradshaw, Laura DeNardis, Fen Osler Hampson, Eric Jardine and Mark Raymond ‘The Emergence of Contention in Global Internet Governance’, the Centre for International Governance Innovation and Chatham House, 2015 See: https://www.cigionline.org/sites/default/files/no17.pdf

    [6] Mikael Wigell, ‘Multi-Stakeholder Cooperation in Global Governance’, The Finnish Institute of International Affairs. June 2008, See: https://www.ciaonet.org/attachments/6827/uploads

    [7] Arun Mohan Sukumar, ‘India’s New ‘Multistakeholder’ Line Could Be a Game Changer in Global Cyberpolitics’, The Wire, 22 June 2015 See: http://thewire.in/2015/06/22/indias-new-multistakeholder-line-could-be-a-gamechanger-in-global-cyberpolitics-4585/

    [8] Background Note on Sub-Theme Principles of Multistakeholder/Enhanced Cooperation, IGF Bali 2013 See: https://www.intgovforum.org/cmsold/2013/2013%20Press%20Releases%20and%20Articles/Principles%20of%20Multistakeholder-Enhanced%20Cooperation%20-%20Background%20Note%20on%20Sub%20Theme%20-%20IGF%202013-1.pdf

    [9] Statement by Mr. Santosh Jha, Director General, Ministry of External Affairs, at the First Session of the Review by the UN General Assembly on the implementation of the outcomes of the World Summit on Information Society in New York on July 1, 2015 See: https://www.pminewyork.org/adminpart/uploadpdf/74416WSIS%20stmnt%20on%20July%201,%202015.pdf

    [10] Jean-Marie Chenou, Is Internet governance a democratic process ? Multistakeholderism and transnational elites, IEPI – CRII Université de Lausanne, ECPR General Conference 2011,Section 35 Panel 4 See: http://ecpr.eu/filestore/paperproposal/1526f449-d7a7-4bed-b09a-31957971ef6b.pdf

    [11] Ibid. 9

    [12] Kieren McCarthy, ‘Critics hit out at 'black box' UN internet body’, The Register 31 March 2016 See: http://www.theregister.co.uk/2016/03/31/black_box_un_internet_body/?page=3

    [13] Ibid.

    [14] Jeremy Malcolm, ‘Multi-Stakeholder Governance and the Internet Governance Forum’, Terminus Press 2008

    [15] Background Report of the Working Group on Internet Governance June 2005 See: https://www.itu.int/net/wsis/wgig/docs/wgig-background-report.pdf

    [16] Report of the Working Group on Internet Governance, Château de Bossey June 2005  http://www.wgig.org/docs/WGIGREPORT.pdf

    [17] Compilation of Comments received on the Report of the WGIG, PrepCom-3 (Geneva, 19-30 September 2005) See: http://www.itu.int/net/wsis/documents/doc_multi.asp?lang=en&id=1818%7C2008

    [18] U.S. Principles on the Internet's Domain Name and Addressing System June 30, 2005 See: https://www.ntia.doc.gov/other-publication/2005/us-principles-internets-domain-name-and-addressing-system

    [19] Ibid. 16.

    [20] Tom Wright, ‘EU Tries to Unblock Internet Impasse’, International Herald Tribune, 30 September 2005 See: http://www.nytimes.com/iht/2005/09/30/business/IHT-30net.html

    [21] Kieren McCarthy, ‘Read the letter that won the internet governance battle’, The Register, 2 Dec 2005 See: http://www.theregister.co.uk/2005/12/02/rice_eu_letter/

    [22] United Nations Press Release, ‘Preparations begin for Internet Governance Forum’, 2 March 2006 See: http://www.un.org/press/en/2006/sgsm10366.doc.htm

    [23] The Internet Society’s contribution on the formation of the Internet Governance Forum, February 2006 See: http://www.internetsociety.org/sites/default/files/pdf/ISOC_IGF_CONTRIBUTION.pdf

    [24] APC, Questionnaire on the Convening the Internet Governance Forum (IGF) See:http://igf.wgig.org/contributions/apc-questionnaire.pdf

    [25] Milton Mueller, John Mathiason, ‘Building an Internet Governance Forum’, 2 February 2006 See: http://www.internetgovernance.org/wordpress/wp-content/uploads/igp-forum.pdf

    [26] Ibid.

    [27] Supra note 11.

    [28] Supra note 20.

    [29] Consultations on the convening of the Internet Governance Forum, Transcript of Morning Session 16 February 2006. See: http://unpan1.un.org/intradoc/groups/public/documents/igf/unpan038960.pdf

    [30] Ibid.

    [31] Ibid.

    [32]Milton Mueller, ICANN Watch, ‘The Forum MAG: Who Are These People?’ May 2006 See: http://www.icannwatch.org/article.pl?sid=06/05/18/226205&mode=thread

    [33] IGF Funding, See: https://intgovforum.org/cmsold/funding

    [34] Supra note 12.

    [35] Ibid.

    [36] ICANN’s infiltration of the MAG was evident in the composition of the first advisory group, which included Alejandro Pisanty and Veni Markovski, who were sitting ICANN Board members; one staff member (Theresa Swinehart); two former ICANN Board members (Nii Quaynor and Masanobu Katoh); two representatives of ccTLD operators (Chris Disspain and Emily Taylor); and two representatives of the Regional Internet Address Registries (RIRs) (Raul Echeberria and Adiel Akplogan). Even the "civil society" representatives appointed were all associated with either ICANN’s At Large Advisory Committee or its Noncommercial Users Constituency (or both): Adam Peake of Glocom, Robin Gross of IP Justice, Jeanette Hofmann of WZ Berlin, and Erick Iriarte of Alfa-Redi.

    [37] United Nations Press Release, Secretary General establishes Advisory Group to assist him in convening Internet Governance Forum,  17 May 2006 See: http://www.un.org/press/en/2006/sga1006.doc.htm

    [38] Jeremy Malcolm, Multi-Stakeholder Public Policy Governance and its Application to the Internet Governance Forum See: https://www.malcolm.id.au/thesis/x31762.html

    [39] MAG Spreadsheet CIS Website https://docs.google.com/spreadsheets/d/1uZzfBz9ihj1M0QSvlnORE0nRD62TCRxhA5d1E_RKfhc/edit#gid=1912343648

    [40] Terms of Reference for the Internet Governance Forum (IGF) Multistakeholder Advisory Group (MAG) Individual Member Responsibilities and Group Procedures See: http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference

    [41] United Nations Press Release, Secretary General establishes Advisory Group to assist him in convening Internet Governance Forum,  17 May 2006 See: http://www.un.org/press/en/2006/sga1006.doc.htm

    [42] IGF MAG Membership Analysis, 2006-2015 http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_Treemap.html

    [43] IGF MAG Membership - Stakeholder Types and Regions - 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Regions.html

    [44] IGF MAG Membership - Stakeholder Types across Years - 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Years.html

    [45] IGF MAG Membership - Stakeholder Types and Countries - 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Country.html

    [46] IGF MAG Membership Timeline, 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_Member-Timeline.html

    [47] MAG Membership - India and USA - 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-India-USA.html

    [48] MAG Meetings in 2016 See: http://www.intgovforum.org/cms/open-consultations-and-mag-meeting

    [49] Paul J. DiMaggio and Walter W. Powell, ‘The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields’, Yale University, American Sociological Review 1983, Vol. 48 (April: 147-160)

    [50] United Nations Funds-In-Trust Project Document Project number: GLO/11/X01 Project title: Internet Governance Forum Country/area: Global Start date: 1 April 2011 End date: 31 December 2015 Executing agency: UNDESA Funding: Multi-donor – extrabudgetary Budget: Long-term project framework – budget “A” See: http://www.intgovforum.org/cms/2013/TrustFund/Project%20document%20IGF.pdf

    [51] Kieren McCarthy, Critics hit out at 'black box' UN internet body, The Register 31 March 2016 See: http://www.theregister.co.uk/2016/03/31/black_box_un_internet_body/?page=2

    [52] Eli Dourado, Too Many Stakeholders Spoil the Soup, Foreign Policy, 15 May 2013 See: http://foreignpolicy.com/2013/05/15/too-many-stakeholders-spoil-the-soup/

    [53] The IANA Transition and NetMundial are some of the other examples of multi-stakeholder engagement.

    [54] Ibid.

    Privacy Gaps in India's Digital India Project

    by Anisha Gupta, edited by Amber Sinha — last modified Feb 21, 2017 01:55 AM
    This paper seeks to assess the privacy protections under 15 e-governance schemes: Soil Health Card, Crime and Criminal Tracking Network & Systems (CCTNS), Project Panchdeep, U-DISE, Electronic Health Records, NRHM Smart Card, MyGov, eDistricts, Mobile Seva, DigiLocker, eSign framework for Aadhaar, Passport Seva, PayGov, National Land Records Modernization Programme (NLRMP), and Aadhaar.

    Introduction

    The Central and State governments in India have been increasingly taking steps to fulfill the goal of a ‘Digital India’ by undertaking e-governance schemes. Numerous schemes have been introduced to digitize sectors such as agriculture, health, insurance, education, banking, police enforcement, etc. With the introduction of the e-Kranti program under the National e-Governance Plan, we have witnessed the introduction of forty-four Mission Mode Projects.[1]

    The digitization process is aimed at reducing the human handling of personal data and enhancing the decision-making functions of the government. These schemes are intended to make digital infrastructure available to every citizen, provide governance and services on demand, and digitally empower citizens.[2]

    Under every scheme, personal information of citizens is collected in order for them to avail of welfare benefits. While the efforts of the government are commendable, the efficacy of these programs in the absence of sufficient security infrastructure raises various concerns. Increased awareness among citizens and stronger security measures by governments are necessary to combat the credible threats to data privacy arising from the increasing rate of cyberattacks.[3]

    The schemes identified for the purpose of this paper have been introduced by the following government agencies:

    1. SOIL HEALTH CARD
    A scheme designed to provide complete soil information to farmers.
    Implementing agency: Department of Agriculture and Cooperation (DACNET)

    2. CRIME AND CRIMINAL TRACKING NETWORK & SYSTEMS (CCTNS)
    A scheme that seeks to facilitate the functioning of the criminal justice system through online records, with proposed data analysis for purposes such as trend analysis, crime analysis, and disaster and traffic management.
    Implementing agency: National Crime Records Bureau (NCRB)

    3. U-DISE
    Serves as the official data repository for educational information.
    Implementing agency: Ministry of Human Resource Development (MHRD)

    4. PROJECT PANCHDEEP
    The use of a Unified Information System for the implementation of health insurance facilities under the ESIC (Employee State Insurance Corporation).
    Implementing agency: Ministry of Labour & Employment

    5. ELECTRONIC HEALTH RECORDS
    A scheme to digitally record all health data of a citizen from birth to death.
    Implementing agency: Ministry of Health and Family Welfare (MoHFW)

    6. NRHM SMART CARD
    Under the Rashtriya Swasthya Bima Yojana (RSBY) scheme, every beneficiary family is issued a biometric-enabled smart card providing health insurance to persons covered under the scheme.
    Implementing agency: Ministry of Health and Family Welfare (MoHFW)

    7. MYGOV
    An online platform for government and citizen interaction.
    Implementing agency: Department of Electronics and Information Technology (DeITY)

    8. EDISTRICTS
    Common Service Centres are being established under the scheme to provide multiple services to citizens at the district level.
    Implementing agency: DeITY

    9. MOBILE SEVA
    A centralized platform used to host various government mobile applications.
    Implementing agency: DeITY

    10. DIGILOCKER
    A scheme that provides a secure, dedicated personal electronic space for storing documents.
    Implementing agency: DeITY

    11. eSIGN FRAMEWORK FOR AADHAAR
    eSign is an online electronic signature service that enables an Aadhaar holder to digitally sign a document.
    Implementing agency: Ministry of Electronics and Information Technology

    12. PAYGOV
    A centralized platform for all citizen-to-government payments.
    Implementing agency: DeITY and NSDL Database Management Limited (NDML)

    13. PASSPORT SEVA
    An online scheme for passport application and documentation.
    Implementing agency: Ministry of External Affairs

    14. NATIONAL LAND RECORDS MODERNIZATION PROGRAMME (NLRMP)
    The scheme seeks to modernize the land records system through digitization and computerization of land records.
    Implementing agency: DeITY and NDML

    15. AADHAAR
    A scheme for unique identification of citizens for the purpose of targeted delivery of welfare benefits.
    Implementing agency: Unique Identification Authority of India (UIDAI)

    Read the full paper


    [1]. Introduction to Digital India, available at http://www.governancenow.com/news/regular-story/securing-digital-india

    [2]. Id.

    [3]. GN Bureau, Securing Digital India, Governance Now (June 11, 2016), available at http://www.governancenow.com/news/regular-story/securing-digital-india

    Can the Judiciary Upturn the Lok Sabha Speaker’s Decision on Aadhaar?

    by Amber Sinha — last modified Feb 27, 2017 03:44 PM
    When ruling on the petition filed by Jairam Ramesh challenging the passage of the Aadhaar Act as a money Bill, the court has differing precedents to look to.


    This article was published in The Wire on February 21, 2017.


    In an earlier article, I had argued that the characterisation of the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act as a money Bill by Sumitra Mahajan, speaker of the Lok Sabha, was erroneous. Specifically, I had argued that upon a perusal of Article 110(1) of the constitution, the Aadhaar Act does not satisfy the conditions required of a money Bill. For a legislation to be classified as a money Bill, it must contain ‘only’ provisions dealing with the following matters: (a) imposition, regulation and abolition of any tax; (b) borrowing or other financial obligations of the government of India; (c) custody of, withdrawal from or payment into the Consolidated Fund of India (CFI) or the Contingency Fund of India; (d) appropriation of money out of the CFI; (e) expenditure charged on the CFI; (f) receipt or custody or audit of money on account of the CFI or the public account of India; or (g) any matter incidental to any of the matters specified in sub-clauses (a) to (f).

    Article 110 is modelled on Section 1(2) of the UK’s Parliament Act, 1911, which also defines money Bills as those dealing only with certain enumerated matters. The use of the word ‘only’ was brought up by Ghanshyam Singh Gupta during the constituent assembly debates. He pointed out that it limits the scope of money Bills to only those legislations which do not deal with other matters. His amendment to delete the word ‘only’ was rejected, clearly establishing the intent of the framers of the constitution to keep the ambit of money Bills extremely narrow. G.V. Mavalankar, the first speaker of the Lok Sabha, had stated that the word ‘only’ must not be construed so as to give it an overly restrictive meaning. For instance, a Bill which deals with taxation could have provisions which deal with the administration of that tax. The finance minister, Arun Jaitley, referred to these words of Mavalankar’s in justifying the classification of the Aadhaar Act as a money Bill.

    While the Aadhaar Bill does make references to benefits, subsidies and services funded by the CFI, even a cursory reading of the Bill reveals its main objectives to be creating a right to obtain a unique identification number and providing for a statutory apparatus to regulate the entire process. Any reasonable reading of the legislation would be hard pressed to view all provisions in the Aadhaar Act, aside from the one creating a charge on the CFI, as merely administrative provisions incidental to the creation of such a charge. The mere fact of establishing the Aadhaar number as the identification mechanism for benefits and subsidies funded by the CFI does not give it the character of a money Bill. The Bill merely speaks of facilitating access to unspecified subsidies and benefits, rather than their creation and provision being the primary object of the legislation. Erskine May’s seminal textbook, Parliamentary Practice, is instructive in this respect and makes it clear that a legislation which simply makes a charge on the consolidated fund does not become a money Bill if its character is otherwise not that of one. Further, the subordinate regulations notified under the Aadhaar Act deal almost entirely with matters of enrolment, updation and authentication of the Aadhaar number, and related matters such as data security regulations and the sharing of collected information, rather than the provision of benefits or subsidies or the disbursal of funds from the CFI.

    However, in the context of the petition filed by former Union minister Jairam Ramesh challenging the passage of the law on Aadhaar as a money Bill, the more important question is whether the judiciary has a right to question the speaker’s decision in such a matter. If not, any other questions about whether the legislation is a money Bill will remain merely academic in nature.

    Irregularity vs illegality

    Article 110 (3) clearly states that with regard to the question whether a legislation is a money Bill or not, the decision of the speaker is final and binding. The question is whether such a clause completely excludes any judicial review. Further, Article 122 prohibits the courts from questioning the validity of any proceedings in parliament on the ground of any alleged irregularity of procedure.

    During the arguments in the court, the attorney general questioned the locus standi of Ramesh. The petition has been made under Article 32 of the constitution and the government argued that no fundamental rights of Ramesh were violated. However, the court has asked Ramesh to make his submission and adjourned the hearing to July. The petition by Ramesh would hinge largely on the powers of the judiciary to question the decision of the speaker of the Lok Sabha.

    The powers of privilege that parliamentarians enjoy are integral to the principle of separation of powers. The rationale behind parliamentary privilege is to prevent interference in the lawmakers’ powers to perform essential functions. The ability to speak and vote inside the legislature without the fear of punishment is certainly essential to the role of a lawmaker. However, the extent of this protection lies at the centre of this discussion. During the constituent assembly debates, H.V. Kamath and others had argued for a schedule to exhaustively codify the existing privileges. However, B.R. Ambedkar pointed to the difficulty of doing so and parliamentary privilege on the lines of the British parliamentary practice was retained in the constitution. In the last few decades, a judicial position has emerged that courts could exercise a limited degree of scrutiny over privileges, as they are primarily responsible for interpreting the constitution.

    In the matter of Raja Ram Pal vs The Hon’ble Speaker, Lok Sabha, it had been clarified that proceedings of the legislature were immune from questioning by courts in the case of procedural irregularity but not in the case of illegality. In this case, the Supreme Court while dealing with Article 122 stated that it does not oust review by the judiciary in cases of “gross illegality, irrationality, violation of constitutional mandate, mala fides, non-compliance with rules of natural justice and perversity.”

    In 1968, the speaker of the Punjab legislative assembly adjourned the proceedings for a period of two months following rowdy behaviour. Subsequently, an ordinance preventing such a suspension was promulgated and the legislature was summoned by the governor to consider some expedient financial matters. The speaker disagreed with the decision and, after some confusion, the deputy speaker passed a few Bills as money Bills. While looking into the question of what was protected from judicial review, the court stated that the protection did not extend to breaches of mandatory provisions of the constitution, only to directory provisions. By that logic, if Article 110(1) is seen as a mandatory provision, a breach of its provisions could lead to an interpretation that the Supreme Court may well question an erroneous decision by the speaker of the Lok Sabha to certify a legislation as a money Bill. The use of the word “shall” in Article 110(1), the nature and design of the provision, and its overriding impact on other constitutional provisions granting powers to the Rajya Sabha are ample evidence of its mandatory nature. Based on the above, Anup Surendranath has argued that the passage of the Aadhaar Act as a money Bill, when it does not satisfy the constitutional conditions for one, does amount to a gross illegality.

    The judicial precedent in Mohd. Saeed Siddiqui vs State of Uttar Pradesh, where the matter of the court’s power to question the decision of a speaker was considered, however, leans in the other direction. In 2012, the Uttar Pradesh Lokayukta and Up-Lokayuktas (Amendment) Act, 2012 was passed as a money Bill by the Uttar Pradesh state legislature. Subsequently, a writ petition was filed challenging its constitutional validity. A three-judge bench of the Supreme Court looked into the application of Article 212, the provision corresponding to Article 122, which deals with the power of the courts to inquire into the proceedings of the state legislature. The court held that Article 212 makes “it clear that the finality of the decision of the Speaker and the proceedings of the State Legislature being important privilege of the State Legislature, viz., freedom of speech, debate and proceedings are not to be inquired by the Courts.” Importantly, ‘proceedings of the legislature’ were deemed to include within their scope everything done in transacting parliamentary business, including the passage of a Bill. While the court did acknowledge the limitations of parliamentary privilege as established in the Raja Ram Pal case, it did not adequately take into account the reasoning in that case.

    The Aadhaar Act is a legislation which makes it mandatory for all residents to enrol in a biometric identification system in order to avail of certain subsidies, benefits and services. It poses huge potential risks to individual privacy and national security and has been the subject of an extremely high-profile Public Interest Litigation. Its passage as a money Bill, without any oversight from the Rajya Sabha or an opportunity for substantial debate and discussion, is a fraud on the Constitution. Whether or not the court chooses to see it that way remains to be seen.

    Comments on Information Technology (Security of Prepaid Payment Instruments) Rules, 2017

    by Amber Sinha — last modified Mar 23, 2017 01:54 AM
    The Centre for Internet and Society submitted comments on the Information Technology (Security of Prepaid Payment Instruments) Rules, 2017. The comments were prepared by Udbhav Tiwari, Pranesh Prakash, Abhay Rana, Amber Sinha and Sunil Abraham.

    1. Preliminary

    1.1. This submission presents comments by the Centre for Internet and Society[1] in response to the Information Technology (Security of Prepaid Payment Instruments) Rules 2017 (“the Rules”).[2] On March 8, 2017, the Ministry of Electronics and Information Technology (MEIT) issued a consultation paper calling for a framework for the security of digital wallets operating in the country. The proposed rules have been drafted under provisions of the Information Technology Act, 2000, and comments have been invited from the general public and stakeholders before their enactment.

    2. The Centre for Internet and Society

    2.1. The Centre for Internet and Society, (“CIS”), is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, and open access), internet governance, telecommunication reform, digital privacy, and cyber-security.

    2.2. This submission is consistent with CIS’ commitment to safeguarding general public interest, and the interests and rights of various stakeholders involved, especially the privacy and data security of citizens. CIS is thankful to the MEIT for this opportunity to provide feedback to the draft rules.

    3. Comments

    3.1  General Comments

    Penalty

    There is no penalty for not complying with these rules. Even the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 do not provide for penalties. Under section 43A of the Information Technology Act (under which the 2011 Rules were promulgated), a wrongful gain or a wrongful loss needs to be demonstrated. This should not be a requirement for the financial sector.

    Expansion to Contractual Parties

    To be effective and realistically protect consumer interest, a majority of these rules should also be extended to third parties, agents, contractors and any other relevant parties to whom an e-PPI issuer may delegate functions as a part of its operations.

    3.2  Rule 2: Definitions

    Certain key terms relevant to the field of e-PPI-based digital payments, such as ‘authorisation’ and ‘metadata’, are not defined in the rules. They should be defined and accounted for in the rules to ensure that modern developments such as big data and machine learning, digital surveillance, etc. do not violate human rights and consumer interest.

    3.3 Rule 7: Definition of personal information

    Rule 7 provides an exhaustive list of data that will be deemed to be personal information for the purposes of the Rules. While information collected at the time of issuance of the pre-paid payment instrument and during its use is included within the scope of Rule 7, it makes no reference to metadata generated and collected by the e-PPI issuer.

    3.4 Rule 4: Inadequate privacy protections

    Rule 4(2) specifies the details that the privacy policy of each e-PPI issuer must contain. However, these specifications are highly inadequate and fall well below the recommendations of the National Privacy Principles in the Report of the Group of Experts on Privacy chaired by Justice A.P. Shah.

    Suggestions: The Rules should include clearly specified rights to access, correction and opt-in/opt-out, continuing obligations to seek consent in case of a change in policy or purpose, and deletion of data after the purpose is achieved. Additionally, it must be required that a log of each past version of the privacy policy be maintained, along with its period of applicability.

    3.5 Rule 10: Reasonable security practices

    Problem: Financial information (“such as bank account or credit card or debit card or other payment instrument details”) is already invoked in an inclusive manner in the definition of ‘personal information’ in Rule 7. Given this, there is no need to make the Reasonable Security Practices Rules applicable to financial data through this provision: they already apply, and it is best to avoid unnecessary redundancy.

    Solution: This entire rule should be removed.

    3.6 Rule 12: Traceability

    Problem: This rule requires that payment-related interactions with customers or other service providers be “appropriately trace[able]”. But it is unclear what that would mean in practice: would IP logging suffice? Would the IMEI need to be captured for mobile transactions? What is “appropriately” traceable? None of these questions is answered.

    Suggestion: The NPCI’s practices and RBI regulations, for instance, seek to limit the amount of information that entities like e-PPI providers have.  These rules need to be brought in line with those practices and regulations.

    3.7 Rule 5: Risk Assessment

    Rule 5 requires e-PPI issuers to carry out risk assessments of the security of their payment systems at least once a year and after any major security incident. However, there are no transparency requirements, such as publication of the details of such reviews, a summary of the analysis, or any security vulnerabilities discovered.

    Suggestion:

    • Broaden the scope of this provision to include not just risk assessments but also security audits.
    • Mandate publication of risk assessment and security audit reports.

     

    3.8 Rule 11: End-to-End Encryption

    The rule concerning end-to-end encryption (E2E) needs significantly greater detail to be effective in ensuring the protection of information both in storage and in transit.

    Suggestions: Elements such as a Secure Element (or a secured server) and a Trusted User Interface, both concepts that enable secure payments, can be detailed in the rule, and a timeline can be established requiring hardware, e-PPI practices and security standards to realistically account for such best practices, ensuring a modern, secure and industry-accepted implementation of the rule.

    3.9 Rule 13: Retention of Information

    Problem: Rule 13 leaves the question of retention entirely unanswered by deferring future rulemaking to the Central Government.

    Suggestions: Rule 13 should be expanded to specify the categories of information that may be stored, guidelines for the short-term (fast-access) and long-term storage of information retained under the rule, and other relevant details. The rule should also specify the security standards to be followed in storing such information, require that access logs be maintained whenever this information is accessed by individuals, and detail secure destruction practices at the end of the retention period. Finally, it should mandate that end users be notified by the e-PPI issuer whenever such retained information is accessed, barring exceptional circumstances such as national security or compromising an ongoing criminal investigation.

    3.10 Rule 14: Reporting of Cyber Incidents

    Rule 14 is an excellent opportunity to uphold transparency, accountability and consumer rights by mandating time-bound and information-bound notification of cyber incidents to customers, including intrusions, database breaches and any other compromise of the integrity of the financial system. While the requirement of reporting such incidents to CERT-In is already present in Rule 12 of the CERT Rules, this rule retains the optional nature of notifying customers. The rule should include an exhaustive list of categories or kinds of cyber incidents that must be reported to affected end users, without compromising the investigation of such breaches by private organisations and public authorities. Further, the rule should include penalties for non-compliance with this requirement (covering reporting both to CERT-In and to the consumer), to serve as an incentive for e-PPI issuers to uphold consumer and public interest. The rule should be expanded to include a detailed mechanism for such reporting, including when e-PPI issuers and CERT-In may withhold information from consumers, as well as a requirement that withheld information be disclosed once the investigation has been completed. Finally, the rule should require that such disclosures be public in nature and that consumers not be barred from disseminating such information, to enable informed choice by the end-user community.

    Suggestion:

    (1) In Rule 14(3), “may” should be substituted with “shall”.

    (2) Penalties of up to 5 lakh rupees may be imposed for each day that the e-PPI issuer fails to report any severe vulnerability that could likely result in harm to customers.

    3.11 Rule 15: Customer Awareness and Education

    Problem: Rule 15 on customer awareness and education by e-PPI issuers does not take into account the vast linguistic diversity and varied socio-economic demographics that make up the end users of e-PPI providers in India; the rule should mandate that actions under it account for these factors before awareness material is propagated.

    Solutions: The rule must ensure that e-PPI issuers’ track record in carrying out awareness activities is regularly held to account, through both government oversight and public disclosures on their websites. Further, the rule can be made more concrete and effective by including mobile operating systems in its scope (along with equipment), mandating awareness of best practices for inclusive technologies like USSD banking, specifying that notifications include SMS reports of financial transactions, etc.

    3.12 Rule 16: Grievance Redressal

    Problem: Rule 16 lays down the requirement of grievance redressal without specifying appellate mechanisms (both within the organisation and at the regulatory level), accountability (via penalties) for non-compliance, or a clear hierarchy of responsibility within the e-PPI organisation. These omissions seriously compromise the efficacy of a grievance redressal framework.

     

    Solutions: Similar grievance redressal rules enacted by the Insurance Regulatory and Development Authority for the insurance sector and the Telecom Regulatory Authority of India for the telecom sector can and should serve as reference points for this rule. Their effectiveness and real-world operation should also be monitored by the relevant authorities, while ensuring sufficient flexibility exists in the rule to uphold consumer rights and the public interest. Proper appellate mechanisms at the regulatory level are essential, along with penalties for non-compliance.

    3.13 Rule 17: Security Standards

    Problem: Rule 17 empowers the Central Government to mandate security standards to be followed by e-PPI issuers operating in India. While appreciable in its overall outlook of ensuring a minimum standard of security, the Rule needs to be improved to make it more effective. This can be done by specifying certain minimum security standards so that all e-PPI issuers maintain a baseline level of security, instead of leaving the standards to be notified at a later date.

    Solutions: Standards that can either be made mandatory or be used as reference points for a new standard under Rule 17(2) include ISO/IEC 14443, IS 14202, ISO/IEC 7816, PCI DSS, etc. Further, the Rule should include penalties for non-compliance with these standards, to make them effectively enforceable by both the government and end users. Additional details, such as the maximum time period within which security standards must be implemented after notification, requirements for regular third-party audits to ensure continuing compliance and effectiveness, and a requirement that updated standards be adopted upon their release, would go a long way in ensuring e-PPI issuers fulfil their mandate under these Rules.


    [1] http://cis-india.org/

    [2] http://meity.gov.in/sites/upload_files/dit/files/draft-rules-security%20of%20PPI-for%20public%20comments.pdf

    Benefits, Harms, Rights and Regulation: A Survey of Literature on Big Data

    by Amber Sinha, Vanya Rakesh, Vidushi Marda and Geethanjali Jujjavarapu — last modified Mar 23, 2017 02:17 AM
    This survey draws upon a range of literature including news articles, academic articles, and presentations and seeks to disaggregate the potential benefits and harms of big data, organising them into several broad categories that reflect the existing scholarly literature. The survey also recognises the non-technical big data regulatory options which are in place as well as those which have been proposed by various governments, civil society groups and academics.

    The survey was edited by Sunil Abraham, Elonnai Hickok and Leilah Elmokadem.


    Introduction

    In 2011, it was estimated that the quantity of data produced globally had surpassed 1.8 zettabytes. By 2013, it had increased to 4 zettabytes. With the nascent development of the so-called ‘Internet of Things’ gathering pace, these trends are likely to continue. This expansion in the volume, velocity and variety of data available, together with the development of innovative forms of statistical analytics, is generally referred to as “big data”, though there is no single agreed-upon definition of the term. Although still in its initial stages, big data promises to provide new insights and solutions across a wide range of sectors, many of which would have been unimaginable even a decade ago.

    Despite enormous optimism about the scope and variety of big data’s potential applications, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits. Most notable are the concerns about the inevitable threats to privacy associated with the generation, collection and use of large quantities of data. Concerns have also been raised regarding, for example, the lack of transparency around the design of algorithms used to process the data, over-reliance on big data analytics as opposed to traditional forms of analysis, and the creation of new digital divides. The existing literature on big data is vast. However, many of the benefits and harms identified by researchers tend to focus on sector-specific applications of big data analytics, such as predictive policing or targeted marketing. Whilst these examples can be useful in demonstrating the diversity of big data’s possible applications, they do not offer a holistic perspective on its broader impacts.

    Click to read the full survey here

    How Aadhaar compromises privacy, and how to fix it

    by Sunil Abraham last modified Apr 01, 2017 07:00 AM
    Aadhaar is mass surveillance technology. Unlike targeted surveillance, which is a good thing and essential for national security and public order, mass surveillance undermines security. And while biometrics is appropriate for targeted surveillance by the state, it is wholly inappropriate for everyday transactions between the state and law-abiding citizens.

    The op-ed was published in the Hindu on March 31, 2017.


    When assessing a technology, don't ask, “What use is it being put to today?” Instead, ask, “What use can it be put to tomorrow, and by whom?” The original noble intentions of the Aadhaar project will not constrain those in the future who want to take full advantage of its technological possibilities. However, rather than frame the surveillance potential of Aadhaar in a negative tone as three problem statements, I will propose three modifications to the project that will reduce, but not eliminate, its surveillance potential.

    Shift from biometrics to smart cards: In January 2011, the Centre for Internet and Society had written to the parliamentary finance committee that was reviewing what was then called the “National Identification Authority of India Bill 2010”. We provided nine reasons for the government to stop using biometrics and instead use an open smart card standard. Biometrics allows for identification of citizens even when they don't want to be identified; even unconscious and dead citizens can be identified using biometrics. Smart cards, on the other hand, require PINs and thus citizens' conscious cooperation during the identification process. Once you flush your smart card down the toilet, nobody can use it to identify you. Consent is baked into the design of the technology. If the UIDAI adopts smart cards, we can destroy the centralized database of biometrics just as the UK government did in 2010 under Theresa May's tenure as Home Secretary. This would completely eliminate the risk of foreign governments, criminals and terrorists using the biometric database to remotely, covertly and non-consensually identify Indians.
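    The consent property described above can be sketched in a toy model: identification succeeds only when the holder both possesses the card and enters the PIN, so cooperation is required by construction. The class, PIN scheme and identifiers below are purely illustrative, not the design of the UIDAI or of any actual smart card standard.

    ```python
    import hashlib
    import secrets

    class SmartCard:
        """Toy model of PIN-gated identification: knowing the holder's
        identity requires both possession of the card and the PIN, so the
        holder's cooperation is built into the design."""

        def __init__(self, holder_id: str, pin: str):
            self._holder_id = holder_id
            # Store only a salted hash of the PIN, never the PIN itself.
            self._salt = secrets.token_bytes(16)
            self._pin_hash = hashlib.sha256(self._salt + pin.encode()).digest()

        def identify(self, pin_entered: str):
            """Return the holder's identity only if the correct PIN is entered."""
            attempt = hashlib.sha256(self._salt + pin_entered.encode()).digest()
            if attempt != self._pin_hash:
                return None  # no cooperation, no identification
            return self._holder_id

    card = SmartCard("resident-42", "1234")
    assert card.identify("0000") is None
    assert card.identify("1234") == "resident-42"
    ```

    Contrast this with a biometric: a fingerprint can be matched against a central database without the person entering anything at all.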

    Destroy the authentication transaction database: The Aadhaar Authentication Regulations 2016 specify that transaction data will be archived for five years after the date of the transaction. Even though the UIDAI claims that this is a zero-knowledge database from the perspective of “reasons for authentication”, any big data expert will tell you that it is trivial to guess what is going on using the unique identifiers of the registered devices and the timestamps used for authentication. That is how they put Rajat Gupta and Raj Rajaratnam in prison: there was nothing in the payload, i.e., the voice recordings of the tapped telephone conversations; the conviction was based on metadata. Smart cards based on open standards allow for decentralized authentication by multiple entities and therefore eliminate the need for a centralized transaction database.
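    The claim that purpose can be guessed from device identifiers and timestamps alone can be illustrated with a toy join against a device registry; the registry entries and log records below are invented for illustration only.

    ```python
    # Toy illustration (invented data): even a "zero knowledge" transaction
    # log of (registered_device_id, timestamp) pairs leaks purpose, because
    # each registered device belongs to a known category of service provider.
    device_registry = {
        "DEV-001": "pharmacy",
        "DEV-002": "telecom store",
        "DEV-003": "hospital",
    }

    transaction_log = [
        ("DEV-003", "2017-03-01T09:12"),
        ("DEV-001", "2017-03-01T09:55"),
        ("DEV-001", "2017-03-14T18:20"),
    ]

    # Joining the log against the registry recovers the "reason for
    # authentication" trivially, without ever seeing the payload.
    profile = [device_registry[device] for device, _ in transaction_log]
    assert profile == ["hospital", "pharmacy", "pharmacy"]
    ```

    The timestamps add frequency and pattern information on top of this, which is exactly the kind of metadata inference the paragraph above describes.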

    Prohibit the use of the Aadhaar number in other databases: We must, as a nation, get over our obsession with Know Your Customer [KYC] requirements. For example, there is no KYC requirement for SIM cards in most developed countries. Our insistence on KYC has only slowed Internet adoption, created a black market for ID documents and caused unnecessary wastage of resources by telecom companies. It has not prevented criminals and terrorists from using phones. Where we absolutely must have KYC for the purposes of security, elimination of ghosts and regulatory compliance, we must use a token issued by the UIDAI instead of the Aadhaar number itself. This would make it harder for unauthorized parties to combine databases while at the same time enabling law enforcement agencies to combine databases using the appropriate authorizations and infrastructure like NATGRID. The NATGRID, unlike Aadhaar, is not a centralized database; it is a standard and platform for the express assembly of sub-sets of up to 20 databases, which are then accessed by up to 12 law enforcement and intelligence agencies.
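    One common way to implement such tokens, shown here purely as a sketch and not as the UIDAI's actual scheme, is a keyed hash: the issuer derives a distinct token for each agency from the identity number and the agency's identifier, so two agencies' databases cannot be joined without the issuer's secret key. The key, agency IDs and the sample number are all hypothetical.

    ```python
    import hashlib
    import hmac

    # Hypothetical secret held only by the token-issuing authority.
    SECRET_KEY = b"held-only-by-the-issuing-authority"

    def agency_token(id_number: str, agency_id: str) -> str:
        """Derive a per-agency token from an identity number.

        The same person gets a different, stable token at each agency;
        without SECRET_KEY the tokens cannot be linked to each other or
        reversed to the underlying number."""
        message = f"{agency_id}:{id_number}".encode()
        return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

    telecom = agency_token("999912345678", "telecom-kyc")
    bank = agency_token("999912345678", "bank-kyc")

    assert telecom != bank                                # unlinkable across agencies
    assert bank == agency_token("999912345678", "bank-kyc")  # stable within one agency
    ```

    An authorised agency holding the key can still regenerate the mapping when the law permits, which is the property the article relies on for lawful database combination.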

    To conclude, even as a surveillance project – Aadhaar is very poorly designed. The technology needs fixing today, the law can wait for tomorrow.

    Analysis of Key Provisions of the Aadhaar Act Regulations

    by Amber Sinha last modified Apr 03, 2017 02:05 PM
    In exercise of the powers conferred by the Aadhaar (Targeted Delivery of Financial and other Subsidies, Benefits and Services) Act, 2016 (Aadhaar Act), the UIDAI issued a set of five regulations in late 2016. In this policy brief, we look at the five regulations and their key provisions, and highlight the issues they leave unresolved, leave unaddressed, or newly create.

    This blog post was edited by Elonnai Hickok


    Introduction

    At the outset, it is important to note a concerning feature of these regulations: they intend to govern the processes of a body which has been in existence for over six years, and which has already engaged, at massive scale, in all the activities sought to be governed by these policies, considering the claims of over one billion Aadhaar number holders. However, the regulations do not acknowledge, let alone address, past processes, practices, enrolments, authentications, or uses of technology; there are no provisions that effectively address the past operations of the UIDAI. Below is an analysis of the five regulations issued thus far by the UIDAI.

    Unique Identification Authority of India (Transactions of Business at Meetings of the Authority) Regulations[1]

    These regulations, framed under clause (h) of sub-section (2) of Section 54 read with sub-section (1) of Section 19 of the Aadhaar Act, deal with the meetings of the UIDAI, the process leading up to each meeting, and the manner in which all meetings are to be conducted.

    Provision: Sub-Regulation 3.

    Meetings of the Authority– (1) There shall be no less than three meetings of the Authority in a financial year on such dates and at such places as the Chairperson may direct and the interval between any two meetings shall not in any case, be longer than five months

    Observations:

    The number of times that the UIDAI would meet in a year is far too low, taking into account the significance of its responsibilities as the sole policy-making body for all issues related to Aadhaar. In contrast, the Telecom Regulatory Authority of India is required to meet at least once a month. Other bodies such as SEBI and IRDAI are also required to meet at least four times[2] and six times[3] a year respectively.

    Provision: Sub-Regulation 8 (5)

    Decisions taken at every meeting of the Authority shall be published on the website of Authority unless the Chairperson determines otherwise on grounds of ensuring confidentiality.

    Observations:

    The Chairperson has the power to withhold publication of the decisions of a meeting on the broad ground of ‘confidentiality’. Given that the decisions taken by the UIDAI as a public body can have very real implications for the rights of residents, the ground of confidentiality is not sufficient to warrant withholding publication. It is curious that instead of referring to clearly defined exceptions, such as those laid down in Section 8 of the Right to Information Act, 2005, the regulations merely refer to the vague and undefined criterion of ‘confidentiality’.

    Provision: Sub-Regulation 14 (4)

    Members of the Authority and invitees shall sign an initial Declaration at the first meeting of the Authority for maintaining the confidentiality of the business transacted at meetings of the Authority in Schedule II.

    Observations:

    The above provision, combined with the fact that there is no provision regarding publication of the minutes of the meetings of the UIDAI, raises serious questions about the transparency of its functioning.

    Unique Identification Authority of India (Enrolment and Update) Regulations[4]

    These regulations, framed under sub-section (1), and sub-clauses (a), (b), (d), (e), (j), (k), (l), (n), (r), (s), and (v) of sub-section (2), of Section 54 of the Aadhaar Act, deal with the enrolment process, the generation of an Aadhaar number, the updation of information, and the conduct of enrolment agencies and associated third parties.

    Provisions:

    Sub-Regulation 8 (2), (3) and (4)

    The standard enrolment/update software shall have the security features as may be specified by the Authority for this purpose.

    All equipment used in enrolment, such as computers, printers, biometric devices and other accessories shall be as per the specifications issued by the Authority for this purpose.

    The biometric devices used for enrolment shall meet the specifications, and shall be certified as per the procedure, as may be specified by the Authority for this purpose.

    Sub-Regulation 3 (2)

    The standards for collecting the biometric information shall be as specified by the Authority for this purpose.

    Sub-Regulation 4 (5)

    The standards of the above demographic information shall be as may be specified by the Authority for this purpose.

    Sub-Regulation 6 (2)

    For residents who are unable to provide any biometric information contemplated by these regulations, the Authority shall provide for handling of such exceptions in the enrolment and update software, and such enrolment shall be carried out as per the procedure as may be specified by the Authority for this purpose.

    Sub-Regulation 14 (2)

    In case of rejection due to duplicate enrolment, resident may be informed about the enrolment against which his Aadhaar number has been generated in the manner as may be specified by the Authority.

    Observations:

    Though in February 2017 the UIDAI published technical specifications for registered devices[5], the regulations leave unaddressed issues such as the lack of appropriately defined security safeguards in the Aadhaar ecosystem. There is a general trend of deferral in the regulations, which state that matters would be specified later on important aspects such as the rejection of applications, the uploading of the enrolment packet to the CIDR, the procedure for enrolling residents with biometric exceptions, the procedure for informing residents about acceptance or rejection of enrolment applications, the convenience fee for updation of residents’ information, the procedure for authenticating individuals across services, etc. There is a clear failure to exercise the mandate delegated to the UIDAI, leaving key matters to be determined at an unspecified future date. The delay and ambiguity around when regulations will be defined is all the more problematic in light of the fact that the project has been implemented since 2010 and the Aadhaar number is now mandatory for availing a number of services.

    Further, it is important to note that a number of policies put out by the UIDAI predate these regulations, on which the regulations are completely silent, thus neither endorsing the previous policies nor suggesting that they may be revisited. The regulations also choose not to engage with the question of the operation of the Aadhaar project, enrolment, storage of data, etc. prior to their notification, or with the policies which they may regularise. For instance, the regulations do not specify any measures to deal with issues arising out of enrolment devices used prior to the development of the February 2017 specifications.

    Provision: Sub-Regulation 32

    The Authority shall set up a contact centre to act as a central point of contact for resolution of queries and grievances of residents, accessible to residents through toll free number(s) and/ or e-mail, as may be specified by the Authority for this purpose.

    (2) The contact centre shall:

    1. Provide a mechanism to log queries or grievances and provide residents with a unique reference number for further tracking till closure of the matter;
    2. Provide regional language support to the extent possible;
    3. Ensure safety of any information received from residents in relation to their identity information;
    4. Comply with the procedures and processes as may be specified by the Authority for this purpose.

    (3) Residents may also raise grievances by visiting the regional offices of the Authority or through any other officers or channels as may be specified by the Authority.

    Observations:

    While the setting up of a grievance redressal mechanism under the regulations is a welcome move, there is little clarity about the procedure to be followed, nor is a timeline specified. The chapter on grievance redressal is in fact one of the shortest in the regulations. Its only provision deals with the setting up of a contact centre, a curious choice of term for what is supposed to be the primary quasi-judicial grievance redressal body for the Aadhaar project. In line with that indifferent and insouciant terminology, the chapter is restricted to the logging of queries and grievances by the contact centre, and does not address procedure or timelines, or even substantive provisions about the nature of redress available. Furthermore, the obligation on the contact centre to protect information received is limited to ‘ensuring safety’, an ambiguous standard that does not correspond to any other standard in Indian law.

    Aadhaar (Authentication) Regulations, 2016[6]

    These regulations, framed under sub-section (1), and sub-clauses (f) and (w) of sub-section (2), of Section 54 of the Aadhaar Act, deal with the authentication framework for Aadhaar numbers, the governance of authentication agencies, and the procedure for the collection and storage of authentication data and records.

    Provisions:

    Sub-Regulation 5 (1)

    At the time of authentication, a requesting entity shall inform the Aadhaar number holder of the following details:—

    (a) the nature of information that will be shared by the Authority upon authentication;

    (b) the uses to which the information received during authentication may be put; and

    (c) alternatives to submission of identity information

    Sub-Regulation 6 (2)

    A requesting entity shall obtain the consent referred to in sub-regulation (1) above in physical or preferably in electronic form and maintain logs or records of the consent obtained in the manner and form as may be specified by the Authority for this purpose.

    Observations:

    Sub-regulation 5 mentions that at the time of authentication, requesting entities shall inform the Aadhaar number holder of alternatives to submission of identity information for the purpose of authentication. Similarly, sub-regulation 6 mentions that the requesting entity shall obtain the consent of the Aadhaar number holder for the authentication. However, in neither of the above circumstances do the regulations specify the clearly defined options that must be made available to Aadhaar number holders in case they do not wish to submit identity information, nor do the regulations specify the procedure to be followed in case the Aadhaar number holder does not provide consent.

    Most significantly, this provision does little by way of allaying the fears raised by the language in Section 8 (4) of the Aadhaar Act which states that UIDAI “shall respond to an authentication query with a positive, negative or any other appropriate response sharing such identity information.” This section gives a very wide discretion to UIDAI to share personal identity information with third parties, and the regulations do not temper or qualify this power in any way.

    Sub-Regulation 11 (1) and (4)

    The Authority may enable an Aadhaar number holder to permanently lock his biometrics and temporarily unlock it when needed for biometric authentication.

    The Authority may make provisions for Aadhaar number holders to remove such permanent locks at any point in a secure manner.

    Observations:

    A welcome provision in the regulations is that of biometric locking, which allows Aadhaar number holders to permanently lock their biometrics and temporarily unlock them only when needed for biometric authentication. However, in the same breath, the regulations also provide for the UIDAI to make provisions to remove such locks, without specifying any grounds for doing so.

    Provision: Sub-Regulation 18 (2), (3) and (4)

    The logs of authentication transactions shall be maintained by the requesting entity for a period of 2 (two) years, during which period an Aadhaar number holder shall have the right to access such logs, in accordance with the procedure as may be specified.

    Upon expiry of the period specified in sub-regulation (2), the logs shall be archived for a period of five years or the number of years as required by the laws or regulations governing the entity, whichever is later, and upon expiry of the said period, the logs shall be deleted except those records required to be retained by a court or required to be retained for any pending disputes.

    The requesting entity shall not share the authentication logs with any person other than the concerned Aadhaar number holder upon his request or for grievance redressal and resolution of disputes or with the Authority for audit purposes. The authentication logs shall not be used for any purpose other than stated in this sub-regulation.

    Observations:

    While it is specified that the authentication logs collected by the requesting entities shall not be shared with any person other than the concerned Aadhaar number holder upon their request, or for grievance redressal and resolution of disputes, or with the Authority for audit purposes, and that the logs may not be used for any other purpose, the maintenance of the logs for a total period of seven years seems excessive. Similarly, the UIDAI is also supposed to store authentication transaction data for over five years. This violates the widely recognised data minimisation principle, which requires that data collectors and processors delete personal data once the purpose for which it was collected is fulfilled. While retention of data for audit and dispute-resolution purposes is legitimate, the lack of specified security standards, the overall lack of transparency, and the inadequate grievance redressal mechanism greatly exacerbate the risks associated with data retention.

    Aadhaar (Sharing of Information) Regulations, 2016 and Aadhaar (Data security) Regulations, 2016[7]

    Framed under the powers conferred by sub-section (1), and sub-clause (o) of sub-section (2), of Section 54 read with sub-clause (k) of sub-section (2) of Section 23, and sub-sections

    (2) and (4) of Section 29, of the Aadhaar Act, the Sharing of Information regulations look at the restrictions on sharing of identity information collected by the UIDAI and requesting entities. The Data Security regulation, framed under powers conferred by clause (p) of subsection (2) of section 54 of the Aadhaar Act, looks at security obligations of all service providers engaged by the UIDAI.

    Provision: Sub-Regulation 6 (1)

    All agencies, consultants, advisors and other service providers engaged by the Authority, and ecosystem partners such as registrars, requesting entities, Authentication User Agencies and Authentication Service Agencies shall get their operations audited by an information systems auditor certified by a recognised body under the Information Technology Act, 2000 and furnish certified audit reports to the Authority, upon request or at time periods specified by the Authority.

    Observations:

    The regulation states that audits shall be conducted by an information systems auditor certified by a recognised body under the Information Technology Act, 2000. However, there is no such certifying body under the Information Technology Act. This suggests a lack of diligence in framing the rules, and will inevitably lead to inordinate delays or, alternatively, to the absence of a clear procedure for the appointment of an auditor. Further, instead of prescribing a regular and proactive process of audits, the regulation limits audits to when they are requested or deemed appropriate by the UIDAI. This is another in a long line of provisions whose effect is to concentrate power in the hands of the UIDAI, with little scope for accountability and transparency.

    Conclusion

    In conclusion, it must be stated that the regulations promulgated by the UIDAI leave a lot to be desired. Some of the most important issues raised against the Aadhaar Act, which were delegated to the UIDAI’s rule-making powers, have not been addressed at all. Matters such as data security policies, the right of Aadhaar number holders to access their records, the procedure to be followed by grievance redressal bodies, the uploading of the enrolment packet to the CIDR, the procedure for enrolling residents with biometric exceptions, and the procedure for informing residents about acceptance or rejection of enrolment applications have been left unaddressed, to ‘be specified’ at a later date. These failures leave a gaping hole, especially in light of the absence of a comprehensive data protection legislation in India, the haste with which enrolment and seeding have been carried out by the UIDAI, and the number of services, both private and public, which are using or planning to use the Aadhaar number and the authentication process as a primary identifier for residents.

    [1] Available at https://uidai.gov.in/legal-framework/acts/regulations.html

    [2] Available at https://www.irda.gov.in/ADMINCMS/cms/frmGeneral_Layout.aspx?page=PageNo62&flag=1

    [3] Available at http://www.sebi.gov.in/acts/boardregu.html

    [4] Available at https://uidai.gov.in/legal-framework/acts/regulations.html

    [5] Available at https://uidai.gov.in/images/resource/aadhaar_registered_devices_2_0_09112016.pdf

    [6] Available at https://uidai.gov.in/legal-framework/acts/regulations.html

    [7] Available at https://uidai.gov.in/legal-framework/acts/regulations.html

    Aadhaar marks a fundamental shift in citizen-state relations: From ‘We the People’ to ‘We the Government’

    by Pranesh Prakash last modified Apr 04, 2017 04:10 PM
    Your fingerprints, iris scans, details of where you shop. Compulsory Aadhaar means all this data is out there. And it’s still not clear who can view or use it.

    The article was published in the Hindustan Times on April 3, 2017.


     

    [Illustration: Until recently, people were allowed to opt out of Aadhaar and withdraw consent to have their data stored. This is no longer going to be an option. (Siddhant Jumde / HT Illustration)]


    Imagine you’re walking down the street and you point the camera on your phone at a crowd of people in front of you. An app superimposes on each person’s face a partially-redacted name, date of birth, address, whether she’s undergone police verification, and, of course, an obscured Aadhaar number.

    OnGrid, a company that bills itself as a “trust platform” and offers “to deliver verifications and background checks”, used that very imagery in an advertisement last month. Its website notes that “As per Government regulations, it is mandatory to take consent of the individual while using OnGrid”, but that is a legal requirement, not a technical one.

    Since every instance of use of Aadhaar for authentication or for financial transactions leaves behind logs in the Unique Identification Authority of India’s (UIDAI) databases, the government can potentially have very detailed information about everything from your medical purchases to your use of video-chatting software. The space for digital identities divorced from legal identities disappears. Clearly, Aadhaar has immense potential for profiling and surveillance. Our only defence: law that is weak at best and non-existent at worst.

    The Aadhaar Act and Rules don’t limit the information that can be gathered from you by the enrolling agency; they don’t limit how Aadhaar can be used by third parties (a process called ‘seeding’) if they haven’t gathered their data from UIDAI; they don’t require your consent before third parties use your Aadhaar number to collate records about you (e.g., a drug manufacturer buying data from various pharmacies and creating profiles using Aadhaar).

    It even allows your biometrics to be shared if it is “in the interest of national security”. The law offers provisions for UIDAI to file cases (eg, for multiple enrollments), but it doesn’t allow citizens to file a case against private parties or the government for misuse of Aadhaar or identity fraud, or data breach.

    It is also clear that the government opposes any privacy-related improvements to the law. After debating the Aadhaar Bill in March 2016, the Rajya Sabha passed an amendment by MP Jairam Ramesh that allowed people to opt out of Aadhaar, and withdraw their consent to UIDAI storing their data, if they had other means of proving their identity (thus allowing Aadhaar to remain an enabler).

    But that amendment, as with all amendments passed in the Rajya Sabha, was rejected by the Lok Sabha, allowing the government to make Aadhaar mandatory, and depriving citizens of consent. While the Aadhaar Act requires a person’s consent before collecting or using Aadhaar-provided details, it doesn’t allow for the revocation of that consent.

    In other countries, data security laws require that a person be notified if her data has been breached. In response to an RTI application asking whether UIDAI systems had ever been breached, the Authority responded that the information could not be disclosed for reasons of “national security”.

    The citizen must be transparent to the state, while the state will become more opaque to the citizen.

    How Did Aadhaar Change?

    In neither the 2009 nor the 2014 UIDAI document is the need for Aadhaar properly established. Only in November 2012, after scholars like Reetika Khera pointed out UIDAI’s fundamental misunderstanding of leakages in the welfare delivery system, was the first cost-benefit analysis commissioned, by which time UIDAI had already spent ₹28 billion. That same month, Justice KS Puttaswamy, a retired High Court judge, filed a PIL in the Supreme Court challenging Aadhaar’s constitutionality, in which the government has argued that privacy isn’t a fundamental right.

    Even today, whether the ‘deduplication’ process — using biometrics to ensure the same person can’t register twice — works properly is a mystery, since UIDAI hasn’t published data on this since 2012. Instead of welcoming researchers to try to find flaws in the system, UIDAI recently filed an FIR against a journalist doing so.

    At least in 2009, UIDAI stated it sought to prevent anyone from “[e]ngaging in or facilitating profiling of any nature for anyone or providing information for profiling of any nature for anyone”, whereas the 2014 document doesn’t. As OnGrid’s services show, the very profiling that the UIDAI said it would prohibit is now seen as a feature that all, including private companies, may exploit.

    UID has changed in other ways too. In 2009, it was a system that never sent out any information other than ‘Yes’ or ‘No’, which it did in response to queries like ‘Is Pranesh Prakash the name attached to this UID number?’, ‘Is April 1, 1990 his date of birth?’, or ‘Does this fingerprint match this UID number?’
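    That original design can be sketched as a service whose only interface answers yes or no to a claimed attribute and never returns the stored value itself. The record, field names and sample number below are hypothetical, not the UIDAI's actual data model.

    ```python
    # Hypothetical store of enrolled records, keyed by ID number.
    records = {
        "999912345678": {"name": "Pranesh Prakash", "dob": "1990-04-01"},
    }

    def verify(uid: str, field: str, claimed_value: str) -> bool:
        """Answer only 'Yes' or 'No': does the claimed value match the
        record? The stored value itself is never sent back to the caller."""
        record = records.get(uid)
        return record is not None and record.get(field) == claimed_value

    assert verify("999912345678", "name", "Pranesh Prakash") is True
    assert verify("999912345678", "dob", "1991-01-01") is False
    assert verify("000000000000", "name", "Anyone") is False  # unknown ID
    ```

    An e-KYC interface, by contrast, would return the demographic record itself to the requester, which is the shift in fundamentals the next paragraph describes.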

    With the addition of e-KYC (wherein UIDAI provides your demographic details to the requester) and Aadhaar-enabled payments to the plan in 2012, the fundamentals of Aadhaar changed. This has made Aadhaar less secure.

    Security Concerns

    With Aadhaar Pay, due to be launched on April 14, a merchant will ask you to enter your Aadhaar number into her device, and then for your biometrics, typically a fingerprint, which will serve as your ‘password’, resulting in a money transfer from your Aadhaar-linked bank account.

    Basic information security theory requires that even if the identifier (username, Aadhaar number, etc.) is publicly known, and millions of people’s names and Aadhaar numbers have been published on dozens of government portals, the password must remain secret. That’s how most logins work; that’s how debit and credit cards work. How are you or the UIDAI going to keep your biometrics secret?

    In 2015, researchers at Carnegie Mellon captured the iris scans of a driver via a car’s side-view mirror from distances of up to 40 feet. In 2013, German hackers fooled Apple iOS’s fingerprint sensor by replicating a fingerprint from a photo of a glass an individual had held. They even replicated the German Defence Minister’s fingerprints from photographs she herself had put online. Your biometrics can’t be kept secret.

    In the US, in a security breach of 21.5 million government employees’ personnel records in 2015, 5.2 million employees’ fingerprints were copied. If that breach had happened in India, those fingerprints could be used in conjunction with Aadhaar numbers not only for large-scale identity fraud, but also to steal money from people’s bank accounts.

    All ‘passwords’ should be replaceable. If your credit card gets stolen, you can block it and get a new card. If your Aadhaar number and fingerprint are leaked, you can neither change nor block them.

    The answer for Aadhaar, too, is to stop relying on biometrics alone for authentication and authorisation, and to remove the centralised biometrics database. This requires a fundamental overhaul of the UID project.

    Aadhaar marks a fundamental shift in citizen-state relations: from ‘We the People’ to ‘We the Government’. If the rampant misuse of electronic surveillance powers and wilful ignorance of the law by the state is any precedent, the future looks bleak. The only way to protect against devolving into a total surveillance state is to improve the rule of law, to strengthen our democratic institutions, and to fundamentally alter Aadhaar. Sadly, the political currents are not only unfavourable, but are dragging us in the opposite direction.
