Blog

by kaeru — last modified Mar 25, 2013 11:14 AM

CIS Cybersecurity Series (Part 24) – Shantanu Ghosh

by Purba Sarkar last modified Jul 15, 2015 02:58 PM
CIS interviews Shantanu Ghosh, Managing Director, Symantec Product Operations, India, as part of the Cybersecurity Series.

“Remember that India is also a land where there are a lot of people who are beginning to use computing devices for the first time in their lives. For many people, their smartphone is their first computing device because they have never had computers in the past. For them, the challenge is how do you make sure that they understand that that can be a threat too. It can be a threat not only to their bank accounts, with their financial information, but even to their private lives.”

The Centre for Internet and Society presents the twenty-fourth installment of the CIS Cybersecurity Series.

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

Shantanu Ghosh is the Managing Director of Symantec Product Operations, India. He also runs the Data Centre Security Group for Symantec globally.

This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

A Dissent Note to the Expert Committee for DNA Profiling

by Elonnai Hickok last modified Jul 21, 2016 11:01 AM
The Centre for Internet and Society has participated in the Expert Committee for DNA Profiling, constituted by the Department of Biotechnology in 2012 to deliberate on and finalize the draft Human DNA Profiling Bill, and appreciates this opportunity. CIS respectfully dissents from the January 2015 draft of the Bill.

 

See DNA Bill Functions, DNA List of Offences, and CIS Note on DNA Bill. A modified version was published by Citizen Matters Bangalore on July 28.


Based on the final draft of the Human DNA Profiling Bill that was circulated on the 13th of January 2015 by the committee, the Centre for Internet and Society is issuing this note of dissent on the following grounds:

The Centre for Internet and Society has made a number of submissions to the committee regarding different aspects of the Bill including recommendations for the functions of the board, offences for which DNA can be collected, and a general note on the Bill. Though the Centre for Internet and Society recognizes that the present form of the Bill contains stronger language regarding human rights and privacy, we do not find these to be adequate and believe that the core concerns or recommendations submitted to the committee by CIS have not been incorporated into the Bill.

The Centre for Internet and Society has foundational objections to the collection of DNA profiles for non-forensic purposes. In its current form, the DNA Bill provides for the collection of DNA for the following non-forensic purposes:

  • Section 31(4) provides for the maintenance of indices in the DNA Bank and includes a missing person’s index, an unknown deceased person’s index, a volunteers’ index, and such other DNA indices as may be specified by regulation.
  • Section 38 defines the permitted uses of DNA profiles and DNA samples including: identifying victims of accidents or disasters or missing persons or for purposes related to civil disputes and other civil matters and other offences or cases listed in Part I of the Schedule or for other purposes as may be specified by regulation.
  • Section 39 defines the permitted instances of when DNA profiles or DNA samples may be made available and include: for the creation and maintenance of a population statistics Data Bank that is to be used, as prescribed, for the purposes of identification research, protocol development or quality control provided that it does not contain any personally identifiable information and does not violate ethical norms.
  • Part I of the schedule lists laws, disputes, and offences for which DNA profiles and DNA samples can be used. These include, among others, the Motor Vehicles Act, 1988, parental disputes, issues relating to pedigree, issues relating to assisted reproductive technologies, issues relating to transplantation of human organs, issues relating to immigration and emigration, issues relating to establishment of individual identity, any other civil matter as may be specified by the regulations, medical negligence, unidentified human remains, identification of abandoned or disputed children.

While rejecting non-forensic use entirely, we have specific substantive and procedural objections to the provisions relating to forensic profiling in the present version of the Bill. These include:

  • Over-delegation of powers to the Board: The DNA Board currently has vast powers delegated by Section 12, including:
    “authorizing procedures for communication of DNA profiles for civil proceedings and for crime investigation by law enforcement and other agencies, establishing procedure for cooperation in criminal investigation between various investigation agencies within the country and with international agencies, specifying by regulations the list of applicable instances of human DNA profiling and the sources and manner of collection of samples in addition to the lists contained in the Schedule, undertaking any other activity which in the opinion of the Board advances the purposes of this Act.”

    Section 65 gives the Board the power to make regulations for a number of purposes including: “other purposes in addition to identification of victims of accidents, disasters or missing persons or for purposes related to civil disputes and other civil matters and other offences or cases listed in Part I of the Schedule for which records or samples may be used under section 38, other laws, if any, to be included under item (viii) of para B of Part I of the Schedule, other civil matters, if any, to be included under item (vii) of para C of Part I of the Schedule, and authorization of other persons, if any, for collection of non-intimate body samples and for performance of non-intimate forensic procedures, under Part III of the Schedule.”

    Ideally these powers would lie with the legislative or judicial branch. Furthermore, the Bill establishes no mechanism for accountability or oversight over the functioning of the Board and section 68 specifically states that “no civil court shall have jurisdiction to entertain any suit or proceeding in respect to any matter which the Board is empowered by or under this Act to determine.”

    The above represents only a few instances of the overly broad powers that have been given to the Board. Indeed, the Bill gives the Board the power to make regulations for 37 different aspects relating to the collection, storage, use, sharing, analysis, and deletion of DNA samples and DNA profiles. As a result, the Bill establishes a Board that controls the entire ecosystem of DNA collection, analysis, and use in India without strong external oversight or accountability.
  • Key terms undefined: Section 31 (5) states that the “indices maintained in every DNA Data Bank will include information of data based on DNA analysis prepared by a DNA laboratory duly approved by the Board under section 1 of the Act, and of records relating thereto, in accordance with the standards as may be specified by the regulations.”

    The term “DNA analysis” is not defined in the Act, yet it is a critical term, as any information based on such an analysis and associated records can be included in the DNA Database.
  • Low standards for sharing of information: Section 34 empowers the DNA Data Bank Manager to compare a received DNA profile with the profiles stored in the databank and for the purposes of any investigation or criminal prosecution, communicate the information regarding the received DNA profile to any court, tribunal, law enforcement agencies, or DNA laboratory which the DNA Data Bank Manager considers is concerned with it.

    The decision to share compared profiles, and with whom, should be made by an independent third-party authority rather than the DNA Data Bank Manager. Furthermore, this provision is vague: although the intention seems to be that DNA profiles should be matched and the results communicated only in certain cases, the generic wording could take into its ambit every instance of receipt of a DNA profile. For example, the regulations envisaged under section 31(4)(g) may prescribe a DNA Data Bank for medical purposes, but section 34 as currently worded may, as an unintended consequence, allow DNA profiles of patients to be compared and their information released to various agencies by the Data Bank Manager.
  • Missing privacy safeguards: Though the Bill refers to security and privacy procedures that labs are to follow, these have been left to be developed and implemented by the DNA Board. Thus, except for bare minimum standards and penalties addressing the access, sharing, and use of data – the Bill contains no privacy safeguards.

    In our interactions with the committee we have asked that the Bill be brought in line with the nine national privacy principles established by the Report of the Group of Experts on Privacy submitted to the Planning Commission in 2012. This has not been done.



DNA Bill Functions

by Prasad Krishna last modified Jul 17, 2015 01:30 AM

DNA Bill - Functions (2).pdf — PDF document, 4 kB (5087 bytes)

DNA List of Offences

by Prasad Krishna last modified Jul 17, 2015 01:34 AM

DNA Bill - List of Offences (1).pdf — PDF document, 8 kB (8604 bytes)

CIS Note on DNA Bill

by Prasad Krishna last modified Jul 17, 2015 01:37 AM

CIS Note on DNA Bill.pdf — PDF document, 98 kB (100977 bytes)

Best Practices Meet 2015

by Prasad Krishna last modified Jul 17, 2015 01:08 PM

BPM 2015 Agenda.pdf — PDF document, 705 kB (722356 bytes)

Five Nations, One Future

by Prasad Krishna last modified Jul 18, 2015 02:24 AM

FutureMag001.pdf — PDF document, 6119 kB (6266080 bytes)

Aadhaar Number vs the Social Security Number

by Elonnai Hickok last modified Jul 24, 2015 01:24 AM
This blog post calls out the differences between the Aadhaar number and the Social Security Number.

In response to news items that reported the Government of India running pilot projects to enroll children at the time of birth for Aadhaar numbers - an idea that government officials in the news items claimed was along the lines of the social security number - this note seeks to point out the ways in which the Aadhaar number and the social security number are different.[1]

Governance

SSN is governed by Federal legislation: The issuance, collection, and use of the SSN are governed by a number of Federal and State laws, the most pertinent being the Social Security Act of 1935[2], which provides legal backing for the number, and the Privacy Act of 1974, which regulates the collection, access, and sharing of the SSN by Federal Executive agencies.[3]

Aadhaar was constituted under the Planning Commission: The UIDAI was constituted as an attached office under the Planning Commission in 2009.[4] A National Identification Authority of India Bill has been drafted, but has not been enacted.[5] Though portions of the Information Technology Act 2008 apply to the UID scheme, section 43A and its associated Rules (India's data protection standards) do not clearly apply to the UIDAI, as the provision has jurisdiction only over bodies corporate.

Purpose

SSN was created as a record-keeping number for government services: The Social Security Act provides for the creation of a record-keeping scheme - the SSN. Originally, the SSN was used as a means to track an individual's earnings in the Social Security system.[6] In 1943, via an executive order, the number was adopted across Federal agencies, and it eventually evolved from a record-keeping scheme into a means of identity. In 1977 the Carter administration clarified that the number could act as a means to validate the status of an individual (for example, whether he or she could legally work in the country) but that it was not to serve as a national identity document.[7] Today the SSN serves as a number for tracking individuals in the Social Security system and as one (among other) forms of identification for different services and businesses. Alone, the SSN card does not serve as proof of identity or citizenship, cannot be used to transact, and does not store information.[8]

Aadhaar was created as a biometric-based authenticator and a single unique proof of identity: The Aadhaar number was established as a single proof of identity and address for any resident in India that can be used to authenticate the identity of an individual in transactions with organizations that have adopted the number. The scheme has been promoted as a tool for reducing fraud in the public distribution system and enabling the government to better deliver public benefits.[9]

Applicability

SSN is for citizens and non-citizens authorized to work: The Social Security Number is primarily for citizens of the United States of America. In certain cases, non-citizens who have been authorized by the Department of Homeland Security to work in the US may obtain a Social Security Number.[10]

Aadhaar is for residents: The Aadhaar number is available to any resident of India.[11]

Storage, Access, and Disclosure

SSN and applications are stored in the Numident: The Numident is a centralized database containing the individual's original SSN and application, and any re-applications for the same. All information stored in the Numident is protected under the Privacy Act. Individuals may request records of their own personal information stored in the Numident. With the exception of the Department of Homeland Security and U.S. Citizenship and Immigration Services, third parties may only request access to Numident records with the consent of the concerned individual.[12] Federal agencies and private entities that collect the SSN for a specific service store the number at the organizational level. The Privacy Act and various state-level laws regulate the disclosure, access, and sharing of SSNs collected by agencies and organizations.

Aadhaar and data generated at multiple sources are stored in the CIDR and processed in the data warehouse: According to the report "Analytics, Empowering Operations", "At UIDAI, data generated at multiple sources would typically come to the CIDR (Central ID Repository), UIDAIs Data centre, through an online mechanism. There could be certain exceptional sources, like Contact centre or Resident consumer surveys, that will not feed into the Data center directly. Data is then processed in the Data Warehouse using Business Intelligence tools and converted into forms that can be accessed and shared easily." Examples of data stored in the CIDR include enrollments, letter delivery, authentication, processing, resident surveys, training, and data from contact centres.[13] It is unclear if organizations that authenticate individuals via the Aadhaar number store the number at the organizational level. Biometrics are listed as a form of sensitive personal information in the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011; thus, if any body corporate collects biometrics along with the Aadhaar number, the storage, access, and disclosure of this information would be protected as per the Rules, but the Aadhaar number itself is not explicitly protected.[14]

Use by public and private entities

Public and private entities can request SSN: Public and private entities can request the SSN to track individuals in a system or as a form of identifying an individual. Any private business is allowed to request and use the SSN as long as the use does not violate federal or state law. Legally, an individual is only required to provide their SSN to a business if they are engaging in a transaction that requires notification to the Internal Revenue Service or the individual is initiating a transaction that is subject to federal Customer Identification Program rules.[15] Thus, an individual can refuse to provide their SSN, but a private business can also refuse to provide a service.[16]

Any public authority requesting the SSN must provide a disclosure notice to the individual explaining whether provision of the SSN is required or optional. According to the Privacy Act of 1974, no individual can be denied a government service or benefit for not providing the SSN unless Federal law specifically requires the number for a particular service.[17] A number of Federal laws in the U.S. do specifically require the SSN. For example, the Social Security Independence and Program Improvements Act of 1994 allows for the use of the SSN in jury selection and allows cross-matching of SSNs and Employer Identification Numbers in investigations into violations of Federal law.[18]

Public and private entities can request Aadhaar: The Aadhaar number can be adopted by any public or private entity as a single means of identifying an individual. The UIDAI has stated that the Aadhaar number is not mandatory,[19] and the Supreme Court of India has clarified that services cannot be denied on the grounds that an individual does not have an Aadhaar number.[20]

Verification

The SSN can be verified only in certain circumstances: The SSA will only respond to requests for SSN verification in certain circumstances:

  • Before issuing a replacement SSN, posting a wage item to the Master Earnings File, or establishing a claims record - the SSA will verify that the name and the number match as per their records.
  • When legally permitted, the SSA verification system will verify SSNs for government agencies.
  • When legally permitted, the SSA verification system will verify a worker's SSN for pre-registered and approved private employers.
  • If an individual has provided his/her consent, the SSA will verify an SSN request from a third party.

For verification, the SSN must be submitted with an accompanying name to be matched against, along with additional information such as date of birth, father's name, and mother's name. When verifying submitted SSNs, the system will respond only with confirmation that the information matches or that it does not match. It is important to note that because the SSN is verified only in certain circumstances, there is no guarantee that the person providing an SSN is the person to whom the number was assigned.[21]

The Aadhaar number can be verified in any transaction: If an organization, department, or platform has adopted the Aadhaar number as a form of authentication, it can send verification requests to the UIDAI, which will respond with a yes or no answer. When using their Aadhaar number as a form of authentication, individuals can submit their number and demographic information, or their number and biometrics, for verification.[22]
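Both verification flows described above reduce to the same pattern: a relying party submits a number plus claimed attributes, and the issuing authority answers only yes or no, without disclosing the stored record. A minimal sketch of that pattern follows; the class, field names, and matching rule are all hypothetical illustrations, not the actual UIDAI or SSA interfaces.

```python
# Illustrative sketch of a yes/no identity-verification service.
# All names and fields here are invented for illustration only.

class VerificationService:
    def __init__(self):
        # number -> demographic record held by the issuing authority
        self._records = {}

    def enroll(self, number, record):
        """Register a number with its demographic record."""
        self._records[number] = record

    def verify(self, number, claimed):
        """Return True ("yes") only if every claimed field matches the
        stored record; the stored record itself is never disclosed."""
        stored = self._records.get(number)
        if stored is None:
            return False
        return all(stored.get(k) == v for k, v in claimed.items())


service = VerificationService()
service.enroll("9999-1234-5678", {"name": "A. Resident", "dob": "1990-01-01"})

print(service.verify("9999-1234-5678", {"name": "A. Resident"}))  # matching claim
print(service.verify("9999-1234-5678", {"name": "B. Someone"}))   # mismatched claim
```

The design choice worth noting is that the service responds with a bare boolean: as the sections above describe for both the SSA and the UIDAI, the relying party learns whether the claim matches, never what the authority actually holds.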

Lost or stolen

SSN can be replaced: If an individual's SSN card is lost or their number is fraudulently used, they can apply for a replacement SSN card or a new SSN.[23]

Aadhaar number can be replaced: If an individual has lost their Aadhaar number, there is a process they can follow to have their number re-sent to them. If the number cannot be located by the UIDAI, the individual has the option of re-enrolling for a new Aadhaar number.[24] The UIDAI has built the scheme on the understanding that biometrics are a unique identifier that cannot be lost or stolen, and thus has not created a system to address the possibility of stolen biometrics or their fraudulent use.

Implementation

Legislation and formal roll-out: The SSN program was brought into existence via the Social Security Act, officially rolled out, and eventually adopted across Federal Departments.

Bill and pilot studies: The UID scheme was envisioned as being brought into existence via the National Identification Authority of India Bill, 2010, which has not been passed. Thus far, the project has been implemented in pilot phases across States and platforms.

Enrollment

Social Security Administration: The Social Security Administration is the sole body in the US that receives and processes applications for SSNs and issues SSNs.[25]

UIDAI, registrars, and enrolling agencies: The UIDAI is the sole body that issues Aadhaar numbers. Registrars (contracted bodies under the UIDAI) and enrolling agencies (contracted bodies under Registrars) are responsible for receiving and processing enrollments into the UID scheme.

Required supporting documents

SSN requires proof of age, identity, and citizenship: To obtain an SSN, you must be able to provide proof of your age, identity, and US citizenship. The application form requires the following information:

  • Name to be shown on the card
  • Full name at birth, if different
  • Other names used
  • Mailing address
  • Citizenship or alien status
  • Sex
  • Race/ethnic description (SSA does not receive this information under EAB)
  • Date of birth
  • Place of birth
  • Mother's name at birth
  • Mother's SSN (SSA collects this information for the Internal Revenue Service (IRS) on an original application for a child under age 18. SSA does not retain these data.)
  • Father's name
  • Father's SSN (SSA collects this information for IRS on an original application for a child under age 18. SSA does not retain these data).
  • Whether applicant ever filed for an SSN before
  • Prior SSNs assigned
  • Name on most recent Social Security card
  • Different date of birth if used on an earlier SSN application.
  • Date application completed
  • Phone number
  • Signature
  • Applicant's relationship to the number holder.[26]

Aadhaar requires proof of age, address, birth, and residence, as well as biometric information: The application form requires the following information:

  • Name
  • Date of birth
  • Gender
  • Address
  • Parent/guardian details
  • Email
  • Mobile number
  • Indication of consent or non-consent to the sharing of information provided to the UIDAI with public services, including welfare services
  • Indication of whether the individual wants the UIDAI to facilitate the opening of a bank account linked to the Aadhaar number, and permits the sharing of information for this purpose
  • Whether the individual has no objection to linking their present bank account to the Aadhaar number, along with the relevant bank details
  • Signature[27]


[1] Sahil Makkar, "PM's idea to track kids from birth hits practical hurdles", Business Standard. April 11th 2015. Available at: http://www.business-standard.com/article/current-affairs/pm-s-idea-to-track-kids-from-birth-hits-practical-hurdles-115041100828_1.html

[2] The Social Security Act of 1935. Available at: http://www.ssa.gov/history/35act.html

[3] The United States Department of Justice, "Overview of the Privacy Act of 1974". Available at: http://www.justice.gov/opcl/social-security-number-usage

[4] Government of India Planning Commission "Notification". Available at: https://uidai.gov.in/images/notification_28_jan_2009.pdf

[5] The National Identification Authority of India Bill 2010. Available at: http://www.prsindia.org/uploads/media/UID/The%20National%20Identification%20Authority%20of%20India%20Bill,%202010.pdf

[6] History of SSA 1993 - 2000. Chapter 6: Program Integrity. Available at: http://www.ssa.gov/history/ssa/ssa2000chapter6.html

[7] Social Security Number Chronology. Available at: http://www.ssa.gov/history/ssn/ssnchron.html

[8] History of SSA 1993 - 2000, Chapter 6: Program Integrity. Available at: http://www.ssa.gov/history/ssa/ssa2000chapter6.html

[9] UID FAQ: Aadhaar Features, Eligibility. Available at: https://resident.uidai.net.in/faqs

[10] Social Security Numbers for Noncitizens. Available at: http://www.ssa.gov/pubs/EN-05-10096.pdf

[11] Aapka Aadhaar. Available at: https://uidai.gov.in/aapka-aadhaar.html

[12] Program Operations Manual System. Available at: https://secure.ssa.gov/poms.nsf/lnx/0203325025

[13] UIDAI Analytics -Empowering Operations - the UIDAI Experience. Available at: https://uidai.gov.in/images/commdoc/other_doc/uid_doc_30012012.pdf

[14] Information Technology (Reasonable security practices and procedures and sensitive personal data or information rules 2011) available at: http://deity.gov.in/sites/upload_files/dit/files/GSR313E_10511(1).pdf

[15] IdentityHawk, "Who can lawfully request my social security number?" Available at: http://www.identityhawk.com/Who-Can-Lawfully-Request-My-Social-Security-Number

[16] SSA FAQ " Can I refuse to give my social security number to a private business?" Available at: https://faq.ssa.gov/link/portal/34011/34019/Article/3791/Can-I-refuse-to-give-my-Social-Security-number-to-a-private-business

[17] The United States Department of Justice, "Overview of the Privacy Act of 1974". Available at: http://www.justice.gov/opcl/social-security-number-usage

[18] Social Security Number Chronology. Available at: http://www.ssa.gov/history/ssn/ssnchron.html

[19] Aapka Aadhaar. Available at: https://uidai.gov.in/what-is-aadhaar.html

[20] Business Standard, "Aadhaar not mandatory to claim any state benefit, says Supreme Court" March 17th, 2015. Available at: http://www.business-standard.com/article/current-affairs/aadhaar-not-mandatory-to-claim-any-state-benefit-says-supreme-court-115031600698_1.html

[21] Social Security History 1993 - 2000, Chapter 6: Program Integrity. Available at: http://www.ssa.gov/history/ssa/ssa2000chapter6.html

[22] Aapka Aadhaar. Available at: https://uidai.gov.in/auth.html

[23] SSA. New or Replacement Social Security Number Card. Available at: http://www.ssa.gov/ssnumber/

[24] UIDAI, Lost EID/UID Process. Available at: https://uidai.gov.in/images/mou/eiduid_process_ver5_2_27052013.pdf

[25] Social Security Administration. Available at: http://www.ssa.gov/

[26] Social Security Administration, Application for a Social Security Card. Available at: http://www.ssa.gov/forms/ss-5.pdf

[27] Aadhaar enrollment/correction form. Available at: http://hstes.in/pdf/2013_pdf/Genral%20Notification/Aadhaar-Enrolment-Form_English.pdf

Technology Business Incubators

by Prasad Krishna last modified Jul 25, 2015 03:41 PM

TBI Report - CIS.pdf — PDF document, 860 kB (880913 bytes)

First draft of Technology Business Incubators: An Indian Perspective and Implementation Guidance Report

by Vidushi Marda last modified Jul 25, 2015 04:14 PM
Contributors: Sunil Abraham, Vidushi Marda, Udbhav Tiwari and Anumeha Karnatak
The Centre for Internet and Society presents the first draft of its analysis of technology business incubators ("TBIs") in India. The report, prepared by Sunil Abraham, Vidushi Marda, Udbhav Tiwari and Anumeha Karnatak, looks at operating procedures, success stories, and lessons that can be learnt from TBIs in India.

A technology business incubator (TBI) is an organisational setup that nurtures technology-based and knowledge-driven companies by helping them survive the startup period in the company's history, which lasts around the initial two to three years. Incubators do this by providing an integrated package of work space, shared office services, and access to specialized equipment, along with value-added services like fund raising, legal services, business planning, technical assistance, and networking support. The main objective of technology business incubators is to produce successful business ventures that create jobs and wealth in the region, along with encouraging an attitude of innovation in the country as a whole.

The primary aspects that this report goes into are the stages of a startup; the motivations behind the establishment of incubators by governments and private players; and the processes incubators follow in selecting and nurturing talent, as well as in providing post-incubation support. The report also looks at the roles that incubators play in the general economy apart from incubating companies, such as educational or public research roles. A series of case analyses of seven well-established incubators from India follows, looking into their nurturing processes, success stories, and lessons that can be learnt from their establishment. The final section looks into the challenges faced by incubators in developing economies and the measures taken to overcome these challenges.

Download the full paper

Decriminalising Defamation in India

by Prasad Krishna last modified Jul 27, 2015 02:14 PM

PDF document icon Criminal Defamation - Summary of Issues.pdf — PDF document, 78 kB (80679 bytes)

Iron out contradictions in the Digital India programme

by Sumandro Chattapadhyay last modified Jul 28, 2015 01:04 AM
The Digital India initiative takes an ambitious 'Phir Bhi Dil Hai Hindustani' approach to develop communication infrastructure, government information systems, and general capacity to digitise public life in India. I of course use 'public life' in the sense of the wide sphere of interactions between people and public institutions.

The article was published in the Hindustan Times on July 15, 2015.


The 'Phir Bhi Dil Hai Hindustani' approach involves putting together Japanese shoes, British trousers, and a Russian cap to make an entertainer with a pure Indian heart. In this case, the analogy must not be understood as different components of the initiative coming from different countries, but as coming from different efforts to use digital technologies for governance in India.

It deploys the Public Information Infrastructure vision, inclusive of the National Optical Fibre Network (now renamed BharatNet) and the national cloud computing platform titled Meghraj, so passionately conceptualised and pursued by Sam Pitroda. It has chosen the Aadhaar ID and the authentication-as-a-service infrastructure built by Nandan Nilekani, Ram Sewak Sharma, and their team as the identity platform for all governmental processes across Digital India projects. It has closely embraced the mandate, proposed by the Jaswant Singh-led National Task Force on Information Technology and Software Development, for a completely electronic interface enabling paper-free citizen-government interactions.

The digital literacy and online education aspects of the initiative build upon the National Mission on Education through ICT driven by Kapil Sibal. Two of the three vision areas of the Digital India initiative, namely 'Digital infrastructure as a utility to every citizen' and 'governance and service on demand,' are directly drawn from the two core emphasis clusters of the National e-Governance Plan designed by R. Chandrashekhar and team, namely the creation of the national and state-level network and data infrastructures, and the National Mission Mode projects to enable electronic delivery of services across ministries.

And this is not a bad thing at all. In fact, the need for this programmatic and strategic convergence has been felt for quite some time now, and it is wonderful to see the Prime Minister directly addressing this need. However, while drawing benefits from the existing programmes, the Digital India initiative must also deal with the challenges it inherits in the process.

Recently circulated documents describe an institutional framework for Digital India headed by a Monitoring Committee overseeing two main drivers of the initiative: the Digital India Advisory Group, led by the minister of communication and information technology, and the Apex Committee, chaired by the cabinet secretary. While the former will function primarily by guiding the implementation work of the Department of Electronics and Information Technology (DeitY), the latter will lead the activities of both the DeitY and the various sectoral ministries.

Here lies one possible institutional bottleneck that the Digital India architecture inherits from the National e-Governance Plan. Putting the DeitY in the driving seat of the digital transformation agenda, in parallel with all other central government departments, indicates an understanding that the transformation is fundamentally a technical issue. However, most often what is needed is administrative reform at a larger scale, and re-engineering of processes at a smaller scale.

Government agencies that have addressed such challenges in the past, such as the Department of Administrative Reforms and Public Grievances, are not mentioned explicitly within the institutional framework; instead, DeitY has been entrusted with a range of tasks that may be beyond its scope and core skills.

The danger of this is that the Digital India initiative will end up initiating more infrastructural and software projects, without transforming the underlying governmental processes. For example, the recently launched eBasta website creates a centralised online shop for publishers of educational materials to make books available for teachers to browse and select for their classes, and for students to directly download, against payment or otherwise. The website has been developed by the Centre for Development of Advanced Computing and DeitY. At the same time, the Ministry of Human Resource Development, which is responsible for matters related to public education, has already collaborated with the Central Institute of Educational Technology and the Homi Bhabha Centre for Science Education in TIFR to build a comprehensive platform for multi-media resources for education – the National Repository of Open Educational Resources. The initial plans of the DI initiative are yet to explicitly recognise that the key challenge is not in building new applications and websites, but in aligning existing efforts.

This mismatch, between what the Digital India initiative proposes to achieve and how it plans to achieve it, is further demonstrated in the 'e-Governance Policy Initiatives under Digital India' document. The compilation lists the key policies to govern the design and implementation of the Digital India programmes, but surprisingly fails to mention any policies, acts, and pending bills approved or initiated by any previous government. This is remarkably counter-productive, as the existing policy frameworks, such as the Framework for Mobile Governance, the National Data Sharing and Accessibility Policy, and the Interoperability Framework for e-Governance, are suitably placed to complement the new policies around the use of free and open source software for e-governance systems, so as to ensure their transparency, interoperability, and inclusive outreach. Several pending bills, like The National Identification Authority of India Bill, 2010, The Electronic Delivery of Services Bill, 2011, and The Privacy (Protection) Bill, 2013, are absolutely fundamental for comprehensive and secure implementation of the various programmes under the Digital India initiative.

The next year will complete a decade of development of national e-governance systems in India, since the launch of the National e-Governance Plan in 2006. Given this history of information systems sometimes partially implemented and sometimes working in isolation, a 'Phir Bhi Dil Hai Hindustani' approach to digitise India is a very pragmatic one. What we surely do not need is increased contradiction among e-governance systems. Simultaneously, we need neither digital systems that centralise governmental power within one ministry on technical grounds, nor ones that expose citizens to abuse of their digital identity and assets due to the lack of sufficient legal frameworks.

(Sumandro Chattapadhyay is research director, The Centre for Internet and Society. The views expressed are personal.)

FINANCIAL STATEMENTS OF 2013-14.pdf

by Prasad Krishna last modified Jul 28, 2015 01:11 AM

FINANCIAL STATEMENTS OF 2013-14.pdf — PDF document, 7173 kB (7345362 bytes)

Expert Committee Meetings

by Prasad Krishna last modified Aug 04, 2015 01:56 AM
In 2013 the Department of Biotechnology set up an Expert Committee to discuss the Human DNA Profiling Bill. The Expert Committee met four times, with an additional meeting by a sub-committee set up by the Expert Committee. The Centre for Internet and Society was a member of the Committee. The zip file contains:

  • Record Note of discussions of the Experts Committee Meeting held on 31st January 2013 at DBT, New Delhi, to discuss the potential privacy concerns on the draft Human DNA Profiling Bill;
  • Record Note of the 2nd discussion meeting of the Expert Committee held on 13th May 2013 in DBT to discuss the draft Human DNA Profiling Bill;
  • Minutes of the 3rd meeting of the Expert Committee held on 25th November 2013 in DBT to discuss the draft Human DNA Profiling Bill;
  • Minutes of the 4th meeting of the Expert Committee held on 10th November 2014 in DBT to discuss and finalize the draft Human DNA Profiling Bill;
  • Record Note of discussions of the Experts Sub-Committee Meeting on the Human DNA Profiling Bill held on 3rd September 2013 at CDFD, Hyderabad.

Expert Committee Meetings.zip — ZIP archive, 2319 kB (2375322 bytes)

Role of Intermediaries in Countering Online Abuse

by Jyoti Panday last modified Aug 02, 2015 04:38 PM
The Internet can be a hostile space and protecting users from abuse without curtailing freedom of expression requires a balancing act on the part of online intermediaries.

This got published as two blog entries in the NALSAR Law Tech Blog. Part 1 can be accessed here and Part 2 here.


As platforms and services coalesce around user-generated content (UGC) and entrench themselves in the digital publishing universe, they are increasingly taking on the duties and responsibilities of protecting rights, including taking reasonable measures to restrict unlawful speech. Arguments around the role of intermediaries in tackling unlawful content usually centre on the issue of regulation: when is it feasible to regulate speech, and how best should this regulation be enforced?

Recently, Twitter found itself at the periphery of such questions when an anonymous user of the platform, @LutyensInsider, began posting slanderous and sexually explicit comments about Swati Chaturvedi, a Delhi-based journalist. The online spat, which began in February last year, culminated in Swati filing an FIR against the anonymous user last week. Within hours of the FIR, the anonymous user deleted the tweets and went silent. Predictably, Twitter users hailed this as a much needed deterrent to online harassment. Swati's personal victory is worth celebrating; it is an encouragement for the many women bullied daily on the Internet, where harassment is rampant. However, while Swati might be well within her legal rights to counter slander, the rights and liabilities of private companies in such circumstances are often not as clear cut.

Should platforms like Twitter take on the mantle of deciding what speech is permissible or not? When and how should the limits on speech be drawn? Does this amount to private censorship? The answers are not easy, and as the recent Grand Chamber of the European Court of Human Rights (ECtHR) judgment in the case of Delfi AS v. Estonia confirms, the role of UGC platforms in balancing user rights is an issue far from settled. In its ruling, the ECtHR reasoned that, because of their role in facilitating expression, requiring online platforms "to take effective measures to limit the dissemination of hate speech and speech inciting violence" was not "private censorship".

This is problematic because the decision moves the regime away from a framework that grants immunity from liability as long as platforms meet certain criteria and procedures. In other words, the ruling establishes strict liability for intermediaries in relation to manifestly illegal content, even if they have no knowledge of it. The 'obligation' placed on the intermediary does not grant them safe harbour and is not proportionate to the monitoring and blocking capacity thus necessitated. Consequently, platforms might be incentivized to err on the side of caution and restrict comments or confine speech, resulting in censorship. The ruling is especially worrying, as the standard of care placed on the intermediary does not recognize the different roles played by intermediaries in the detection and removal of unlawful content. Further, intermediary liability is its own legal regime and is, at the same time, a subset of various legal issues that require an understanding of the variation in scenarios, mediums and technology, both globally and in India.

Law and Short of IT

Earlier this year, in a leaked memo, Twitter CEO Dick Costolo took personal responsibility for his platform's chronic failure to deal with harassment and abuse. In Swati's case, Twitter did not intervene or take steps to address the harassment. If it had to, Twitter (India), like all online intermediaries, would be bound by the provisions established under Section 79 and the accompanying Rules of the Information Technology Act. These provisions outline the obligations and conditions that intermediaries must fulfill to claim immunity from liability for third party content. Under the regime, upon receiving actual knowledge of unlawful information on their platform, the intermediary must comply with the notice and takedown (NTD) procedure for blocking and removal of content.

Private complainants could invoke the NTD procedure, forcing intermediaries to act as adjudicators of an unlawful act, a role they are clearly ill-equipped to perform, especially when the content relates to political speech or alleged defamation or obscenity. The SC judgment in Shreya Singhal addressed this issue, reading down the provision (Section 79) by holding that a takedown notice can only be effected if the complainant secures a court order to support her allegation. Further, it was held that the scope of restrictions under the mechanism is limited to the specific categories identified under Article 19(2). Effectively, this means Twitter need not take down content in the absence of a court order.

Content Policy as Due Diligence

Another provision, Rule 3(2), prescribes a content policy which, prior to the Shreya Singhal judgment, was a criterion for administering takedowns. This content policy includes an exhaustive list of types of restricted expression, though worryingly, the terms included in it are not clearly defined and go beyond the reasonable restrictions envisioned under Article 19(2). Terms such as "grossly harmful", "objectionable", "harassing", "disparaging" and "hateful" are not defined anywhere in the Rules, and are subjective and contestable, as alternate interpretations and standards could be offered for the same term. Further, this content policy is not applicable to content created by the intermediary.

Prior to the SC verdict in Shreya Singhal, actual knowledge could have been interpreted to mean that the intermediary is called upon to exercise its own judgement under sub-rule (4) to restrict impugned content in order to seek exemption from liability. While the liability that accrued from not complying with takedown requests under the content policy was clear, this is not the case anymore. By reading down S. 79(3)(b), the court has placed limits on both the private censorship of intermediaries and the invisible censorship of opaque government takedown requests, as both must adhere to the boundaries set by Article 19(2). Following the SC judgment, intermediaries do not have to administer takedowns without a court order, thereby rendering this content policy redundant. As it stands, the content policy is an obligation that intermediaries must fulfill in order to be exempted from liability for UGC, and this due diligence is limited to publishing rules and regulations, terms and conditions or a user agreement informing users of the restrictions on content. The penalties for not publishing this content policy should be clarified.

Further, having been informed of what is permissible, users agree to comply with the policy outlined by signing up to and using these platforms and services. The requirement of publishing a content policy as due diligence is unnecessary, given that mandating such 'standard' terms of use negates the difference between types of intermediaries, which accrue different kinds of liability. It also places an extraordinary power of censorship in the hands of the intermediary, which could easily stifle freedom of speech online. Such heavy-handed regulation could make it impossible to publish critical views about anything without the risk of being summarily censored.

Twitter may have complied with its duties by publishing the content policy, though the obligation does not seem to be an effective deterrent. Strong safe harbour provisions for intermediaries are a crucial element in the promotion and protection of the right to freedom of expression online. Absolving platforms of responsibility for UGC as long as they publish a content policy that is vague and subjective is the very reason why India's IT Rules are, in fact, in urgent need of improvement.

Size Matters

The standards for blocking, reporting and responding to abuse vary across different categories of platforms. For example, it may be easier to counter trolls and abuse on blogs or forums where the owner or an administrator is monitoring comments and UGC. Usually platforms outline monitoring and reporting policies and procedures, including the recourse available to victims and the action to be taken against violators. However, these measures are not always effective in curbing abuse, as it is possible for users to create new accounts under different usernames. For example, in Swati's case the anonymous user behind the @LutyensInsider account changed their handle to @gregoryzackim and @gzackim before deleting all tweets. In this case, perhaps the fear of criminal charges ahead was enough to silence the anonymous user, which may not always be the case.

Tackling the Trolls

Most large intermediaries have privacy settings which restrict the audience for user posts as well as prevent strangers from contacting them, as a general measure against online harassment. Platforms also publish a monitoring policy outlining the procedures and mechanisms for users to register complaints or report abuse. Often, reporting and blocking mechanisms rely on community standards and users reporting unlawful content. Last week Twitter announced a new feature allowing lists of blocked users to be shared between users. The feature is aimed at making the service safer for people facing similar issues, and while an improvement on both the existing blocking mechanism and standard policies defining permissible limits on content, such efforts have their limitations.

These mechanisms follow a one-size-fits-all policy. First, such community-driven efforts do not address concerns of differences in opinion and subjectivity. Swati, in defending her actions, stressed the "coarse discourse" prevalent on social media, though as this article points out she might herself be assumed guilty of using offensive and abusive language. Subjectivity and the many interpretations of the same opinion can pave the way for many taking offense online. Earlier this month, Nikhil Wagle's tweets criticising Prime Minister Narendra Modi as a "pervert" were interpreted as "abusive", "offensive" and "spreading religious disharmony". While platforms are within their rights to establish policies for dealing with issues faced by users, there is a real danger of them doing so for "political reasons" and based on "popularity" measures, which may chill free speech. When many get behind a particular interpretation of an opinion, lawful speech may also be stifled, as Sreemoyee Kundu found out. A victim of online abuse, her account was blocked by Facebook owing to multiple reports from a "faceless fanatical mob". Allowing users to set standards of permissible speech is an improvement, though it runs the risk of mob justice, and platforms need to be vigilant in applying such standards.

While it may be in the interest of platforms to keep a hands-off approach to community policies, certain kinds of content may necessitate intervention by the intermediary. There has been an increase in private companies modifying their content policies to place reasonable restrictions on certain hateful behaviour in order to protect vulnerable or marginalised voices. Twitter's and Reddit's policy changes in addressing revenge porn are reflective of a growing understanding amongst stakeholders that, in order to promote the free expression of ideas, recognition and protection of certain rights on the Internet may be necessary. However, any approach to regulating user content must assess the effect of policy decisions on user rights. Google's stand on tackling revenge porn may be laudable, though the decision to push down 'piracy' sites in its search results could be seen to adversely impact the choice that users have. Terms of service implemented with subjectivity and a lack of transparency can and do lead to private censorship.

The Way Forward

Harassment is damaging because of the feeling of powerlessness that it invokes in its victims, and online intermediaries represent new forms of power through which users negotiate and manage their online identity. Content restriction policies and practices must address this power imbalance by adopting baseline safeguards and best practices. It is only fair that, based on principles of equality and justice, intermediaries be held responsible for the damage caused to users by the wrongdoings of other users, or when they fail to carry out their operations and services as prescribed by law. However, in its present state, the intermediary liability regime in India is not sufficient to deal with online harassment and needs to evolve into a more nuanced form of governance.

Any liability framework must evolve bearing in mind the slippery slope of overbroad regulation and differing standards of community responsibility. A balanced framework would therefore need to include elements of both targeted regulation and soft forms of governance, as liability regimes need to balance fundamental human rights against the interests of private companies. Often, achieving this balance is problematic, given that these companies are expected to be adjudicators and may also be the target of the breach of rights, as in Delfi v Estonia. Global frameworks such as the Manila Principles can be a way forward in developing effective mechanisms. The determination of content restriction practices should always adopt the least restrictive means, distinguishing between classes of intermediary. They must evolve considering the proportionality of the harm, the nature of the content, and the impact on affected users, including the proximity of the affected party to the content uploader.

Further, intermediaries and governments should communicate a clear mechanism for the review and appeal of restriction decisions. Content restriction policies should incorporate an effective right to be heard. In exceptional circumstances when this is not possible, a post facto review of the restriction order and its implementation must take place as soon as practicable. Further, unlawful content restricted for a limited duration or within a specific geography must not be restricted beyond these limits, and a periodic review should take place to ensure the validity of the restriction. Regular, systematic review of the rules and guidelines governing intermediary liability will go a long way in ensuring that such frameworks are not overly burdensome and remain effective.

Policy Paper on Surveillance in India

by Vipul Kharbanda last modified Aug 03, 2015 03:27 PM
This policy brief analyses the different laws regulating surveillance at the State and Central level in India and calls out ways in which the provisions are unharmonized. The brief then provides recommendations for the harmonization of surveillance law in India.

Introduction

The current legal framework for surveillance in India is a legacy of colonial-era laws drafted by the British. Surveillance activities by the police are an everyday phenomenon and are included as part of their duties in the various police manuals of the different states. It will become clear from the analysis of the laws and regulations below that, whilst the police manuals cover the aspect of physical surveillance in some detail, they do not discuss the issue of interception of telephone or internet traffic. These issues are dealt with separately under the Indian Telegraph Act and the Information Technology Act and the Rules made thereunder, which are applicable to all security agencies and not just the police. Since Indian law deals with different aspects of surveillance under different legislations, the regulations dealing with this issue do not have any uniform standards. This paper therefore argues that the need of the hour is a single legislation which deals with all aspects of surveillance and interception in one place, so that there is uniformity in the laws and practices of surveillance across the entire country.

Legal Regime

India does not have one integrated policy on surveillance, and law enforcement and security agencies have to rely upon a number of different sectoral legislations to carry out their surveillance activities. These include:

1. Police Surveillance under Police Acts and Model Police Manual

Article 246(3) of the Constitution of India, read with Entry 2, List II, of the Seventh Schedule, empowers the States to legislate in matters relating to the police. This means that the police force is under the control of the state government rather than the Central government. Consequently, States have their own Police Acts to govern the conduct of the police force. Under the authority of these individual State Police Acts, rules are formulated for the day-to-day running of the police. These rules are generally found in the Police Manuals of the individual states. Since a discussion of the Police Manual of each State with its small deviations is beyond the scope of this study, we will discuss the Model Police Manual issued by the Bureau of Police Research and Development.

As per the Model Police Manual, “surveillance and checking of bad characters” is considered to be one of the duties of the police force mentioned in the “Inventory of Police Duties, Functions and Jobs”.[1] Surveillance is also one of the main methods utilized by the police for preventing law and order situations and crimes.[2] As per the Manual the nature and degree of surveillance depends on the circumstances and persons on whom surveillance is mounted and it is only in very rare cases and on rare occasions that round the clock surveillance becomes necessary for a few days or weeks.[3]

Surveillance of History Sheeted Persons: Beat Police Officers should be fully conversant with the movements or changes of residence of all persons for whom history sheets of any category are maintained. They are required to promptly report the exact information to the Station House Officer (SHO), who makes entries in the relevant registers. The SHO, on the basis of this information, reports, by the quickest means, to the SHO in whose jurisdiction the concerned person or persons are going to reside or pass through. When a history-sheeted person is likely to travel by the Railway, intimation of his movements should also be given to the nearest Railway Police Station.[4] It must be noted that the term "history sheet" or "history sheeter" is not defined in the Indian Penal Code, 1860, in most of the State Police Acts, or in the Model Police Manual, but it is generally understood, and defined in the Oxford English Dictionary, as a person with a criminal record.

Surveillance of “Bad Characters”: Keeping tabs on and getting information regarding “bad characters” is part of the duties of a beat constable. In the case of a “bad character” who is known to have gone to another State, the SHO of the station in the other state is informed using the quickest means possible followed by sending of a BC Roll 'A' directly to the SHO.[5] When a “bad character” absents himself or goes out of view, whether wanted in a case or not, the information is required to be disseminated to the police stations having jurisdiction over the places likely to be visited by him and also to the neighbouring stations, whether within the State or outside. If such person is traced and intimation is received of his arrest or otherwise, arrangements to get a complete and true picture of his activities are required to be made and the concerned record updated.[6]

The Police Manual clarifies the term "bad characters" to mean "offenders, criminals, or members of organised crime gangs or syndicates or those who foment or incite caste, communal violence, for which history sheets are maintained and require surveillance."[7] A fascinating glimpse into the history of persons who were considered "bad characters" is contained in the article by Surjan Das & Basudeb Chattopadhyay in EPW,[8] wherein they bring out the fact that in colonial times a number of the stereotypes propagated by the British crept into police work as well. It appears that one did not have to be convicted to be a bad character: people with a dark complexion, strong build, broad chin, deep-set eyes, broad forehead, short hair, scanty or goatee beard, marks on the face, moustache, blunt nose, white teeth and a monkey-face would normally fit the description of "bad characters".

Surveillance of Suspicious Strangers: When a stranger of suspicious conduct or demeanour is found within the limits of a police station, the SHO is required to forward a BC Roll to the Police Station in whose jurisdiction the stranger claims to have resided. The receipt of such a roll is required to be immediately acknowledged and replied to. If the suspicious stranger states that he resides in another State, a BC Roll is sent directly to the SHO of the station in the other State.[9] The Manual, however, does not define who a "suspicious stranger" is or how to identify one.

Release of Foreign Prisoners: Before a foreign prisoner (whose fingerprints are taken for record) is released, the Superintendent of Police of the district where the case was registered is required to send a report to the Director, I.B., through the Criminal Investigation Department, informing them of the route and conveyance by which such person is likely to leave the country.[10]

Shadowing of convicts and dangerous persons: The Police Manual contains the following rules for shadowing convicts on their release from jail:

(a) Dangerous convicts who are not likely to return to their native places are required to be shadowed. When a convict is to be shadowed, the fact is entered in the DCRB in the FP register and communicated to the Superintendent of Jails.

(b) The Police Officer deputed to shadow an ex-convict is required to enter the fact in the notebook. The Police Officer is furnished with a challan indicating the particulars of the ex-convict marked for shadowing. This form is returned by the SHO of the area where the ex-convict takes up residence or passes out of view to the DCRB / OCRS where the jail is situated, where it is put on record for further reference and action, if any. Even though the subjects being shadowed are kept in view, no restraint is to be put upon their movements on any account.[11]

Apart from the provisions discussed above, there are also provisions in the Police Manual regarding surveillance of convicts who have been released on medical grounds as well as surveillance of ex-convicts who are required to report their movements to the police as per the provisions of section 356 of the Code of Criminal Procedure.[12]

As noted above, the various police manuals are issued under the State Police Acts and govern the police forces of the specific states. The fact that each state has its own individual police manual itself leads to non-uniformity in standards and practices of surveillance. But it is not only the legislations at the State level which lead to this problem; even legislations at the Central level, which are applicable to the country as a whole, have differing standards regarding different aspects of surveillance. In order to explore this further, we shall now discuss the central legislations dealing with surveillance.

2. The Indian Telegraph Act, 1885

Section 5 of the Indian Telegraph Act, 1885, empowers the Central Government and State Governments of India to order the interception of messages in two circumstances: (1) in the occurrence of any public emergency or in the interest of public safety, and (2) if it is considered necessary or expedient to do so in the interest of:[13]

  • the sovereignty and integrity of India; or
  • the security of the State; or
  • friendly relations with foreign states; or
  • public order; or
  • for preventing incitement to the commission of an offence.

The Supreme Court of India has defined the terms 'public emergency' and 'public safety' as follows:[14]

"Public emergency would mean the prevailing of a sudden condition or state of affairs affecting the people at large calling for immediate action. The expression 'public safety' means the state or condition of freedom from danger or risk for the people at large. When either of these two conditions are not in existence, the Central Government or a State Government or the authorised officer cannot resort to telephone tapping even though there is satisfaction that it is necessary or expedient so to do in the interests of its sovereignty and integrity of India etc. In other words, even if the Central Government is satisfied that it is necessary or expedient so to do in the interest of the sovereignty and integrity of India or the security of the State or friendly relations with sovereign States or in public order or for preventing incitement to the commission of an offence, it cannot intercept the message, or resort to telephone tapping unless a public emergency has occurred or the interest of public safety requires. Neither the occurrence of public emergency nor the interest of public safety are secretive conditions or situations. Either of the situations would be apparent to a reasonable person."

In 2007, Rule 419A was added to the Indian Telegraph Rules, 1951, framed under the Indian Telegraph Act, providing that orders on the interception of communications should only be issued by the Secretary in the Ministry of Home Affairs. However, in unavoidable circumstances an order may also be issued by an officer, not below the rank of Joint Secretary to the Government of India, who has been authorised by the Union Home Secretary or the State Home Secretary.[15]

According to Rule 419A, the interception of any message or class of messages may be carried out with the prior approval of the Head or the second senior most officer of the authorised security agency at the Central level, and at the State level with the approval of officers authorised in this behalf not below the rank of Inspector General of Police, in the following emergent cases:

  • in remote areas, where obtaining of prior directions for interception of messages or class of messages is not feasible; or
  • for operational reasons, where obtaining of prior directions for interception of message or class of messages is not feasible;

however, the concerned competent authority is required to be informed of such interceptions by the approving authority within three working days, and such interceptions are to be confirmed by the competent authority within a period of seven working days. If confirmation from the competent authority is not received within the stipulated seven days, such interception should cease, and the same message or class of messages should not be intercepted thereafter without the prior approval of the Union Home Secretary or the State Home Secretary.[16]

Rule 419A also tries to incorporate certain safeguards to curb the risk of unrestricted surveillance by the law enforcement authorities which include the following:

  • Any order for interception issued by the competent authority should contain reasons for such direction and a copy of such an order should be forwarded to the Review Committee within a period of seven working days;[17]
  • Directions for interception should be issued only when it is not possible to acquire the information by any other reasonable means;[18]
  • The direction for interception may include any message or class of messages sent to or from any person or class of persons, or relating to any particular subject, whether such messages are received at one or more addresses specified in the order, being an address or addresses likely to be used for the transmission of communications from or to one particular person, or one particular set of premises, specified or described in the order;[19]
  • The interception directions should specify the name and designation of the officer or the authority to whom the intercepted message or class of messages is to be disclosed to;[20]
  • The directions for interception would remain in force for sixty days, unless revoked earlier, and may be renewed but the same should not remain in force beyond a total period of one hundred and eighty days;[21]
  • The directions for interception should be conveyed to the designated officers of the licensee(s) in writing by an officer not below the rank of Superintendent of Police or Additional Superintendent of Police or the officer of the equivalent rank;[22]
  • The officer authorized to intercept any message or class of messages should maintain proper records mentioning therein, the intercepted message or class of messages, the particulars of persons whose message has been intercepted, the name and other particulars of the officer or the authority to whom the intercepted message or class of messages has been disclosed, etc.;[23]
  • All the requisitioning security agencies should designate one or more nodal officers not below the rank of Superintendent of Police or the officer of the equivalent rank to authenticate and send the requisitions for interception to the designated officers of the concerned service providers to be delivered by an officer not below the rank of Sub-Inspector of Police;[24]
  • Records pertaining to directions for interception and of intercepted messages should be destroyed by the competent authority and the authorized security and Law Enforcement Agencies every six months unless these are, or are likely to be, required for functional requirements.[25]
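The validity rule in the safeguards above (sixty days, renewable, but never beyond one hundred and eighty days in total) can be modelled as a simple check. A minimal sketch, assuming Rule 419A's day counts; the function is hypothetical, and the single `renewed` flag collapses the Rules' possibility of repeated renewals into one boolean.

```python
from datetime import date

MAX_INITIAL_DAYS = 60   # a direction lapses after 60 days unless renewed
MAX_TOTAL_DAYS = 180    # renewals cannot extend it beyond 180 days in total

def direction_in_force(issued: date, today: date, renewed: bool = False) -> bool:
    """Rough model of Rule 419A validity: a direction is in force for
    60 days, may be renewed, but never remains in force beyond 180 days
    from the date of issue."""
    elapsed = (today - issued).days
    if elapsed >= MAX_TOTAL_DAYS:
        return False          # hard ceiling regardless of renewals
    return elapsed < MAX_INITIAL_DAYS or renewed
```

On this model, a direction 74 days old has lapsed unless renewed, and even a renewed direction fails once the 180-day ceiling is reached.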

According to Rule 419A, service providers that are required by law enforcement to intercept communications must comply with the following:[26]

  • Service providers should designate two senior executives of the company in every licensed service area/State/Union Territory as the nodal officers to receive and handle such requisitions for interception;[27]
  • The designated nodal officers of the service providers should issue acknowledgment letters to the concerned security and Law Enforcement Agency within two hours on receipt of intimations for interception;[28]
  • The system of designated nodal officers for communicating and receiving the requisitions for interceptions should also be followed in emergent cases/unavoidable cases where prior approval of the competent authority has not been obtained;[29]
  • The designated nodal officers of the service providers should forward every fifteen days a list of interception authorizations received by them during the preceding fortnight to the nodal officers of the security and Law Enforcement Agencies for confirmation of the authenticity of such authorizations;[30]
  • Service providers are required to put in place adequate and effective internal checks to ensure that unauthorized interception of messages does not take place, that extreme secrecy is maintained and that utmost care and precaution is taken with regards to the interception of messages;[31]
  • Service providers are held responsible for the actions of their employees. In the case of an established violation of license conditions pertaining to the maintenance of secrecy and confidentiality of information and unauthorized interception of communication, action shall be taken against service providers as per the provisions of the Indian Telegraph Act, and this shall not only include a fine, but also suspension or revocation of their license;[32]
  • Service providers should destroy records pertaining to directions for the interception of messages within two months of discontinuance of the interception of such messages and in doing so they should maintain extreme secrecy.[33]

Review Committee

Rule 419A of the Indian Telegraph Rules requires the establishment of a Review Committee by the Central Government and the State Government, as the case may be, for the interception of communications, as per the following conditions:[34]

(1) The Review Committee to be constituted by the Central Government shall consist of the following members, namely:

(a) Cabinet Secretary - Chairman

(b) Secretary to the Government of India in charge, Legal Affairs - Member

(c) Secretary to the Government of India, Department of Telecommunications – Member

(2) The Review Committee to be constituted by a State Government shall consist of the following members, namely:

(a) Chief Secretary – Chairman

(b) Secretary Law/Legal Remembrancer in charge, Legal Affairs – Member

(c) Secretary to the State Government (other than the Home Secretary) – Member

(3) The Review Committee meets at least once in two months and records its findings on whether the issued interception directions are in accordance with the provisions of sub-section (2) of Section 5 of the Indian Telegraph Act. When the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above it may set aside the directions and order for destruction of the copies of the intercepted message or class of messages;[35]

It must be noted that the Unlawful Activities (Prevention) Act, 1967, (which is currently used against most acts of urban terrorism) also allows for the interception of communications but the procedures and safeguards are supposed to be the same as under the Indian Telegraph Act and the Information Technology Act.[36]

3. Telecom Licenses

The telecom sector in India has seen immense activity in the two decades since it was opened up to private competition. These twenty years have seen a lot of turmoil and have offered a tremendous learning opportunity for the private players as well as the governmental bodies regulating the sector. Currently, any entity wishing to obtain a telecom license is offered a Unified License (UL), which contains terms and conditions for all the services that a licensee may choose to offer. However, a large number of other licenses preceded the current regime, and since those licenses have a long phase-out, we have tried to cover what we believe are the four most important licenses issued to telecom operators, starting with the CMTS License:

Cellular Mobile Telephony Services (CMTS) License

Under the National Telecom Policy (NTP) 1994, the first phase of liberalization in mobile telephone services began with the issue of eight licenses for Cellular Mobile Telephony Services (CMTS) in the four metro cities of Delhi, Mumbai, Calcutta and Chennai to eight private companies in November 1994. Subsequently, 34 licenses for 18 Territorial Telecom Circles were issued to 14 private companies between 1995 and 1998. During this period a maximum of two licenses were granted for CMTS in each service area, and these licensees were called the 1st and 2nd cellular licensees.[37] Following the announcement of guidelines for Unified Access (Basic & Cellular) Services licenses on 11 November 2003, some CMTS operators were permitted to migrate from the CMTS License to the Unified Access Service License (UASL); no new CMTS or Basic service licenses have been awarded since.

The important provisions regarding surveillance in the CMTS License are listed below:

Facilities for Interception: The CMTS License requires the Licensee to provide necessary facilities to the designated authorities for interception of the messages passing through its network.[38]

Monitoring of Telecom Traffic: The designated person of the Central/State Government, as conveyed to the Licensor from time to time, in addition to the Licensor or its nominee, has the right to monitor the telecommunication traffic in every MSC or any other technically feasible point in the network set up by the licensee. The Licensee is required to make arrangements for the monitoring of simultaneous calls by Government security agencies. The hardware at the licensee's end and the software required for monitoring of calls shall be engineered, provided/installed and maintained by the Licensee at the licensee's cost. In case the security agencies intend to locate the equipment at the licensee's premises to facilitate monitoring, the licensee is required to extend all support in this regard, including space and entry of the authorised security personnel. The interface requirements as well as the features and facilities defined by the Licensor are to be implemented by the licensee for both data and speech. The Licensee is also required to ensure suitable redundancy in the complete chain of monitoring equipment for trouble-free operation of monitoring of at least 210 simultaneous calls.[39]

Monitoring Records to be maintained: Along with the monitored call following records are to be made available:

  • Called/calling party mobile/PSTN numbers.
  • Time/date and duration of interception.
  • Location of target subscribers. Cell ID should be provided for location of the target subscriber. However, Licensor may issue directions from time to time on the precision of location, based on technological developments and integration of Global Positioning System (GPS) which shall be binding on the LICENSEE.
  • Telephone numbers if any call-forwarding feature has been invoked by target subscriber.
  • Data records for even failed call attempts.
  • CDR (Call Data Record) of Roaming Subscriber.

The Licensee is required to provide the call data records of all the specified calls handled by the system at specified periodicity, as and when required by the security agencies.[40]
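The record fields listed above map naturally onto a flat data structure. The sketch below is illustrative only; the field names are our own and are not taken from the license text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MonitoringRecord:
    """One record to be furnished alongside a monitored call under the
    CMTS License (field names are illustrative, not from the license)."""
    calling_number: str           # calling party mobile/PSTN number
    called_number: str            # called party mobile/PSTN number
    intercept_time: str           # time/date of interception
    duration_seconds: int         # duration of interception
    cell_id: str                  # location of the target subscriber
    forwarded_to: Optional[str]   # number, if call forwarding was invoked
    call_connected: bool          # failed call attempts must also be recorded
    roaming: bool                 # CDRs of roaming subscribers are included
```

Note that `call_connected` must be recordable as `False`: the license explicitly requires data records even for failed call attempts.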

Protection of Privacy: It is the responsibility of the Licensee to ensure the protection of privacy of communication and to ensure that unauthorised interception of messages does not take place.[41]

License Agreement for Provision of Internet Services (ISP License)

Internet services were launched in India on 15th August, 1995 by Videsh Sanchar Nigam Limited. In November, 1998, the Government opened up the sector for providing Internet services by private operators. The major provisions dealing with surveillance contained in the ISP License are given below:

Authorization for monitoring: Monitoring shall only be by the authorization of the Union Home Secretary or Home Secretaries of the States/Union Territories.[42]

Access to subscriber list by authorized intelligence agencies and licensor: The complete and up-to-date list of subscribers will be made available by the ISP on a password-protected website, accessible to authorized intelligence agencies.[43] Information such as customer name, IP address, bandwidth provided, address of installation, date of installation, and contact number and email of leased line customers shall be included on the website.[44] The licensor or its representatives will also have access to the database relating to the subscribers of the ISP, which is to be available at any instant.[45]

Right to monitor by the central/state government: The designated person of the central/state government or the licensor or nominee will have the right to monitor telecommunications traffic in every node or any other technically feasible point in the network. To facilitate this, the ISP must make arrangements for the monitoring of simultaneous calls by the Government or its security agencies.[46]

Right of DoT to monitor: DoT will have the ability to monitor customers who generate high traffic value and verify specified user identities on a monthly basis.[47]

Provision of mirror images: Mirror images of the remote access information should be made available online for monitoring purposes.[48] A safeguard provided for in the license is that remote access to networks is only allowed in areas approved by the DOT in consultation with the Security Agencies.[49]

Provision of information stored on dedicated transmission link: The ISP will provide the login password to DOT and authorized Government agencies on a monthly basis for access to information stored on any dedicated transmission link from ISP node to subscriber premises.[50]

Provision of subscriber identity and geographic location: The ISP must provide the traceable identity and geographic location of their subscribers, and if the subscriber is roaming – the ISP should try to find traceable identities of roaming subscribers from foreign companies.[51]

Facilities for monitoring: The ISP must provide the necessary facilities for continuous monitoring of the system as required by the licensor or its authorized representatives.[52]

Facilities for tracing: The ISP will also provide facilities for the tracing of nuisance, obnoxious or malicious calls, messages, or communications. These facilities are to be provided specifically to authorized officers of the Government of India (police, customs, excise, intelligence department) when the information is required for investigations or detection of crimes and in the interest of national security.[53]

Facilities and equipment to be specified by government: The types of interception equipment to be used will be specified by the government of India.[54] This includes the installation of necessary infrastructure in the service area with respect to Internet Telephony Services offered by the ISP including the processing, routing, directing, managing, authenticating the internet calls including the generation of Call Details Record, IP address, called numbers, date, duration, time, and charge of the internet telephony calls.[55]

Facilities for surveillance of mobile terminal activity: The ISP must also provide the government facilities to carry out surveillance of Mobile Terminal activity within a specified area whenever requested.[56]

Facilities for monitoring international gateway: As per the requirements of security agencies, every international gateway location having a capacity of 2 Mbps or more will be equipped with a monitoring center capable of monitoring internet telephony traffic.[57]

Facilities for monitoring in the premises of the ISP: Every monitoring room must be at least 10x10 with adequate power and air conditioning, and accessible only to the monitoring agencies. One exclusive local telephone line must be provided, and a central monitoring center must be provided if the ISP has multiple nodal points.[58]

Protection of privacy: The ISP has a responsibility to protect the privacy of communications transferred over its network. This includes securing the information, protecting against unauthorized interception and unauthorized disclosure, ensuring the confidentiality of information, and protecting against over-disclosure of information, except when consent has been given.[59]

Log of users: Each ISP must maintain an up to date log of all users connected and the service that they are using (mail, telnet, http, etc). The ISPs must also log every outward login or telnet through their computers. These logs as well as copies of all the packets must be made available in real time to the Telecom Authority.[60]

Log of internet leased line customers: A record of each internet leased line customer should be kept along with details of connectivity, and reasons for taking the link should be kept and made readily available for inspection.[61]

Log of remote access activities: The ISP will also maintain a complete audit trail of the remote access activities that pertain to the network for at least six months. This information must be available on request for any agency authorized by the licensor.[62]

Monitoring requirements: The ISP must make arrangements for the monitoring of the telecommunication traffic in every MSC exchange or any other technically feasible point, of at least 210 calls simultaneously.[63]

Records to be made available:

  • CDRS: When required by security agencies, the ISP must make available records of i) called/calling party mobile/PSTN numbers ii) time/date and duration of calls iii) location of target subscribers and from time to time precise location iv) telephone numbers – and if any call forwarding feature has been evoked – records thereof v) data records for failed call attempts vi) CDR of roaming subscriber.[64]
  • Bulk connections: On a monthly basis, and from time to time, information with respect to bulk connections shall be forwarded to DoT, the licensor, and security agencies.[65]
  • Record of calls beyond specified threshold: Calls should be checked and analyzed, and a record maintained of all outgoing calls made by customers, both during the day and night, that exceed a set threshold of minutes. A list of suspected subscribers should be created by the ISP and communicated to DoT and any officer authorized by the licensor at any point of time.[66]
  • Record of subscribers with calling line identification restrictions: Furthermore, a list of calling line identification restriction subscribers with their complete address and details should be created on a password protected website that is available to authorized government agencies.[67]
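The threshold-analysis requirement described above amounts to flagging subscribers whose outgoing usage exceeds a set limit. A minimal sketch; the function name and the example threshold are hypothetical, since the license leaves the actual threshold to be fixed by the licensor.

```python
def flag_suspected_subscribers(outgoing_minutes: dict, threshold: float) -> list:
    """Return subscribers whose total outgoing minutes for the period
    exceed the threshold set by the licensor (the value is illustrative).
    These are the subscribers the ISP would report to DoT."""
    return sorted(sub for sub, minutes in outgoing_minutes.items()
                  if minutes > threshold)
```

For example, with a 400-minute daily threshold, subscribers logging 500 and 1,200 outgoing minutes would be flagged, while one logging 20 minutes would not.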

Unified Access Services (UAS) License

Unified Access Services operators provide services of collection, carriage, transmission and delivery of voice and/or non-voice messages within their area of operation, over the Licensee’s network by deploying circuit and/or packet switched equipment. They may also provide Voice Mail, Audiotex services, Video Conferencing, Videotex, E-Mail, Closed User Group (CUG) as Value Added Services over its network to the subscribers falling within its service area on a non-discriminatory basis.

The terms of providing the services are regulated under the Unified Access Service License (UASL) which also contains provisions regarding surveillance/interception. These provisions are regularly used by the state agencies to intercept telephonic and data traffic of subscribers. The relevant terms of the UASL dealing with surveillance and interception are discussed below:

Confidentiality of Information: The Licensee cannot employ bulk encryption equipment in its network. Any encryption equipment connected to the Licensee’s network for specific requirements has to have the prior evaluation and approval of the Licensor or an officer specially designated for the purpose. However, the Licensee has the responsibility to ensure protection of privacy of communication and to ensure that unauthorised interception of messages does not take place.[68] The Licensee shall take necessary steps to ensure that the Licensee and any person(s) acting on its behalf observe confidentiality of customer information.[69]

Responsibility of the Licensee: The Licensee has to take all necessary steps to safeguard the privacy and confidentiality of any information about a third party and its business to whom it provides the service and from whom it has acquired such information by virtue of the service provided, and shall use its best endeavors to secure that:

  • No person acting on behalf of the Licensee or the Licensee divulges or uses any such information except as may be necessary in the course of providing such service to the third party; and
  • No such person seeks such information other than is necessary for the purpose of providing service to the third party.[70]

Provision of monitoring facilities: The requisite monitoring facilities/equipment for each type of system used shall be provided by the service provider at its own cost for monitoring as and when required by the licensor.[71] The license also requires the Licensee to provide necessary facilities to the designated authorities for interception of the messages passing through its network.[72] The licensor in this case is the President of India, as the head of State; therefore all references to the term licensor can be assumed to refer to the Government of India, which usually acts through the Department of Telecommunications (DoT). For monitoring traffic, the licensee company has to provide the security agencies access to its network and other facilities as well as to its books of accounts.[73]

Monitoring by Designated Person: The designated person of the Central/ State Government as conveyed to the Licensor from time to time in addition to the Licensor or its nominee has the right to monitor the telecommunication traffic in every MSC/Exchange/MGC/MG or any other technically feasible point in the network set up by the Licensee. The Licensee is required to make arrangement for monitoring simultaneous calls by Government security agencies. The hardware at Licensee’s end and software required for monitoring of calls shall be engineered, provided/installed and maintained by the Licensee at Licensee’s cost. However, the respective Government instrumentality bears the cost of user end hardware and leased line circuits from the MSC/ Exchange/MGC/MG to the monitoring centres to be located as per their choice in their premises or in the premises of the Licensee. In case the security agencies intend to locate the equipment at Licensee’s premises for facilitating monitoring, the Licensee should extend all support in this regard including space and entry of the authorized security personnel. The Licensee is required to implement the interface requirements as well as features and facilities as defined by the Licensor for both data and speech. The Licensee is to ensure suitable redundancy in the complete chain of Monitoring equipment for trouble free operations of monitoring of at least 210 simultaneous calls for seven security agencies.[74]

Monitoring Records to be maintained: Along with the monitored call following records are to be made available:

  • Called/calling party mobile/PSTN numbers.
  • Time/date and duration of interception.
  • Location of target subscribers. Cell ID should be provided for location of the target subscriber. However, Licensor may issue directions from time to time on the precision of location, based on technological developments and integration of Global Positioning System (GPS) which shall be binding on the LICENSEE.
  • Telephone numbers if any call-forwarding feature has been invoked by target subscriber.
  • Data records for even failed call attempts.
  • CDR (Call Data Record) of Roaming Subscriber.

The Licensee is required to provide the call data records of all the specified calls handled by the system at specified periodicity, as and when required by the security agencies.[75]

List of Subscribers: The complete list of subscribers shall be made available by the Licensee on their website (having password controlled access), so that authorized Intelligence Agencies are able to obtain the subscriber list at any time, as per their convenience with the help of the password.[76] The Licensor or its representative(s) have an access to the Database relating to the subscribers of the Licensee. The Licensee shall also update the list of his subscribers and make available the same to the Licensor at regular intervals. The Licensee shall make available, at any prescribed instant, to the Licensor or its authorized representative details of the subscribers using the service.[77] The Licensee must provide traceable identity of their subscribers,[78] and should be able to provide the geographical location (BTS location) of any subscriber at a given point of time, upon request by the Licensor or any other agency authorized by it.[79]

CDRs for Large Number of Outgoing Calls: The call detail records for outgoing calls made by subscribers making a large number of outgoing calls, day and night, to various telephone numbers should be analyzed. Normally, no incoming calls are observed in such cases. This can be done by running special programs for the purpose.[80] Although this provision does not itself say that it is limited to bulk subscribers (subscribers with more than 10 lines), it is contained as a sub-clause of section 41.19, which deals with specific measures for bulk subscribers; it is therefore possible that this provision applies only to bulk subscribers and not to all subscribers.

No Remote Access to Suppliers: Suppliers/manufacturers and affiliate(s) are not, under any circumstances, allowed remote access to the Lawful Interception System (LIS), Lawful Interception Monitoring (LIM), call contents of the traffic, and any such sensitive sector/data which the licensor may notify from time to time.[81] The Licensee is also not allowed to use the remote access facility for monitoring of content.[82] Further, a suitable technical device is required to be made available at the Indian end to the designated security agency/licensor, in which a mirror image of the remote access information is available online for monitoring purposes.[83]

Monitoring as per the Rules under Telegraph Act: In order to maintain the privacy of voice and data, monitoring shall be in accordance with the rules in this regard under the Indian Telegraph Act, 1885.[84] It is interesting to note that monitoring under the UASL is required to be as per the Rules prescribed under the Telegraph Act, but no mention is made of the Rules under the Information Technology Act.

Monitoring from Centralised Location: The Licensee has to ensure that necessary provision (hardware/ software) is available in its equipment for doing lawful interception and monitoring from a centralized location.[85]

Unified License (UL)

The National Telecom Policy - 2012 recognized the fact that the evolution from analog to digital technology has facilitated the conversion of voice, data and video to the digital form which are increasingly being rendered through single networks bringing about a convergence in networks, services and devices. It was therefore felt imperative to move towards convergence between various services, networks, platforms, technologies and overcome the incumbent segregation of licensing, registration and regulatory mechanisms in these areas. It was for this reason that the Government of India decided to move to the Unified License regime under which service providers could opt for all or any one or more of a number of different services.[86]

Provision of interception facilities by Licensee: The UL requires that the requisite monitoring/ interception facilities /equipment for each type of service, should be provided by the Licensee at its own cost for monitoring as per the requirement specified by the Licensor from time to time.[87] The Licensee is required to provide necessary facilities to the designated authorities of Central/State Government as conveyed by the Licensor from time to time for interception of the messages passing through its network, as per the provisions of the Indian Telegraph Act.[88]

Bulk encryption and unauthorized interception: The UL prohibits the Licensee from employing bulk encryption equipment in its network. Licensor or officers specially designated for the purpose are allowed to evaluate any encryption equipment connected to the Licensee’s network. However, it is the responsibility of the Licensee to ensure protection of privacy of communication and to ensure that unauthorized interception of messages does not take place.[89] The use of encryption by the subscriber shall be governed by the Government Policy/rules made under the Information Technology Act, 2000.[90]

Safeguarding of Privacy and Confidentiality: The Licensee shall take necessary steps to ensure that the Licensee and any person(s) acting on its behalf observe confidentiality of customer information.[91] Subject to terms and conditions of the license, the Licensee is required to take all necessary steps to safeguard the privacy and confidentiality of any information about a third party and its business to whom it provides services and from whom it has acquired such information by virtue of the service provided and shall use its best endeavors to secure that: a) No person acting on behalf of the Licensee or the Licensee divulges or uses any such information except as may be necessary in the course of providing such service; and b) No such person seeks such information other than is necessary for the purpose of providing service to the third party.

Provided that the above paragraph does not apply where: a) the information relates to a specific party and that party has consented in writing to such information being divulged or used, and such information is divulged or used in accordance with the terms of that consent; or b) the information is already open to the public and otherwise known.[92]

No Remote Access to Suppliers: Suppliers/manufacturers and affiliate(s) are not, under any circumstances, allowed remote access to the Lawful Interception System (LIS), Lawful Interception Monitoring (LIM), call contents of the traffic, and any such sensitive sector/data which the licensor may notify from time to time.[93] The Licensee is also not allowed to use the remote access facility for monitoring of content.[94] Further, a suitable technical device is required to be made available at the Indian end to the designated security agency/licensor, in which a mirror image of the remote access information is available online for monitoring purposes.[95]

Monitoring as per the Rules under Telegraph Act: In order to maintain the privacy of voice and data, monitoring shall be in accordance with rules in this regard under Indian Telegraph Act, 1885.[96] Just as in the UASL, the monitoring under the UL license is required to be as per the Rules prescribed under the Telegraph Act, but no mention is made of the Rules under the Information Technology Act.

Terms specific to various services

Since the UL License intends to cover all services under a single license, in addition to the general terms and conditions for interception, it also has terms for each specific service. We shall now discuss the terms for interception specific to each service offered under the Unified License.

Access Service: The designated person of the Central/ State Government, in addition to the Licensor or its nominee, shall have the right to monitor the telecommunication traffic in every MSC/ Exchange/ MGC/ MG/ Routers or any other technically feasible point in the network set up by the Licensee. The Licensee is required to make arrangement for monitoring simultaneous calls by Government security agencies. For establishing connectivity to Centralized Monitoring System, the Licensee at its own cost shall provide appropriately dimensioned hardware and bandwidth/dark fibre upto a designated point as required by Licensor from time to time. In case the security agencies intend to locate the equipment at Licensee’s premises for facilitating monitoring, the Licensee should extend all support in this regard including space and entry of the authorized security personnel.

The interface requirements as well as the features and facilities defined by the Licensor should be implemented by the Licensee for both data and speech. The Licensee should ensure suitable redundancy in the complete chain of Lawful Interception and Monitoring equipment for trouble-free operation of monitoring of at least 480 simultaneous calls as per requirement, with at least 30 simultaneous calls for each of the designated security/law enforcement agencies. Each MSC of the Licensee in the service area shall have the capacity for provisioning of at least 3000 numbers for monitoring. Presently there are ten (10) designated security/law enforcement agencies. The above capacity provisions and number of designated security/law enforcement agencies may be amended by the Licensor separately by issuing instructions at any time.

Along with the monitored call, the following records are to be made available:

  • Called/calling party mobile/PSTN numbers.
  • Time/date and duration of interception.
  • Location of target subscribers. Cell ID should be provided for the location of the target subscriber. However, the Licensor may issue directions from time to time on the precision of location, based on technological developments and the integration of the Global Positioning System (GPS), which shall be binding on the Licensee.
  • Telephone numbers if any call-forwarding feature has been invoked by target subscriber.
  • Data records for even failed call attempts.
  • CDR (Call Data Record) of Roaming Subscriber.
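
To make the record requirements above concrete, here is a minimal, purely illustrative Python sketch of such a per-call intercept record; all class and field names are hypothetical and are not drawn from the license text:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical sketch of the per-call record the license requires alongside
# a monitored call. Every name here is illustrative, not from the license.
@dataclass
class InterceptRecord:
    calling_number: str           # calling party mobile/PSTN number
    called_number: str            # called party mobile/PSTN number
    start_time: datetime          # time/date of interception
    duration_seconds: int         # duration of interception
    cell_id: str                  # location of the target subscriber (Cell ID)
    forwarded_to: List[str] = field(default_factory=list)  # numbers, if call forwarding was invoked
    call_completed: bool = True   # False for failed call attempts (still recorded)
    roaming: bool = False         # CDRs of roaming subscribers are also required

record = InterceptRecord(
    calling_number="+91XXXXXXXXXX",
    called_number="+91YYYYYYYYYY",
    start_time=datetime(2014, 1, 1, 10, 30),
    duration_seconds=120,
    cell_id="404-45-1234-5678",
)
```

A real licensee system would of course carry many more fields (IMEI, IMSI, BTS co-ordinates, etc.); the sketch only mirrors the bullet list above.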

The Licensee is required to provide the call data records of all the specified calls handled by the system at specified periodicity, as and when required by the security agencies.[97]

The call detail records of subscribers making a large number of outgoing calls day and night to various telephone numbers, with normally no incoming calls, are required to be analyzed by the Licensee. The service provider is required to run a special programme, devise an appropriate fraud management and prevention programme, and fix threshold levels for the average per-day usage (in minutes) of a telephone connection; all telephone connections crossing the usage threshold are required to be checked for bona fide use. A record of these checks must be maintained, which may be verified by the Licensor at any time. The list/details of suspected subscribers should be reported to the respective TERM Cell of the DoT and to any other officer authorized by the Licensor from time to time.[98]
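
The threshold check described above can be sketched as follows; the threshold value, function name and input format are all hypothetical, and an operator's actual fraud-management programme would be far more elaborate:

```python
# Illustrative sketch (not from the license) of the threshold check described
# above: subscribers whose average daily outgoing usage crosses a fixed
# threshold, with normally no incoming calls, are flagged for bona fide checks.
THRESHOLD_MINUTES_PER_DAY = 100  # hypothetical threshold fixed by the operator

def flag_suspected_subscribers(usage):
    """usage: list of dicts with 'number', 'avg_outgoing_minutes_per_day',
    'incoming_calls_per_day'. Returns the numbers to report to the TERM Cell."""
    flagged = []
    for u in usage:
        heavy_outgoing = u["avg_outgoing_minutes_per_day"] > THRESHOLD_MINUTES_PER_DAY
        no_incoming = u["incoming_calls_per_day"] == 0
        if heavy_outgoing and no_incoming:
            flagged.append(u["number"])
    return flagged

print(flag_suspected_subscribers([
    {"number": "A", "avg_outgoing_minutes_per_day": 250, "incoming_calls_per_day": 0},
    {"number": "B", "avg_outgoing_minutes_per_day": 40, "incoming_calls_per_day": 12},
]))  # → ['A']
```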

The Licensee shall provide location details of mobile customers as per the accuracy and time frame mentioned in the Unified License. These shall be a part of the CDR in the form of longitude and latitude, besides the co-ordinates of the BTS, which are already among the mandated fields of the CDR. To start with, these details will be provided for specified mobile numbers. However, within a period of 3 years from the effective date of the Unified License, location details shall be part of the CDR for all mobile calls.[99]

Internet Service: The Licensee is required to maintain CDRs/IPDRs for Internet services, including Internet Telephony, for a minimum period of one year. The Licensee is also required to maintain log-in/log-out details of all subscribers for services provided, such as internet access, e-mail, Internet Telephony, IPTV, etc. These logs are to be maintained for a minimum period of one year. For the purpose of interception and monitoring of traffic, copies of all packets originating from or terminating in the Customer Premises Equipment (CPE) shall be made available to the Licensor/Security Agencies. Further, the list of Internet Leased Line (ILL) customers is to be placed on a password-protected website in the format prescribed in the Unified License.[100]

Lawful Interception and Monitoring (LIM) systems of requisite capacities are to be set up by the Licensees for Internet traffic including Internet telephony traffic through their Internet gateways and /or Internet nodes at their own cost, as per the requirement of the security agencies/Licensor prescribed from time to time. The cost of maintenance of the monitoring equipment and infrastructure at the monitoring centre located at the premises of the licensee shall be borne by the Licensee. In case the Licensee obtains Access spectrum for providing Internet Service / Broadband Wireless Access using the Access Spectrum, the Licensee shall install the required Lawful Interception and Monitoring systems of requisite capacities prior to commencement of service. The Licensee, while providing downstream Internet bandwidth to an Internet Service provider is also required to ensure that all the traffic of downstream ISP passing through the Licensee’s network can be monitored in the network of the Licensee. However, for nodes of Licensee having upstream bandwidth from multiple service providers, the Licensee may be mandated to install LIM/LIS at these nodes, as per the requirement of security agencies. In such cases, upstream service providers may not be required to monitor this bandwidth.[101]

In case the Licensee has multiple nodes/points of presence and has the capability to monitor the traffic in all routers/switches from a central location, the Licensor may permit the traffic to be monitored from that central monitoring location, provided that the Licensee is able to demonstrate to the Licensor/Security Agencies that all routers/switches are accessible from the central monitoring location. Moreover, the Licensee would have to inform the Licensor of every change in its topology/configuration and ensure that no such change makes any router/switch inaccessible from the central monitoring location. Further, office space of 10 feet x 10 feet with adequate and uninterrupted power supply and air-conditioning, physically secured and accessible only to the monitoring agencies, shall be provided by the Licensee at each Internet Gateway location at its own cost.[102]

National Long Distance (NLD) Service: The requisite monitoring facilities are required to be provided by the Licensee as per the requirements of the Licensor. The details of leased circuits provided by the Licensee are to be furnished monthly to the security agencies and the DDG (TERM) of the Licensed Service Area where the Licensee has its registered office.[103]

International Long Distance (ILD) Service: Office space of 20’x20’ with adequate and uninterrupted power supply and air-conditioning which is physically secured and accessible only to the personnel authorized by the Licensor is required to be provided by the Licensee at each Gateway location free of cost.[104] The cost of monitoring equipment is to be borne by the Licensee. The installation of the monitoring equipment at the ILD Gateway Station is to be done by the Licensee. After installation of the monitoring equipment, the Licensee shall get the same inspected by monitoring /security agencies. The permission to operate/commission the gateway will be given only after this.[105]

The designated person of the Central/State Government, in addition to the Licensor or its nominee, has the right to monitor the telecommunication traffic in every ILD Gateway/Router or any other technically feasible point in the network set up by the Licensee. The Licensee is required to make arrangements for the monitoring of simultaneous calls by Government security agencies. For establishing connectivity to the Centralized Monitoring System, the Licensee is required, at its own cost, to provide appropriately dimensioned hardware and bandwidth/dark fibre up to a designated point as required by the Licensor from time to time. In case the security agencies intend to locate equipment at the Licensee's premises to facilitate monitoring, the Licensee should extend all support in this regard, including space and entry for the authorized security personnel. The interface requirements, as well as the features and facilities defined by the Licensor, should be implemented by the Licensee for both data and speech. The Licensee should ensure suitable redundancy in the complete chain of monitoring equipment for trouble-free monitoring of at least 480 simultaneous calls, with at least 30 simultaneous calls for each of the designated security/law enforcement agencies. Each ILD Gateway of the Licensee shall have the capacity for provisioning of at least 5,000 numbers for monitoring. Presently there are ten (10) designated security/law enforcement agencies. These capacity provisions and the number of designated security/law enforcement agencies may be amended by the Licensor separately by issuing instructions at any time.[106]

The Licensee is required to provide the call data records of all the specified calls handled by the system at specified periodicity, as and when required by the security agencies in the format prescribed from time to time.[107]

Global Mobile Personal Communication by Satellite (GMPCS) Service: The designated Authority of the Central/State Government shall have the right to monitor the telecommunication traffic in every Gateway set up in India. The Licensee shall make arrangement for monitoring of calls as specified in the Unified License.[108]

The hardware/software required for the monitoring of calls shall be engineered, provided/installed and maintained by the Licensee, at its own cost, at the Intercept Control Centre (ICC) to be established at the GMPCS Gateway(s) as well as in the premises of the security agencies. The interface requirements, as well as the features and facilities, shall be worked out and implemented by the Licensee for both data and speech. The Licensee should ensure suitable redundancy in the complete chain of monitoring equipment for trouble-free operations. The Licensee shall provide suitable training to the designated representatives of the Licensor regarding the operation and maintenance of the monitoring equipment (ICC and MC). Interception of target subscribers using messaging services should also be provided, even if retrieval is carried out using PSTN links. For establishing connectivity to the Centralized Monitoring System, the Licensee shall, at its own cost, provide appropriately dimensioned hardware and bandwidth/dark fibre up to a designated point as required by the Licensor from time to time.[109] The Licensee also has specific obligations to extend monitored calls to designated security agencies, as provided in the UL.[110] Further, the Licensee is required to provide the call data records of all the calls handled by the system at a specified periodicity, as and when required by the security agencies.[111] It is the responsibility of the Global Mobile Personal Communication by Satellite (GMPCS) service provider to provide the facility to carry out surveillance of User Terminal activity.[112]

The Licensee has to make available adequate monitoring facilities at the GMPCS Gateway in India to monitor all traffic (traffic originating/terminating in India) passing through the applicable system. For this purpose, the Licensee shall set up, at its own cost, the requisite interfaces, as well as the features and facilities for the monitoring of calls by designated agencies, as directed by the Licensor from time to time. In addition to the Target Intercept List (TIL), it should also be possible to carry out specific geographic-location-based interception, if so desired by the designated security agencies. Monitoring of calls should not be perceptible to mobile users, either during direct monitoring or when a call has been grounded for monitoring, and the Licensee shall not levy any charges for grounding a call for monitoring purposes. The intercepted data is to be pushed to the designated security agencies' server on a 'fire and forget' basis. No records regarding monitoring activities and air-time used shall be maintained by the Licensee beyond the prescribed time limit.

The Licensee has to ensure that any User Terminal (UT) registered in the gateway of another country shall re-register with Indian Gateway when operating from Indian Territory. Any UT registered outside India, when attempting to make/receive calls from within India, without due authority, shall be automatically denied service by the system and occurrence of such attempts along with information about UT identity as well as location shall be reported to the designated authority immediately.

The Licensee is required to have provision to scan the operation of subscribers, specified by security/law enforcement agencies, through certain sensitive areas within Indian territory, and shall provide their identity and positional location (latitude and longitude) to the Licensor on an as-and-when-required basis.

Public Mobile Radio Trunking Service (PMRTS): Suitable monitoring equipment prescribed by the Licensor for each type of System used has to be provided by the Licensee at his own cost for monitoring, as and when required.[113]

Very Small Aperture Terminal (VSAT) Closed User Group (CUG) Service: Requisite monitoring facilities/ equipment for each type of system used have to be provided by the Licensee at its own cost for monitoring as and when required by the Licensor.[114] The Licensee shall provide at its own cost technical facilities for accessing any port of the switching equipment at the HUB for interception of the messages by the designated authorities at a location to be determined by the Licensor.[115]

INSAT Mobile Satellite System Reporting (MSS-R) Service: The Licensee has to provide, at its own cost, technical facilities for accessing any port of the switching equipment at the HUB for the interception of messages by the designated authorities at a location, as and when required.[116] It is the responsibility of the INSAT Mobile Satellite System Reporting (MSS-R) service provider to provide the facility to carry out surveillance of User Terminal activity within a specified area.[117]

Resale of International Private Leased Circuit (IPLC) Service: The Licensee has to take the IPLC from licensed ILDOs. The interception and monitoring of a Reseller's circuits will take place at the Gateway of the ILDO from whom the IPLC has been taken by the Licensee. The provisioning for Lawful Interception and Monitoring of the Reseller's IPLC shall be done by the ILD Operator, and the concerned ILDO shall be responsible for the Lawful Interception and Monitoring of the traffic passing through the IPLC. The Reseller shall extend all cooperation in respect of the interception and monitoring of its IPLC and shall be responsible for the interception results. The Licensee shall be responsible for interacting, corresponding and liaising with the Licensor and the security agencies with regard to security monitoring of the traffic. The Licensee shall, before providing an IPLC to a customer, obtain the details of the services/equipment to be connected at both ends of the IPLC, including the type of terminals, data rate, actual use of the circuit, protocols/interfaces to be used, etc. The Reseller shall permit only such types of service/protocol on the IPLC as the concerned ILDO has the capability of intercepting and monitoring. The Licensee has to pass on any direct request placed on it by security agencies for the interception of traffic on its IPLC to the concerned ILDO within two hours for necessary action.[118]

4. The Information Technology Act, 2000

The Information Technology Act, 2000, was substantially amended in 2008 and is the primary legislation regulating the interception, monitoring and decryption of digital communications, and the collection of traffic information, in India.

More specifically, section 69 of the Information Technology Act empowers the Central Government and the State Governments to issue directions for the monitoring, interception or decryption of any information transmitted, received or stored through a computer resource. Section 69 of the Information Technology Act, 2000 expands the grounds upon which interception can take place as compared to the Indian Telegraph Act, 1885. As such, the interception of communications under section 69 is carried out in the interest of:[119]

  • The sovereignty or integrity of India
  • Defence of India
  • Security of the State
  • Friendly relations with foreign States
  • Public order
  • Preventing incitement to the commission of any cognizable offence relating to the above
  • For the investigation of any offence

While the grounds for interception are similar to those under the Indian Telegraph Act (except that the prevention of incitement is limited to cognizable offences, and the investigation of any offence is added as a ground), the Information Technology Act does not have the overarching condition that interception can only occur in the case of a public emergency or in the interest of public safety.

Additionally, section 69 of the Act mandates that any person or intermediary who fails to assist the specified agency with the interception, monitoring, decryption or provision of information stored in a computer resource shall be punished with imprisonment for a term which may extend to seven years and shall also be liable to a fine.[120]

Section 69B of the Information Technology Act empowers the Central Government to authorise the monitoring and collection of information and traffic data generated, transmitted, received or stored through any computer resource for the purpose of cyber security. According to this section, any intermediary who intentionally or knowingly fails to provide technical assistance to the authorised agency in monitoring and collecting such information and traffic data shall be punished with imprisonment which may extend to three years and shall also be liable to a fine.[121]

The main difference between section 69 and section 69B is that the former authorises the interception, monitoring and decryption of information generated, transmitted, received or stored through a computer resource when it is deemed “necessary or expedient” to do so, whereas section 69B specifically provides a mechanism for the monitoring and collection of the traffic data (metadata) of all communications through a computer resource for the purpose of combating threats to “cyber security”. Directions under section 69 can be issued by the Secretary to the Ministry of Home Affairs, whereas directions under section 69B can only be issued by the Secretary of the Department of Information Technology under the Union Ministry of Communications and Information Technology.

Overlap with the Telegraph Act

Thus, while the Telegraph Act only allows for the interception of messages or classes of messages transmitted by a telegraph, the Information Technology Act enables the interception of any information being transmitted through or stored in a computer resource. Since a “computer resource” is defined to include a communication device (such as cellphones and PDAs), there is an overlap between the provisions of the Information Technology Act and the Telegraph Act concerning the interception of information sent through mobile phones. This is further complicated by the fact that the UAS License specifically states that it is governed by the provisions of the Indian Telegraph Act, the Indian Wireless Telegraphy Act and the Telecom Regulatory Authority of India Act, but does not mention the Information Technology Act.[122] This does not mean that Licensees under the Telecom Licenses are not bound by the other laws of India (including the Information Technology Act), but it invites unnecessary complexity and confusion on a very serious issue such as interception. This situation has thankfully been remedied by the Unified License (UL) which, although issued under section 4 of the Telegraph Act, also references the Information Technology Act, thus providing essential clarity with respect to the applicability of the Information Technology Act to the License Agreement.

Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009

The interception of internet communications is mainly covered by the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, under the Information Technology Act (the “IT Interception Rules”). In particular, the rules framed under sections 69 and 69B include safeguards stipulating who may issue directions of interception and monitoring, how such directions are to be executed, the duration they remain in operation, to whom data may be disclosed, the confidentiality obligations of intermediaries, periodic oversight of interception directions by a Review Committee under the Indian Telegraph Act, the retention of records of interception by intermediaries, and the mandatory destruction of information in appropriate cases.

According to the IT Interception Rules, only the competent authority can issue an order for the interception, monitoring or decryption of any information generated, transmitted, received or stored in any computer resource under sub-section (2) of section 69 of the Information Technology Act.[123] At the State and Union Territory level, the State Secretaries in charge of the Home Departments are designated as “competent authorities” to issue interception directions.[124] In unavoidable circumstances, the Joint Secretary to the Government of India, when so authorised by the competent authority, may issue such an order. Interception may also be carried out with the prior approval of the Head or the second senior-most officer of the authorised security agency at the Central level, and at the State level with the approval of officers authorised in this behalf not below the rank of Inspector General of Police, in the following emergent cases:

(1) in remote areas, where obtaining of prior directions for interception or monitoring or decryption of information is not feasible; or

(2) for operational reasons, where obtaining of prior directions for interception or monitoring or decryption of any information generated, transmitted, received or stored in any computer resource is not feasible,

However, in the above circumstances, the officer would have to inform the competent authority in writing within three working days about the emergency and the interception, monitoring or decryption, and obtain the approval of the competent authority within a period of seven working days. If the approval of the competent authority is not obtained within the said period of seven working days, such interception or monitoring or decryption shall cease, and the information shall not be intercepted, monitored or decrypted thereafter without the prior approval of the competent authority.[125] If a State wishes to intercept information beyond its jurisdiction, it must request the Secretary in the Ministry of Home Affairs to issue the direction.[126]
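
The emergency timeline above can be illustrated with a small sketch. The Rules do not define "working day", so Monday to Friday is assumed here, and the function names are hypothetical:

```python
from datetime import date, timedelta

# Illustrative sketch of the emergency-interception timeline in the IT
# Interception Rules: the officer must inform the competent authority in
# writing within three working days and obtain its approval within seven
# working days, failing which interception must cease. "Working day" is
# approximated here as Monday-Friday; the Rules do not define the term.
def add_working_days(start: date, n: int) -> date:
    day = start
    while n > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return day

def emergency_deadlines(interception_start: date):
    return {
        "inform_by": add_working_days(interception_start, 3),
        "approval_by": add_working_days(interception_start, 7),
    }

d = emergency_deadlines(date(2014, 6, 2))  # 2 June 2014 was a Monday
```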

In order to avoid the risk of unauthorised interception, the IT Interception Rules provide for the following safeguards:

  • If authorised by the competent authority, any agency of the government may intercept, monitor, or decrypt information transmitted, received, or stored in any computer resource only for the purposes specified in section 69(1) of the IT Act.[127]
  • The IT Interception Rules further provide that the competent authority may give any decryption direction to the decryption key holder.[128]
  • The officer issuing an order for interception is required to issue requests in writing to designated nodal officers of the service provider.[129]
  • Any direction issued by the competent authority must contain the reasons for the direction, and a copy must be forwarded to the Review Committee within seven working days of being issued.[130]
  • When issuing or approving an interception order, the competent authority must, in arriving at its decision, consider alternative means of acquiring the information.[131]
  • The order must relate to information sent or likely to be sent from one or more particular computer resources to another (or many) computer resources.[132]
  • The reasons for ordering interceptions must be recorded in writing, and must specify the name and designation of the officer to whom the information obtained is to be disclosed, and also specify the uses to which the information is to be put.[133]
  • Directions for interception remain in force for a period of 60 days, unless renewed; even if renewed, they cannot remain in force for a total of more than 180 days.[134]
  • Authorized agencies are prohibited from using or disclosing contents of intercepted communications for any purpose other than investigation, but they are permitted to share the contents with other security agencies for the purpose of investigation or in judicial proceedings. Furthermore, security agencies at the union territory and state level will share any information obtained by following interception orders with any security agency at the centre.[135]
  • All records, including electronic records pertaining to interception are to be destroyed by the government agency “every six months, except in cases where such information is required or likely to be required for functional purposes”.[136]
  • The contents of intercepted, monitored, or decrypted information will not be used or disclosed by any agency, competent authority, or nodal officer for any purpose other than its intended purpose.[137]
  • The agency authorised by the Secretary of Home Affairs is required to appoint a nodal officer (not below the rank of superintendent of police or equivalent) to authenticate and send directions to service providers or decryption key holders.[138]
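
The 60-day/180-day validity rule in the safeguards above can be sketched as a simple check. The function name, and the reading that each renewal adds a further 60 days capped at 180 days in total, are illustrative assumptions:

```python
from datetime import date, timedelta

# Illustrative check of the validity rule in the IT Interception Rules:
# a direction remains in force for 60 days unless renewed, and even with
# renewals cannot remain in force beyond 180 days in total. The renewal
# model (each renewal adds 60 days, capped at 180) is an assumed reading.
def direction_in_force(issued: date, today: date, renewals: int) -> bool:
    days_elapsed = (today - issued).days
    allowed = min(60 * (1 + renewals), 180)
    return days_elapsed < allowed

issued = date(2014, 1, 1)
direction_in_force(issued, issued + timedelta(days=59), renewals=0)   # still in force
direction_in_force(issued, issued + timedelta(days=61), renewals=0)   # lapsed without renewal
direction_in_force(issued, issued + timedelta(days=181), renewals=5)  # beyond the 180-day cap
```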

The IT Interception Rules also place the following obligations on the service providers:

  • All records pertaining to directions for interception and monitoring are to be destroyed by the service provider within two months of the discontinuance of the interception or monitoring, unless they are required for an ongoing investigation or legal proceedings.[139]
  • Upon receiving an order for interception, service providers are required to provide all facilities, co-operation, and assistance for interception, monitoring, and decryption. This includes assisting with: the installation of the authorised agency's equipment, the maintenance, testing, or use of such equipment, the removal of such equipment, and any action required for accessing stored information under the direction.[140]
  • Additionally, decryption key holders are required to disclose the decryption key and provide assistance in decrypting information for authorized agencies.[141]
  • Every fifteen days, the officers designated by the intermediaries are required to forward to the nodal officer a list of the interception orders received by them. The list must include details such as the reference and date of the orders of the competent authority.[142]
  • The service provider is required to put in place adequate internal checks to ensure that unauthorised interception does not take place, and to ensure the extreme secrecy of intercepted information is maintained.[143]
  • The contents of intercepted communications are not allowed to be disclosed or used by any person other than the intended recipient.[144]
  • As part of these internal checks, the service provider must also ensure that the interception and all related information are handled only by its designated officers.[145]

Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009

The Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, under section 69B of the Information Technology Act, stipulate that directions for the monitoring and collection of traffic data or information can be issued by an order made by the competent authority[146] for any or all of the following purposes related to cyber security:

  • forecasting of imminent cyber incidents;
  • monitoring network application with traffic data or information on computer resource;
  • identification and determination of viruses or computer contaminant;
  • tracking cyber security breaches or cyber security incidents;
  • tracking computer resource breaching cyber security or spreading virus or computer contaminants;
  • identifying or tracking any person who has breached, or is suspected of having breached or likely to breach cyber security;
  • undertaking forensic of the concerned computer resource as a part of investigation or internal audit of information security practices in the computer resources;
  • accessing stored information for enforcement of any provisions of the laws relating to cyber security for the time being in force;
  • any other matter relating to cyber security.[147]

According to these Rules, any direction issued by the competent authority should contain the reasons for such direction, and a copy of the direction should be forwarded to the Review Committee within a period of seven working days.[148] Furthermore, the Rules state that the Review Committee shall meet at least once in two months and record its findings on whether the issued directions are in accordance with the provisions of sub-section (3) of section 69B of the Act. If the Review Committee is of the opinion that the directions are not in accordance with those provisions, it may set aside the directions and issue an order for the destruction of the copies, including the corresponding electronic records, of the monitored or collected traffic data or information.[149]

Information Technology (Guidelines for Cyber Cafes) Rules, 2011

The Information Technology (Guidelines for Cyber Cafes) Rules, 2011, were issued under the powers granted by section 87(2), read with section 79(2), of the Information Technology Act, 2000.[150] These rules require cyber cafes in India to store and maintain backup logs of each login by any user, to retain such records for a year, and to ensure that the logs are not tampered with. Rule 7 requires the inspection of cyber cafes to determine that the information provided during registration is accurate and remains updated.

5. The Indian Post Office Act, 1898

Section 26 of the Indian Post Office Act, 1898, empowers the Central Government and the State Governments to intercept postal articles.[151] In particular, section 26 of the Indian Post Office Act, 1898, states that on the occurrence of any public emergency or in the interest of public safety or tranquility, the Central Government, State Government or any officer specially authorised by the Central or State Government may direct the interception, detention or disposal of any postal article, class or description of postal articles in the course of transmission by post. Furthermore, section 26 states that if any doubt arises regarding the existence of public emergency, public safety or tranquility then a certificate to that effect by the Central Government or a State Government would be considered as conclusive proof of such condition being satisfied.

According to this section, the Central Government and the State Governments of India can intercept postal articles in the event of a 'public emergency' or in the interest of 'public safety or tranquility'. However, the Indian Post Office Act, 1898, does not cover electronic communications, the interception of which is governed by the Information Technology Act, 2000 and the Indian Telegraph Act, 1885.

6. The Indian Wireless Telegraphy Act, 1933

The Indian Wireless Telegraphy Act was passed to regulate and govern the possession of wireless telegraphy equipment within the territory of India. The Act essentially provides that no person may possess “wireless telegraphy apparatus”[152] except under a licence issued under the Act, and that the equipment must be used in accordance with the terms of that licence.[153]

One of the major sources of revenue for the Indian State Broadcasting Service was the licence fee for the working of wireless apparatus under the Indian Telegraph Act, 1885. The Service was losing revenue because, in the absence of legislation for prosecuting persons using unlicensed wireless apparatus, it was difficult to trace them in the first place and then prove that the instrument had been installed, worked and maintained without a licence. The legislation was therefore proposed in order to prohibit the possession of wireless telegraphy apparatus without a licence.

Presently, the Act is used to prosecute cases related to the illegal possession of, and transmission via, satellite phones. Any person who wishes to use a satellite phone for communication purposes has to obtain a licence from the Department of Telecommunications.[154]

7. The Code of Criminal Procedure
Section 91 of the Code of Criminal Procedure regulates targeted surveillance. In particular, section 91 states that a Court in India or any officer in charge of a police station may summon a person to produce any document or any other thing that is necessary for the purposes of any investigation, inquiry, trial or other proceeding under the Code of Criminal Procedure.[155] Under section 91, law enforcement agencies in India could theoretically access stored data. Additionally, section 92 of the Code of Criminal Procedure regulates the interception of a document, parcel or thing in the possession of a postal or telegraph authority.

Further, section 356(1) of the Code of Criminal Procedure provides that in certain cases the Courts have the power to direct a repeat offender, convicted under certain provisions, to notify his residence and any change of, or absence from, such residence after release, for a term not exceeding five years from the date of the expiration of the sentence.

Policy Suggestions

In order to avoid different standards being adopted for different aspects of surveillance and in different parts of the country, there should be a single policy document, or surveillance and interception manual, containing the rules and regulations governing all kinds of surveillance. This would not only help in identifying problems in the law but could also streamline the entire surveillance regime. This is, however, easier said than done and would require a mammoth effort at the legislative stage. Under the constitutional scheme of India, law and order is a State subject and the police machinery in every State is under the authority of the State government. A single legislation dealing with all aspects of surveillance is therefore not possible, since the States are independent in their powers over the police machinery.

Even on the narrower issue of interception, certain State legislations, especially those dealing with organized crime and bootlegging, such as the Maharashtra Control of Organized Crime Act, 1999, and the Andhra Pradesh Control of Organized Crime Act, 2001, contain provisions empowering the State government to intercept communications for the purpose of investigating or preventing criminal activities. Further, even the two central legislations that deal with interception, viz. the Telegraph Act and the Information Technology Act, specifically empower the State governments to intercept communications on the same grounds as the Central Government. Since interception of communications is mostly undertaken by security and law enforcement agencies, broadly for the maintenance of law and order, State governments cannot be prevented from issuing their own legislations to deal with interception.

Given these legal and constitutional complexities, the major problem in achieving harmonization is getting the Central and State governments onto the same page. Even if the Central government amends the Telegraph Act and the IT Act to bring them in line with each other, the State governments will remain free to legislate as they please. The best approach to achieving harmonization may therefore be a two-pronged strategy: (i) issue a National Surveillance Policy covering both interception and general surveillance; and (ii) amend the central legislations, i.e. the Telegraph Act and the Information Technology Act, in accordance with the National Surveillance Policy. Once a National Surveillance Policy based on scientific data and the latest theories of criminology is issued, it is hoped that State governments will themselves choose to adopt the principles enshrined therein and amend their own legislations dealing with interception to fall in line with it.


[1] Section 6(2)(b) of the Model Police Manual.

[2] Section 191 (D) of the Model Police Manual.

[3] Section 200 (D) of the Model Police Manual.

[4] Section 201 (I) of the Model Police Manual.

[5] Section 201 (II) of the Model Police Manual.

[6] Section 201 (IV) of the Model Police Manual.

[7] Section 193 (III) of the Model Police Manual.

[8] Surjan Das & Basudeb Chattopadhyay, Rural Crime in Police Perception: A Study of Village Crime Note Books, 26(3) Economic and Political Weekly 129, 129 (1991).

[9] Section 201 (III) of the Model Police Manual.

[10] Section 201 (V) of the Model Police Manual.

[11] Section 201 (VII) of the Model Police Manual.

[12] Section 356(1) of the Criminal Procedure Code states as follows:

356. Order for notifying address of previously convicted offender.

(1) When any person, having been convicted by a Court in India of an offence punishable under section 215, section 489A, section 489B, section 489C or section 489D of the Indian Penal Code, (45 of 1860 ) or of any offence punishable under Chapter XII or Chapter XVII of that Code, with imprisonment for a term of three years or upwards, is again convicted of any offence punishable under any of those sections or Chapters with imprisonment for a term of three years or upwards by any Court other than that of a Magistrate of the second class, such Court may, if it thinks fit, at the time of passing a sentence of imprisonment on such person, also order that his residence and any change of, or absence from, such residence after release be notified as hereinafter provided for a term not exceeding five years from the date of the expiration of such sentence.

[13] The Indian Telegraph Act, 1885, http://www.ijlt.in/pdffiles/Indian-Telegraph-Act-1885.pdf

[14] Privacy International, Report: “India”, Chapter 3: “Surveillance Policies”, https://www.privacyinternational.org/reports/india/iii-surveillance-policies

[15] Rule 419A(1), Indian Telegraph Rules, 1951.

[16] Rule 419A(1), Indian Telegraph Rules, 1951.

[17] Rule 419A(2), Indian Telegraph Rules, 1951.

[18] Rule 419A(3), Indian Telegraph Rules, 1951.

[19] Rule 419A(4), Indian Telegraph Rules, 1951.

[20] Rule 419A(5), Indian Telegraph Rules, 1951.

[21] Rule 419A(6), Indian Telegraph Rules, 1951.

[22] Rule 419A(7), Indian Telegraph Rules, 1951.

[23] Rule 419A(8), Indian Telegraph Rules, 1951.

[24] Rule 419A(9), Indian Telegraph Rules, 1951.

[25] Rule 419A(18), Indian Telegraph Rules, 1951.

[26] Ibid.

[27] Rule 419A(10), Indian Telegraph Rules, 1951.

[28] Rule 419A(11), Indian Telegraph Rules, 1951.

[29] Rule 419A(12), Indian Telegraph Rules, 1951.

[30] Rule 419A(13), Indian Telegraph Rules, 1951.

[31] Rule 419A(14), Indian Telegraph Rules, 1951.

[32] Rule 419A(15), Indian Telegraph Rules, 1951.

[33] Rule 419A(19), Indian Telegraph Rules, 1951.

[34] Ibid.

[35] Ibid.

[36] Section 46 of the Unlawful Activities (Prevention) Act, 1967. The Act has certain additional safeguards, such as not allowing intercepted information to be disclosed or received in evidence unless the accused has been provided with a copy of the same at least 10 days in advance, unless the period of 10 days is specifically waived by the judge.

[37] State-owned Public Sector Undertakings (PSUs) (Mahanagar Telephone Nigam Limited (MTNL) and Bharat Sanchar Nigam Limited (BSNL)) were issued licenses for provision of CMTS as the third operator in various parts of the country. Further, 17 fresh licenses were issued to private companies as the fourth cellular operator in September/October 2001, one each in 4 Metro cities and 13 Telecom Circles.

[38] Section 45.2 of the CMTS License.

[39] Section 41.09 of the CMTS License.

[40] Section 41.09 of the CMTS License.

[41] Section 44.4 of the CMTS License. Similar provision exists in section 44.11 of the CMTS License.

[42] Section 34.28 (xix) of the ISP License.

[43] Section 34.12 of the ISP License.

[44] Section 34.13 of the ISP License.

[45] Section 34.22 of the ISP License.

[46] Section 34.6 of the ISP License.

[47] Section 34.15 of the ISP License.

[48] Section 34.28 (xiv) of the ISP License.

[49] Section 34.28 (xi) of the ISP License.

[50] Section 34.14 of the ISP License.

[51] Section 34.28 (ix)&(x) of the ISP License.

[52] Section 30.1 of the ISP License.

[53] Section 33.4 of the ISP License.

[54] Section 34.4 of the ISP License.

[55] Section 34.7 of the ISP License.

[56] Section 34.9 of the ISP License.

[57] Section 34.27 (a)(i) of the ISP License.

[58] Section 34.27(a)(ii-vi) of the ISP License.

[59] Section 32.1, 32.2 (i)(ii), 32.3 of the ISP License.

[60] Section 34.8 of the ISP License.

[61] Section 34.18 of the ISP License.

[62] Section 34.28 (xv) of the ISP License.

[63] Section 41.10 of the ISP License.

[64] Section 41.10 of the ISP License.

[65] Section 41.19(i) of the ISP License.

[66] Section 41.19(ii) of the ISP License.

[67] Section 41.19(iv) of the ISP License.

[68] Section 39.1 of the UASL. Similar provision is contained in section 41.4, 41.12 of the UASL.

[69] Section 39.3 of the UASL.

[70] Section 39.2 of the UASL.

[71] Section 23.2 of the UASL. Similar provisions are contained in section 41.7 of the UASL regarding provision of monitoring equipment for monitoring in the “interest of security”.

[72] Section 42.2 of the UASL.

[73] Section 41.20(xx) of the UASL.

[74] Section 41.10 of the UASL.

[75] Section 41.10 of the UASL.

[76] Section 41.14 of the UASL.

[77] Section 41.16 of the UASL.

[78] Section 41.20(ix) of the UASL.

[79] Section 41.20(ix) of the UASL.

[80] Section 41.19(ii) of the UASL.

[81] Section 41.20(xii) of the UASL.

[82] Section 41.20(xiii) of the UASL.

[83] Section 41.20(xiv) of the UASL.

[84] Section 41.20 (xix) of the UASL.

[85] Section 41.20(xvi) of the UASL.

[86] The different services covered by the Unified License are:

a. Unified License (All Services)

b. Access Service (Service Area-wise)

c. Internet Service (Category-A with All India jurisdiction)

d. Internet Service (Category-B with jurisdiction in a Service Area)

e. Internet Service (Category-C with jurisdiction in a Secondary Switching Area)

f. National Long Distance (NLD) Service

g. International Long Distance (ILD) Service

h. Global Mobile Personal Communication by Satellite (GMPCS) Service

i. Public Mobile Radio Trunking Service (PMRTS) Service

j. Very Small Aperture Terminal (VSAT) Closed User Group (CUG) Service

k. INSAT MSS-Reporting (MSS-R) Service

l. Resale of International private Leased Circuit (IPLC) Service

Authorisation for Unified License (All Services) would however cover all services listed at para 2(ii) (b) in all service areas, 2 (ii) (c), 2(ii) (f) to 2(ii) (l) above.

[87] Chapter IV, Para 23.2 of the UL.

[88] Chapter VI, Para 40.2 of the UL.

[89] Chapter V, Para 37.1 of the UL. Similar provision is contained in Chapter VI, Para 39.4.

[90] Chapter V, Para 37.5 of the UL.

[91] Chapter V, Para 37.3 of the UL.

[92] Chapter V, Para 37.2 of the UL.

[93] Chapter VI, Para 39.23(xii) of the UL.

[94] Chapter VI, Para 39.23 (xiii) of the UL.

[95] Chapter VI, Para 39.23 (xiv) of the UL.

[96] Chapter VI, Para 39.23 (xix) of the UL.

[97] Chapter VIII, Para 8.3 of the UL.

[98] Chapter VIII, Para 8.4 of the UL.

[99] Chapter VIII, Para 8.5 of the UL.

[100] Chapter IX, Paras 7.1 to 7.3 of the UL. Further obligations have also been imposed on the Licensee to ensure that its ILL customers maintain the usage of IP addresses/Network Address Translation (NAT) syslog, in case of multiple users on the same ILL, for a minimum period of one year.

[101] Chapter IX, Paras 8.1 to 8.3 of the UL.

[102] Chapter IX, Paras 8.4 and 8.5 of the UL.

[103] Chapter X, Para 5.2 of the UL.

[104] Chapter XI, Para 6.3 of the UL.

[105] Chapter XI, Para 6.4 of the UL.

[106] Chapter XI, Para 6.6 of the UL.

[107] Chapter XI, Para 6.7 of the UL.

[108] Chapter XII, Para 7.4 of the UL.

[109] Chapter XII, Para 7.5 of the UL.

[110] Chapter XII, Para 7.6 of the UL.

[111] Chapter XII, Para 7.7 of the UL.

[112] Chapter XII, Para 7.8 of the UL.

[113] Chapter XIII, Para 7.1 of the UL.

[114] Chapter XIV, Para 8.1 of the UL.

[115] Chapter XIV, Para 8.2 of the UL.

[116] Chapter XV, Para 8.1 of the UL.

[117] Chapter XV, Para 8.5 of the UL.

[118] Chapter XVI, Paras 4.1 - 4.4 of the UL.

[119] Section 69 of the Information Technology Act, 2000.

[120] Ibid.

[121] Section 69B of the Information Technology Act, 2000.

[122] Section 32 of the ISP License.

[123] Rule 3, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[124] Rule 2(d), Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[125] Rule 3, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[126] Rule 6, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[127] Rule 4, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[128] Rule 5, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[129] Rule 13, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[130] Rule 7, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[131] Rule 8, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[132] Rule 9, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[133] Rule 10, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[134] Rule 11, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[135] Rule 25(2)&(6), Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[136] Rule 23, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[137] Rule 25, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[138] Rule 12, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[139] Rule 23(2), Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[140] Rule 19, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[141] Rule 17, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[142] Rule 18, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[143] Rule 20& 21, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[144] Rule 25, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[145] Rule 20, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[146] Rule 3(1) of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

[147] Rule 3(2) of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

[148] Rule 3(3) of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

[149] Rule 7 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

[150] Introduction to the Information Technology (Guidelines for Cyber Cafe) Rules, 2011.

[151] The Indian Post Office Act, 1898, http://www.indiapost.gov.in/Pdf/Manuals/TheIndianPostOfficeAct1898.pdf

[152] The expression “wireless telegraphy apparatus” has been defined as “any apparatus, appliance, instrument or material used or capable of use in wireless communication, and includes any article determined by rule made under Sec. 10 to be wireless telegraphy apparatus, but does not include any such apparatus, appliance, instrument or material commonly used for other electrical purposes, unless it has been specially designed or adapted for wireless communication or forms part of some apparatus, appliance, instrument or material specially so designed or adapted, nor any article determined by rule made under Section 10 not to be wireless telegraphy apparatus;”

[153] Section 4, Wireless Telegraphy Act, 1933.

[154] Snehashish Ghosh, Indian Wireless Telegraphy Act, 1933, http://cis-india.org/telecom/resources/indian-wireless-telegraphy-act.

[155] The Code of Criminal Procedure, 1973, Section 91, http://www.icf.indianrailways.gov.in/uploads/files/CrPC.pdf

Comparison of the Human DNA Profiling Bill 2012 with: CIS recommendations, Sub-Committee Recommendations, Expert Committee Recommendations, and the Human DNA Profiling Bill 2015

by Elonnai Hickok last modified Aug 10, 2015 03:20 AM
This blog is a comparison of: 1. the Human DNA Profiling Bill 2012 vs. the Human DNA Profiling Bill 2015; 2. CIS's main recommendations vs. the 2015 Bill; 3. the Sub-Committee recommendations vs. the 2015 Bill; and 4. the Expert Committee recommendations vs. the 2015 Bill.

In 2013, the Expert Committee to discuss the draft Human DNA Profiling Bill was constituted by the Department of Biotechnology. The Expert Committee constituted a Sub-Committee to modify the draft Bill in the light of invited comments/inputs from the members of the Committee.

These changes were then deliberated upon by the Expert Committee. The Record Notes and Meeting Minutes of the Expert Committee and Sub-Committee can be found here. The Centre for Internet and Society was a member of the Expert Committee and sat on the Sub-Committee. In addition to input in meetings, CIS submitted a number of recommendations to the Committee. The Committee has drafted a 2015 version of the Bill and the same is to be introduced to Parliament.

Below is a comparison of: 1. the 2012 Bill vs. the 2015 Bill; 2. CIS's main recommendations vs. the 2015 Bill; 3. the Sub-Committee recommendations vs. the 2015 Bill; and 4. the Expert Committee recommendations vs. the 2015 Bill.

Introduction

  • CIS Recommendation: Recognition that DNA evidence is not infallible.
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from 2012 Bill

Chapter I : Preliminary

  • CIS Recommendation: Inclusion of an 'Objects Clause' that makes clear that (i) the principles of notice, confidentiality, collection limitation, personal autonomy, purpose limitation and data minimization must be adhered to at all times; (ii) DNA profiles merely estimate the identity of persons; they do not conclusively establish unique identity; (iii) all individuals have a right to privacy that must be continuously weighed against efforts to collect and retain DNA; (iv) centralized databases are inherently dangerous because of the volume of information that is at risk; (v) forensic DNA profiling is intended to have probative value; therefore, if there is any doubt regarding a DNA profile, it should not be received in evidence by a court; (vi) once adduced, the evidence created by a DNA profile is only corroborative and must be treated on par with other biometric evidence such as fingerprint measurements.

  • Sub Committee Recommendation: The Bill will not regulate DNA research. The current draft will only regulate use of DNA for civil and criminal purposes.
  • Expert Committee Recommendation: The Bill will not regulate DNA research. The current draft will only regulate use of DNA for civil and criminal purposes.
  • 2015 Bill: No Change from the 2012 Bill

Chapter II : Definitions

CIS Recommendation:

  • Removal of 2(1)(a) “analytical procedure”
  • Removal of 2(1)(b) “audit”
  • Removal of 2(1)(d) “calibration”
  • Re-drafting of 2(1)(h) “DNA Data Bank”
  • Re-naming of 2(1)(i) “DNA Data Bank Manager” to “National DNA Data Bank Manager”
  • Re-drafting of 2(1)(j) “DNA laboratory”
  • Re-drafting of 2(1)(l) “DNA Profile”
  • Re-drafting of 2(1)(o) “forensic material”
  • Removal of 2(1)(q) “intimate body sample”
  • Removal of 2(1)(v) “non-intimate body sample”
  • Removal of 2(1)(r) “intimate forensic procedure”
  • Removal of 2(1)(w) “non-intimate forensic procedure”
  • Removal of 2(1)(s) “known samples”
  • Re-drafting of 2(1)(y) “offender”
  • Removal of 2(1)(zb) “proficiency testing”
  • Re-drafting of 2(1)(zi) “suspect”
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from the 2012 Bill.

Chapter III : DNA Profiling Board

  • CIS Recommendation:
  1. The board should be made up of no more than five members. The Board must contain at least one ex-Judge or senior lawyer, since the Board will perform the legal function of licensing and must obey the tenets of administrative law. To further multi-stakeholder interests, the Board should have equal representation from civil society – both institutional (e.g. the NHRC and the State Human Rights Commissions) and non-institutional (well-regarded and experienced civil society persons). The Board should also have privacy advocates. CIS also recommended that the functions of the Board be limited to: licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate, and a few other limited functions necessary for a regulatory authority. CIS further recommended a 'duty to consult' with affected or impacted individuals, interested individuals, and the public at large.
  • Sub-Committee Recommendation:
  1. Reduce the DNA Profiling Board (Section 4) from 16 members to 11 members and include civil society representation on the Board.
  2. Include either clause 4(f) or (g) i.e. Chief Forensic Scientist, Directorate of Forensic Science, Ministry of Home Affairs, Government of India - ex-officio Member or Director of a Central Forensic Science Laboratory to be nominated by Ministry of Home Affairs, Government of India- ex-officio Member;
  3. Change clause 4(i) i.e., to replace Chairman, National Bioethics Committee of Department of Biotechnology, Government of India- ex-officio Member with Chairman, National Human Rights Commissions or his nominee.
  4. Delete Members mentioned in clause 4(l) i.e. Two molecular biologists to be nominated by the Secretary, Department of Biotechnology, Ministry of Science and Technology, Government of India- Members;
  5. DPB Members with potential conflict of interest in matters under consideration should recuse themselves in deliberations in respect of such matters (Section 7), and they should be liable to be removed from the Board in case they are found to have not disclosed the nature of such interest.
  6. With regards to the establishment of the DNA Profiling Board (clause 3), the committee clarified that the DNA Board needs to be a body corporate.
  7. The functions of the Board should be redrafted with fewer functions, listed in descending order of priority – namely, regulating the process, regulating the labs, and regulating the databanks.
  • Expert Committee Recommendation:
  1. Accepted sub-committee recommendation to reduce the Board from 16 to 11 members and the detailed changes.
  2. Accepted sub-committee recommendation to include civil society on the Board.
  3. Accepted sub-committee recommendation to reduce the functions of the Board.
  • 2015 Bill:
  1. Addition in 2015 Bill of Section 4 (b) – “Chairman, National Human Rights Commission or his nominee – ex-officio Member” (2015 Bill) Note: This change represents incorporation of CIS's recommendation, sub-committee recommendation, and expert committee recommendation.
  2. Changing of Section 4 (h) from: “Director of a State Forensic Science Laboratory to be nominated by Ministry of Home Affairs, Government of India- ex-officio Member” (2012 Bill) to: “Director-cum-Chief Forensic Scientist, Directorate of Forensic Science Services, Ministry of Home Affairs, Government of India- ex-officio Member” (2015 Bill). Note: This change represents partial incorporation of the sub-committee recommendation and expert committee recommendation.
  3. Changing of Section 4 (j) from: “Director, National Accreditation Board for Testing and Calibration of Laboratories, New Delhi- ex-officio Member” (2012 Bill) to: “Director of a State Forensic Science Lab to be nominated by MHA- ex-officio Member” (2015 Bill)
  4. Addition of section 11(4) and 11(5) “(4) The Board shall, in carrying out its functions and activities, consult with all persons and groups of persons whose rights and related interests may be affected or impacted by any DNA collection, storage, or profiling activity. (5) The Board shall, while considering any matter under its purview, co-opt or include any person, group of persons, or organisation, in its meetings and activities if it is satisfied that that person, group of persons, or organisation, has a substantial interest in the matter and that it is necessary in the public interest to allow such participation.” Note: This change represents partial incorporation of CIS's recommendation and Expert Committee recommendation.

Chapter IV : Approval of DNA Laboratories

  • CIS Recommendation: N/A
  • Sub-Committee Recommendation:
  1. Add in section 16 1(d), the words “including audit reports”
  2. Include in section 16(1)(c) that if a lab does not file its audit report on an annual basis, the lab will lose approval. If the lab loses its approval, all the materials will be shifted to another lab and the data subject will be informed.
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from the 2012 Bill.

Chapter V : Standards, Quality Control and Quality Assurance

  • CIS Recommendation: N/A
  • Sub-Committee Recommendation:
  1. Section 19(2): DNA laboratory to be headed by a person possessing a doctorate in a subject germane to molecular biology.
  2. Clauses 20 and 30 should be merged into Clause 20 to read as:

“(1). The staff of every DNA laboratory shall possess such qualifications and experience commensurate with the job requirements as may be specified by the regulations.

(2). Every DNA laboratory shall employ such qualified technical personnel as may be specified by the regulations and technical personnel shall undergo regular training in DNA related subjects in such institutions and at such intervals as may be specified by the regulations.

(3). Head of every DNA laboratory shall ensure that laboratory personnel keep abreast of developments within the field of DNA and maintain such records on the relevant qualifications, training, skills and experience of the technical personnel employed in the laboratory as may be specified by the regulations.”

Accordingly, change the Title: “Qualification, Recruitment and Training of DNA lab personnel.”

  3. Require DNA labs to have in place an evidence control system (Clause 22). Note: This existed in the 2012 Bill.
  4. Amend Clause 23(1) to read as: “Every DNA laboratory shall possess and shall follow a validation process as may be specified by the regulations.”
  5. Paraphrase Clause 27 as: “Every DNA laboratory shall have audits conducted annually in accordance with the standards as may be specified by the regulations.” It was agreed that the audits of the DNA Laboratory (clause 27) do not need to be external. Note: This existed in the 2012 Bill.
  6. Bring sections 28-31 on infrastructure and training into Chapter V, so that the new title of the chapter reads as “Standards, Quality Control and Quality Assurance Obligations of DNA Laboratory and Infrastructure and Training”.
  • Expert Committee Recommendation: N/A
  • 2015 Bill
  1. Changing of Section 20 (2) from: “Head of every DNA laboratory shall ensure that laboratory personnel keep abreast of developments within the field of DNA and maintain such records on the relevant qualifications, training, skills and experience of the technical personnel employed in the laboratory as may be specified by the regulations made by the Board.” (2012) to: “Every DNA laboratory shall employ such qualified technical personnel as may be specified by the regulations and technical personnel shall undergo regular training in DNA related subjects in such institutions and at such intervals as may be specified by the regulations.” (2015), and addition in the 2015 Bill of Section 20 (3): “Head of every DNA laboratory shall ensure that laboratory personnel keep abreast of developments within the field of DNA profiling and maintain such records on the relevant qualifications, training, skills and experience of the technical personnel employed in the laboratory as may be specified by the regulations.” (2015) Note: This is as per the Sub-Committee's recommendation.
  2. Amending of Clause 23(1) to read as: “Every DNA laboratory shall possess and shall follow a validation process as may be specified by the regulations.” Note: This is as per the Sub-Committee's recommendation.
  3. Changing of section 30 from: “Every DNA laboratory shall employ such qualified technical personnel as may be specified by the regulations made by the Board and technical personnel shall undergo regular training in DNA related subjects in such institutions and at such intervals as may be specified by the regulations made by the Board.” (2012) to: “Every DNA laboratory shall have installed appropriate security system and system for safety of personnel as may be specified by the regulations.” (2015)
  4. Sections 28-31 on infrastructure and training have been brought into Chapter V, and the new title of the chapter reads as “Standards, Quality Control and Quality Assurance Obligations of DNA Laboratory and Infrastructure and Training”. Note: This is as per the Sub-Committee's recommendation.

Chapter VI : DNA Data Bank

  • CIS Recommendation:
  1. Removal of section 32(6), which requires the names of individuals to be connected to their profiles; CIS recommended that DNA profiles, once developed, should be anonymized and retained separately from the names of their owners.
  2. Section 34(2) to be limited to containing only an offenders' index and a crime scene index
  3. Removal of section 36, which allows for international disclosures of DNA profiles of Indians.
  • Sub-Committee Recommendation:
  1. Amend Clause 32(1) to read as: “The Central Government shall, by notification, establish a National DNA Data Bank”.
  2. Anonymize the volunteer's database.
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from 2012 Bill.

Chapter VII : Confidentiality of and access to DNA profiles, samples, and records

  • CIS Recommendation:
  1. Re-drafting section 39 and 40 to specify that DNA can only be used for forensic purposes and specify the manner in which DNA profiles may be received in evidence.
  2. Removal of section 40
  3. Removal of section 43
  4. Re-draft section 45, as it sets out a post-conviction right related to criminal procedure and evidence. This would fundamentally alter the nature of India’s criminal justice system, which currently does not contain specific provisions for post-conviction testing rights. However, courts may re-try cases in certain narrow circumstances when fresh evidence is brought forth that has a nexus to the evidence upon which the person was convicted, and if it can be proved that the fresh evidence was not earlier adduced due to bias. Any other fresh evidence that may be uncovered cannot prompt a new trial. Clause 45 is implicated by Article 20(2) of the Constitution of India and by section 300 of the CrPC. The principle of autrefois acquit that informs section 300 of the CrPC specifically deals with exceptions to the rule against double jeopardy that permit re-trials. [See, for instance, Sangeeta Mahendrabhai Patel (2012) 7 SCC 721.]
  • Sub-Committee Recommendation:
  1. Amend Clause 40(f) to read as: “-------to the concerned parties to the said civil dispute or civil matter, with the concurrence of the court and to the concerned judicial officer or authority”. Note: Incorporated, but now located at section 39.
  2. Include in Chapter VIII additional sections. Clause 42A: “A person whose DNA profile has been created shall be given a copy of the DNA profile upon request”. Clause 42B: “A person whose DNA profile has been created and stored shall be given information as to who has accessed his DNA profile or DNA information”.
  • Expert Committee: N/A
  • 2015 Bill:
  1. Addition of the phrase “with the concurrence of the court” in section 39, thus the new clause reads as: “-------to the concerned parties to the said civil dispute or civil matter, with the concurrence of the court and to the concerned judicial officer or authority”. Note: This is as per the recommendations of the Sub-Committee.

Chapter VIII : Finance, Accounts, and Audit

  • CIS Recommendation: N/A
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from the 2012 Bill

Chapter IX : Offences and Penalties

  • CIS Recommendation:
  1. The law prohibits the delegation of “essential legislative functions” [In re Delhi Laws, 1951]. The creation of criminal offences must be conducted by a statute that is enacted by Parliament, and when offences are created via delegated legislation, such as Rules, the quantum of punishment must be pre-set by the parent statute.
  2. Since the listing of offences for DNA profiling will directly affect the fundamental right of personal liberty, the identification of these offences should be subject to a democratic process of the legislature rather than be determined by the whims of the executive.
  • Sub-Committee Recommendation:
  1. Ensure a minimum jail term of one month for any unauthorized access to DNA Data Banks under the Act (chapter 10(53)). Note: This already existed in the 2012 Bill.
  2. Add to Section 56 the phrase “… or otherwise willfully neglects any other duty cast upon him under the provisions of this Act, shall be punishable …”.
  • Expert Committee: N/A
  • 2015 Bill: No change from 2012 Bill
Chapter X : Miscellaneous

  • CIS Recommendation: N/A
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from 2012 Bill

Schedule

  • CIS Recommendation

The creation of a list of offences upon arrest for which DNA samples may lawfully be collected from the arrested person without his consent, including:

  1. Any offence under the Indian Penal Code, 1860 if it is listed as a cognizable offence in Part I of the First Schedule of the Code of Criminal Procedure, 1973; [Alternatively, all cognizable offences under the Indian Penal Code may be listed here]
  2. Every offence punishable under the Immoral Traffic (Prevention) Act, 1956;
  3. Any cognizable offence under the Indian Penal Code, 1860 that is committed by a registered medical practitioner and is not saved under section 3 of the Medical Termination of Pregnancy Act, 1971; [Note that the MTP Act does not itself create or list any offences; it only saves doctors from prosecution for IPC offences if certain conditions are met]
  4. Every offence punishable under the Pre-conception and Pre-natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994;
  5. The offence listed under sub-section (1) of section 31 of the Protection of Women from Domestic Violence Act, 2005;
  6. Every offence punishable under the Protection of Civil Rights Act, 1955;
  7. Every offence punishable under the Scheduled Castes and the Scheduled Tribes (Prevention of Atrocities) Act, 1989.
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: Incorporation of CIS's recommendation to the schedule regarding instances of when DNA samples can be collected without consent.
  • 2015 Bill:
  1. Addition in 2015 of “Part II: List of specified offences - Any offence under the Indian Penal Code, 1860 if it is listed as a cognizable offence in Part I of the First Schedule of the Code of Criminal Procedure, 1973” (2015). Note: This represents partial incorporation of CIS's recommendation.
  2. Expansion of the sources of samples for DNA profiling from “(1) Scene of occurrence or crime (2) Tissue and skeleton remains (3) Clothing and other objects (4) Already preserved body fluids and other samples” (2012) to “1. Scene of occurrence, or scene of crime 2. Tissue and skeleton remains 3. Clothing and other objects 4. Already preserved body fluids and other samples 5. Medical Examination 6. Autopsy examination 7. Exhumation” (2015), and deletion of “Manner of collection of samples for DNA: (1) Medical Examination (2) Autopsy examination (3) Exhumation” (2012).

CIS submission to the UNGA WSIS+10 Review

by Jyoti Panday last modified Aug 09, 2015 04:24 PM
The Centre for Internet & Society (CIS) submitted its comments to the non-paper on the UNGA Overall Review of the Implementation of the WSIS outcomes, evaluating the progress made and challenges ahead.

To what extent has progress been made on the vision of the people-centred, inclusive and development-oriented Information Society in the ten years since the WSIS?
The World Summit on the Information Society (WSIS) in 2003 and 2005 played an important role in encapsulating the potential of knowledge and information and communication technologies (ICT) to contribute to economic and social development. Over the past ten years, most countries have sought to foster the use of information and knowledge by creating enabling environments for innovation and through efforts to increase access. There have been interventions to develop ICT for development at both international and national levels through private sector investment, bilateral treaties and national strategies.

However, much of the progress made in the past ten years in getting people connected and reaping the benefits of ICT has been neither sufficiently people-centred nor sufficiently inclusive.

These developments have not been sufficiently people-centred, since governments across the world have been using the Internet as a monumental surveillance tool, invading people’s privacy without legitimate justifications, in an arbitrary manner without due care for reasonableness, proportionality, or democratic accountability. These developments have not been sufficiently people-centred, since the largest and most profitable Internet businesses — businesses that have more users than most nation-states have citizens, yet have one-sided terms of service — have eschewed core principles like open standards and interoperability that helped create the Internet and the World Wide Web, and instead promote silos.

We still reside in a world where development has been very lopsided, and ICTs have contributed to reducing some of these gulfs, while exacerbating others. For instance, persons with visual impairment are largely yet to reap the benefits of the Information Society due to a lack of attention paid to universal design, while sighted persons have benefited far more; the ability of persons who don’t speak a language like English to contribute to global Internet governance discussions is severely limited; and the spread of academic knowledge largely remains behind prohibitive paywalls.

As ICTs have grown both in sophistication and reach, much work remains to achieve the people-centred, inclusive and development-oriented information society envisaged in WSIS. While the diffusion of ICTs has created new opportunities for development, even today less than half the world has access to broadband (with only eleven per cent of the world’s population having access to fixed broadband). See International Telecommunication Union, ICT Facts and Figures: The World in 2015.

Ninety per cent of people connected come from the industrialized countries — North America (thirty per cent), Europe (thirty per cent) and the Asia-Pacific (thirty per cent). Four billion people from developing countries remain offline, representing two-thirds of the population residing in developing countries. Of the nine hundred and forty million people residing in Least Developed Countries (LDCs), only eighty-nine million use the Internet and only seven per cent of households have Internet access, compared with the world average of forty-six per cent. See International Telecommunication Union, ICT Facts and Figures: The World in 2015. This digital divide is first and foremost a question of access to basic infrastructure (like electricity).

Furthermore, there is a problem of affordability, all the more acute in countries of the South than in those of the North owing to the high costs of connectivity. Linguistic, educational, cultural and content-related barriers also contribute to this digital divide, and the growth of restrictive regimes around intellectual property threatens the vision of an equal and connected society. The security of critical infrastructure in light of ever-growing vulnerabilities, the loss of trust following revelations around mass surveillance, and a lack of consensus on how to tackle these concerns are proving to be a challenge to the vision of a connected information society. The WSIS+10 overall review is timely and a much-needed intervention in assessing the progress made and planning for the challenges ahead.

Two bodies emerged as major outcomes of the WSIS process: the Internet Governance Forum and the Digital Solidarity Fund, both of which have largely failed to achieve their intended goals. The Internet Governance Forum, which is meant to be a leading example of “multi-stakeholder governance”, is also a leading example of what the Multi-stakeholder Advisory Group (MAG) noted in 2010 as a “‘black box’ approach”, with the entire process around the nomination and selection of the MAG being opaque. Indeed, when CIS requested the IGF Secretariat to share information on the nominators, we were told that this information would not be made public. Five years since the MAG lamented its own black-box nature, things have scarcely improved. Further, analysis of MAG membership since 2006 shows that 26 persons have served for 6 years or more, with the majority of them being from government, industry, or the technical community. Unsurprisingly, 36 per cent of the MAG membership has come from the WEOG group, highlighting both deficiencies in the nomination/selection process and the need for capacity building in this most important area. The Digital Solidarity Fund failed for a variety of reasons, which we have analysed in a separate document annexed to this response.

What are the challenges to the implementation of WSIS outcomes?

Some of the key areas that need attention going forward and need to be addressed include:

Access to Infrastructure

  • Developing policies aimed at promoting innovation and increasing affordable access to hardware and software, and curbing the ill effects of the currently excessive patent and copyright regimes.
  • Focussing global energies on solutions to last-mile access to the Internet in a manner that is not decoupled from developmental ground realities.
  • This would include policies on spectrum sharing, freeing up underutilized spectrum, and increasing unlicensed spectrum.
  • This would also include governmental policies on increasing competition among Internet providers at the last mile as well as at the backbone (both nationally and internationally), as well as commitments for investments in basic infrastructure such as an open-access national fibre-optic backbone where private sector investment is not sufficient.
  • Developing policies that encourage local Internet and communications infrastructure in the form of Internet exchange points, data centres, community broadcasting.

Access to Knowledge

  • As the Washington Declaration on IP and the Public Interest points out, the enclosure of the public domain and knowledge commons through expansive “intellectual property” laws and policies has only gotten worse with digital technologies, leading to an unjust allocation of information goods and continuing royalty outflows from the global South to a handful of developed countries. This is not sustainable, and urgent action is needed to achieve more democratic IP laws and to prevent extra-judicial enforcement mechanisms, such as digital restrictions management systems, from being incorporated within Web standards.
  • Aggressive development of policies and adoption of best practices to ensure that persons with disabilities are not treated as secondgrade citizens, but are able to fully and equally participate in and benefit from the Information Society.
  • Despite the rise of video content on the Internet, much of that has been in parts of the world with already high literacy, and language and illiteracy continue to pose barriers to full usage of the Internet.
  • While the Tunis Agenda highlighted the need to address communities marginalized in Information Society discourse, including youth, older persons, women, indigenous peoples, people with disabilities, and remote and rural communities, not much progress has been seen on this front.

Rights, Trust, and Governance

  • Ensuring effective and sustainable participation especially from developing countries and marginalised communities. Developing governance mechanisms that are accountable, transparent and provide checks against both unaccountable commercial interests as well as governments.
  • Building citizen trust through legitimate, accountable and transparent governance mechanisms.
  • Ensuring cooperation between states on security, which is influenced by global foreign policy, is of principal importance to citizens and consumers and an enabler of other rights.
  • As the Manila Principles on Intermediary Liability show, uninformed intermediary liability policies, blunt and heavy-handed regulatory measures that fail to meet the principles of necessity and proportionality, and a lack of consistency across these policies have resulted in censorship and other human rights abuses by governments and private parties, limiting individuals’ rights to free expression and creating an environment of uncertainty that also impedes innovation online. In developing, adopting, and reviewing legislation, policies and practices that govern the liability of intermediaries, interoperable and harmonized regimes that can promote innovation while respecting users’ rights in line with the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the United Nations Guiding Principles on Business and Human Rights are needed and should be encouraged.
  • An important challenge before the Information Society is that of the rise of “quantified society”, where enormous amounts of data are generated constantly, leading to great possibilities and grave concerns regarding privacy and data protection.
  • Reducing tensions arising from the differences between cultural and digital nationalism including on issues such as data sovereignty, data localisation, unfair trade and the need to have open markets.
  • Currently, there is a lack of internationally recognized venues accessible to all stakeholders for not only discussing but also acting upon many of these issues.

What should be the priorities in seeking to achieve WSIS outcomes and progress towards the Information Society, taking into account emerging trends?
All the challenges mentioned above should be priorities in achieving WSIS outcomes and in ensuring that innovation leads social and economic progress in society. Digital literacy, multilingualism and addressing privacy and user data related issues need urgent attention in the global agenda. Enabling increased citizen participation, thus accounting for the diverse voices that make the Internet a unique medium, should also be treated as a priority. Renewing the IGF mandate and giving it teeth by adopting indicators for development and progress, periodic review and working towards tangible outcomes would be beneficial to achieving the goal of a connected information society.

What are general expectations from the WSIS + 10 High Level Meeting of the United Nations General Assembly?
We would expect the WSIS+10 High Level Meeting to endorse an outcome document that seeks to develop a comprehensive policy framework addressing the challenges highlighted above. It would also be beneficial if the outcome document could identify further steps to assess the development made so far, and actions for overcoming the identified challenges. Importantly, this should be aimed not only at governments, but at all stakeholders. This would be useful as a future road map for regulation and would also allow us to understand the impact of the Internet on society.

What shape should the outcome document take?
The outcome document should be a resolution of the UN General Assembly, with high level policy statements and adopted agreements to work towards identified indicators. It should stress the urgency of reforms needed for ICT governance that is democratic, respectful of human rights and social justice and promotes participatory policymaking. The language should promote the use of technologies and institutional architectures of governance that ensure users’ rights over data and information and recognize the need to restrict abusive use of technologies including those used for mass surveillance. Further, the outcome document should underscore the relevance of the Universal Declaration of Human Rights, including civil, political, social, economic, and cultural rights, in the Information Society.

The outcome document should also acknowledge that certain issues such as security, ensuring transnational rights, taxation, and other such cross jurisdictional issues may need greater international cooperation and should include concrete steps on how to proceed on these issues. The outcome document should acknowledge the limited progress made through outcome-less multi-stakeholder governance processes such as the Internet Governance Forum, which favour status quoism, and seek to enable the IGF to be more bold in achieving its original goals, which are still relevant. It should be frank in its acknowledgement of the lack of consensus on issues such as “enhanced cooperation” and the “respective roles” of stakeholders in multi-stakeholder processes, as brushing these difficulties under the carpet won’t help in magically building consensus. Further, the outcome document should recognize that there are varied approaches to multi-stakeholder governance.

A Review of the Policy Debate around Big Data and Internet of Things

by Elonnai Hickok last modified Aug 17, 2015 08:36 AM
This blog post seeks to review and understand how regulators and experts across jurisdictions are reacting to Big Data and Internet of Things (IoT) from a policy perspective.

Defining and Connecting Big Data and Internet of Things

The Internet of Things is a term that refers to networked objects and systems that can connect to the internet and can transmit and receive data. Characteristics of IoT include the gathering of information through sensors, the automation of functions, and the analysis of collected data.[1] Because of the velocity at which IoT devices generate data, the volume of data generated, and the variety of sources generating it,[2] IoT devices can be understood as generating Big Data and/or relying on Big Data analytics. In this way, IoT devices and Big Data are intrinsically interconnected.

General Implications of Big Data and Internet of Things

Big Data paradigms are being adopted across countries, governments, and business sectors because of the potential insights and change they can bring. From improving an organization's business model and facilitating urban development to allowing for targeted and individualized services and enabling the prediction of certain events or actions, the application of Big Data has been recognized as having the potential to bring about dramatic and large-scale changes.

At the same time, experts have identified risks to the individual that can be associated with the generation, analysis, and use of Big Data. In May 2014, the White House of the United States completed a ninety-day study of how big data will change everyday life. The report highlights the potential of Big Data while identifying a number of associated concerns, for example: the selling of personal data; identification or re-identification of individuals; profiling of individuals; creation and exacerbation of information asymmetries; unfair, discriminatory, biased, and incorrect decisions based on Big Data analytics; and a lack of, or misinformed, user consent.[3] Errors in Big Data analytics that experts have identified include statistical fallacies, human bias, translation errors, and data errors.[4] Experts have also discussed fundamental changes that Big Data can bring about. For example, Danah Boyd and Kate Crawford, in the article "Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon", propose that Big Data can change the definition of knowledge and shape the reality it measures.[5] Similarly, a BCS/Oxford Internet Institute conference report titled "The Societal Impact of the Internet of Things" points out that users of Big Data often assume that information and conclusions based on digital data are reliable, and in turn replace other forms of information with digital data.[6]

Concerns that have been voiced by the Article 29 Working Party and others specifically about IoT devices have included insufficient security features built into devices such as encryption, the reliance of the devices on wireless communications, data loss from infection by malware or hacking, unauthorized access and use of personal data, function creep resulting from multiple IoT devices being used together, and unlawful surveillance.[7]

Regulation of Big Data and Internet of Things

The regulation of Big Data and IoT is currently being debated in contexts such as the US and the EU. Academics, civil society, and regulators are exploring whether present regulatory and oversight frameworks are adequate to address the changes brought about by Big Data, and if not, what forms of, or changes in, regulation are needed. For example, Kate Crawford and Jason Schultz, in the article "Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms", stress the importance of bringing in 'data due process rights', i.e. ensuring fairness in the analytics of Big Data and in how personal information is used.[8] Solon Barocas and Andrew Selbst, in the article "Big Data's Disparate Impact", explore whether present anti-discrimination legislation and jurisprudence in the US is adequate to protect against discrimination arising from Big Data practices, specifically data mining.[9]

The Impact of Big Data and IoT on Data Protection Principles

In the context of data protection, various government bodies, including the Article 29 Data Protection Working Party set up under the Directive 95/46/EC of the European Parliament, the Council of Europe, the European Commission, and the Federal Trade Commission, as well as experts and academics in the field, have called out at least ten different data protection principles and concepts that Big Data impacts:

  1. Collection Limitation: As a result of the generation of Big Data as enabled by networked devices, increased capabilities to analyze Big Data, and the prevalent use of networked systems - the principle of collection limitation is changing.[10]
  2. Consent: As a result of the use of data from a wide variety of sources and the re-use of data which is inherent in Big Data practices - notions of informed consent (initial and secondary) are changing.[11]
  3. Data Minimization: As a result of Big Data practices inherently utilizing all data possible - the principle of data minimization is changing/obsolete.[12]
  4. Notice: As a result of Big Data practices relying on vast amounts of data from numerous sources and the re-use of that data - the principle of notice is changing.[13]
  5. Purpose Limitation: As a result of Big Data practices re-using data for multiple purposes - the principle of purpose limitation is changing/obsolete.[14]
  6. Necessity: As a result of Big Data practices re-using data, the new use or re-analysis of data may not be pertinent to the purpose that was initially specified- thus the principle of necessity is changing.[15]
  7. Access and Correction: As a result of Big Data being generated (and sometimes published) at scale and in real time - the principle of user access and correction is changing.[16]
  8. Opt In and Opt Out Choices: Particularly in the context of smart cities and IoT which collect data on a real time basis, often without the knowledge of the individual, and for the provision of a service - it may not be easy or possible for individuals to opt in or out of the collection of their data.[17]
  9. Personal Information (PI): As a result of Big Data analytics using and analyzing a wide variety of data, new or unexpected forms of personal data may be generated - thus challenging and evolving beyond traditional or specified definitions of personal information.[18]
  10. Data Controller: In the context of IoT, given the multitude of actors that can collect, use and process data generated by networked devices, the traditional understanding of what and who is a data controller is changing.[19]

Possible Technical and Policy Solutions

A report by the Federal Trade Commission in the United States titled "Internet of Things: Privacy & Security in a Connected World" notes that though IoT changes the application and understanding of certain privacy principles, it does not necessarily make them obsolete.[20] Indeed, many of the possible solutions that have been suggested to address the challenges posed by IoT and Big Data are technical interventions at the device level rather than fundamental policy changes. For example, it has been proposed that IoT devices can be programmed to:

  • Automatically delete data after a specified period of time [21] (addressing concerns of data retention)
  • Ensure that personal data is not fed into centralized databases on an automatic basis [22] (addressing concerns of transfer and sharing without consent, function creep, and data breach)
  • Offer consumers combined choices for consent, rather than requiring a one-time blanket consent when a service is initiated or taking fresh consent for every change that takes place while a consumer is using a service. [23] (addressing concerns of informed and meaningful consent)
  • Categorize and tag data with accepted uses and programme automated processes to flag when data is misused. [24] (addressing concerns of misuse of data)
  • Apply 'sticky policies' - policies that are attached to data and define appropriate uses of the data as it 'changes hands' [25] (addressing concerns of user control of data)
  • Allow for features to only be turned on with consent from the user [26] (addressing concerns of informed consent and collection without the consent or knowledge of the user)
  • Automatically convert raw personal data to aggregated data [27] (addressing concerns of misuse of personal data and function creep)
  • Offer users the option to delete or turn off sensors [28] (addressing concerns of user choice, control, and consent)
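Several of the device-level interventions above, such as automatic deletion after a retention period and tagging data with accepted uses so that misuse can be flagged, can be pictured in a minimal sketch. This is only an illustration under stated assumptions; the class and method names (`TaggedRecord`, `DeviceStore`, `purge_expired`) are hypothetical and not drawn from any actual IoT platform:

```python
import time

# Hypothetical sketch: each record carries a retention deadline and a set
# of accepted uses, so a device can purge stale data and flag misuse.
class TaggedRecord:
    def __init__(self, value, accepted_uses, retention_seconds, now=None):
        self.value = value
        self.accepted_uses = set(accepted_uses)
        self.expires_at = (now if now is not None else time.time()) + retention_seconds

    def expired(self, now=None):
        # True once the retention window has elapsed.
        return (now if now is not None else time.time()) >= self.expires_at

class DeviceStore:
    def __init__(self):
        self.records = []

    def add(self, record):
        self.records.append(record)

    def purge_expired(self, now=None):
        # Automatic deletion of data after the specified retention period.
        self.records = [r for r in self.records if not r.expired(now)]

    def use(self, record, purpose):
        # Flag (here: refuse) any use outside the tagged, accepted purposes.
        if purpose not in record.accepted_uses:
            raise PermissionError(f"use '{purpose}' not permitted for this record")
        return record.value
```

For example, a heart-rate reading tagged for a "fitness" purpose would be returned by `use(record, "fitness")` but refused for "advertising", and `purge_expired` would drop it once its retention window had passed.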

Such solutions place the designers and manufacturers of IoT devices in a critical role. Yet some, such as Kate Crawford and Jason Schultz, are not entirely optimistic about the possibility of effective technological solutions, noting in the context of automated decision making that it is difficult to build in privacy protections because it is unclear when an algorithm will predict personal information about an individual.[29]

Experts have also suggested that more emphasis should be placed on the principles and practices of:

  • Transparency
  • Access and correction
  • Use/misuse
  • Breach notification
  • Remedy
  • Ability to withdraw consent

Others have recommended that certain privacy principles need to be adapted to the Big Data/IoT context. For example, the Article 29 Working Party has clarified that in the context of IoT, consent mechanisms need to include the types of data collected, the frequency of data collection, and the conditions for data collection.[30] The Federal Trade Commission, meanwhile, has warned that adopting a pure "use"-based model has its limitations, as it requires a clear (and potentially changing) definition of which uses are acceptable and does not address concerns around the collection of sensitive personal information.[31] In addition, the European Commission has stressed that the right of deletion, the right to be forgotten, and data portability also need to be foundations of IoT systems and devices.[32]
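The Working Party's point about richer consent mechanisms can be pictured as a small data structure. The field names below are illustrative assumptions, not taken from any regulation or standard; they simply capture the three elements the Working Party highlights (types of data, frequency, conditions):

```python
from dataclasses import dataclass, field

# Illustrative sketch of a consent record for an IoT device, capturing the
# elements the Article 29 Working Party highlights: the types of data
# collected, how often they are collected, and the conditions attached.
@dataclass
class ConsentRecord:
    data_types: list                                  # e.g. ["heart_rate", "location"]
    frequency: str                                    # e.g. "every 5 minutes"
    conditions: list = field(default_factory=list)    # e.g. ["no third-party sharing"]
    granted: bool = False

    def covers(self, data_type):
        # A collection event is permitted only if consent was granted and
        # the data type was disclosed when consent was sought.
        return self.granted and data_type in self.data_types
```

Under this sketch, a device that started collecting a data type not listed in the record (say, location when only heart rate was disclosed) would fail the `covers` check, making the gap between consent and collection explicit.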

Possible Regulatory Frameworks

To the question of whether current regulatory frameworks are adequate or additional legislation is needed, the FTC has recommended that though specific IoT legislation may not be necessary, a horizontal privacy legislation would be useful, as sectoral legislation does not always account for the use, sharing, and reuse of data across sectors. The FTC also highlighted the usefulness of privacy impact assessments and self-regulatory steps to ensure privacy.[33] The European Commission, on the other hand, has concluded that hard legal instruments are necessary to ensure the enforcement of any standard or protocol.[34]

As mentioned earlier, Kate Crawford and Jason Schultz have argued that privacy regulation needs to move away from principles on collection, specific use, disclosure, notice, etc. and focus on elements of due process around the use of Big Data, what they call "procedural data due process". Such due process should be based on values instead of defined procedures and should include, at a minimum, notice, a hearing before an independent arbitrator, and the right to review. Crawford and Schultz more broadly note that there are conceptual differences between privacy law and big data that pose serious challenges: privacy law is based on causality, while big data is a tool of correlation. This difference raises questions about how effective regulation that identifies certain types of information and then seeks to control the use, collection, and disclosure of such information will be in the context of Big Data, something that is varied and dynamic. According to Crawford and Schultz, many regulatory frameworks will struggle with this difference, including the FTC's Fair Information Privacy Principles and EU regulation, including the EU's right to be forgotten.[35]

The European Data Protection Supervisor, for its part, looks at Big Data as spanning the policy areas of data protection, competition, and consumer protection, particularly in the context of 'free' services.
The Supervisor argues that these three areas need to come together to develop ways in which the challenges of Big Data can be addressed. For example, remedy could take the form of data portability, ensuring users the ability to move their data to other service providers, empowering individuals and promoting competitive market structures, or of adopting a 'compare and forget' approach to the retention of customer data. The Supervisor also stresses the need to promote and treat privacy as a competitive advantage, thus placing importance on consumer choice, consent, and transparency.[36] The European Data Protection reform has been under discussion and is predicted to be enacted by the end of 2015. The reform will apply across European states and to all companies operating in Europe; it proposes heavier penalties for data breaches and seeks to provide users with more control over their data.[37] Additionally, Europe is considering bringing digital platforms under the Network and Information Security Directive, thus treating companies like Google and Facebook, as well as cloud providers and service providers, as a critical sector. Such a move would require companies to adopt stronger security practices and report breaches to authorities.[38]

Conclusion

A review of the different opinions and reactions from experts and policy makers demonstrates the ways in which Big Data and IoT are changing the traditional forms of protection that governments and societies have developed for personal data as it increases in value and importance. Some policy makers believe that Big Data needs strong legislative regulation, while others believe that softer forms of regulation, such as self- or co-regulation, are more appropriate. What is clear is that Big Data is creating a regulatory dilemma: some policy makers are searching for ways to control its unpredictable nature through policy and technology – by merging policy areas, honing existing policy mechanisms, or broadening them – while others are ignoring the change that Big Data brings with it and are forging ahead with its use.

Answering the 'how do we regulate Big Data?' question requires a re-conceptualization of data ownership and realities. Governments need to first recognize the criticality of their data and the data of their citizens and residents, as well as the role this data plays in a country's economy and security. With the technologies available now, and in the pipeline, data can be used or misused in ways that will have vast repercussions for individuals, society, and a nation. All data, but especially data directly or indirectly related to the citizens and residents of a country, needs to be looked upon as owned by the citizens and the nation. In this way, data should be seen as part of a nation's critical national infrastructure, and accorded the security, protections, and legal backing that come with that status, to prevent the misuse of the resource by the private or public sector or by local or foreign governments. This could allow for local data warehousing and bring the physical and access security of data warehouses on par with other critical national infrastructure. Recognizing data as a critical resource answers in part the concern that experts have raised – that Big Data practices make it impossible for data to be categorized as personal, and thus afforded specified forms of protection, due to the unpredictable nature of Big Data. Instead, all data is now recognized as critical.

In addition to making it possible to generate personal data from anonymized or non-identifiable data, Big Data also challenges traditional divisions of public vs. private data. Indeed, Big Data analytics can take many public data points and derive a private conclusion. The use of Big Data analytics on public data also raises questions of consent. For example, though a license plate is public information – should a company be allowed to harvest license plate numbers, combine this with location, and sell this information to different interested actors? This is currently happening in the United States.[39] Lastly, Big Data raises questions of ownership. A solution to the uncertainty of public vs. private data and the associated questions of consent and ownership could be the creation of a National Data Archive for such data. The archive could function with representation from the government, public and private companies, and civil society on its board. In such a framework, for example, companies like Airtel would provide mobile services, but the CDRs and customer data collected by the company would belong to the National Data Archive and be available to Airtel and all other companies within a certain scope for use. This 'open data' approach could enable innovation through the use of data, but within the ambit of national security and the concerns of citizens – a framework that could instill trust in consumers and citizens. Only when backed with strong security requirements, enforcement mechanisms, and a proactive, responsive and responsible framework can governments begin to think about ways in which Big Data can be harnessed.
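The claim that many public data points can yield a private conclusion can be sketched with a small, entirely hypothetical example: each individual sighting of a license plate is public, but aggregating sightings can reveal a likely home location. All data, names, and logic below are invented for illustration and do not come from the sources cited.

```python
# Hypothetical sketch: each sighting of a (public) license plate is a
# public data point, but aggregating many sightings yields a private
# inference -- here, a likely home location from night-time sightings.
from collections import Counter

# Invented data: (plate, hour_of_day, location)
sightings = [
    ("KA01AB1234", 23, "Indiranagar"),
    ("KA01AB1234", 2,  "Indiranagar"),
    ("KA01AB1234", 14, "MG Road"),
    ("KA01AB1234", 1,  "Indiranagar"),
    ("KA01AB1234", 13, "Koramangala"),
]

def likely_home(records, plate, night=(22, 6)):
    """Most frequent location where the plate is seen at night."""
    start, end = night
    night_locs = Counter(
        loc for p, hour, loc in records
        if p == plate and (hour >= start or hour < end)
    )
    most = night_locs.most_common(1)
    return most[0][0] if most else None

print(likely_home(sightings, "KA01AB1234"))  # -> Indiranagar
```

Five anonymous, individually innocuous observations are enough here to derive a private fact about an identified vehicle owner, which is the regulatory gap the paragraph above describes.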


[1] BCS - The Chartered Institute for IT. (2013). The Societal Impact of the Internet of Things. Retrieved May 17, 2015, from http://www.bcs.org/upload/pdf/societal-impact-report-feb13.pdf

[2] Sicular, S. (2013, March 27). Gartner’s Big Data Definition Consists of Three Parts, Not to Be Confused with Three “V”s. Retrieved May 20, 2015, from http://www.forbes.com/sites/gartnergroup/2013/03/27/gartners-big-data-definition-consists-of-three-parts-not-to-be-confused-with-three-vs/

[3] Executive Office of the President. “Big Data: Seizing Opportunities, Preserving Values”. May 2014. Available at: https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_5.1.14_final_print.pdf. Accessed: July 2nd 2015.

[4] Moses, B., Lyria, & Chan, J. (2014). Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools (SSRN Scholarly Paper No. ID 2513564). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2513564

[5] Danah Boyd, Kate Crawford. CRITICAL QUESTIONS FOR BIG DATA. Information, Communication & Society Vol. 15, Iss. 5, 2012. Available at: http://www.tandfonline.com/doi/full/10.1080/1369118X.2012.678878. Accessed: July 2nd 2015.

[6]  The Chartered Institute for IT, Oxford Internet Institute, University of Oxford. “The Societal Impact of the Internet of Things” February 2013. Available at: http://www.bcs.org/upload/pdf/societal-impact-report-feb13.pdf. Accessed: July 2nd 2015.

[7] ARTICLE 29 Data Protection Working Party. (2014). Opinion 8/2014 on the Recent Developments on the Internet of Things. European Commission. Retrieved May 20, 2015, from http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf

[8] Crawford, K., & Schultz, J. (2013). Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms (SSRN Scholarly Paper No. ID 2325784). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2325784

[9] Barocas, S., & Selbst, A. D. (2015). Big Data’s Disparate Impact (SSRN Scholarly Paper No. ID 2477899). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2477899

[10] Barocas, S., & Selbst, A. D. (2015). Big Data’s Disparate Impact (SSRN Scholarly Paper No. ID 2477899). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2477899

[11] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[12] Tene, O., & Polonetsky, J. (2013). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11(5), 239.

[13]  Omer Tene and Jules Polonetsky, Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013).

[14] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[15] Information Commissioner's Office. (2014). Big Data and Data Protection. Information Commissioner's Office. Retrieved May 20, 2015, from https://ico.org.uk/media/for-organisations/documents/1541/big-data-and-data-protection.pdf

[16] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[17] The Chartered Institute for IT and Oxford Internet Institute, University of Oxford. “The Societal Impact of the Internet of Things”. February 14th 2013. Available at: http://www.bcs.org/upload/pdf/societal-impact-report-feb13.pdf. Accessed: July 2nd 2015.

[18] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”. Boston College Law Review, Volume 55, Issue 1, Article 4. January 1st 2014. Available at: http://lawdigitalcommons.bc.edu/cgi/viewcontent.cgi?article=3351&context=bclr. Accessed: July 2nd 2015.

[19] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[20] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[21] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[22] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[23] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[24] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[25] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[26] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[27] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[28] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[29] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”. Boston College Law Review, Volume 55, Issue 1, Article 4. January 1st 2014. Available at: http://lawdigitalcommons.bc.edu/cgi/viewcontent.cgi?article=3351&context=bclr. Accessed: July 2nd 2015.

[30] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[31] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[32] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[33] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[34] Article 29 Data Protection Working Party. “Opinion 8/2014 on the Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[35] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”. Boston College Law Review, Volume 55, Issue 1, Article 4. January 1st 2014. Available at: http://lawdigitalcommons.bc.edu/cgi/viewcontent.cgi?article=3351&context=bclr. Accessed: July 2nd 2015.

[36] European Data Protection Supervisor. Preliminary Opinion of the European Data Protection Supervisor, Privacy and competitiveness in the age of big data: the interplay between data protection, competition law and consumer protection in the Digital Economy. March 2014. Available at: https://secure.edps.europa.eu/EDPSWEB/webdav/site/mySite/shared/Documents/Consultation/Opinions/2014/14-03-26_competitition_law_big_data_EN.pdf

[37] SC Magazine. Harmonised EU data protection and fines by the end of the year. June 25th 2015. Available at: http://www.scmagazineuk.com/harmonised-eu-data-protection-and-fines-by-the-end-of-the-year/article/422740/. Accessed: August 8th 2015.

[38] Tom Jowitt, “Digital Platforms to be Included in EU Cybersecurity Law”. TechWeek Europe. August 7th 2015. Available at: http://www.techweekeurope.co.uk/e-regulation/digital-platforms-eu-cybersecuity-law-174415

[39] Adam Tanner. Data Brokers are now Selling Your Car's Location for $10 Online. July 10th 2013. Available at: http://www.forbes.com/sites/adamtanner/2013/07/10/data-broker-offers-new-service-showing-where-they-have-spotted-your-car/

Big Data and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011

by Elonnai Hickok last modified Aug 11, 2015 07:01 AM
Experts and regulators across jurisdictions are examining the impact of Big Data practices on traditional data protection standards and principles. This will be a useful and pertinent exercise for India to undertake as the government and the private and public sectors begin to incorporate and rely on the use of Big Data in decision making processes and organizational operations. This blog provides an initial evaluation of how Big Data could impact India's current data protection standards.

Experts and regulators across the globe are examining the impact of Big Data practices on traditional data protection standards and principles. This will be a useful and pertinent exercise for India to undertake as the government and the private and public sectors begin to incorporate and rely on the use of Big Data in decision making processes and organizational operations.

Below is an initial evaluation of how Big Data could impact India's current data protection standards.

India currently does not have comprehensive privacy legislation - but the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, framed under section 43A of the Information Technology Act, 2000,[1] define a data protection framework for the processing of digital data by Body Corporate. Big Data practices will impact a number of the provisions found in the Rules:

Scope of Rules: Currently the Rules apply to Body Corporate and digital data. As per the IT Act, Body Corporate is defined as "Any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities."

The present scope of the Rules excludes from its purview a number of actors that do or could have access to Big Data or use Big Data practices. The Rules would not apply to government bodies or individuals collecting and using Big Data. Yet, with technologies such as IoT and the rise of Smart Cities across India – a range of government, public, and private organizations and actors could have access to Big Data.

Definition of personal and sensitive personal data: Rule 2(i) defines personal information as "information that relates to a natural person which either directly or indirectly, in combination with other information available or likely to be available with a body corporate, is capable of identifying such person."

Rule 3 defines sensitive personal information as:

  • Password,
  • Financial information,
  • Physical/physiological/mental health condition,
  • Sexual orientation,
  • Medical records and history,
  • Biometric information

The present definition of personal data hinges on the factor of identification (data that is capable of identifying a person). Yet this definition does not encompass information that is associated with an already identified individual - such as habits, location, or activity.

The definition of personal data also addresses only the identification of 'such person' and does not address data that is related to a particular person but that also reveals identifying information about another person - either directly - or when combined with other data points.

By listing specific categories of sensitive personal information, the Rules do not account for additional types of sensitive personal information that might be generated or correlated through the use of Big Data analytics.

Importantly, the definitions of sensitive personal information or personal information do not address how personal or sensitive personal information - when anonymized or aggregated – should be treated.
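The re-identification concern behind this gap can be sketched with a small, entirely hypothetical example: even after names are stripped, the remaining quasi-identifiers (here PIN code, birth year, and gender) can be joined against a second, identified dataset to restore identities. All data, field names, and the `reidentify` function below are invented for illustration.

```python
# Hypothetical illustration: "anonymized" records re-identified by
# joining on quasi-identifiers against a separate, identified dataset.

# "Anonymized" health records: names removed, quasi-identifiers kept.
anonymized_health = [
    {"pin_code": "560001", "birth_year": 1980, "gender": "F", "diagnosis": "diabetes"},
    {"pin_code": "560034", "birth_year": 1992, "gender": "M", "diagnosis": "asthma"},
]

# A public, identified list carrying the same quasi-identifiers.
public_roll = [
    {"name": "A. Sharma", "pin_code": "560001", "birth_year": 1980, "gender": "F"},
    {"name": "R. Iyer",   "pin_code": "560034", "birth_year": 1992, "gender": "M"},
]

QUASI_IDENTIFIERS = ("pin_code", "birth_year", "gender")

def reidentify(anon_rows, identified_rows):
    """Join the two datasets on quasi-identifiers alone."""
    matches = []
    for anon in anon_rows:
        key = tuple(anon[q] for q in QUASI_IDENTIFIERS)
        for person in identified_rows:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append({"name": person["name"], "diagnosis": anon["diagnosis"]})
    return matches

print(reidentify(anonymized_health, public_roll))
# Each unique quasi-identifier combination links a name back to a diagnosis.
```

A definition of personal data that excludes "anonymized" records implicitly assumes such joins are not possible; as the sketch shows, whenever a quasi-identifier combination is unique, they are.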

Consent: Rule 5(1) requires that Body Corporate must, prior to collection, obtain consent in writing through letter or fax or email from the provider of sensitive personal data regarding the use of that data.

In a context where services are delivered with little or no human interaction, data is collected through sensors, data is collected on a real time and regular basis, and data is used and re-used for multiple and differing purposes - it is not practical, and often not possible, for consent to be obtained through writing, letter, fax, or email for each instance of data collection and for each use.

Notice of Collection: Rule 5(3) requires Body Corporate to provide the individual with a notice during collection of information that details the fact that information is being collected, the purpose for which the information is being collected, the intended recipients of the information, the name and address of the agency that is collecting the information, and the agency that will retain the information. Furthermore, body corporate should not retain information for longer than is required to meet lawful purposes.

Though this provision acts as an important element of transparency, in the context of Big Data the purpose for which data is collected, the intended recipients of the information, and the name and address of the agencies collecting and retaining the information could prove difficult to communicate, as they are likely to encompass numerous agencies and to change depending upon the analysis being done.

Access and correction: Rule 5(6) provides individuals with the ability to access sensitive personal information held by the body corporate and correct any inaccurate information.

This provision would be difficult to implement effectively in the context of Big Data as vast amounts of data are being generated and collected on an ongoing and real time basis and often without the knowledge of the individual.

Purpose Limitation: Rule 5(5) requires that body corporate use information only for the purpose for which it has been collected.

In the context of Big Data this provision would overlook the re-use of data that is inherent in such practices.

Security: Rule 8 states that any Body Corporate or person on its behalf will be understood to have complied with reasonable security practices and procedures if they have implemented such practices and have in place codes that address managerial, technical, operational and physical security control measures. These codes could follow the IS/ISO/IEC 27001 standard or another government approved and audited standard.

This provision importantly requires that data controllers collecting and processing data have strong security practices in place. In the context of Big Data – where the security of the devices that might be generating or collecting data, and of the algorithms processing and analysing it, is critical – it might be challenging, once data is generated, to ensure that it is transferred to and analysed only by organisations that comply with such security practices.

Data Breach: Rule 8 requires that if a data breach occurs, a Body Corporate must be able to demonstrate that it has implemented its documented information security codes.

Though this provision holds a company accountable for the implementation of security practices, it does not address how a company should be held accountable for a large scale data breach - and in the context of Big Data, the scope and impact of a data breach is on a much larger scale.

Opt in and out and ability to withdraw consent: Rule 5(7) requires that Body Corporate or any person on its behalf, prior to the collection of information - including sensitive personal information - give the individual the option of not providing information and the option of withdrawing consent. Such withdrawal must be sent in writing to the body corporate.

The feasibility of such a provision in the context of Big Data is unclear, especially in light of the fact that Big Data practices draw upon large amounts of data, generated often in real time, and from a variety of sources.

Disclosure of Information: Rule 6 maintains that disclosure of sensitive personal data can only take place with permission from the provider of such information or as agreed to through a lawful contract.

This provision addresses disclosure and does not take into account the “sharing” of information that is enabled through networked devices, as well as the increasing practice of companies to share anonymized or aggregated data.

Privacy Policy: Rule 4 requires that body corporate have in place a privacy policy on their website that provides clear and accessible statements of their practices and policies, the type of personal or sensitive personal information being collected, the purpose of the collection, the usage and disclosure of the information, and the reasonable security practices and procedures that have been put in place to secure the information.

In the context of Big Data where data from a variety of sources is being collected, used, and re-used it is important for policies to 'follow data' and appear in a contextualized manner. The current requirement of having Body Corporate post a single overarching privacy policy on its website could prove to be inadequate.

Remedy: Section 43A of the Act holds that if a body corporate is negligent in implementing and maintaining reasonable security practices and procedures, resulting in wrongful loss or wrongful gain to any person, the body corporate can be held liable to pay compensation to the affected person.

This provision will provide limited remedy for an affected individual in the context of Big Data. Though important to help prevent data breaches resulting from negligent data practices, implementation of reasonable security practices and procedures cannot be the only hinge for determining the liability of a Body Corporate, and many of the harms possible through Big Data do not take the form of wrongful loss or wrongful gain to another person. Indeed, many such harms are non-economic in nature – including physical invasions of privacy and discriminatory practices that can arise from decisions based on Big Data analytics. Nor does the provision address the potential for future damage that can result from a 'Big Data' data breach.

The safeguards noted in the above section are not the only legal provisions that speak to privacy in India. There are over fifty pieces of sectoral legislation with provisions addressing privacy - for example, provisions addressing the confidentiality of health and banking information. The government of India is also in the process of drafting privacy legislation. In 2012 the Report of the Group of Experts on Privacy provided recommendations for a privacy framework in India. The Report envisioned a framework of co-regulation - with sector-level self-regulatory organizations developing privacy codes (that are not lower than the defined national privacy principles) that are enforced by a privacy commissioner.[2] Perhaps this method would be optimal for the regulation of Big Data - allowing for the needed flexibility and specificity in standards and device development. Though the Report notes that individuals can seek remedy from the courts and that the Privacy Commissioner can issue fines for a violation, the development of privacy legislation in India has yet to clearly integrate the importance of due process and remedy. With the onset of Big Data - this will become more important than ever.

Conclusion

The use and generation of Big Data in India is growing. Plans such as free wifi zones in cities[3], city-wide CCTV networks with facial recognition capabilities[4], and the implementation of an identity/authentication platform for public and private services[5] are indicators of a move towards data generation that is networked and centralized, and in which the line between public and private is blurred by the vast amount of data that is collected.

In such developments and innovations what is privacy and what role does privacy play? Is it the archaic inhibitor - limiting the sharing and use of data for new and innovative purposes? Will it be defined purely by legislative norms or through device/platform design as well? Is it a notion that makes consumers think twice about using a product or service or is it a practice that enables consumer and citizen uptake and trust and allows for the growth and adoption of these services?

How privacy will be regulated and how it will be perceived is still evolving across jurisdictions, technologies, and cultures - but it is clear that privacy is not being and cannot be overlooked. Governments across the world are reforming and considering current and future privacy regulation targeted towards life in a quantified society. As the Indian government begins to roll out initiatives that create a "Digital India", indeed a "quantified India", taking privacy into consideration could facilitate the uptake, expansion, and success of these practices and services. As the Indian government pursues the opportunities possible through Big Data, it will be useful to review existing privacy protections and deliberate on whether, and in what form, future protections for privacy and other rights will be needed.


[1]Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information Rules 2011). Available at: http://deity.gov.in/sites/upload_files/dit/files/GSR313E_10511(1).pdf

[2]Group of Experts on Privacy. (2012). Report of the Group of Experts on Privacy. New Delhi: Planning Commission, Government of India. Retrieved May 20, 2015, from http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf

[3] NDTV. “Free Public Wi-Fi Facility in Delhi to Have Daily Data Limit. NDTV, May 25th 2015, Available at: http://gadgets.ndtv.com/internet/news/free-public-wi-fi-facility-in-delhi-to-have-daily-data-limit-695857. Accessed: July 2nd 2015.

[4]FindBiometrics Global Identity Management. “Surat Police Get NEC Facial Recognition CCTV System”. July 21st 2015. Available at: http://findbiometrics.com/surat-police-nec-facial-recognition-27214/

[5]UIDAI Official Website. Available at: https://uidai.gov.in/

Right to Privacy in Peril

by Vipul Kharbanda last modified Aug 13, 2015 03:32 PM
It seems to have become quite a fad, especially amongst journalists, to use this headline and claim that the right to privacy which we consider so inherent to our being is under attack. However, when I use this heading in this piece I am not referring to the rampant illegal surveillance being done by the government, or the widely reported recent raids on consenting (unmarried) adults who were staying in hotel rooms in Mumbai. I am talking about the fact that the Supreme Court of India has deemed it fit to refer the question of the very existence of a fundamental right to privacy to a Constitution Bench to finally decide the matter, and to define the contours of such a right if it does exist.

In an order dated August 11, 2015, the Supreme Court finally gave in to the arguments advanced by the Attorney General, admitted that there is some “unresolved contradiction” regarding the existence of a constitutional “right to privacy” under the Indian Constitution, and requested that the matter be placed before a Constitution Bench of appropriate strength.

The Supreme Court was hearing a petition challenging the implementation of the Aadhaar Card Scheme of the government, where one of the grounds to challenge the scheme was that it was violative of the right to privacy guaranteed to all citizens under the Constitution of India. However, to counter this argument, the State (via the Attorney General) challenged the very concept that the Constitution of India guarantees a right to privacy by relying on an “unresolved contradiction” in judicial pronouncements on the issue, which so far had only been of academic interest. This “unresolved contradiction” arose because in the cases of M.P. Sharma & Others v. Satish Chandra & Others,[1] and Kharak Singh v. State of U.P. & Others,[2] (decided by Benches of eight and six judges respectively) the Supreme Court had categorically denied the existence of a right to privacy under the Indian Constitution.

However, somehow, the later case of Gobind v. State of M.P. and another,[3] (which was decided by a two-Judge Bench of the Supreme Court) relied upon the opinion given by the minority of two judges in Kharak Singh to hold that a right to privacy does exist and is guaranteed as a fundamental right under the Constitution of India.[4] Thereafter a large number of cases have held the right to privacy to be a fundamental right, the most important of which are R. Rajagopal & Another v. State of Tamil Nadu & Others,[5] (popularly known as Auto Shanker’s case) and People’s Union for Civil Liberties (PUCL) v. Union of India & Another.[6] However, as was noticed by the Supreme Court in its August 11 order, all these judgments were decided by two or three Judges only.

The petitioners, on the other hand, made a number of arguments to counter the Attorney General, to the effect that the fundamental right to privacy is well established under Indian law and that there is no need to refer the matter to a Constitution Bench. These arguments are:

(i) The observations made in M.P. Sharma regarding the absence of right to privacy are not part of the ratio decidendi of that case and, therefore, do not bind the subsequent smaller Benches such as R. Rajagopal and PUCL.

(ii) Even in Kharak Singh it was held that the right of a person not to be disturbed at his residence by the State is recognized to be a part of a fundamental right guaranteed under Article 21. It was argued that this is nothing but an aspect of privacy. The observation in para 20 of the majority judgment (quoted in footnote 2 above) at best can be construed only to mean that there is no fundamental right of privacy against the State’s authority to keep surveillance on the activities of a person. However, they argued that such a conclusion cannot be good law any more in view of the express declaration made by a seven-Judge bench decision of this Court in Maneka Gandhi v. Union of India & Another.[7]

(iii) Both M.P. Sharma (supra) and Kharak Singh (supra) were decided on an interpretation of the Constitution based on the principles expounded in A.K. Gopalan v. State of Madras,[8] which have themselves been declared wrong by a larger Bench in Rustom Cavasjee Cooper v. Union of India.[9]

Other than the points above, it was also argued that the world over, in all countries following Anglo-Saxon jurisprudence, ‘privacy’ is recognized as an important aspect of the liberty of human beings. The petitioners also submitted that it was too late in the day for the Union of India to argue that the Constitution of India does not recognize privacy as an aspect of liberty under Article 21 of the Constitution of India.

These arguments of the petitioners, however, were not enough to convince the Supreme Court that no doubt remains regarding the existence and contours of the right to privacy in India. The Court, swayed by the arguments presented by the Attorney General, accepted that questions of far-reaching importance for the Constitution were at issue and needed to be decided by a Constitutional Bench.

Giving some insight into its reasoning for referring this issue to a Constitutional Bench, the Court did seem to suggest that the reference to a larger Bench was more an exercise in judicial propriety than an action driven by some genuine contradiction in the law. The Court said that if the observations in M.P. Sharma (supra) and Kharak Singh (supra) were accepted as the law of the land, the fundamental rights guaranteed under the Constitution of India would be “denuded of vigour and vitality”. However, the Court felt that institutional integrity and judicial discipline require smaller Benches of the Court to follow the decisions of larger Benches, unless they have very good reasons for not doing so. Since that does not appear to have happened in this case, the Court referred the matter to a larger Bench to scrutinize the ratio of M.P. Sharma (supra) and Kharak Singh (supra) and decide the judicial correctness of the subsequent two-judge and three-judge Bench decisions that have asserted or referred to the right to privacy.


[1] AIR 1954 SC 300. In para 18 of the Judgment it was held: “A power of search and seizure is in any system of jurisprudence an overriding power of the State for the protection of social security and that power is necessarily regulated by law. When the Constitution makers have thought fit not to subject such regulation to constitutional limitations by recognition of a fundamental right to privacy, analogous to the American Fourth Amendment, we have no justification to import it, into a totally different fundamental right, by some process of strained construction.”

[2] AIR 1963 SC 1295. In para 20 of the judgment it was held: “Nor do we consider that Art. 21 has any relevance in the context as was sought to be suggested by learned counsel for the petitioner. As already pointed out, the right of privacy is not a guaranteed right under our Constitution and therefore the attempt to ascertain the movement of an individual which is merely a manner in which privacy is invaded is not an infringement of a fundamental right guaranteed by Part III.”

[3] (1975) 2 SCC 148.

[4] It is interesting to note that the decisions in both Kharak Singh and Gobind were given on similar facts (challenges to the power of the police to make frequent domiciliary visits, by day and by night, at the house of the petitioner). The majority in Kharak Singh specifically denied the existence of a fundamental right to privacy, yet held the conduct of the police to be violative of the right to personal liberty guaranteed under Article 21, since the Regulations under which the police acted were themselves held invalid. Gobind, on the other hand, held that a fundamental right to privacy does exist in Indian law but may be interfered with by the State through procedure established by law, and therefore upheld the actions of the police since they were acting under validly issued Regulations.

[5] (1994) 6 SCC 632.

[6] (1997) 1 SCC 301.

[7] (1978) 1 SCC 248.

[8] AIR 1950 SC 27.

[9] (1970) 1 SCC 248.

Clearing Misconceptions: What the DoT Panel Report on Net Neutrality Says (and Doesn't)

by Pranesh Prakash last modified Jul 21, 2015 12:36 PM
There have been many misconceptions about what the DoT Panel Report on Net Neutrality says: the most popular ones being that they have recommended higher charges for services like WhatsApp and Viber, and that the report is an anti-Net neutrality report masquerading as a pro-Net neutrality report. Pranesh Prakash clears up these and other incorrect notions about the report in this brief analysis.

Background of the DoT panel

In January 2015, the Department of Telecommunications (DoT) formed a panel to look into "net neutrality from public policy objective, its advantages and limitations," as well as the impact of a "regulated telecom services sector and unregulated content and applications sector". After spending a few months collecting and analysing both oral and written testimony from a number of players in this debate, the panel submitted its report to the DoT on July 16 and released it to the public for comments (till August 15, 2015). At the same time, independently, the Telecom Regulatory Authority of India (TRAI) is also considering the same set of issues. TRAI received more than a million responses to its consultation paper — the most TRAI has ever received on any topic — the vast majority of them thanks in part to the great work of the Save the Internet campaign. TRAI is yet to submit its recommendations to the DoT. Once those recommendations are in, the DoT will have to take its call on how to go ahead with these two sets of issues: regulation of certain Internet-based communications services, and net neutrality.

Summary of the DoT panel report

The DoT panel had the tough job of synthesising the feedback from dozens of people and organizations. In this, they have done an acceptable job, although, in multiple places, the panel has wrongly summarised the opinions of the "civil society" deponents: I was one of the deponents on the day that civil society actors presented their oral submissions, so I know. For instance, the panel report notes in 4.2.9.c that "According to civil society, competing applications like voice OTT services were eroding revenues of the government and the TSPs, creating security and privacy concerns, causing direct as well as indirect losses." I do not recall that being the main thrust of any civil society participant's submission before the panel. That having been said, one might still legitimately claim that none of these or other mistakes (which include errors like "emergency" instead of "emergence", "Tim Burners Lee" instead of "Tim Berners-Lee", etc.) radically alters the report's analysis or recommendations.

The report makes some very important points that are worth noting, which can be broken into two broad headings:

On governmental regulation of OTTs

  1. Internet-based (i.e., over-the-top, or "OTT") communications services (like WhatsApp, Viber, and the like) are currently taking advantage of "regulatory arbitrage": meaning that the regulations that apply to non-IP communications services and IP communications services are different. Under the current "unified licence" regime, WhatsApp, Viber, and other such services don't have to get a licence from the government, don't have to abide by anti-spam Do-Not-Disturb regulations, do not have to share any part of their revenue with the government, do not have to abide by national security terms in the licence, and in general are treated differently from other telecom services. The report wishes to bring these within a licensing regime.
  2. The report distinguishes between Internet-based voice calls (voice over IP, or VoIP) and messaging services, and doesn't wish to interfere with the latter. It also distinguishes between domestic and international VoIP calls, and believes only the former need regulation. It is unclear on what bases these distinctions are made.
  3. OTT "application services" do not need special telecom-oriented regulation.
  4. There should be a separation in regulatory terms between the network layer and the service layer. While this doesn't mean much in the short term for Net neutrality, it will be very important in the long term for ICT regulation, and is very welcome.

On Net neutrality

  1. The core principles of Net neutrality — which are undefined in the report, though definitions proposed in submissions they've received are quoted — should be adhered to. In the long-run, these should find place in a new law, but for the time being they can be enforced through the licence agreement between the DoT and telecom providers.
  2. On the contentious issue of zero-rating, a process that involves both ex-ante and ex-post regulation is envisaged to prevent harmful zero-rating, while allowing beneficial zero-rating. Further, the report notes that the supposed altruistic or "public interest" motives of the zero-rating scheme do not matter if they result in harm to competition, distort consumer markets, violate the core tenets of Net neutrality, or unduly benefit an Internet "gatekeeper".

Where does the DoT panel report go wrong?

  1. The proposal by the DoT panel of a licensing regime for VoIP services is a terrible idea. It would presumptively hold all non-holders of a licence to be unlawful, and that should not be the case. While it is in India's national interest to hold VoIP services to account if they do not follow legitimate regulations, it is far better to do this through ex-post regulations rather than an ex-ante licensing scheme. A licensing scheme would benefit Indian VoIP companies (including services like Hike, which Airtel has invested in) over foreign companies like Viber. The report also doesn't say how one would distinguish between OTT communication services and OTT application services, when many apps, such as food ordering apps, include text chat facilities. Further, VoIP need not be provided by a company: I run my own XMPP servers, a protocol used for both text and video/voice. Will a licensing regime force me to become a licence-holder, or will it set a high bar? The DoT panel report doesn't say. Will there be a revenue-sharing mechanism, as is currently the case under the Unified Licence? If so, how will it be calculated in the case of services like WhatsApp? These questions too find no answer in the report. All in all, this part of the report's analysis is sadly wanting.
  2. Many important terms are left undefined, and many distinctions that the report draws are left unexplained. For instance, it is unclear on what regulatory basis the report distinguishes between domestic and international VoIP calls — which is an unenforceable (not to mention regulatorily unimportant) distinction — or between regulation of messaging services and VoIP services, or what precisely they mean by "application-agnostic" and "application-specific" network management (since different scholars on this issue mean different things when they say "application").

What does the DoT panel report mean for consumers?

  1. Not too much currently, since the DoT panel report is still just a set of recommendations by an expert body based on (invited) public consultations.
  2. Does it uphold Net neutrality? The DoT panel report is clear that it strongly endorses the "core principles of Net neutrality". On the issue of "zero-rating", the panel proposes some sound measures, saying that there should be a two-part mechanism for ensuring that harmful zero-rating doesn't go through: first, telecom services need to submit zero-rating tariff proposals to an expert body constituted by the DoT; and second, consumers will be able to complain about the harmful use of zero-rating by any service provider, which may result in a fine. What constitutes harm or a violation of Net neutrality? The panel suggests that any tariff scheme that may harm competition, distort the consumer market, or violate the core principles of Net neutrality is harmful. This makes sense.

  3. Will it increase the cost of access to WhatsApp and Viber? On the one hand, zero-rating of those services could decrease the cost of access to WhatsApp and Viber, but that might not be allowed if the DoT panel recommendations are accepted, since it would possibly be judged to harm competition and distort consumer markets. The DoT panel has also recommended bringing such services within a licensing framework to bridge the "regulatory arbitrage" that they are able to benefit from (meaning that these services don't have to abide by many regulations that a telecom provider has to follow). Whether this will lead to WhatsApp and similar services charging users depends on what kinds of regulations are placed on them, and whether any costs are imposed on them. If the government takes the approach it took to ISPs in the late 90s (essentially, charging them Re. 1 as the licence fee) and doesn't impose any revenue sharing (as it currently requires of all telecom services), then there needn't be any overly burdensome costs for WhatsApp-like services to pass on to consumers.

What misunderstandings do people have?

  1. There are multiple news reports that the DoT panel has recommended increased charges for domestic VoIP calls, or that ISPs will now be able to double-charge. Both of these are untrue. The DoT panel's recommendations are about "regulatory arbitrage" and licensing, which need not be related to cost.
  2. There is a fear that the exemption from net neutrality of "managed services and enterprise services" is a "loophole", or that exemptions for "emergency services" and "desirable public or government services" are too vague and carry the potential for misuse. If one goes by the examples the panel cites of managed services (e.g., services an ISP provides for a private company separately from the rest of the Internet), these fears seem largely misplaced. We must also realize that the panel report is a report, and not legislation, and that the rationale for wanting exemptions from Net neutrality is clear.
  3. The DoT panel has given the go-ahead for zero-rating. Once again, this is untrue. The panel cites instances of zero-rating that are neither discriminatory nor violative of Net neutrality and that don't harm competition or distort consumer markets (such as zero-rating of all Internet traffic for a limited time period). It then goes on to state that the regulator should not allow zero-rating that violates the core principles of Net neutrality.

What's missing in the Net neutrality debate is nuance. It's become a debate in which you are either for Net neutrality or against it. However, none of the underlying components of Net neutrality — a complex mix of competition policy, innovation policy, the right to freedom of expression, etc. — are absolutes; therefore, it is clear that Net neutrality cannot be an absolute either.

Security: Privacy, Transparency and Technology

by Sunil Abraham last modified Sep 15, 2015 10:53 AM
The Centre for Internet and Society (CIS) has been involved in privacy and data protection research for the last five years. It has participated as a member of the Justice A.P. Shah Committee, which has influenced the draft Privacy Bill being authored by the Department of Personnel and Training. It has organised 11 multistakeholder roundtables across India over the last two years to discuss a shadow Privacy Bill drafted by CIS with the participation of privacy commissioners and data protection authorities from Europe and Canada.

 

The article was co-authored by Sunil Abraham, Elonnai Hickok and Tarun Krishnakumar. It was published by Observer Research Foundation, Digital Debates 2015: CyFy Journal Volume 2.


Some stakeholders considered our centre’s work on privacy incomplete because of a lack of focus on cyber security, and we have therefore initiated research on it from this year onwards. In this article, we undertake a preliminary examination of the theoretical relationships between the national security imperative and privacy, transparency and technology.

Security and Privacy

Daniel J. Solove has identified the tension between security and privacy as a false dichotomy: "Security and privacy often clash, but there need not be a zero-sum tradeoff." [1] Further unpacking this false dichotomy, Bruce Schneier says, "There is no security without privacy. And liberty requires both security and privacy." [2] Effectively, it could be said that privacy is a precondition for security, just as security is a precondition for privacy. A secure information system cannot be designed without guaranteeing the privacy of its authentication factors, and it is not possible to guarantee privacy of authentication factors without having confidence in the security of the system. Often policymakers talk about a balance between the privacy and security imperatives—in other words a zero-sum game. Balancing these imperatives is a foolhardy approach, as it simultaneously undermines both imperatives. Balancing privacy and security should instead be framed as an optimisation problem. Indeed, during a time when oversight mechanisms have failed even in so-called democratic states, the regulatory power of technology [3] should be seen as an increasingly key ingredient to the solution of that optimisation problem.

Data retention is required in most jurisdictions for law enforcement, intelligence and military purposes. Here are three examples of how security and privacy can be optimised when it comes to Internet Service Provider (ISP) or telecom operator logs:

  1. Data Retention: We propose that the office of the Privacy Commissioner generate a cryptographic key pair for each internet user and give one key to the ISP / telecom operator. This key would be used to encrypt logs, thereby preventing unauthorised access. Once there is executive or judicial authorisation, the Privacy Commissioner could hand over the second key to the authorised agency. There could even be an emergency procedure under which the keys are automatically collected by the concerned agencies from the Privacy Commissioner. This would need to be accompanied by a policy that criminalises the possession of unencrypted logs by ISPs and telecom operators.

  2. Privacy-Protective Surveillance: Ann Cavoukian and Khaled El Emam [4] have proposed combining intelligent agents, homomorphic encryption and probabilistic graphical models to provide “a positive-sum, ‘win–win’ alternative to current counter-terrorism surveillance systems.” They propose limiting collection of data to “significant” transactions or events that could be associated with terrorist-related activities, limiting analysis to wholly encrypted data, which then does not just result in “discovering more patterns and relationships without an understanding of their context” but rather “intelligent information—information selectively gathered and placed into an appropriate context to produce actual knowledge.” Since fully homomorphic encryption may be unfeasible in real-world systems, they have proposed use of partially homomorphic encryption. But experts such as Prof. John Mallery from MIT are also working on solutions based on fully homomorphic encryption.

  3. Fishing Expedition Design: Madan Oberoi, Pramod Jagtap, Anupam Joshi, Tim Finin and Lalana Kagal have proposed a standard [5] that could be adopted by authorised agencies, telecom operators and ISPs. Instead of giving authorised agencies complete access to logs, they propose a format for database queries that authorised agencies could send to the telecom operator or ISP. The telecom operator or ISP would then process the query and anonymise/obfuscate the result set in an automated fashion, based on applicable privacy policies/regulations. Authorised agencies would then home in on a subset of the result set that they would like with personal identifiers intact; this smaller result set would then be shared with them.
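The escrow flow in the first proposal (data retention) can be sketched with textbook RSA: the ISP holds only the encryption key and so cannot read its own logs, while the decryption key stays with the Privacy Commissioner until authorisation. This is a toy with tiny primes, purely to illustrate the flow; a real deployment would use a vetted cryptographic library and hybrid encryption.

```python
from math import gcd

# Textbook RSA with tiny primes -- a toy, NOT secure. A real scheme would use
# a vetted library and hybrid encryption (public-key key-wrapping + AES).
p, q = 61, 53          # the Privacy Commissioner picks two primes per user
n = p * q              # public modulus
phi = (p - 1) * (q - 1)
e = 17                 # public exponent -> handed to the ISP
assert gcd(e, phi) == 1
d = pow(e, -1, phi)    # private exponent -> held in escrow by the Commissioner

def encrypt_log_entry(m: int) -> int:
    """The ISP encrypts each log value with the public key; it cannot decrypt."""
    return pow(m, e, n)

def decrypt_log_entry(c: int) -> int:
    """Possible only after the Commissioner releases d upon authorisation."""
    return pow(c, d, n)

entry = 1234  # an encoded log value (must be < n in this toy)
c = encrypt_log_entry(entry)
assert c != entry                      # stored logs are unreadable at the ISP
assert decrypt_log_entry(c) == entry   # the agency recovers it with the escrowed key
```

The point of the construction is that possession of unencrypted logs becomes structurally impossible for the ISP, not merely forbidden by policy.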
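The "analysis on wholly encrypted data" in the second proposal rests on homomorphic encryption. A toy Paillier cryptosystem (a partially homomorphic scheme: it supports addition on ciphertexts) shows the idea; the primes below are tiny and the code is illustrative only, not a secure implementation.

```python
import math
import random

# Toy Paillier cryptosystem -- partially homomorphic (addition only).
# Tiny primes for illustration; real parameters are thousands of bits.
p, q = 61, 53
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Analysis over encrypted data: sum two values without ever decrypting them.
a, b = 12, 30
c_sum = (encrypt(a) * encrypt(b)) % n2   # multiplying ciphertexts...
assert decrypt(c_sum) == a + b           # ...adds the underlying plaintexts
```

This is what "partially" homomorphic means in the article: sums can be computed blind, but arbitrary computation would require the fully homomorphic schemes the authors note are not yet practical.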
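The fishing-expedition design in the third proposal can be sketched as a two-step exchange. The field names and query format below are made up for illustration; they are not the actual standard proposed by Oberoi et al.

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # pseudonymisation key, held only by the ISP

def pseudonymise(identifier: str) -> str:
    """Deterministic keyed pseudonym so the agency can correlate records."""
    return hmac.new(SECRET, identifier.encode(), hashlib.sha256).hexdigest()[:12]

# Hypothetical ISP logs.
LOGS = [
    {"subscriber": "user-311", "dest": "203.0.113.9", "bytes": 14_200},
    {"subscriber": "user-527", "dest": "203.0.113.9", "bytes": 880_000},
]

def run_agency_query(dest: str):
    """Step 1: return matching records with personal identifiers obfuscated."""
    result = []
    mapping = {}  # ISP-side table: pseudonym -> real identifier
    for row in LOGS:
        if row["dest"] == dest:
            pseud = pseudonymise(row["subscriber"])
            mapping[pseud] = row["subscriber"]
            result.append({"subscriber": pseud, "bytes": row["bytes"]})
    return result, mapping

anonymised, isp_table = run_agency_query("203.0.113.9")

# Step 2: the agency homes in on a subset and, with authorisation, asks the
# ISP to reveal identifiers only for those pseudonyms.
suspects = [r["subscriber"] for r in anonymised if r["bytes"] > 500_000]
revealed = [isp_table[s] for s in suspects]
assert revealed == ["user-527"]
```

The design choice worth noting is that the de-anonymisation table never leaves the ISP, so the agency's initial query cannot become a bulk identity dragnet.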

An optimisation approach to resolving the false dichotomy between privacy and security will not allow for a total surveillance regime as pursued by the US administration. Total surveillance brings with it the ‘honey pot’ problem: If all the meta-data and payload data of citizens is being harvested and stored, then the data store will become a single point of failure and will become another target for attack. The next Snowden may not have honourable intentions and might decamp with this ‘honey pot’ itself, which would have disastrous consequences.

If total surveillance will completely undermine the national security imperative, what then should be the optimal level of surveillance in a population? The answer depends upon the existing security situation. If this is represented on a graph with security on the y-axis and the proportion of the population under surveillance on the x-axis, the benefits of surveillance could be represented by an inverted hockey-stick curve. To begin with, there would already be some degree of security. As a small subset of the population is brought under surveillance, security would increase till an optimum level is reached, after which, enhancing the number of people under surveillance would not result in any security pay-off. Instead, unnecessary surveillance would diminish security as it would introduce all sorts of new vulnerabilities. Depending on the existing security situation, the head of the hockey-stick curve might be bigger or smaller. To use a gastronomic analogy, optimal surveillance is like salt in cooking—necessary in small quantities but counter-productive even if slightly in excess.

In India the designers of surveillance projects have fortunately rejected the total surveillance paradigm. For example, the objective of the National Intelligence Grid (NATGRID) is to streamline and automate targeted surveillance; it is introducing technological safeguards that will allow express combinations of result-sets from 22 databases to be made available to 12 authorised agencies. This is not to say that the design of the NATGRID cannot be improved.

Security and Transparency

There are two views on security and transparency: one, security via obscurity, as advocated by vendors of proprietary software; and two, security via transparency, as advocated by free/open source software (FOSS) advocates and entrepreneurs. Over the last two decades, public and industry opinion has swung towards security via transparency. This is based on Linus's law that “given enough eyeballs, all bugs are shallow.” But does this mean that transparency is a necessary and sufficient condition? Unfortunately not, and therefore it is not necessarily true that FOSS and open standards will be more secure than proprietary software and proprietary standards.


The recent detection of the Heartbleed [6] security bug in OpenSSL, [7] which allowed more data to be read from memory than should have been permitted, and Snowden’s revelations about the compromise of some open cryptographic standards based on elliptic curves, developed by the US National Institute of Standards and Technology, are stark examples. [8]

At the same time, however, open standards and FOSS are crucial to maintaining the balance of power in information societies, as civil society and the general public are able to resist the powers of authoritarian governments and rogue corporations using cryptographic technology. These technologies allow for anonymous speech, pseudonymous speech, private communication, online anonymity and circumvention of surveillance and censorship. For the media, these technologies enable anonymity of sources and the protection of whistle-blowers—all phenomena that are critical to the functioning of a robust and open democratic society. But these very same technologies are also required by states and by the private sector for a variety of purposes—national security, e-commerce, e-banking, protection of all forms of intellectual property, and services that depend on confidentiality, such as legal or medical services.

In other words, all governments, with the exception of the US government, have common cause with civil society, the media and the general public when it comes to increasing the security of open standards and FOSS. Unfortunately, this can be quite an expensive task, because the re-securing of open cryptographic standards depends on mathematicians, and of late, mathematical research outputs that can be militarised are no longer available in the public domain: the biggest employers of mathematicians worldwide today are the US military and intelligence agencies. If other governments invested a few billion dollars through mechanisms like Knowledge Ecology International’s proposed World Trade Organization agreement on the supply of knowledge as a public good, we would be able to internationalise participation in standard-setting organisations and provide market incentives for greater scrutiny of cryptographic standards and patching of vulnerabilities in FOSS. This would go a long way in addressing the trust deficit that exists on the internet today.

Security and Technology

A techno-utopian understanding of security assumes that more technology, more recent technology and more complex technology will necessarily lead to better security outcomes.

This is because the security discourse is dominated by vendors with sales targets who do not present a balanced or accurate picture of the technologies that they are selling. This has resulted in state agencies and the general public having an exaggerated understanding of the capabilities of surveillance technologies that is more aligned with Hollywood movies than everyday reality.

More Technology

Increasing the number of x-ray machines or full-body scanners at airports by a factor of ten or a hundred will make the airport less secure unless human oversight is similarly increased. Even with increased human oversight, all that has been accomplished is an increase in the number of potential locations that can be compromised. The process of hardening a server usually involves stopping non-essential services and removing non-essential software. This reduces the amount of software that must be audited, continuously monitored for vulnerabilities and patched as soon as possible. Audits, ongoing monitoring and patching all cost time and money, and therefore, for governments with limited budgets, any additional unnecessary technology should be seen as a drain on the security budget. As with the airport example, even when it comes to a single server on the internet, it is clear that, from a security perspective, more technology without a proper functionality and security justification is counter-productive. To reiterate, throwing ever more technology at a problem does not make things more secure; rather, it results in a proliferation of vulnerabilities.

Latest Technology

Reports that a number of state security agencies are contemplating a return to typewriters for sensitive communications in the wake of Snowden’s revelations make it clear that some older technologies are harder to compromise than modern technology. [9] Between iris- and fingerprint-based biometric authentication, logically, it would be easier for a criminal to harvest iris images as authentication factors in bulk, using a high-resolution camera fitted with a zoom lens in a public location, than to lift fingerprints en masse.

Complex Technology

Fifteen years ago, Bruce Schneier said, "The worst enemy of security is complexity. This has been true since the beginning of computers, and it’s likely to be true for the foreseeable future." [10] This is because complexity increases fragility; every feature is also a potential source of vulnerabilities and failures. The simpler Indian electronic voting machines used until the 2014 elections are far more secure than the Diebold voting machines used in the 2004 US presidential election. Similarly, when it comes to authentication, a PIN is harder to beat without the user's conscious cooperation than iris- or fingerprint-based biometric authentication.

In the following section of the paper we identify five threat scenarios [11] relevant to India and propose solutions based on the theoretical framing above.

Threat Scenarios and Possible Solutions

Hacking the NIC Certifying Authority
One of the critical functions served by the National Informatics Centre (NIC) is as a Certifying Authority (CA). [12] In this capacity, the NIC issues digital certificates that authenticate web services and allow for the secure exchange of information online. [13] Operating systems and browsers maintain lists of trusted CA root certificates as a means of easily verifying authentic certificates. Certificates issued under India’s Controller of Certifying Authorities are included in the Microsoft Root list and recognised by the majority of programs running on Windows, including Internet Explorer and Chrome. [14] In 2014, the NIC CA’s infrastructure was compromised, and digital certificates were issued in NIC’s name without its knowledge. [15] Reports indicate that NIC did not "have an appropriate monitoring and tracking system in place to detect such intrusions immediately." [16] The implication is that websites could masquerade as another domain using the fake certificates, and personal data of users could be intercepted or accessed by third parties through the masquerading website. The breach also rendered web servers and websites of government bodies vulnerable to attack, and end users could no longer be sure that data on these websites was accurate and had not been tampered with. [17] The NIC CA was forced to revoke all 250,000 SSL server certificates issued until that date [18] and is no longer issuing digital certificates for the time being. [19]

Public key pinning is a means through which websites can specify which certifying authorities have issued certificates for that site, and it can prevent man-in-the-middle attacks based on fake digital certificates. [20] Certificate Transparency allows anyone to check whether a certificate has been properly issued, since certifying authorities must publicly publish information about the digital certificates they have issued. Though this approach does not prevent fake digital certificates from being issued, it allows for quick detection of misuse. [21]
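As a rough illustration of how pinning defeats a rogue certificate even when a trusted CA has signed it, consider the following sketch in the style of HPKP: the site publishes SHA-256 hashes of the public keys it expects, and the client rejects any other key. The key bytes here are placeholders, not real SPKI data.

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Compute an HPKP-style pin: base64 of the SHA-256 of the public key."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Placeholder key material, standing in for DER-encoded public keys.
legit_key = b"\x30\x82...legitimate-public-key"
rogue_key = b"\x30\x82...key-from-fake-certificate"

pinned = {spki_pin(legit_key)}  # pins shipped in the browser / via HPKP header

def connection_allowed(presented_spki: bytes) -> bool:
    """Reject any key not in the pin set, regardless of which CA signed it."""
    return spki_pin(presented_spki) in pinned

assert connection_allowed(legit_key)
assert not connection_allowed(rogue_key)  # the fake certificate is rejected
```

Because the check is on the key itself rather than on the issuing CA, a compromise like the NIC CA incident cannot be used to impersonate a pinned site.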

‘Logic Bomb’ against Airports
Passenger operations at New Delhi’s Indira Gandhi International Airport depend on a centralised operating system known as the Common User Passenger Processing System (CUPPS). The system integrates numerous critical functions, such as the arrival and departure times of flights, and manages the reservation system and check-in schedules. [22] In 2011, a logic bomb attack was launched remotely against the system to introduce malicious code into the CUPPS software. The attack disabled the CUPPS operating system, forcing a number of check-in counters to shut down completely while others reverted to manual check-in, resulting in over 50 delayed flights. Investigations revealed that the attack was launched by three disgruntled employees who had assisted in the installation of the CUPPS system at the New Delhi airport. [23] Although in this case the impact of the attack was limited to flight delays, experts speculate that the attack was meant to take down the entire system. The disruption and damage resulting from the shutdown of an entire airport would be extensive.

Adoption of open hardware and FOSS is one strategy to avoid and mitigate the risk of such vulnerabilities. The use of devices that embrace open hardware and software specifications must be encouraged, as this helps the FOSS community remain vigilant in detecting and reporting design deviations and in investigating probable vulnerabilities.

Attack on Critical Infrastructure
The Nuclear Power Corporation of India encounters and prevents numerous cyber attacks every day. [24] The best known example of a successful nuclear plant hack is the Stuxnet worm that thwarted the operation of an Iranian nuclear enrichment complex and set back the country’s nuclear programme. [25]

The worm could spread over a network but would activate only when it encountered a specific configuration of systems [26] connected to one or more Siemens programmable logic controllers. [27] The worm is suspected to have been initially introduced by an insider through an infected USB drive plugged into one of the controller computers, thus crossing the air gap. [28] The worm used the information it gathered to take control of normal industrial processes (in this case, to discreetly speed up centrifuges), leaving the operators of the plant unaware that they were being attacked. This incident demonstrates how an attack vector introduced into the general internet can be used to target specific system configurations. When the target of a successful attack is a sector as critical and secured as a nuclear complex, the implications for a country’s security and infrastructure are potentially grave.

Security audits and other transparency measures to identify vulnerabilities are critical in sensitive sectors. Incentive schemes such as prizes, contracts and grants can be developed to encourage the private sector and academia to identify vulnerabilities in critical infrastructure and to promote regular security auditing.

Micro Level: Chip Attacks
Semiconductor devices are ubiquitous in electronic devices. The US, Japan, Taiwan, Singapore, Korea and China are the primary countries hosting manufacturing hubs for these devices. India currently does not produce semiconductors and depends on imported chips. This dependence on foreign semiconductor technology can result in the import and use of compromised or fraudulent chips by critical sectors in India. For example, hardware Trojans, which may be used to access personal information and content on a device, may be inserted into a chip. Such compromises can render equipment in critical sectors vulnerable to attack and threaten national security. [29]

Indigenous production of critical technologies and the development of manpower and infrastructure to support these activities are needed. The Government of India has taken a number of steps towards this. For example, in 2013, the Government of India approved the building of two Semiconductor Wafer Fabrication (FAB) manufacturing facilities [30] and as of January 2014, India was seeking to establish its first semiconductor characterisation lab in Bangalore. [31]

Macro Level: Telecom and Network Switches

The possibility that foreign equipment contains vulnerabilities and backdoors built into its software and hardware gives rise to concerns that India’s telecom and network infrastructure can be hacked and accessed by foreign governments (or non-state actors) using spyware and malware that exploit such vulnerabilities. In 2013, some firms, including ZTE and Huawei, were barred by the Indian government from participating in a bid to supply technology for the development of its National Optical Fibre Network project due to security concerns. [32] Similar concerns have resulted in the Indian government holding back the conferment of ‘domestic manufacturer’ status on both these firms. [33]

Following reports that Chinese firms were responsible for transnational cyber attacks designed to steal confidential data from overseas targets, there have been moves to establish laboratories to test imported telecom equipment in India. [34] Despite these steps, in a February 2014 incident the state-owned telecommunication company Bharat Sanchar Nigam Ltd’s network was hacked, allegedly by Huawei. [35]

Security practitioners and policymakers need to avoid the zero-sum framing prevalent in popular discourse regarding security vis-à-vis privacy, transparency and technology.

A successful hack of the telecom infrastructure could result in massive disruption of internet and telecommunications services. Large-scale surveillance and espionage by foreign actors would also become possible, placing governmental secrets and individuals’ personal information, among other things, at risk.

While India cannot afford to impose a general ban on the import of foreign telecommunications equipment, a number of steps can be taken to address the risk of inbuilt security vulnerabilities. Common international criteria for security audits could be developed by states to ensure that products comply with international norms and practices. While India has already established common criteria evaluation centres, [36] the government monopoly over the testing function has resulted in only three products being tested so far. A code escrow regime could also be set up, under which manufacturers would be asked to deposit source code with the Government of India for security audits and verification. The source code could then be compared with the shipped software to detect inbuilt vulnerabilities.
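A minimal sketch of the verification step in such a code escrow regime, assuming reproducible builds so that the escrowed source can be rebuilt byte-for-byte: the auditor compares a digest of the build produced from the escrowed source with a digest of the binary the vendor actually shipped, and any mismatch flags possible inbuilt modifications.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of a build artifact."""
    return hashlib.sha256(data).hexdigest()

def matches_escrow(escrow_build: bytes, shipped_build: bytes) -> bool:
    """Compare the auditor's rebuild of escrowed source with the shipped
    binary; a mismatch signals that the shipped software differs from
    the code deposited for audit."""
    return digest(escrow_build) == digest(shipped_build)

# Illustrative data: a byte-identical build passes, a tampered one does not.
audited = b"firmware v1.0"
assert matches_escrow(audited, b"firmware v1.0")
assert not matches_escrow(audited, b"firmware v1.0 + backdoor")
```

The hard part in practice is not the comparison but achieving reproducible builds, without which vendors can plausibly attribute digest mismatches to compiler or build-environment differences.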

Conclusion

Cyber security cannot be enhanced without a proper understanding of the relationship between security and other national imperatives such as privacy, transparency and technology. This paper has provided an initial sketch of those relationships, but sustained theoretical and empirical research is required in India so that security practitioners and policymakers avoid the zero-sum framing prevalent in popular discourse and take on the hard task of solving the optimisation problem by shifting policy, market and technological levers simultaneously. These solutions must then be applied in multiple contexts or scenarios to determine how they should be customised to provide maximum security bang for the buck.


[1]. Daniel J. Solove, Chapter 1 in Nothing to Hide: The False Tradeoff between Privacy and Security (Yale University Press: 2011), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1827982.

[2]. Bruce Schneier, “What our Top Spy doesn’t get: Security and Privacy aren’t Opposites,” Wired, January 24, 2008, http://archive.wired.com/politics/security/commentary/securitymatters/2008/01/securitymatters_0124 and Bruce Schneier, “Security vs. Privacy,” Schneier on Security, January 29, 2008, https://www.schneier.com/blog/archives/2008/01/security_vs_pri.html.

[3]. There are four sources of power in internet governance: Market power exerted by private sector organisations; regulatory power exerted by states; technical power exerted by anyone who has access to certain categories of technology, such as cryptography; and finally, the power of public pressure sporadically mobilised by civil society. A technically sound encryption standard, if employed by an ordinary citizen, cannot be compromised using the power of the market or the regulatory power of states or public pressure by civil society. In that sense, technology can be used to regulate state and market behaviour.

[4]. Ann Cavoukian and Khaled El Emam, “Introducing Privacy-Protective Surveillance: Achieving Privacy and Effective Counter-Terrorism,” Information & Privacy Commissioner, September 2013, Ontario, Canada, http://www.privacybydesign.ca/content/uploads/2013/12/pps.pdf.

[5]. Madan Oberoi, Pramod Jagtap, Anupam Joshi, Tim Finin and Lalana Kagal, “Information Integration and Analysis: A Semantic Approach to Privacy”(presented at the third IEEE International Conference on Information Privacy, Security, Risk and Trust, Boston, USA, October 2011), ebiquity.umbc.edu/_file_directory_/papers/578.pdf.

[6]. Bruce Byfield, “Does Heartbleed disprove ‘Open Source is Safer’?,” Datamation, April 14, 2014, http://www.datamation.com/open-source/does-heartbleed-disprove-open-source-is-safer-1.html.

[7]. “Cybersecurity Program should be more transparent, protect privacy,” Centre for Democracy and Technology Insights, March 20, 2009, https://cdt.org/insight/cybersecurity-program-should-be-more-transparent-protect-privacy/#1.

[8]. “Cracked Credibility,” The Economist, September 14, 2013, http://www.economist.com/news/international/21586296-be-safe-internet-needs-reliable-encryption-standards-software-and.

[9]. Miriam Elder, “Russian guard service reverts to typewriters after NSA leaks,” The Guardian, July 11, 2013, www.theguardian.com/world/2013/jul/11/russia-reverts-paper-nsa-leaks and Philip Oltermann, “Germany ‘may revert to typewriters’ to counter hi-tech espionage,” The Guardian, July 15, 2014, www.theguardian.com/world/2014/jul/15/germany-typewriters-espionage-nsa-spying-surveillance.

[10]. Bruce Schneier, “A Plea for Simplicity,” Schneier on Security, November 19, 1999, https://www.schneier.com/essays/archives/1999/11/a_plea_for_simplicit.html.

[11]. With inputs from Pranesh Prakash of the Centre for Internet and Society and Sharathchandra Ramakrishnan of Srishti School of Art, Technology and Design.

[12]. “Frequently Asked Questions,” Controller of Certifying Authorities, Department of Electronics and Information Technology, Government of India, http://cca.gov.in/cca/index.php?q=faq-page#n41.

[13]. National Informatics Centre Homepage, Government of India, http://www.nic.in/node/41.

[14]. Adam Langley, “Maintaining Digital Certificate Security,” Google Security Blog, July 8, 2014, http://googleonlinesecurity.blogspot.in/2014/07/maintaining-digital-certificate-security.html.

[15]. This is similar to the kind of attack carried out against DigiNotar, a Dutch certificate authority. See: http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1246&context=jss.

[16]. R. Ramachandran, “Digital Disaster,” Frontline, August 22, 2014, http://www.frontline.in/the-nation/digital-disaster/article6275366.ece.

[17]. Ibid.

[18]. “NIC’s digital certification unit hacked,” Deccan Herald, July 16, 2014, http://www.deccanherald.com/content/420148/archives.php.

[19]. National Informatics Centre Certifying Authority Homepage, Government of India, http://nicca.nic.in/.

[20]. Mozilla Wiki, “Public Key Pinning,” https://wiki.mozilla.org/SecurityEngineering/Public_Key_Pinning.

[21]. “Certificate Transparency - The quick detection of fraudulent digital certificates,” Ascertia, August 11, 2014, http://www.ascertia.com/blogs/pki/2014/08/11/certificate-transparency-the-quick-detection-of-fraudulent-digital-certificates.

[22]. “Indira Gandhi International Airport (DEL/VIDP) Terminal 3, India,” Airport Technology.com, http://www.airport-technology.com/projects/indira-gandhi-international-airport-terminal-3/.

[23]. “How techies used logic bomb to cripple Delhi Airport,” Rediff, November 21, 2011, http://www.rediff.com/news/report/how-techies-used-logic-bomb-to-cripple-delhi-airport/20111121.htm.

[24]. Manu Kaushik and Pierre Mario Fitter, “Beware of the bugs,” Business Today, February 17, 2013, http://businesstoday.intoday.in/story/india-cyber-security-at-risk/1/191786.html.

[25]. “Stuxnet ‘hit’ Iran nuclear plants,” BBC, November 22, 2010, http://www.bbc.com/news/technology-11809827.

[26]. In this case, systems using Microsoft Windows and running Siemens Step7 software were targeted.

[27]. Jonathan Fildes, “Stuxnet worm ‘targeted high-value Iranian assets’,” BBC, September 23, 2010, http://www.bbc.com/news/technology-11388018.

[28]. Farhad Manjoo, “Don’t Stick it in: The dangers of USB drives,” Slate, October 5, 2010, http://www.slate.com/articles/technology/technology/2010/10/dont_stick_it_in.html.

[29]. Ibid.

[30]. “IBM invests in new $5bn chip fab in India, so is chip sale off?,” ElectronicsWeekly, February 14, 2014, http://www.electronicsweekly.com/news/business/ibm-invests-new-5bn-chip-fab-india-chip-sale-2014-02/.

[31]. NT Balanarayan, “Cabinet Approves Creation of Two Semiconductor Fabrication Units,” Medianama, February 17, 2014, http://articles.economictimes.indiatimes.com/2014-02-04/news/47004737_1_indian-electronics-special-incentive-package-scheme-semiconductor-association.

[32]. Jamie Yap, “India bars foreign vendors from national broadband initiative,” ZD Net, January 21, 2013, http://www.zdnet.com/in/india-bars-foreign-vendors-from-national-broadband-initiative-7000010055/.

[33]. Kevin Kwang, “India holds back domestic-maker status for Huawei, ZTE,” ZD Net, February 6, 2013, http://www.zdnet.com/in/india-holds-back-domestic-maker-status-for-huawei-zte-7000010887/. Also see “Huawei, ZTE await domestic-maker tag,” The Hindu, February 5, 2013, http://www.thehindu.com/business/companies/huawei-zte-await-domesticmaker-tag/article4382888.ece.

[34]. Ellyne Phneah, “Huawei, ZTE under probe by Indian government,” ZD Net, May 10, 2013, http://www.zdnet.com/in/huawei-zte-under-probe-by-indian-government-7000015185/.

[35]. Devidutta Tripathy, “India investigates report of Huawei hacking state carrier network,” Reuters, February 6, 2014, http://www.reuters.com/article/2014/02/06/us-india-huawei-hacking-idUSBREA150QK20140206.

[36]. “Products Certified,” Common Criteria Portal of India, http://www.commoncriteria-india.gov.in/Pages/ProductsCertified.aspx.

Security: Privacy, Transparency and Technology

by Prasad Krishna last modified Aug 19, 2015 02:24 AM

Digital-Debates.pdf — PDF document, 5860 kB (6000742 bytes)

Free Speech Policy in India: Community, Custom, Censorship, and the Future of Internet Regulation

by Bhairav Acharya last modified Aug 23, 2015 10:12 AM
This note summarises my panel contribution to the conference on Freedom of Expression in a Digital Age at New Delhi on 21 April 2015, which was organised by the Observer Research Foundation (ORF) and the Centre for Internet and Society (CIS) in collaboration with the Internet Policy Observatory of the Center for Global Communication Studies (CGCS) at the Annenberg School for Communication, University of Pennsylvania.

Download the Note here (PDF, 103 Kb)


Preliminary

There has been legitimate happiness among many in India at the Supreme Court’s recent decision in the Shreya Singhal case to strike down section 66A of the Information Technology Act, 2000 ("IT Act") for unconstitutionally fettering the right to free speech on the Internet. The judgment is indeed welcome and reaffirms the Supreme Court’s proud record of defending the freedom of speech, although the Court declined to interfere with the government’s stringent powers of website blocking. As the dust settles, there are reports that the government is re-grouping to introduce fresh law, reportedly stronger so as to secure easier convictions, to compensate for its defeat.

Case Law and Government Policy

India’s constitutional courts have a varied history of negotiating the freedom of speech that justifiably demands study. But, in my opinion, inadequate attention is directed to the government’s history of free speech policy. It is possible to discern from the government’s actions over the last two centuries a relatively consistent narrative of governance that seeks to bend the individual’s right to speech to its will. The defining characteristics of this narrative – the government’s free speech policy – emerge from a study of executive and legislative decisions, chiefly in relation to the press, that continue to shape policy regarding the freedom of expression on the Internet.

India’s corpus of free speech case law is not uniform, nor can it be, since, for instance, the foundational issues that attend hate speech are quite different from those that inform contempt of court. So too, Indian free speech policy has been varied, captive to political compulsions and to disparate views regarding the interests of the community, governance and nation-building. There has been consistent tension between the individual and the community, as well as over the role of the government in enforcing the expectations of the community when those expectations are thwarted by law.

Dichotomy between Modern and Native Law

To understand free speech policy, it is useful to go back to the early colonial period in India, when Governor-General Warren Hastings established a system of courts in Bengal’s hinterland to begin the long process of displacing traditional law and creating a modern legal system. By most accounts, pre-modern Indian law was not prescriptive, Austinian, or uniform. Instead, there were several legal systems and a variety of competing and complementary legal sources that supported different interpretations of law within most legal systems. J. Duncan M. Derrett notes that the colonial expropriation of Indian law was marked by a significant tension caused by the repeatedly stated objective of preserving some fields of native law to create a dichotomous legal structure. These efforts were assisted by orientalist jurists such as Henry Thomas Colebrooke, whose interpretation of the dharmasastras heralded a new stage in the evolution of Hindu law.

In this background, it is not surprising that Elijah Impey, a close associate of Hastings, simultaneously served as the first Chief Justice of the Supreme Court at Fort William while overseeing the Sadr Diwani Adalat, a civil court applying Anglo-Hindu law for Hindus, and the Sadr Faujdari Adalat, a criminal court applying Anglo-Islamic law to all natives. By the mid-nineteenth century, this dual system came under strain in the face of increasing colonial pressure to rationalise the legal system to ensure more effective governance, and native protest at the perceived insensitivity of the colonial government to local customs.

Criminal Law and Free Speech in the Colony

In 1837, Thomas Macaulay wrote the first draft of a new comprehensive criminal law to replace indigenous law and custom with statutory modern law. When it was enacted as the Indian Penal Code in 1860 ("IPC"), it represented the apogee of the new colonial effort to recreate the common law in India. The IPC’s enactment coincided with the growth and spread of both the press and popular protest in India. The statute contained the entire gamut of public-order and community-interest crimes to punish unlawful assembly, rioting, affray, wanton provocation, public nuisance, obscenity, defiling a place of worship, disturbing a religious assembly, wounding religious feelings, and so on. It also criminalised private offences such as causing insult, annoyance, and intimidation. These crimes continue to be invoked in India today to silence individual opinion and free speech, including on the Internet. Section 66A of the IT Act utilised a very similar vocabulary of censorship.

Interestingly, Macaulay’s IPC did not feature the common law offences of sedition and blasphemy or the peculiar Indian crime of promoting inter-community enmity; these were added later. Sedition was criminalised by section 124A at the insistence of Barnes Peacock and applied successfully against Indian nationalist leaders including Bal Gangadhar Tilak in 1897 and 1909, and Mohandas Gandhi in 1922. In 1898, the IPC was amended again to incorporate section 153A to criminalise the promotion of enmity between different communities by words or deeds. And, in 1927, a more controversial amendment inserted section 295A into the IPC to criminalise blasphemy. All three offences have been recently used in India against writers, bloggers, professors, and ordinary citizens.

Loss of the Right to Offend

The two amendments of 1898 and 1927, which together proscribed the promotion of inter-community enmity and blasphemy, represent the dismantling of the right to offend in India. But, oddly, they were defended by the colonial government in the interests of native sensibilities. The proceedings of the Imperial Legislative Council reveal that several members, including Indians, were enthusiastic about the amendments. For some, the amendments were a necessary corrective to protect community honour from subversive speech. The 1920s were a period of ferment in India as the freedom movement intensified and communal tension mounted. In this environment, it was easy to fuse the colonial interest in strong administration with a nationalist narrative that demanded the retrieval of Indian custom to protect native sensibilities from being offended by individual free speech, a right derived from modern European law. No authoritative jurist could be summoned to prove or refute the claim that native custom privileged community honour.

Sadly, the specific incident that galvanised the amendment of 1927, which established the crime of blasphemy in India, would not appear unfamiliar to a contemporary observer. Mahashay Rajpal, an Arya Samaj activist, published an offensive pamphlet about the Prophet Muhammad titled Rangeela Rasool, for which he was arrested and tried but acquitted in the absence of specific blasphemy provisions. With his speech having been found legal, Rajpal was released and given police protection, but Ilam Din, a Muslim youth, stabbed him to death. Instead of supporting its criminal law and strengthening its police forces to implement the decisions of its courts, the colonial administration surrendered to the threat of public disorder and enacted section 295A of the IPC.

Protest and Community Honour

The amendment of 1927 marks an important point of rupture in the history of Indian free speech. It demonstrated the government’s policy intention of overturning the courts to restrict the individual’s right to speech when faced with public protest. In this way, the combination of public disorder and the newly-created crimes of promoting inter-community enmity and blasphemy opened the way for the criminal justice system to be used as a tool by natives to settle their socio-cultural disputes. Both these crimes address group offence; they do not redress individual grievances. In so far as they are designed to endorse group honour, these crimes signify the community’s attempt to suborn modern law and individual rights.

Almost a century later, the Rangeela Rasool affair has become the depressing template for illegal censorship in India: fringe groups take offence at permissible speech, crowds are marshalled to articulate an imagined grievance, and the government capitulates to the threat of violence. This formula has become so entrenched that governance has grown reflexively suppressive, quick to silence speech even before the perpetrators of lumpen violence can receive affront. This is especially true of online speech, where censorship is driven by the additional anxiety brought by the difficulty of Internet regulation. In this race to be offended the government plays the parochial referee, acting to protect indigenous sensibilities from subversive but legal speech.

The Censorious Post-colony

Independence marked an opportunity to remake Indian governance in a freer image. The Constituent Assembly had resolved not to curb the freedom of speech in Article 19(1)(a) of the Constitution on account of public order. In two cases from opposite ends of the country, where right-wing and left-wing speech respectively were punished by local governments on public order grounds, the Supreme Court acted on the Constituent Assembly’s vision and struck down the laws in question. Free speech, it appeared, would survive administrative concerns, thanks to the guarantee of a new constitution and an independent judiciary. Instead, Prime Minister Jawaharlal Nehru and his cabinet responded with the First Amendment in 1951, merely a year after the Constitution was enacted, to create three new grounds of censorship, including public order. In 1963, a year before he demitted office, the Sixteenth Amendment added an additional restriction.

Nehru did not stop at amending the Constitution; he followed shortly after with a concerted attempt to stage-manage the press by de-legitimising certain kinds of permissible speech.

Under Justice G. S. Rajadhyaksha, the government constituted the First Press Commission, which attacked yellow journalism. This was seemingly a sincere concern, but the Commission’s definition extended to permissible, albeit condemnable, speech: speech directed at communities, indecent or vulgar speech, and biased speech. Significantly, the Commission expected the press to publish only speech that conformed to the developmental and social objectives of the government. In other words, Nehru wanted the press to support his vision of India and used the imperative of nation-building to achieve this goal. So, the individual right to offend communities was taken away by law and policy, and speech that dissented from the government’s socio-economic and political agenda was discouraged by policy. Coupled with the new constitutional ground of censorship on account of public order, the career of free speech in independent India began uncertainly.

How to regulate permissible speech?

Despite the many restrictions imposed by law on free speech, Indian free speech policy has long been engaged with the question of how to regulate the permissible speech that survives constitutional scrutiny. This was significantly easier in colonial India. In 1799, Governor-General Richard Wellesley, the brother of the famous Duke of Wellington who defeated Napoleon at Waterloo, instituted a pre-censorship system to create what Rajeev Dhavan calls a “press by permission” marked by licensed publications, prior restraint, subsequent censorship, and harsh penalties. A new colonial regime for strict control over the publication of free speech was enacted in the form of the Press and Registration of Books Act, 1867, the preamble of which recognises that “the literature of a country is…an index of…the condition of [its] people”. The 1867 Act was diluted after independence but still remains alive in the form of the Registrar of Newspapers.

After surviving Indira Gandhi’s demand for a committed press and the depredations of her regime during the Emergency, India’s press underwent the examination of the Second Press Commission. This was appointed in 1978 under the chairmanship of Justice P. K. Goswami, a year after the Janata government released the famous White Paper on Misuse of Mass Media. When Gandhi returned to power, Justice Goswami resigned and the Commission was reconstituted under Justice K. K. Mathew. In 1982, the Commission’s report endorsed the earlier First Press Commission’s call for conformist speech, but went further by proposing the appointment of a press regulator invested with inspection powers; criminalising attacks on the government; re-interpreting defamation law to encompass democratic criticism of public servants; retaining stringent official secrecy law; and more. It was quickly acted upon by Rajiv Gandhi through his infamous Defamation Bill.

The contours of future Internet regulation

The juggernaut of Indian free speech policy has received temporary setbacks, mostly inflicted by the Supreme Court. Past experience shows us that governments with strong majorities – whether Jawaharlal Nehru’s following independence or Indira Gandhi’s in the 1970s – act on their administrative impulses to impede free speech by government policy. The Internet is a recent and uncontrollable medium of speech that attracts disproportionately heavy regulatory attention. Section 66A of the IT Act may be dead but several other provisions remain to harass and punish online free speech. Far from relaxing its grip on divergent opinions, the government appears poised for more incisive invasions of personal freedoms.

I do not believe the contours of future speech regulation on the Internet need to be guessed at; they can be derived from the last two centuries of India’s free speech policy. When section 66A is replaced – and it will be, whether overtly by fresh statutory provisions or stealthily by policy and non-justiciable committees and commissions – it will be through a regime that obeys the mandate of the First Press Commission to discourage dissenting and divergent speech while adopting the regulatory structures of the Second Press Commission to permit a limited inspector raj and forbid attacks on personalities. The interests of the community, howsoever improperly articulated, will take precedence over individual freedoms, and the accompanying threat of violence will give new meaning to Bhimrao Ambedkar’s warning of the “grammar of anarchy”.

Net Neutrality and the Law of Common Carriage

by Bhairav Acharya last modified Aug 23, 2015 11:09 AM
Net neutrality makes strange bedfellows. It links the truck operators that dominate India’s highways, such as those that carry vegetables from rural markets to cities, and Internet service providers, which perform a more technologically advanced task.

Download PDF


Over the last decade, the truckers have opposed the government’s attempts to impose the obligations of common carriage on them; this has resulted in strikes and temporary price rises. In the years ahead, there is likely to be a similar – yet technologically very different – debate as net neutrality advocates call for an adapted version of common carriage to bind Internet services.

Net neutrality demands a rigorous examination that is not attempted by this short note, which, constrained by space, will only briefly trace the law and policy of net neutrality in the US and attempt a brief comparison with the principles of common carriage in India. Net neutrality defies easy definition. Very simply, the principle demands that Internet users have equal access to all content and applications on the Internet. This can only be achieved if Internet service providers: (i) do not block lawful content; (ii) do not throttle – deliberately slow down or speed up – access to selected content; (iii) do not prioritise certain content over other content for monetary gain; and, (iv) are transparent in their management of the networks by which data flows.
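How a regulator might begin to detect violations of principle (ii) can be sketched crudely: compare measured throughput to different destinations over the same ISP link and flag large unexplained gaps. The figures and threshold below are hypothetical, and a real audit would have to control for server load and network-path effects:

```python
# Hypothetical per-destination throughput samples (Mbps) collected by a
# regulator probing the same ISP link.
samples = {
    "video-service-a": [48.0, 50.1, 49.5],
    "video-service-b": [12.2, 11.8, 12.5],  # suspiciously slower
}

def mean(xs):
    return sum(xs) / len(xs)

def flag_possible_throttling(samples, ratio=0.5):
    """Flag destinations whose average throughput falls below a fraction
    of the best-performing destination: a crude signal of differential
    treatment, not proof of it."""
    best = max(mean(v) for v in samples.values())
    return [d for d, v in samples.items() if mean(v) < ratio * best]

print(flag_possible_throttling(samples))
```

Large-scale measurement projects of this kind exist precisely because throttling, unlike outright blocking, is invisible to users without a baseline to compare against.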

Almost exactly a year ago, the District of Columbia Circuit Court of Appeals – a senior court below the US Supreme Court – struck down portions of the ‘Open Internet Order’ that was issued by the Federal Communications Commission (FCC) in 2010. Although sound in law, the Court’s verdict impeded net neutrality and raised crucial questions regarding common carriage, free speech, and competition, among others. More recently, Airtel’s announcement of its decision to charge certain end-users for VoIP services – subsequently suspended pending a policy decision from the Telecom Regulatory Authority of India (TRAI) – has fuelled the net neutrality debate in India.

Because of its innovative technological history in relation to the Internet, the US has pioneered many legal attempts to regulate the Internet in respect of net neutrality. In 1980, when Internet data flowed through telephone lines, the FCC issued the ‘Computer II’ regime which distinguished basic services from enhanced services. The difference between the two turned on the nature of the transmission. Regular telephone calls involved a pure transmission of data and were hence classified as basic services. On the other hand, access to the Internet required the processing of user data through computers; these were classified as enhanced services. Importantly, because of their essential nature, the Computer II rules bound basic services providers to the obligations of common carriage whereas enhanced services providers were not.

What is common carriage? Common law countries share a unique heritage in respect of their law governing the transport of goods and people. Those that perform such transport are called carriers. The law makes a distinction between common carriers and other carriers. A carrier becomes a common carrier when it “holds itself out” to the public as willing to transport people or goods for compensation. The act of holding out is simply a public communication of an offer to transport; it may be fulfilled even by an advertisement. The four defining elements of a common carrier are (i) a holding out of a willingness (a public undertaking) (ii) to transport persons or property (iii) from place to place (iv) for compensation.

Common carriers discharge a public trust. By virtue of their unique position and essential function, they are required to serve their customers equally and without discrimination. The law of carriage of goods and people places four broad duties upon common carriers. Firstly, common carriers are bound to carry everyone’s goods or all people and cannot refuse such carriage unless certain strict conditions are met. Secondly, common carriers must perform their carriage safely without deviating from accepted routes unless in exceptional circumstances. Thirdly, common carriers must keep to their schedules; they must be on time. And, lastly, common carriers must assume liability for the loss of or damage to goods, and for death or injuries to people, during carriage.

The Computer II regime was issued under a telecommunications law of 1934 which retained the classical markers and duties of common carriers. The law extended the principles of common carriage to telephone services providers. In 1980, when the regime was introduced, the FCC did not invest Internet services with the same degree of essence and public trust; hence, enhanced services escaped strict regulation. However, the FCC did require that basic services and enhanced services be offered through separate entities, and that basic services providers that operated the ‘last-mile’ wired transmission infrastructure to users offer these facilities to enhanced services providers on a common carrier basis.

In 1996, the new Telecommunications Act revisited US law after more than sixty years. The new dispensation maintained the broad structure of the Computer II regime: it recognised telecommunications carriers in place of basic services providers, and information-services providers in place of enhanced services providers. Carriers in the industry had already converged telephone and Internet communications into a single service. Hence, when a user engaged a carrier that provided telephone and broadband Internet services, the classification of the carrier would depend on the service being accessed. When a carrier provided broadband Internet access, it was an information-services provider (not a telecommunications carrier) and vice versa. Again, telecommunications carriers were subjected to stricter regulations and liability resembling common carriage.

In 1998, the provision of broadband Internet over wired telephone lines through DSL technologies was determined to be a pure transmission and hence a telecommunications service warranting common carriage regulation. However, in 2002, the FCC issued the ‘Cable Broadband Order’ that treated the provision of cable broadband through last-mile wired telephone transmission networks as a single and integrated information service. This exempted most cable broadband from the duties of common carriage. This policy was challenged in the US Supreme Court in 2005 in the Brand X case and upheld.

Significantly, the decision in the Brand X case was not made on technological merits. The case arose when a small ISP that had hitherto used regular telephone lines to transmit data wanted equal access to the coaxial cables of the broadcasting majors on the basis of common carriage. Instead of making a finding on the status of cable broadband providers based on the four elements of common carriage, the Court employed an administrative law principle of deferring to the decisions of an expert technical regulator – known as the Chevron deference principle – to rule against the small ISP. Thereafter, wireless and mobile broadband were also declared to be information services and saved from the application of common carriage law.

Taking advantage of this exemption from common carriage which released broadband providers from the duty of equal access and anti-discrimination, Comcast began from 2007 to degrade P2P data flows to its users. This throttling was reported to the FCC which responded with the 2008 ‘Comcast Order’ to demand equal and transparent transmission from Comcast. Instead, Comcast took the FCC to court. In 2010, the Comcast Order was struck down by the DC Circuit Court of Appeals. And, again, the decision in the Comcast case was made on an administrative law principle, not on technological merits.

In the Comcast case, the Court said that as long as the FCC treated broadband Internet access as an information service it could not enforce an anti-discrimination order against Comcast. This is because the duty of anti-discrimination attached only to common carriers which the FCC applied to telecommunications carriers. Following the Comcast case, the FCC began to consider reclassifying broadband Internet providers as telecommunications carriers.

However, in the 2010 ‘Open Internet Order’, the FCC attempted a different regulatory approach. Instead of a classification based on common carriage, the new rules recognised two types of Internet service providers: (i) fixed providers, which transmitted to homes, and, (ii) mobile providers, which were accessed by smartphones. The rules required both types of providers to ensure transparency in network management, disallowed blocking of lawful content, and re-imposed the anti-discrimination requirement to forbid prioritised access or throttling of certain content.

Before they were even brought into effect, Verizon challenged the Open Internet Order in the same court that delivered the Comcast judgement. In January 2014, the Court struck down the Order’s anti-blocking and anti-discrimination rules while leaving its transparency requirement intact. Meanwhile, in India, Airtel’s rollback of its announcement to charge its pre-paid mobile phone users more for VoIP services raises very similar questions. Like the rest of the common law world, India already extends the principles of common carriage to telecommunications. Indian jurisprudence also sustains the distinction between common carriage and private carriage, and applies an anti-discrimination requirement to telecommunications providers through a licensing regime.

TRAI must decide if it wants to continue this distinction. No doubt, the provision of communications services through the telephone and the Internet serves an eminent public good. It was on this basis that President Obama called on the FCC to reclassify broadband Internet providers as common carriers. Telecommunications carriers, such as Airtel, might argue that they have expended large sums of money on network infrastructure that is undermined by the use of high-bandwidth free VoIP applications, and that the law of common carriage must recognise this fact. Still others call for a new approach to net neutrality outside the dichotomy of common and private carriage. Whatever the solution, it must be reached by widespread engagement and participation, for Internet access – as the government’s Digital India project is aware – serves the public interest.

Net Neutrality and the Law of Common Carriage

by Bhairav Acharya last modified Aug 23, 2015 11:06 AM

Net Neutrality and the Law of Common Carriage.pdf — PDF document, 92 kB (94529 bytes)

Privacy, Autonomy, and Sexual Choice: The Common Law Recognition of Homosexuality

by Bhairav Acharya last modified Aug 23, 2015 12:20 PM
In the last few decades, all major common law jurisdictions have decriminalised non-procreative sex – oral and anal sex (sodomy) – to allow private, consensual, and non-commercial homosexual intercourse.

Download PDF

Anti-sodomy statutes across the world, often drafted in the same anachronistic vein as section 377 of the Indian Penal Code, 1860 (“IPC”), have either been repealed or struck down on the grounds that they invade individual privacy and unjustly discriminate against homosexual people.

This is not an examination of India’s laws against homosexuality; it reviews neither the Supreme Court of India’s judgment in Suresh Koushal v. Naz Foundation (2014) 1 SCC 1 nor the Delhi High Court’s judgment in Naz Foundation v. Government of NCT Delhi 2009 (160) DLT 277, which the former overturned – in my view, wrongly. This note simply provides a legal history of the decriminalisation of non-procreative sexual activity in the United Kingdom and the United States. Same-sex marriage is also not examined.

In the United Kingdom

The Wolfenden Report

In England, following a campaign of arrests of non-heterosexual persons and subsequent protests in the 1950s, the government responded to public dissatisfaction by appointing the Departmental Committee on Homosexual Offences and Prostitution chaired by John Frederick Wolfenden. The report of this committee (“Wolfenden Report”) was published in 1957 and recommended that:

“…homosexual behaviour between consenting adults in private should no longer be a criminal offence.”

The Report further observed that it was not the function of a State to punitively scrutinise the private lives of its citizens:

“(T)he law’s function is to preserve public order and decency, to protect the citizen from what is offensive or injurious, and to provide sufficient safeguards against exploitation and corruption of others… It is not, in our view, the function of the law to intervene in the private life of citizens, or to seek to enforce any particular pattern of behaviour.”

The Sexual Offences Act, 1967

The Wolfenden Report was accepted and, in its pursuance, the Sexual Offences Act, 1967 was enacted to, for the first time in common law jurisdictions, partially decriminalise homosexual activity – described in English law as ‘buggery’ or anal sex between males.
Section 1(1) of the original Sexual Offences Act, as notified on 27 July 1967 stated –
"Notwithstanding any statutory or common law provision, but subject to the provisions of the next following section, a homosexual act in private shall not be an offence provided that the parties consent thereto and have attained the age of twenty one years."
A ‘homosexual act’ was defined in section 1(7) as –
“For the purposes of this section a man shall be treated as doing a homosexual act if, and only if, he commits buggery with another man or commits an act of gross indecency with another man or is a party to the commission by a man of such an act.”
The meaning of ‘private’ was also set forth rather strictly in section 1(2) –
“An act which would otherwise be treated for the purposes of this Act as being done in private shall not be so treated if done –
(a) when more than two persons take part or are present; or
(b) in a lavatory to which the public have or are permitted to have access, whether on payment or otherwise.”
Hence, by 1967, English law permitted:

  • as between two men,
  • both twenty-one years or older,
  • anal sex (buggery),
  • and other sexual activity (“gross indecency”)
  • if, and only if, a strict prescription of privacy was maintained,
  • that excluded even a non-participating third party from being present,
  • and restricted the traditional conception of public space to exclude even lavatories.

However, the benefit of Section 1 of the Sexual Offences Act, 1967 did not extend beyond England and Wales; nor did it extend to mentally unsound persons, members of the armed forces, or members of the crews of merchant ships, whether on land or otherwise.

Developments in Scotland and Northern Ireland

Over the years, the restrictions in the original Sexual Offences Act, 1967 were lifted. In 1980, the Criminal Justice (Scotland) Act, 1980 partially decriminalised homosexual activity in Scotland on the same lines that the Act of 1967 did for England and Wales. One year later, in 1981, an Irishman, Jeffrey Dudgeon, successfully challenged the continued criminalisation of homosexuality in Northern Ireland before the European Court of Human Rights (“ECHR”) in the case of Dudgeon v. United Kingdom (1981) 4 EHRR 149. Interestingly, Dudgeon was not decided on the basis of detrimental discrimination or inequality, but on the ground that the continued illegality of homosexuality violated the petitioner’s right to privacy guaranteed by Article 8 of the 1950 European Convention on Human Rights (“European Convention”). In a 15-4 majority judgement, the ECHR found that “…moral attitudes towards male homosexuality…cannot…warrant interfering with the applicant’s private life…” Following Dudgeon, the Homosexual Offences (Northern Ireland) Order, 1982 came into effect, bringing some semblance of uniformity to the sodomy laws of the United Kingdom.

Equalising the age of consent

However, protests continued against the unequal age of consent required for consensual homosexual sex (21 years) as opposed to that for heterosexual sex (16 years). In 1979, a government policy advisory recommended that the age of consent for homosexual sex be reduced to 18 years – two years older than that for heterosexual sex – but the recommendation was never acted upon. In 1994, an attempt to statutorily equalise the age of consent at 16 years was defeated in the largely conservative House of Commons, although a separate legislative proposal to reduce it to 18 years was carried and enacted under the Criminal Justice and Public Order Act, 1994. The unequal ages of consent then prompted a challenge against UK law before the ECHR in 1994; four years later, in Sutherland v. United Kingdom [1998] EHRLR 117, the ECHR found that the unequal age of consent violated Articles 8 and 14 of the European Convention – relating to privacy and discrimination respectively. Sutherland was significant in two ways: it forced the British government to once again introduce legislation to equalise the ages of consent; and it affirmed a homosexual human right on the ground of anti-discrimination (as opposed to privacy).

To meet its European Convention commitments, the House of Commons passed, in June 1998, a bill for an equal age of sexual consent but it was rejected by the more conservative House of Lords. In December 1998, the government reintroduced the equal age of consent legislation, which again passed the House of Commons and was defeated in the House of Lords. Finally, in 1999, the government invoked the statutory superiority of the House of Commons under the Parliament Acts, reintroduced the legislation for a third time, and passed it without the Lords’ consent, resulting in the enactment of the Sexual Offences (Amendment) Act, 2000, which equalised the age of sexual consent for both heterosexuals and homosexuals at 16 years of age.

Uniformity of equality

However, by this time, different UK jurisdictions were governed by separate legislation regarding homosexual activity. The privacy conditions stipulated in the original Sexual Offences Act, 1967 remained, although they had been subject to varied interpretation by English courts. To resolve this, the UK Parliament enacted the Sexual Offences Act, 2003, which repealed all earlier conflicting legislation, removed the strict privacy conditions attached to homosexual activity and re-drafted sexual offences in a gender-neutral manner. A year later, the Civil Partnership Act, 2004 gave same-sex couples the same rights and responsibilities as a civil marriage. And, in 2007, the Equality Act (Sexual Orientation) Regulations came into force to prohibit general discrimination against homosexual persons in the same manner as such prohibition exists in respect of race, religion, disability, sex and so on.

In the United States

Diversity of state laws

Sodomy laws in the United States of America have followed a different trajectory. A different political and legal system leaves individual US States with wide powers to draft and follow their own constitutions and laws. Accordingly, by 1961 all US States had their own individual anti-sodomy laws, with different definitions of sodomy and homosexuality. In 1962, Illinois became the first US State to repeal its anti-sodomy law. Many States followed suit over the next decades including Connecticut (1971); Colorado and Oregon (1972); Delaware, Hawaii and North Dakota (1973); Ohio (1974); New Hampshire and New Mexico (1975); California, Maine, Washington and West Virginia (1976); Indiana, South Dakota, Wyoming and Vermont (1977); Iowa and Nebraska (1978); New Jersey (1979); Alaska (1980); and, Wisconsin (1983).

Bowers v. Hardwick

However, not all States repealed their anti-sodomy laws. Georgia was one such State: it retained a statutory bar against any oral or anal sex between any persons of any sex, contained in Georgia Code Annotated §16-6-2 (1984) (“Georgia statute”), which provided, in pertinent part, as follows:

“(a) A person commits the offense of sodomy when he performs or submits to any sexual act involving the sex organs of one person and the mouth or anus of another… (b) A person convicted of the offense of sodomy shall be punished by imprisonment for not less than one nor more than 20 years”

In 1982, a police officer arrested Michael Hardwick in his bedroom for sodomy, an offence which carried a prison sentence of up to twenty years. His case went all the way up to the US Supreme Court which, in 1986, pronounced its judgement in Bowers v. Hardwick 478 US 186 (1986). Although the Georgia statute was framed broadly to include even heterosexual sodomy (anal or oral sex between a man and a woman or two women) within its ambit of prohibited activity, the Court chose to frame the issue at hand rather narrowly. Justice Byron White, speaking for the majority, observed at the outset –

“This case does not require a judgment on whether laws against sodomy between consenting adults in general, or between homosexuals in particular, are wise or desirable. It raises no question about the right or propriety of state legislative decisions to repeal their laws that criminalize homosexual sodomy, or of state-court decisions invalidating those laws on state constitutional grounds. The issue presented is whether the Federal Constitution confers a fundamental right upon homosexuals to engage in sodomy…”

Privacy and autonomy

Interestingly, Hardwick’s case against the Georgia statute was not grounded on an equality-discrimination argument (since the Georgia statute prohibited even heterosexual sodomy but was only enforced against homosexuals) but on a privacy argument that sought to privilege and immunise private consensual non-commercial sexual conduct from intrusive State intervention. To support this privacy claim, a long line of cases was relied upon that restricted the State’s ability to intervene in, and so upheld the sanctity of, the home, marriage, procreation, contraception, child rearing and so on [See, Carey v. Population Services 431 US 678 (1977), Pierce v. Society of Sisters 268 US 510 (1925) and Meyer v. Nebraska 262 US 390 (1923) on child rearing and education; Prince v. Massachusetts 321 US 158 (1944) on family relationships; Skinner v. Oklahoma ex rel. Williamson 316 US 535 (1942) on procreation; Loving v. Virginia 388 US 1 (1967) on marriage; Griswold v. Connecticut 381 US 479 (1965) and Eisenstadt v. Baird 405 US 438 (1972) on contraception; and Roe v. Wade 410 US 113 (1973) on abortion]. Further, the Court was pressed to declare a fundamental right to consensual homosexual sodomy by reading it into the Due Process clause of the Fourteenth Amendment to the US Constitution.

The nine-judge Court split 5-4 to rule against all of Hardwick’s propositions and uphold the constitutionality of the Georgia statute. The Court’s majority agreed that the cases cited by Hardwick had indeed evolved a right to privacy, but disagreed that this privacy extended to homosexual persons since “(n)o connection between family, marriage, or procreation on the one hand and homosexual activity on the other has been demonstrated…”. In essence, the Court’s majority held that homosexuality was distinct from procreative human sexual behaviour; that homosexual sex could, by virtue of this distinction, be separately categorised and discriminated against; and, hence, that homosexual sex did not qualify for the benefit of intimate privacy protection that was available to heterosexuals. What reason did the Court give to support this discrimination? Justice White, speaking for the majority, gives us a clue: “Proscriptions against that (homosexual) conduct have ancient roots.” Justice White was joined in his majority judgement by Chief Justice Burger, Justice Powell, Justice Rehnquist and Justice O’Connor. His rationale was underscored by Chief Justice Burger, who also wrote a short concurring opinion wherein he claimed:

“Decisions of individuals relating to homosexual conduct have been subject to state intervention throughout the history of Western civilization. Condemnation of those practices is firmly rooted in Judeo-Christian moral and ethical standards. Blackstone described “the infamous crime against nature” as an offense of “deeper malignity” than rape, a heinous act “the very mention of which is a disgrace to human nature,” and “a crime not fit to be named.” … To hold that the act of homosexual sodomy is somehow protected as a fundamental right would be to cast aside millennia of moral teaching.”

The majority’s “wilful blindness”: Blackmun’s dissent

The Court’s dissenting opinion was delivered by Justice Blackmun, in which Justice Brennan, Justice Marshall and Justice Stevens joined. At the outset, Justice Blackmun disagreed with the issue that was framed by the majority led by Justice White: “This case is (not) about “a fundamental right to engage in homosexual sodomy,” as the Court purports to declare…” and further pointed out that the Georgia statute proscribed not just homosexual sodomy, but oral or anal sex committed by any two persons: “…the Court’s almost obsessive focus on homosexual activity is particularly hard to justify in light of the broad language Georgia has used.” When considering the issue of privacy for intimate sexual conduct, Justice Blackmun criticised the findings of the majority: “Only the most wilful blindness could obscure the fact that sexual intimacy is a sensitive, key relationship of human existence, central to family life, community welfare, and the development of human personality…” And when dealing with the ‘historical morality’ argument that was advanced by Chief Justice Burger, the minority observed:

“The assertion that “traditional Judeo-Christian values proscribe” the conduct involved cannot provide an adequate justification for (§)16-6-2 (of the Georgia Statute). That certain, but by no means all, religious groups condemn the behavior at issue gives the State no license to impose their judgments on the entire citizenry. The legitimacy of secular legislation depends instead on whether the State can advance some justification for its law beyond its conformity to religious doctrine.”

The states respond, privacy is upheld

Bowers was argued and decided over five years in the 1980s. At the time, the USA was witnessing a neo-conservative wave in its society and government, which was headed by a conservative Republican administration. The HIV/AIDS issue had achieved neither the domestic nor international proportions it now occupies, and the linkages between HIV/AIDS, homosexuality and the right to health were still unclear. In the years after Bowers, several more US States repealed their sodomy laws.

In some US States, sodomy laws that were not legislatively repealed were judicially struck down. In 1998, the Georgia State Supreme Court, in Powell v. State of Georgia S98A0755, 270 Ga. 327, 510 S.E. 2d 18 (1998), heard a challenge to the same sodomy provision of the Georgia statute that had been upheld by the US Supreme Court in Bowers. In a complete departure from the US Supreme Court’s findings, the Georgia Supreme Court first considered whether the Georgia statute violated individual privacy: “It is clear from the right of privacy appellate jurisprudence…that the “right to be let alone” guaranteed by the Georgia Constitution is far more extensive than the right of privacy protected by the U.S. Constitution…”

Having established that an individual right to privacy existed to protect private consensual sodomy, the Georgia Court then considered whether there was a ‘legitimate State interest’ that justified the State’s restriction of this right. The justifications that were offered by the State included the possibility of child sexual abuse, prostitution and moral degradation of society. The Court found that there already were a number of legal provisions to deter and punish rape, child abuse, trafficking, prostitution and public indecency. Hence: “In light of the existence of these statutes, the sodomy statute’s raison d’etre can only be to regulate the private sexual conduct of consenting adults, something which Georgians’ right of privacy puts beyond the bounds of government regulation.” By a 2-1 decision, Chief Justice Benham leading the majority, the Georgia Supreme Court struck down the Georgia statute for arbitrarily violating the privacy of individuals. Interestingly, the subjects of the dispute were not homosexual, but two heterosexual adults – a man and a woman. Similar cases where a US State’s sodomy laws were judicially struck down include:

  • Campbell v. Sundquist 926 S.W.2d 250 (1996) – [Tennessee – by the Tennessee Court of Appeals on privacy violation; appeal to the State Supreme Court expressly denied].
  • Commonwealth v. Bonadio 415 A.2d 47 (1980) – [Pennsylvania – by the Pennsylvania Supreme Court on both equality and privacy violations];
  • Doe v. Ventura MC 01-489, 2001 WL 543734 (2001) – [Minnesota – by the Hennepin County District Judge on privacy violation; no appellate challenge];
  • Gryczan v. Montana 942 P.2d 112 (1997) – [Montana – by the Montana Supreme Court on privacy violation];
  • Jegley v. Picado 80 S.W.3d 332 (2001) – [Arkansas – by the Arkansas Supreme Court, on privacy violation];
  • Kentucky v. Wasson 842 S.W.2d 487 (1992) [Kentucky – by the Kentucky Supreme Court on both equality and privacy violations];
  • Massachusetts v. Balthazar 366 Mass. 298, 318 NE2d 478 (1974) and GLAD v. Attorney General 436 Mass. 132, 763 NE2d 38 (2002) – [Massachusetts – by the Superior Judicial Court on privacy violation];
  • People v. Onofre 51 NY 2d 476 (1980) [New York – by the New York Court of Appeals on privacy violation]; and,
  • Williams v. Glendenning No. 98036031/CL-1059 (1999) – [Maryland – by the Baltimore City Circuit Court on both privacy and equality violations; no appellate challenge].

Lawrence v. Texas

These developments made for an uneven field in the matter of the legality of homosexual sex, with the sodomy laws of most States being repealed by their State legislatures or subject to State judicial invalidation, while the sodomy laws of the remaining States were retained under the shade of constitutional protection afforded by Bowers. Texas was one such State, which maintained an anti-sodomy law contained in Texas Penal Code Annotated § 21.06(a) (2003) (“Texas statute”) criminalising sexual intercourse between two people of the same sex. In 1998, the Texas statute was invoked to arrest two men engaged in private, consensual, non-commercial sodomy. They subsequently challenged the constitutionality of the Texas statute, their case reaching the US Supreme Court. In 2003, the US Supreme Court, in Lawrence v. Texas 539 US 558 (2003), pronounced on the validity of the Texas statute. Interestingly, while the issue under consideration was identical to that decided in Bowers, the Court this time around was presented with detailed arguments on the equality-discrimination aspect of same-sex sodomy laws – which the Bowers Court majority did not consider. The Court split 6-3; the majority struck down the Texas statute. Justice Kennedy, speaking for himself and four other judges of the majority, found instant fault with the Bowers Court for framing the issue before it as simply whether homosexuals had a fundamental right to engage in sodomy.

Privacy, intimacy, home

This mistake, Justice Kennedy claimed, “…discloses the Court’s own failure… To say that the issue in Bowers was simply the right to engage in certain sexual conduct demeans…the individual…just as it would demean a married couple were it to be said marriage is simply about the right to have sexual intercourse. Their penalties and purposes (of the laws involved)…have more far-reaching consequences, touching upon the most private human conduct, sexual behavior, and in the most private of places, the home.” Justice Kennedy, joined by Justice Stevens, Justice Souter, Justice Ginsburg and Justice Breyer, found that the Texas statute violated the right to privacy granted by the Due Process clause of the US Constitution:

“The petitioners are entitled to respect for their private lives. The State cannot demean their existence or control their destiny by making their private sexual conduct a crime. “It is a promise of the Constitution that there is a realm of personal liberty which the government may not enter.”” [The quote is c.f. Planned Parenthood of Southeastern Pa. v. Casey 505 US 833 (1992)]

Imposed morality is defeated

With the privacy argument established as controlling, Justice Kennedy went to some length to refute the ‘historical morality’ argument that was put forward in Bowers by then Chief Justice Burger: “At the outset it should be noted that there is no longstanding history in this country of laws directed at homosexual conduct as a distinct matter… The sweeping references by Chief Justice Burger to the history of Western civilization and to Judeo-Christian moral and ethical standards did not take account of other authorities pointing in an opposite direction.” To illustrate these other authorities, Justice Kennedy referenced the ECHR’s decision in Dudgeon supra, which was reached five years before Bowers: “Authoritative in all countries that are members of the Council of Europe (21 nations then, 45 nations now), the decision (Dudgeon) is at odds with the premise in Bowers that the claim put forward was insubstantial in our Western civilization.”

The Court then affirmed that morality could not be a compelling ground to infringe upon a fundamental right: “Our obligation is to define the liberty of all, not to mandate our own moral code”. The lone remaining judge of the majority, Justice O’Connor, based her decision not on the right to privacy but on equality-discrimination considerations. Interestingly, Justice O’Connor sat on the Bowers Court and ruled with the majority in that case. Basing her decision on equal protection grounds allowed her to concur with the majority in Lawrence without overturning her earlier position in Bowers, which had rejected a right to privacy claim. It also enabled her to strike down the Texas statute while not conceding homosexuality as a constitutionally guaranteed private liberty. There were three dissenters: the chief dissent was delivered by Justice Scalia, in which he was joined by Chief Justice Rehnquist and Justice Thomas. Bowers was not merely distinguished by the majority, it was overruled:

“Bowers was not correct when it was decided, and it is not correct today. It ought not to remain binding precedent. Bowers v. Hardwick should be and now is overruled.”

Mastering the Art of Keeping Indians Under Surveillance

by Bhairav Acharya last modified Aug 23, 2015 12:26 PM
In its first year in office, the National Democratic Alliance government has been notably silent on the large-scale surveillance projects it has inherited. This silence ended last week amid reports that the government is hastening to complete the Central Monitoring System (CMS) within the year.

The article was published in the Wire on May 30, 2015.


In a statement to the Rajya Sabha in 2009, Gurudas Kamat, the erstwhile United Progressive Alliance’s junior communications minister, said the CMS was a project to enable direct state access to all communications on mobile phones, landlines, and the Internet in India. He meant the government was building ‘backdoors’, or capitalising on existing ones, to enable state authorities to intercept any communication at will, besides collecting large amounts of metadata, without having to rely on private communications carriers.

This is not new. Legally sanctioned backdoors have existed in Europe and the USA since the early 1990s to enable direct state interception of private communications. But the laws of those countries also subject state surveillance to a strong regime of state accountability, individual freedoms, and privacy. This regime may not be completely robust, as Edward Snowden’s revelations have shown, but at least it exists on paper. The CMS is not illegal by itself, but it is coloured by the compromised foundation of Indian surveillance law upon which it is built.

Surveillance and social control

The CMS is a technological project. But technology does not exist in isolation; it is contextualised by law, society, politics, and history. Surveillance and the CMS must be seen in the same contexts.

The great sociologist Max Weber claimed the modern state could not exist without monopolising violence. It seems clear the state also entertains the equal desire to monopolise communications technologies. The state has historically shaped the way in which information is transmitted, received, and intercepted. From the telegraph and radio to telephones and the Internet, the state has constantly endeavoured to control communications technologies.

Law is the vehicle of this control. When the first telegraph line was laid down in India, its implications for social control were instantly realised; so the law swiftly responded by creating a state monopoly over the telegraph. The telegraph played a significant role in thwarting the Revolt of 1857, even as Indians attempted to destroy the line; so the state consolidated its control over the technology to obviate future contests.

This controlling impulse was exercised over radio and telephones, which are also government monopolies, and is expressed through the state’s surveillance prerogative. On the other hand, because of its open and decentralised architecture, the Internet presents the single greatest threat to the state’s communications monopoly and dilutes its ability to control society.

Interception in India

The power to intercept communications arises with the regulation of telegraphy. The first two laws governing telegraphs, in 1854 and 1860, granted the government powers to take possession of telegraphs “on the occurrence of any public emergency”. In 1876, the third telegraph law expanded this threshold to include “the interest of public safety”. These are vague phrases and their interpretation was deliberately left to the government’s discretion.

This unclear formulation was replicated in the Indian Telegraph Act of 1885, the fourth law on the subject, which is currently in force today. The 1885 law included a specific power to wiretap. Incredibly, this colonial surveillance provision survived untouched for 87 years even as countries across the world balanced their surveillance powers with democratic safeguards.

The Indian Constitution requires all deprivations of free speech to conform to any of nine grounds listed in Article 19(2). Public emergencies and public safety are not listed. So Indira Gandhi amended the wiretapping provision in 1972 to insert five grounds copied from Article 19(2). However, the original unclear language on public emergencies and public safety remained.

Indira Gandhi’s amendment was ironic because one year earlier she had overseen the enactment of the Defence and Internal Security of India Act, 1971 (DISA), which gave the government fresh powers to wiretap. These powers were not subject to even the minimal protections of the Telegraph Act. When the Emergency was imposed in 1975, Gandhi’s government bypassed her earlier amendment and, through the DISA Rules, instituted the most intensive period of surveillance in Indian history.

Although DISA was repealed, the tradition of having parallel surveillance powers for fictitious emergencies continues to flourish. Wiretapping powers are also found in the Maharashtra Control of Organised Crime Act, 1999 which has been copied by Karnataka, Andhra Pradesh, Arunachal Pradesh, and Gujarat.

Procedural weaknesses

Meanwhile, the Telegraph Act with its 1972 amendment continued to weather criticism through the 1980s. The wiretapping power was largely exercised free of procedural safeguards such as the requirements to exhaust other less intrusive means of investigation, minimise information collection, limit the sharing of information, ensure accountability, and others.

This changed in 1996 when the Supreme Court, on a challenge brought by PUCL, ordered the government to create a minimally fair procedure. The government fell in line in 1999, and a new rule, 419A, was put into the Indian Telegraph Rules, 1951.

Unlike the United States, where a wiretap can only be ordered by a judge when she decides the state has legally made its case for the requested interception, an Indian wiretap is sanctioned by a bureaucrat or police officer. Unlike the United Kingdom, which also grants wiretapping powers to bureaucrats but subjects them to two additional safeguards including an independent auditor and a judicial tribunal, an Indian wiretap is only reviewed by a committee of the original bureaucrat’s colleagues. Unlike most of the world which restricts this power to grave crime or serious security needs, an Indian wiretap can even be obtained by the income tax department.

Rule 419A certainly creates procedure, but it lacks crucial safeguards, and this undermines its credibility. Worse, the contours of rule 419A were copied in 2009 to create flawed procedures to intercept the content of Internet communications and collect metadata. Unlike rule 419A, these new rules, issued under sections 69(2) and 69B(3) of the Information Technology Act 2000, have not been constitutionally scrutinised.

Three steps to tap

Despite its monopoly, the state does not own the infrastructure of telephones. It is dependent on telecommunications carriers to physically perform the wiretap. Indian wiretaps take place in three steps: a bureaucrat authorises the wiretap; a law enforcement officer serves the authorisation on a carrier; and, the carrier performs the tap and returns the information to the law enforcement officer.

There are many moving parts in this process, and so there are leaks. Some leaks are cynically motivated, such as the leak of Amar Singh's lewd conversations in 2011. But others serve a public purpose: Niira Radia's conversations were allegedly leaked by a whistleblower to reveal serious governmental culpability. Ironically, leaks have created accountability where the law has failed.

The CMS will prevent leaks by installing servers on the transmission infrastructure of carriers to divert communications to regional monitoring centres. Regional centres, in turn, will relay communications to a centralised monitoring centre where they will be analysed, mined, and stored. Carriers will no longer perform wiretaps; and, since this obviates their costs of compliance, they are willing participants.

In its annual report of 2012, the Centre for the Development of Telematics (C-DOT), a state-owned R&D centre tasked with designing and creating the CMS, claimed the system would intercept 3G video, ILD, SMS, and ISDN PRI communications made through landlines or mobile phones – both GSM and CDMA.

There are unclear reports of an expansion to intercept Internet data, such as emails and browsing details, as well as instant messaging services; but these remain unconfirmed. There is also a potential overlap with another secretive Internet surveillance programme being developed by the Defence R&D Organisation called NETRA, no details of which are public.

Culmination of surveillance

In its present state, Indian surveillance law is unable to bear the weight of the CMS project, and must be vastly strengthened to protect privacy and accountability before the state is given direct access to communications.

But there is a larger way to understand the CMS in the context of Indian surveillance. Christopher Bayly, the noted colonial historian, writes that when the British set about establishing a surveillance apparatus in colonised India, they came up against an established system of indigenous intelligence gathering. Colonial rule was at its most vulnerable at this point of intersection between foreign surveillance and indigenous knowledge, and the meeting of the two was riven by suspicion. So the colonial state simply co-opted the interface by creating institutions to acquire local knowledge.

The CMS is also an attempt to co-opt the interface between government and the purveyors of communications; because if the state cannot control communications, it cannot control society. Seen in this light, the CMS represents the natural culmination of the progression of Indian surveillance. No challenge against it that does not question the construction of the modern Indian state will be successful.

The Four Parts of Privacy in India

by Bhairav Acharya last modified Aug 23, 2015 01:04 PM
Privacy enjoys an abundance of meanings. It is claimed in diverse situations every day by everyone against other people, society and the state.

Traditionally traced to classical liberalism’s public/private divide, there are now several theoretical conceptions of privacy that collaborate and sometimes contend. Indian privacy law is evolving in response to four types of privacy claims: against the press, against state surveillance, for decisional autonomy, and in relation to personal information. The Indian Supreme Court has selectively borrowed competing foreign privacy norms, primarily American, to create an unconvincing pastiche of privacy law in India. These developments are undermined by a lack of theoretical clarity and the continuing tension between individual freedoms and communitarian values.

This was published in Economic & Political Weekly, 50(22), 30 May 2015. Download the full article here.

The Four Parts of Privacy in India

by Bhairav Acharya last modified Aug 23, 2015 01:02 PM

Acharya - The Four Parts of Privacy in India (EPW Insight).pdf — PDF document, 610 kB

Multi-stakeholder Advisory Group Analysis

by Jyoti Panday last modified Apr 12, 2016 10:02 AM
This analysis examines the trend in the selection and rotation of the members of the Multistakeholder Advisory Group (MAG) of the Internet Governance Forum (IGF). The MAG has been functional for nine years, from 2006 to 2015. The analysis is based on data procured, collated and organised by Pranesh Prakash and Jyoti Panday. Shambhavi Singh, a law student at NLU Delhi who was interning with CIS at the time, also assisted with the organisation and analysis of the data.

The researcher has collected the data from the lists of members available in the public domain from 2010 to 2015. The lists prior to 2010 were procured by the Centre for Internet and Society from the UN Secretariat of the IGF.

This research is based solely on the membership lists; the nature of each member's stakeholding has been analysed in light of the MAG terms of reference. No data has been made available regarding the nomination process or the criteria on which a particular member has been re-elected to the MAG (the IGF Secretariat does not share this data).

According to the analysis, over these nine years the MAG has had around 182 members from various stakeholder groups.

We have divided the members into five stakeholder groups: Government, Civil Society, Industry, Technical Community and Academia. Any overlap between two or more of these groups has also been taken into account - for example, a member of the Internet Society (ISOC) belongs to both Civil Society and the Technical Community.

According to the MAG Terms of Reference[1], it is the prerogative of the UN Secretary-General to select MAG members. The general policy is that MAG members are appointed for a period of one year, renewable for two further consecutive years depending on their engagement in MAG activities.

There is also a policy of rotating off one-third of MAG members every year to maintain diversity and bring in new viewpoints. In exceptional circumstances, a member may continue beyond three years if there is a lack of candidates fitting the desired area.

However, the exception seems to have become the norm: around 49 members have continued beyond three years, with tenures ranging from four up to as long as eight years. No doubt some of them are exceptional talents and difficult to replace. However, the lack of transparency in the nomination system makes it difficult to determine the basis on which these members continued beyond the usual term.

S. No. | Stakeholder | Number of years | Total Members continuing beyond 3 years
------ | ----------- | --------------- | ---------------------------------------
1 | Civil Society | 8, 6, 6, 4, 4 | 5
2 | Government/Industry | 4, 5 | 2
3 | Technical Community/Civil Society | 8, 8, 8, 6, 6, 4, 4, 4, 4, 4 | 10
4 | Industry/Civil Society | 8, 6 | 2
5 | Industry | 8, 7, 7, 6, 6, 4 | 6
6 | Industry/Tech Community/Civil Society | 8 | 1
7 | Government | 7, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4 | 19
8 | Academia | 6, 6, 5 | 3
9 | Industry/Tech Community | 6 | 1

Of the members who continued beyond 3 years, around 39% are from Government and related agencies. The next largest group is Technical Community/Civil Society with around 20% representation, followed by Industry at 12%, Civil Society at 10%, Academia at 6%, Government/Industry at 4%, Industry/Civil Society at 4%, and Industry/Technical Community and Industry/Technical Community/Civil Society at 2% each.
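These percentages follow directly from the per-group counts of the 49 long-serving members; a quick sketch of the computation (counts taken from the table above):

```python
# Per-group counts of MAG members who continued beyond three years,
# taken from the table above (49 members in total).
beyond_three_years = {
    "Government": 19,
    "Technical Community/Civil Society": 10,
    "Industry": 6,
    "Civil Society": 5,
    "Academia": 3,
    "Government/Industry": 2,
    "Industry/Civil Society": 2,
    "Industry/Technical Community": 1,
    "Industry/Technical Community/Civil Society": 1,
}

total = sum(beyond_three_years.values())  # 49
for group, count in beyond_three_years.items():
    # Rounded share of the long-serving members, e.g. Government: 39%
    print(f"{group}: {100 * count / total:.0f}%")
```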


Table with overlapping interests merged

S. No. | Stakeholder | Total Members continuing beyond 3 years
------ | ----------- | ---------------------------------------
1 | Civil Society | 7 + 9 + 1 + 1 = 18
2 | Government | 19
3 | Tech Community | 9 + 1 + 1 + 1 = 12
4 | Industry | 6 + 2 + 1 + 1 + 2 = 13
5 | Academia | 3

When the overlaps are counted separately - that is, a Technical Community/Civil Society member is counted in both the Technical Community and Civil Society groups - the approximate stakeholder representation is as follows:

Government: 29%

Civil Society: 28%

Industry: 20%

Technical Community: 17%

Academia: 5%

This clearly shows that stakeholders from academia generally did not stay on the MAG beyond three years. Even when all members who have ever been on the MAG are taken into consideration, only around 8% of the representation has been from the academic community. This needs to be taken into account when new MAG members are selected in 2016.

The researcher has also looked at MAG representation by gender and by UN Regional Group. The results of the analysis were as follows:

The ratio of male to female members in the MAG is approximately 16:9, i.e., about 64% male and 36% female.


Turning to the UN Regional Groups, the analysis yielded the following results:

The Western European and Others Group (WEOG) has the highest representation in the MAG, with a large number of members from Switzerland, the USA and the UK. It is followed by the Asia-Pacific Group with 20% representation. The third largest is the African Group with 19%, followed by the Latin American and Caribbean Group (GRULAC) and the Eastern European Group with 13% and 12% respectively.


The representation of developed, developing and Least Developed Countries (LDCs) is as follows:

Developed countries have approximately 42% representation, developing countries 53%, and LDCs a mere 5%. There should be an effort to improve LDC representation, as these countries lag furthest behind in global ICT penetration.[2]



[1] Intgovforum.org, 'MAG Terms Of Reference' (2015) <http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference> accessed 13 July 2015.

[2] ICT Facts And Figures (1st edn, International Telecommunication Union 2015) <http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2015.pdf> accessed 11 July 2015.

Supreme Court Order is a Good Start, but is Seeding Necessary?

by Elonnai Hickok and Rohan George — last modified Sep 07, 2015 01:21 PM
This blog post seeks to unpack the ‘seeding’ process in the UIDAI scheme, understand the implications of the Supreme Court order on this process, and identify questions regarding the UID scheme that still need to be clarified by the court in the context of the seeding process.

Introduction

On August 11th 2015, in the writ petition Justice K.S. Puttaswamy (Retd.) & Another vs. Union of India & Others,1 the Supreme Court of India issued an interim order regarding the constitutionality of the UIDAI scheme. In response to the order, Dr. Usha Ramanathan published an article titled 'Decoding the Aadhaar judgment: No more seeding, not till the privacy issue is settled by the court' which, among other points, highlights concerns around the seeding of Aadhaar numbers into service delivery databases. She writes that "'seeding' is a matter of grave concern in the UID project. This is about the introduction of the number into every data base. Once the number is seeded in various databases, it makes convergence of personal information remarkably simple. So, if the number is in the gas agency, the bank, the ticket, the ration card, the voter ID, the medical records and so on, the state, as also others who learn to use what is called the 'ID platform', can 'see' the citizen at will."2

Building off of this statement, this article seeks to unpack the 'seeding' process in the UIDAI scheme, understand the implications of the Supreme Court order on this process, and identify questions regarding the UID scheme that still need to be clarified by the Court in the context of the seeding process.

What is Seeding?

In the UID scheme, data points within the databases of service providers and banks are organized by individual Aadhaar numbers through a process known as 'seeding'. The UIDAI has released two documents on the seeding process: "Approach Document for Aadhaar Seeding in Service Delivery Databases version 1.0" (Version 1.0)3 and "Standard Protocol Covering the Approach & Process for Seeding Aadhaar Number in Service Delivery Databases June 2015 Version 1.1" (Version 1.1).4

According to Version 1.0 "Aadhaar seeding is a process by which UIDs of residents are included in the service delivery database of service providers for enabling Aadhaar based authentication during service delivery."5 Version 1.0 further states that the "Seeding process typically involves data extraction, consolidation, normalization, and matching".6 According to Version 1.1, Aadhaar seeding is "a process by which the Aadhaar numbers of residents are included in the service delivery database of service providers for enabling de-duplication of database and Aadhaar based authentication during service delivery".7 There is an extra clause in Version 1.1's definition of seeding which includes "de-duplication" in addition to authentication.

Though not directly stated, it is envisioned that the Aadhaar number will be seeded into the databases of service providers and banks to enable cash transfers of funds. This is alluded to in the Version 1.1 document, where the UIDAI states: "Irrespective of the Scheme and the geography, as the Aadhaar Number of a given Beneficiary finally has to be linked with the Bank Account, Banks play a strategic and key role in Seeding."8

How does the seeding process work?

The seeding process itself can be done through manual/organic processes or algorithmic/inorganic processes. In the inorganic process, the Aadhaar database is matched against the databases held by the service provider - namely the database of beneficiaries, KYR+ data from enrolment agencies, and the EID-UID database from the UIDAI. When a match is found - for example, between KYR fields in the service delivery database and KYR+ fields in the Aadhaar database - the Aadhaar number is seeded into the service delivery database.9
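The inorganic matching step described above can be sketched roughly as follows. This is an illustration only, not UIDAI code, and all field names (name, address, aadhaar_number) are hypothetical:

```python
# Illustrative sketch of "inorganic" seeding (not UIDAI code): demographic
# fields in service-delivery records are matched against KYR+ records that
# carry an Aadhaar number; on a match, the number is written ("seeded")
# into the service-delivery record. All field names are hypothetical.

def normalize(value):
    """Crude normalization: lowercase and collapse whitespace."""
    return " ".join(value.lower().split())

def seed_inorganically(service_records, kyr_plus_records):
    # Index KYR+ records by normalized (name, address).
    index = {
        (normalize(r["name"]), normalize(r["address"])): r["aadhaar_number"]
        for r in kyr_plus_records
    }
    for record in service_records:
        key = (normalize(record["name"]), normalize(record["address"]))
        if key in index:
            record["aadhaar_number"] = index[key]  # the actual seeding step
    return service_records

beneficiaries = [{"name": "A  Kumar", "address": "12 MG Road"}]
kyr_plus = [{"name": "a kumar", "address": "12 mg road", "aadhaar_number": "XXXX-0001"}]
seed_inorganically(beneficiaries, kyr_plus)
print(beneficiaries[0]["aadhaar_number"])  # XXXX-0001
```

In practice exact matching fails on unstandardized names and addresses, which is why the UIDAI documents flag matching quality as a challenge and recommend the organic (manual) route.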

Organic seeding can be carried out via a number of methods, but the method recommended by the UIDAI is door-to-door collection of Aadhaar numbers from residents, which are subsequently uploaded into the service delivery database either manually or through the use of a tablet or smartphone. Perhaps demonstrating that technology cannot be used as a 'patch' for a broken or premature system, organic (manual) seeding is suggested as the preferred process by the UIDAI due to challenges such as the lack of digitization of beneficiary records, the lack of standardization in name and address records, and incomplete data.10

According to the Version 1.0 Approach Paper, to facilitate the seeding process the UIDAI has developed in-house software known as Ginger. Service providers that adopt the Aadhaar number must move their existing databases onto the Ginger platform, which then organizes the present and incoming data in the database by individual Aadhaar numbers. This 'organization' can be done automatically or manually. Once organized, data can be queried by Aadhaar number by persons on the 'control' end of the Ginger platform.11

In practice this means that during an authentication, in which the UIDAI responds to a service provider with a 'yes' or 'no', the UIDAI would have access to at least two sets of data: 1) transaction data (date, time, device number, and the Aadhaar number of the individual authenticating); and 2) data associated with an individual Aadhaar number within a database that has been seeded with Aadhaar numbers (historical and incoming). According to the Approach Document version 1.0, "The objective here is that the seeding process/utility should be able to access the service delivery data and all related information in at least the read-only mode",12 and the Version 1.1 document states: "Software application users with authorized access should be able to access data online in a seamless fashion while providing service benefit to residents."13

What are the concerns with seeding?

With the increased availability of data analysis and processing technologies, organisations can link disparate data points stored across databases so that the data can be related and analysed holistically, surfacing insights that standalone data would not reveal. In the context of government-held data, such linking can be useful - enabling the government to build a more accurate, holistic picture and to develop data-informed policies through research.14 Yet allowing disparate data points to be merged and linked raises questions about privacy and civil liberties - as well as more fundamental questions about purpose, access, consent and choice. To name a few risks: linked data can be used to create profiles of individuals, it can facilitate surveillance, it can enable new and unintended uses of data, and it can be used for discriminatory purposes.

The fact that the seeding process is meant to facilitate the extraction, consolidation, normalization and matching of data so that it can be queried by Aadhaar number, and that existing databases can be transposed onto the Ginger platform, gives rise to Dr. Ramanathan's concerns. She argues that anyone with access to the 'control' end of the Ginger platform can access all data associated with an Aadhaar number, that convergence can now easily be initiated between databases on the Ginger platform, and that profiling of individuals can take place through the linking of data points via the Ginger platform.

How does the Supreme Court Order impact the seeding process and what still needs to be clarified?

In the interim order the Supreme Court lays out four welcome clarifications and limitations on the UID scheme:

  1. The Union of India shall give wide publicity in the electronic and print media including radio and television networks that it is not mandatory for a citizen to obtain an Aadhaar card;
  2. The production of an Aadhaar card will not be a condition for obtaining any benefits otherwise due to a citizen;
  3. The Unique Identification Number or the Aadhaar card will not be used by the respondents for any purpose other than the PDS Scheme and in particular for the purpose of distribution of foodgrains, etc. and cooking fuel, such as kerosene. The Aadhaar card may also be used for the purpose of the LPG Distribution Scheme;
  4. The information about an individual obtained by the Unique Identification Authority of India while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a Court for the purpose of criminal investigation.15

In some ways, the court order addresses some of the concerns regarding the seeding of Aadhaar numbers by limiting the scope of the seeding process to the PDS scheme, but there are still a number of aspects of the scheme as they pertain to the seeding process that need to be addressed by the court.

These include:

The Process of Seeding

Prior to the Supreme Court interim order, the above concerns were quite broad in scope as Aadhaar could be adopted by any private or public entity - and the number was being seeded in databases of banks, the railways, tax authorities, etc. The interim order, to an extent, lessens these concerns by holding that  "The Unique Identification Number or the Aadhaar card will not be used by the respondents for any purpose other than the PDS Scheme…".

However, the Court could perhaps have been more specific regarding what is included under the PDS scheme, because the scheme itself is broad. That said, the restrictions put in place by the court create a form of purpose limitation and a boundary of proportionality on the UID scheme. By limiting the purpose of the Aadhaar number to use in the PDS system, the Aadhaar number can only be seeded into the databases of entities involved in the PDS Scheme, rather than any entity that has adopted the number. Despite this, the seeding process is an issue in itself for the following reasons:

Access: Embedding service delivery databases and bank databases with the Aadhaar number allows the UIDAI or authorized users to access information in these databases. According to Version 1.1 of the seeding document, the UIDAI is carrying out the seeding process through 'seeding agencies'. These agencies can include private companies, public limited companies, government companies, PSUs, semi-government organizations, and NGOs that have been registered and operating in India for at least three years.16 Though under contract with the UIDAI, it is unclear what information such organizations would be able to access. This ambiguity leaves the data collected by the UIDAI open to potential abuse and unauthorized access. Thus, the Court's ruling fails to provide clarity on the access that the seeding process enables for the UIDAI and for private parties.

Consent: Upon enrolling for an Aadhaar number, individuals have the option of consenting to the UIDAI sharing information in three instances:
  • "I have no objection to the UIDAI sharing information provided by me to the UIDAI with agencies engaged in delivery of welfare services."
  • "I want the UIDAI to facilitate opening of a new Bank/Post Office Account linked to my Aadhaar Number. I have no objection to sharing my information for this purpose."
  • "I have no objection to linking my present bank account provided here to my Aadhaar number."17
Aside from the vague and sweeping language of the actions users consent to - which raises questions about how informed an individual is about the information he or she agrees to share - at no point is an individual given the option of consenting to the UIDAI accessing data, historic or incoming, stored in the database of a service provider in the PDS system seeded with the Aadhaar number. Furthermore, as noted earlier, the UIDAI's concession that a beneficiary has to be linked with a bank account raises questions of consent, as linking one's bank account with one's Aadhaar number is an optional part of the enrollment process. Thus, even with the restrictions from the court order, if individuals want to use their Aadhaar number to access benefits, they must also seed their number with their bank accounts. On this point, an order from the Finance Ministry clarified that the seeding of Aadhaar numbers into databases is a voluntary decision, but that if a beneficiary provides their number on a voluntary basis, it can be seeded into a database.18

Withdrawing Consent: The Court also did not directly address whether individuals can withdraw consent after enrolling in the UID scheme - and, if they do, whether their Aadhaar numbers should be 'unseeded' from PDS-related databases. Similarly, the Court did not clarify whether services that have seeded the Aadhaar number but are not PDS-related now need to unseed the number. Though news items indicate that in some cases (not all) organizations and government departments not involved in the PDS system are stopping the seeding process,19 there is no indication of departments undertaking an 'unseeding' process. Nor is there any indication of the UIDAI allowing enrolled individuals to 'un-enroll' from the scheme. By being silent on these issues around consent, the court order inadvertently overlooks the risk of function creep made possible through the seeding process, which "allows numerous opportunities for expansion of functions far beyond those stated to be its purpose".20

Verification and liability: According to Version 1.0 and Version 1.1 of the seeding documents, "no seeding is better than incorrect seeding". This is because incorrect seeding can lead to inaccuracies in the authentication process and result in individuals entitled to benefits being denied those benefits. To avoid errors in the seeding process, the UIDAI has suggested several steps, including using the "Aadhaar Verification Service", which verifies an Aadhaar number submitted for seeding against the Aadhaar number and demographic data such as gender and location in the CIDR. Though it recognizes the importance of accuracy in the seeding process, the UIDAI takes no responsibility for it. According to Version 1.1 of the seeding document, "the responsibility of correct seeding shall always stay with the department, who is the owner of the database."21 This replicates a disturbing trend in the implementation of the UID scheme, where the UIDAI 'initiates' different processes through private sector companies but does not take responsibility for such processes.22

The Scope of the UIDAI's mandate and the necessity of seeding

Aside from the problems within the seeding process itself, there is a question of the scope of the UIDAI's mandate and the role that seeding plays in fulfilling this. This is important in understanding the necessity of the seeding process.

On its official website, the UIDAI states that its mandate is "to issue every resident a unique identification number linked to the resident's demographic and biometric information, which they can use to identify themselves anywhere in India, and to access a host of benefits and services."23 Though the Supreme Court order clarifies the use of the Aadhaar number, it does not address the actual legality of the UIDAI's mandate - as there is no enabling statute in place - and it does not clarify or confirm the scope of that mandate.

In Version 1.0 of the seeding document the UIDAI states that "Aadhaar numbers of enrolled residents are being 'seeded' ie. included in the databases of service providers that have adopted the Aadhaar platform in order to enable authentication via the Aadhaar number during a transaction or service delivery."24 This statement is only partially correct. For merely providing and authenticating an Aadhaar number, seeding is not necessary: the number submitted for verification only needs to be compared with the records in the CIDR. Yet, in an example justifying the need for seeding, the Version 1.0 document states: "A consolidated view of the entire data would facilitate the social welfare department of the state to improve the service delivery in their programs, while also being able to ensure that the same person is not availing double benefits from two different districts."25 For this purpose, too, seeding is unnecessary, as PDS usage could simply be correlated with an Aadhaar number within the PDS database. Even if limited to the PDS system, seeding the databases of service providers is only necessary for creating and accessing comprehensive information about an individual in order to determine eligibility for a service. Further, seeding the databases of banks is only necessary if the Aadhaar number moves from being an identity factor to a transactional factor - something the UIDAI seems to envision, as the Version 1.1 seeding document states that Aadhaar is sufficient to transfer payments to an individual and thus plays a key role in cash transfers of benefits.26
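The distinction drawn above can be made concrete with a small sketch (simplified, hypothetical data and interfaces, not the actual UIDAI systems): yes/no authentication needs only a comparison against CIDR records, whereas a seeded database additionally lets a single Aadhaar number retrieve everything stored about a person in that database.

```python
# Simplified illustration (hypothetical data and interfaces, not UIDAI's):
# authentication alone requires no seeding, only a check against the CIDR.
cidr = {"XXXX-0001": {"gender": "F", "location": "Bengaluru"}}

def authenticate(aadhaar_number, demographics):
    """Yes/no authentication: compare the submitted number and demographic
    fields with the CIDR record; no service-delivery database is consulted."""
    record = cidr.get(aadhaar_number)
    return record is not None and all(
        record.get(field) == value for field, value in demographics.items()
    )

# By contrast, once a service-delivery database is seeded, one key
# retrieves everything linked to that person in that database.
seeded_pds_db = {"XXXX-0001": {"ration_entitlement": "35 kg", "district": "Bengaluru Urban"}}

print(authenticate("XXXX-0001", {"gender": "F"}))  # True
print(seeded_pds_db["XXXX-0001"])  # the comprehensive view that only seeding enables
```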

Conclusion

Despite the fact that adherence to the interim order from the Supreme Court has been ad hoc[27], the order does provide a number of welcome limitations and clarifications to the UID Scheme. Yet, despite limited clarification from the Supreme Court and further clarification from the Finance Ministry's Order, the process of seeding and its necessity remain unclear. Is the UIDAI taking fully informed consent for the seeding process and what it will enable? Should the UIDAI be liable for the accuracy of the seeding process? Is seeding of service provider and bank databases necessary for the UIDAI to fulfill its mandate? Is the UIDAI's mandate to provide an identifier and an identity-authentication mechanism, or is it to provide authentication of an individual's eligibility to receive services? Is this mandate backed by law and accompanied by adequate safeguards? Can the court order be interpreted to mean that, to deliver services in the PDS system, the UIDAI will need access to bank accounts or other transactions/information stored in a service provider's database to verify the claims of the user?

Many news items reflect a concern about convergence arising out of the UID scheme.[28] To be clear, the process of seeding is not the same as convergence; rather, seeding enables convergence, which in turn can enable profiling, surveillance, and the like. That said, the seeding process needs to be examined more closely by the public and the court to ensure that society can reap the benefits of seeding while avoiding the problems it may pose.


[1]. Justice K.S. Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at: http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[2]. Usha Ramanathan. Decoding the Aadhaar judgment: No more seeding, not till the privacy issue is settled by the court. The Indian Express. August 12th 2015. Available at: http://indianexpress.com/article/blogs/decoding-the-aadhar-judgment-no-more-seeding-not-till-the-privacy-issue-is-settled-by-the-court/

[3]. UIDAI. Approach Document for Aadhaar Seeding in Service Delivery Databases. Version 1.0. Available at: https://authportal.uidai.gov.in/static/aadhaar_seeding_v_10_280312.pdf

[4]. UIDAI. Standard Protocol Covering the Approach & Process for Seeding Aadhaar Numbers in Service Delivery Databases. Available at: https://uidai.gov.in/images/aadhaar_seeding_june_2015_v1.1.pdf

[5]. Version 1.0 pg. 2

[6]. Version 1.0 pg. 19

[7]. Version 1.1 pg. 3

[8]. Version 1.1 pg. 7

[9]. Version 1.1 pg. 5-7

[10]. Version 1.1 pg. 7-13

[11]. Version 1.0 pg. 19-22

[12]. Version 1.0 pg. 4

[13]. Version 1.1 pg. 5, figure 3.

[14]. David Card, Raj Chetty, Martin Feldstein, and Emmanuel Saez. Expanding Access to Administrative Data for Research in the United States. Available at: http://obs.rc.fas.harvard.edu/chetty/NSFdataaccess.pdf

[15]. Justice K.S. Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at: http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[16]. Version 1.1 pg. 18

[17]. Aadhaar Enrollment Form from Karnataka State. http://www.karnataka.gov.in/aadhaar/Downloads/Application%20form%20-%20English.pdf

[18]. Business Line. Aadhaar only for foodgrains, LPG, kerosene distribution. August 27th 2015. Available at: http://www.thehindubusinessline.com/economy/aadhaar-only-for-foodgrains-lpg-kerosene-distribution/article7587382.ece

[19]. Bharti Jain. Election Commission not to link poll rolls to Aadhaar. The Times of India. August 15th 2015. Available at: http://timesofindia.indiatimes.com/india/Election-Commission-not-to-link-poll-rolls-to-Aadhaar/articleshow/48488648.cms

[20]. Graham Greenleaf. 'Access all areas': Function creep guaranteed in Australia's ID Card Bill (No.1). Computer Law & Security Review. Volume 23, Issue 4. 2007. Available at: http://www.sciencedirect.com/science/article/pii/S0267364907000544

[21]. Version 1.1 pg. 3

[22]. For example, the UIDAI depends on private companies to act as enrollment agencies and collect, verify, and enroll individuals in the UID scheme. Though the UID enters into MOUs with these organizations, the UID cannot be held responsible for the security or accuracy of data collected, stored, etc. by these entities. See draft MOU for registrars: https://uidai.gov.in/images/training/MoU_with_the_State_Governments_version.pdf

[23]. Justice K.S. Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at: http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[24]. Version 1.0 pg. 3

[25]. Version 1.0 pg. 4

[26]. Version 1.1 pg. 3

[27]. For example, there are reports of Aadhaar being introduced for different services such as education. See: Tanu Kulkarni. Aadhaar may soon replace roll numbers. The Hindu. August 21st 2015. Available at: http://www.thehindu.com/news/cities/bangalore/aadhaar-may-soon-replace-roll-numbers/article7563708.ece

[28]. For example, see: Salil Tripathi. A dangerous convergence. Livemint. July 31st 2015. Available at: http://www.livemint.com/Opinion/xrqO4wBzpPbeA4nPruPNXP/A-dangerous-convergence.html

Are we Throwing our Data Protection Regimes under the Bus?

by Rohan George — last modified Sep 10, 2015 02:02 PM
In this blog post, Rohan examines why the principle of consent offers us increasingly little protection for our data.

Consent is complicated. What we think of as reasonably obtained consent varies substantially with the circumstances. For example, in trying rape cases, the UK justice system has moved to recognise complications like alcohol and its effect on explicit consent[1]. Yet in contracts, consent may be implied simply when one person accepts another's work on a contract without objection[2]. These situations highlight the differences between the various forms of informed consent and their implications for its validity.

Consent has emerged as a key principle in regulating the use of personal data, and different countries have adopted different regimes, ranging from comprehensive regimes like that of the EU to more sectoral approaches like that of the USA. However, in a modern epoch characterised by commonplace big data analytics, many commentators have challenged the efficacy and relevance of consent in data protection. I argue that we may even risk throwing our data protection regimes under the proverbial bus should we continue to focus on consent as a key pillar of data protection.

Consent as a tool in Data Protection Regimes

Even a cursory review of current data protection laws around the world shows the extent of the law's reliance on consent. In the EU, for example, Article 7 of the Data Protection Directive, passed in 1995, provides that data processing is only legitimate when "the data subject has unambiguously given his consent"[3]. Article 8, which guards against processing of sensitive data, provides that such prohibitions may be lifted when "the data subject has given his explicit consent to the processing of those data"[4]. Even as the EU attempts to strengthen data protection within the bloc through proposed reforms[5], the focus on the consent of the data subject remains strong: there are proposals for an "unambiguous consent by the data subject"[6] requirement, and such consent would be mandatory before any data processing can occur[7].

Although the USA has adopted a very different overall approach to data protection and privacy, consent is an equally integral part of its data protection frameworks. In his book Protectors of Privacy[8], Abraham Newman describes two main types of privacy legislation: comprehensive and limited. He argues that places like the EU have adopted comprehensive regimes, which primarily seek to protect individuals because of the "informational and power asymmetry" between individuals and organisations[9]. On the other hand, he classifies the American approach as limited, focusing on sectoral protections and principles of fair information practice instead of overarching legislation[10]. These sectoral laws include the Fair Credit Reporting Act[11] (which governs consumer credit reporting), the Privacy Act[12] (which governs data collected by the Federal government) and the Electronic Communications Privacy Act[13] (which deals with email communications), among others. However, the Federal Trade Commission describes itself as having only "limited authority over the collection and dissemination of personal data collected online"[14].

This is because the general data processing that is commonplace in today's era of big data is regulated only by the privacy protections that come from the Federal Trade Commission's (FTC) Fair Information Practice Principles (FIPPs). Unsurprisingly, consent is equally important under the FTC's FIPPs: the FTC describes consent as "the second widely-accepted core principle of fair information practice"[15], after the principle of notice. Other guidelines on fair data processing, published by organisations like the Organisation for Economic Cooperation and Development[16] (OECD) and the Canadian Standards Association[17] (CSA), also include consent as a key mechanism in data protection.

The origins of consent in privacy and data protection

Given the clearly extensive reliance on consent in data protection, it seems prudent to examine the origins of consent in privacy and data protection. Just why does consent have so much weight in data protection?

One reason is that data protection, along with inextricably linked concerns about privacy, could be said to be rooted in protecting private property. It has been argued that the "early parameters of what was to become the right to privacy were set in cases dealing with unconventional property claims"[18], such as the unconsented publication of personal letters[19] or photographs[20]. It was the publication of Brandeis and Warren's well-known article "The Right to Privacy"[21] that developed "the current philosophical dichotomy between privacy and property rights"[22], as they asserted that privacy protections ought to be recognised as a right in and of themselves and needed separate protection[23]. Indeed, it was Warren and Brandeis who famously borrowed Justice Cooley's expression that privacy is the "right to be let alone"[24].

On the other side of the debate are scholars like Epstein and Posner, who see privacy protections as part of protecting personal property under tort law[25]. However, the central point is that most scholars seem to acknowledge the relationship between privacy and private property. Even Brandeis and Warren themselves argued that one general aim of privacy is “to protect the privacy of private life, and to whatever degree and in whatever connection a man's life has ceased to be private”[26].

It is also important to locate the idea of consent within the domain of privacy and private property protections. Ostensibly, consent has the effect of lessening the privacy protections afforded to a person in a particular situation, because by acquiescing to the situation, one could be seen as waiving one's privacy concerns. Brandeis and Warren concur with this position, acknowledging that "the right to privacy ceases upon the publication of the facts by the individual, or with his consent"[27]. They assert that this is "but another application of the rule which has become familiar in the law of literary and artistic property"[28].

Perhaps the most eloquent articulation of the importance of consent in privacy comes from Sir Edward Coke's idea that "every man's house is his castle"[29]. Though the 'Castle Doctrine' has been used as a justification for protecting one's property with the use of force[30], I think that implied in it is the notion that consent is necessary in order to preserve privacy. If not, why would anyone be justified in preventing trespass, other than to prevent unconsented entry to or use of their property? The doctrine of "Volenti non fit injuria"[31], or 'to one who consents no injury is done', is thus the very embodiment of the role of consent in protecting private property. And as conceptions of private property develop to recognise that the data one gives out is part of one's private property - for example in United States v. Jones, which led scholars to assert that "people should be able to maintain reasonable expectations of privacy in some information voluntarily disclosed to third parties"[32] - so does consent act as an important aspect of privacy protection.

Yet, linking privacy with private property is not the universally accepted conception of privacy. For instance, Alan Westin, in his book Privacy and Freedom[33], describes privacy as "the right to control information about oneself"[34]. Another scholar, Ruth Gavison, contends instead that "our interest in privacy is related to our concern over our accessibility to others: the extent to which we are known to others, the extent to which others have physical access to us, and the extent to which we are the subject of others' attention"[35].

While these alternative notions of privacy's foundations differ from those linking privacy with private property, consent can be located within these formulations too. Regarding Westin's argument, I think that implicit in the right to control one's information are ideas about individual autonomy, which is exercised through giving or withholding one's consent. Similarly, Gavison herself states that privacy functions to advance "liberty, autonomy and selfhood"[36]. Consent plays a key role in upholding the liberty, autonomy and selfhood that privacy affords us. Clearly, therefore, it is far from unfounded to claim that consent is an integral part of protecting privacy.

Consent, Big Data and Data protection

Given the solid underpinnings of the principle of consent in privacy protection, it was hardly a coincidence that consent became an integral part of data protection. However, with the rise of big data practices, one quickly finds that consent ceases to work effectively as a tool for protecting privacy. In a big data context, Solove argues that privacy regulation rooted in consent is ineffective, because garnering consent amidst the ubiquitous data collection of all the online services one uses as part of daily life is unmanageable[37]. Additionally, the secondary uses of one's data are difficult to assess at the point of collection, so meaningful consent for secondary use is difficult to obtain[38]. This section examines these two primary consequences of prioritising consent amidst big data practices.

Consent places unrealistic and unfair expectations on the Individual

As noted by Tene and Polonetsky, the first concern is that current privacy frameworks which emphasise informed consent "impose significant, sometimes unrealistic, obligations on both organizations and individuals"[39]. The premise behind this argument stems from the way consent is often garnered by organisations, especially regarding use of their services. An examination of the terms of use policies of banks, online video streaming websites, social networking sites, and online fashion or more general online shopping websites reveals a deluge of information that the user has to comprehend. Moreover, there are too many "entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity"[40].

As Cate and Mayer-Schönberger note in the Microsoft Global Privacy Summit Summary Report, “almost everywhere that individuals venture, especially online, they are presented with long and complex privacy notices routinely written by lawyers for lawyers, and then requested to either “consent” or abandon the use of the desired service”[41]. In some cases, organisations try to simplify these policies for the users of their service, but such initiatives make up the minority of terms of use policies. Tene and Polonetsky assert that “it is common knowledge among practitioners in the field that privacy policies serve more as liability disclaimers for businesses than as assurances of privacy for consumers”[42].

However, it is equally important to consider the principle of consent from the perspective of companies. At a time when many businesses have to comply with numerous regulations and processes in the name of 'compliance'[43], the obligations for obtaining consent can burden some businesses. Firms have to gather consent while preserving the user or customer experience, which is a tricky balance to find: requiring consent at every stage may make the user experience much worse. Imagine having to give consent for your profile to be uploaded every time you achieve a high score in a video game. At the same time, "organizations are expected to explain their data processing activities on increasingly small screens and obtain consent from often-uninterested individuals"[44]. Given these factors, it is somewhat understandable that companies garner consent up front for all possible (secondary) uses, as it is otherwise not feasible to keep collecting data.

Nonetheless, this results in situations where "data processors can perhaps too easily point to the formality of notice and consent and thereby abrogate much of their responsibility"[45]. The totality of the situation shows the odds stacked against the individual. It could even be argued that this is one manifestation of the informational and power asymmetry that exists between individuals and organisations[46], because users may unwittingly agree to unfair, unclear or even unknown terms, conditions and data practices. Not only are individuals greatly misinformed about the data collected about them, but the vast majority of people do not even read these terms and conditions or end-user license agreements[47]. Solove also argues that "people often lack enough expertise to adequately assess the consequences of agreeing to certain present uses or disclosures of their data"[48].

While the organisational practice of providing extensive and complicated terms of use policies is not illegal, the fact that, by one estimation, it would take 76 working days to review the privacy policies one agrees to online[49], or, by another, that the opportunity cost American society incurs in reading privacy policies is $781 billion[50], should not go unnoticed. I do think it is unfair for the law to put users into such situations, where they are "forced to make overly complex decisions based on limited information"[51]. There have been laudable attempts by some government organisations, like Canada's Office of the Privacy Commissioner and the USA's Federal Trade Commission, to provide guidance to firms to make their privacy policies more accessible[52]. However, these are hard to enforce. It can therefore be assumed that when users have neither the expertise nor the rigour to review privacy policies effectively, the consent they provide will naturally be far from informed.

Secondary use, Aggregation and Superficial Consent

What amplifies this informational asymmetry is the potential for the aggregation of individuals' data and the subsequent secondary use of the data collected. "Even if people made rational decisions about sharing individual pieces of data in isolation, they greatly struggle to factor in how their data might be aggregated in the future"[53].

This has to do with the prevalence of big data analytics in our modern epoch, and it has major implications for the nature and meaningfulness of the consent users provide. By definition, "big data analysis seeks surprising correlations"[54], and some of its most insightful results are counterintuitive and nearly impossible to conceive of at the point of primary data collection. One noteworthy example comes from the USA, with the predictive analytics of Walmart. By studying the purchasing patterns of its loyalty card holders[55], the company ascertained that prior to a hurricane the most popular items that people tend to buy are actually Pop-Tarts (a pre-baked toaster pastry) and beer[56]. These correlations are highly counterintuitive and far from what people expect to be necessities before a hurricane, and the insights led to Walmart stores being stocked with the most relevant products at the time of need. This is one example of how data might be repurposed and aggregated for a novel purpose, but the question about the nature of the consent obtained by Walmart for the collection and analysis of the shopping habits of its loyalty card holders nonetheless stands.

One reason secondary uses make consent less meaningful has been articulated by De Zwart et al., who observe that "the idea of consent becomes unworkable in an environment where it is not known, even by the people collecting and selling data, what will happen to the data"[57]. Taken together with Solove's aggregation effect, two points become apparent:

  1. Data we consent to have collected about us may be aggregated with other data we have revealed in the past. While separately these data may be innocuous, there is a risk of future aggregation creating new information which one may find overly intrusive and would not consent to. Current data protection regimes make it hard for one to provide meaningful consent here, because there is no way for the user to know how their past and present data may be aggregated in the future.
  2. Data we consent to have collected for one specific purpose may be used in a myriad of other ways. The user has virtually no way of knowing how their data might be repurposed, because oftentimes neither do the collectors of that data[58].
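The aggregation effect in point 1 can be made concrete with a toy sketch (all records, identifiers and field names below are fictitious and invented purely for illustration): two datasets, each innocuous in isolation, are joined on a shared identifier to produce new information the user never knowingly consented to reveal.

```python
# Toy illustration of the aggregation effect: each dataset alone looks
# harmless, but joining them on a shared identifier yields new,
# potentially sensitive information. All records are fictitious.

purchases = [  # collected by a retailer, with consent, for billing
    {"user_id": "u1", "item": "prenatal vitamins"},
    {"user_id": "u2", "item": "notebook"},
]
locations = [  # collected by a fitness app, with consent, for run-tracking
    {"user_id": "u1", "frequent_place": "maternity clinic"},
    {"user_id": "u2", "frequent_place": "library"},
]

def aggregate(user_id: str) -> dict:
    """Join the two datasets on user_id -- a combined use that neither
    collector sought consent for at collection time."""
    return {
        "user_id": user_id,
        "items": [p["item"] for p in purchases if p["user_id"] == user_id],
        "places": [l["frequent_place"] for l in locations if l["user_id"] == user_id],
    }
```

Neither consent dialogue at collection time could have described this joined profile, which is precisely why point-of-collection consent struggles here.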

Therefore, regulators' reliance on principles of purpose limitation and on the mechanism of consent for robust data protection seems suboptimal at the very least, as big data practices of aggregation, repurposing and secondary use become commonplace.

Other problems with the mechanism of consent in the context of Big Data

On one end of the spectrum are situations where organisations garner consent for future secondary uses at the time of data collection. As discussed earlier, this is currently the common practice for organisations and the likelihood of users providing informed consent is low.

However, it is equally valid to consider situations on the other end of the spectrum, where obtaining user consent for secondary use becomes too expensive and cumbersome[59]. As a result, potentially socially valuable secondary uses of data for research and innovation, or simply for "the practice of informed and reflective citizenship"[60], may not take place. While potential social research may be hindered by the consent requirement, the more pressing reality is that one cannot give meaningful consent to unknown secondary uses of data. Essentially, not knowing what you are consenting to scarcely provides the individual with any semblance of strong privacy protection, and so the consent that individuals provide is superficial at best.

Many scholars also point to the binary nature of consent as it stands today[61]. Solove describes consent in data protection as nuanced[62], while Cate and Mayer-Schönberger go further, asserting that "binary choice is not what the privacy architects envisioned four decades ago when they imagined empowered individuals making informed decisions about the processing of their personal data". This dichotomous nature of consent further reduces its usefulness in data protection regimes.

Whether data collection is opted into or opted out of also has a bearing on the nature of the consent obtained. Many argue that regulations with options to opt out are not effective, as "opt-out consent might be the product of mere inertia or lack of awareness of the option to opt out"[63]. This is in line with initiatives around the world to make the gathering of consent more explicit by offering options to opt in instead of opt out. The impetus to embrace opt-in regimes was articulated by former FTC chairman Jon Leibowitz as early as 2007[64], and opt-in consent is being actively considered by the EU in the reform of its data protection laws[65].

However, as Solove rightly points out, opt-in consent is problematic as well[66]. There are a few reasons for this. First, many data collectors have the "sophistication and motivation to find ways to generate high opt-in rates"[67] by "conditioning products, services, or access on opting in"[68]. In essence, they leave individuals no choice but to opt into data collection, because using their particular product or service is dependent, or 'conditional', on explicit consent. A pertinent example of this is the end-user license agreement of Apple's iTunes Store[69]. Solove rightly notes that "if people want to download apps from the store, they have no choice but to agree. This requirement is akin to an opt-in system — affirmative consent is being sought. But hardly any bargaining or choosing occurs in this process"[70]. Second, as stated earlier, obtaining consent runs the risk of impeding potential innovation or research because it is too cumbersome or expensive to obtain[71].

Third, as Tene and Polonetsky argue, "collective action problems threaten to generate a suboptimal equilibrium where individuals fail to opt into societally beneficial data processing in the hope of free-riding on others’ good will"[72]. A useful example comes from another context where obtaining consent is the difference between life and death: organ donation. The gulf in consenting donors between countries with an opt-in regime for organ donation and countries with an opt-out regime is staggering. Even countries that are culturally similar, such as Austria and Germany, exhibit vast differences in donation rates – Austria at 99% compared to just 12% in Germany[73]. This suggests that in terms of obtaining consent (especially for socially valuable actions), opt-in methods may be limiting, because people may have an aversion to anything being presumed about their choices, even if the costs of opting out are low[74].

The above section demonstrates how consent may be somewhat limited as a tool for data protection regimes, especially in a big data context. That said, consent is not in itself a useless or outdated concept; the problems raised above concern relying on consent extensively in a big data context, and consent should still remain a part of data protection regimes. However, there are both better ways to obtain consent (for organisations that collect data) and other areas aside from the point of data collection on which to focus regulatory attention.

What can organisations do better to obtain more meaningful consent

Organisations that collect data could alter the way they obtain user consent. Most people can attest to having checked a box lying surreptitiously next to the words 'I agree', thereby agreeing to the terms and conditions or end-user license agreement for a particular service or product. This is in line with the need for both parties to assent to the terms of a contract in order for the contract to be valid[75]. Some of the more common types of online agreements that users enter into are Clickwrap and Browsewrap agreements. A Clickwrap agreement is "formed entirely in an online environment such as the Internet, which sets forth the rights and obligations between parties"[76]; such agreements "require a user to click "I agree" or "I accept" before the software can be downloaded or installed"[77]. On the other hand, Browsewrap agreements "try to characterize your simple use of their website as your 'agreement' to a set of terms and conditions buried somewhere on the site"[78].

Because Browsewrap agreements do not "require a user to engage in any affirmative conduct"[79], the kind of consent that these agreements obtain is highly superficial. In fact, many argue that such agreements are slightly unscrupulous, because users are seldom aware that they exist[80], hidden as they often are in small print[81] or below the download button[82], for example. And the courts have begun to consider unfair those terms and practices which "hold website users accountable for terms and conditions of which a reasonable Internet user would not be aware just by using the site"[83]. For example, in In re Zappos.com, Inc., Customer Data Security Breach Litigation, the court said of the Terms of Use (contained in a Browsewrap agreement):

“The Terms of Use is inconspicuous, buried in the middle to bottom of every Zappos.com webpage among many other links, and the website never directs a user to the Terms of Use. No reasonable user would have reason to click on the Terms of Use”[84].

Clearly, courts recognise the potential for consent or assent to be obtained in a manner that is hardly transparent or deliberate. Organisations that collect data should be aware of this and consider other options for obtaining consent.

A few commentators have suggested that organisations switch to using Clickwrap or clickthrough agreements to obtain consent. Undergirding this suggestion is the fact that courts have, on numerous occasions, upheld the validity of Clickwrap agreements, in cases including Groff v. America Online, Inc.[85] and Hotmail Corporation v. Van Money Pie, Inc.[86] These cases built upon the precedent-setting case of ProCD v. Zeidenberg, in which the court ruled that "Shrinkwrap licenses are enforceable unless their terms are objectionable on grounds applicable to contracts in general"[87]. Shrinkwrap licenses are end-user license agreements printed on the shrinkwrap of a software product, which a user will certainly notice and have the opportunity to read before opening and using the product; the rules that govern them have since been applied to clickthrough agreements. As Bayley rightly notes, the validity of clickthrough agreements depends on "reasonable notice and opportunity to review—whether the placement of the terms and click-button afforded the user a reasonable opportunity to find and read the terms without much effort"[88].

From the perspective of companies and other organisations which attempt to garner consent from users to collect and process their data, utilising Clickwrap agreements might be one useful solution for obtaining more meaningful and informed consent. In fact, Bayley contends that clear Clickwrap agreements are "the 'best practice' mechanism for creating a contractual relationship between an online service and a user"[89]. He suggests the following mechanism for acquiring clear and informed consent via contractual agreement[90]:

  1. Conspicuously present the TOS to the user prior to any payment (or other commitment by the user) or installation of software (or other changes to a user’s machine or browser, like cookies, plug-ins, etc.)
  2. Allow the user to easily read and navigate all of the terms (i.e. be in a normal, readable typeface with no scroll box)
  3. Provide an opportunity to print, and/or save a copy of, the terms
  4. Offer the user the option to decline as prominently and by the same method as the option to agree
  5. Ensure the TOS is easy to locate online after the user agrees.
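The five points above lend themselves to a simple audit. As a rough, purely illustrative sketch of how a developer might check a sign-up flow against them, consider the following checklist validator (the `ConsentFlow` fields and their names are assumptions invented for this sketch, not part of Bayley's text):

```python
from dataclasses import dataclass

@dataclass
class ConsentFlow:
    # Hypothetical description of a sign-up flow; field names are invented.
    tos_shown_before_payment: bool       # principle 1
    tos_in_scroll_box: bool              # principle 2 (scroll boxes hinder reading)
    tos_printable: bool                  # principle 3
    decline_as_prominent_as_agree: bool  # principle 4
    tos_linked_after_agreement: bool     # principle 5

def audit(flow: ConsentFlow) -> list[str]:
    """Return the list of principles the flow appears to violate."""
    violations = []
    if not flow.tos_shown_before_payment:
        violations.append("1: TOS not presented before payment/installation")
    if flow.tos_in_scroll_box:
        violations.append("2: terms not easily readable (scroll box)")
    if not flow.tos_printable:
        violations.append("3: no option to print or save the terms")
    if not flow.decline_as_prominent_as_agree:
        violations.append("4: decline option less prominent than agree")
    if not flow.tos_linked_after_agreement:
        violations.append("5: TOS hard to locate after agreement")
    return violations
```

Encoding the checklist this way turns a legal best practice into something a product team can test mechanically during a design review.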

These principles make sense for organisations, as they require relatively minor procedural changes rather than transformational efforts to alter entirely the way they validate their data processing practices.

Herzfield adds two further suggestions to this list. First, organisations should not allow any use of their product or service until there is an "express and active manifestation of assent"[91]. Second, they should institute processes whereby users reiterate their consent and assent to the terms of use[92]. He goes further, proposing a baseline that organisations should follow: "companies should always provide at least inquiry notice of all terms, and require counterparties to manifest assent, through action or inaction, in a manner that reasonable people would clearly understand to be assent"[93].

While obtaining informed and meaningful consent is neither foolproof nor a process with widely accepted, clear steps, what is clear is that organisations’ current efforts may be insufficient. As Cate and Mayer-Schönberger note, “data processors can perhaps too easily point to the formality of notice and consent and thereby abrogate much of their responsibility”[94]. One thing organisations can do, both to ensure more meaningful and informed consent (from the perspective of users) and to guard against potential legal action over unscrupulous or unfair terms, is to change the way they obtain consent from opt-out to opt-in.

Conclusion: how should regulation change?

In conclusion, the current emphasis on and extensive use of consent in data protection seems limited in its ability to protect against illegitimate processing of data in a big data context. More people are using online services extensively, and organisations are realising the value of collecting and analysing user data to carry out data-driven analytics for insights that can improve their products. Clearly, data protection has never been more crucial.

However, emphasising consent not only seems less relevant, because the consent organisations obtain is seldom informed, but may even jeopardise the very aims of data protection. Commentators are quick to point out how nimble firms are at acquiring consent in newer ways that may comply with the law while preserving their advantageous position of asymmetric power. Kuner, Cate, Millard and Svantesson, all eminent scholars in the field, asked the prescient question: “Is there a proper role for individual consent?”[95] They believe consent still has a role, but that finding this role in the big data context is challenging[96]. There is, however, surprising consensus on the approach that should be taken as data protection regimes shift away from consent.

In fact, the alternative is staring us in the face: data protection regimes must look elsewhere, to other points along the data analysis process, for aspects to regulate in order to ensure legitimate and fair processing of data. One compelling idea with broad-based support at the aforementioned Microsoft Privacy Summit was that “new approaches must shift responsibility away from data subjects toward data users and toward a focus on accountability for responsible data stewardship”[97], i.e., creating regulations to govern data processing instead of data collection. De Zwart et al. suggest that regulation must instead “focus on the processes involved in establishing algorithms and the use of the resulting conclusions”[98].

This might involve regulations requiring data collectors to publish the queries they run on the data, a solution that balances protecting the ‘trade secret’ of a firm that has creatively designed an algorithm with ensuring fairness and legitimacy in data processing. One manifestation of this approach is procedural data due process, which “would regulate the fairness of Big Data’s analytical processes with regard to how they use personal data (or metadata derived from or associated with personal data) in any adjudicative process, including processes whereby Big Data is being used to determine attributes or categories for an individual”[99]. While the usefulness of data due process is debated, it is just one of a consortium of ideas on alternatives to consent in data protection. The main point is that “greater transparency should be required if there are fewer opportunities for consent or if personal data can be lawfully collected without consent”[100].

It is also worth considering what exactly constitutes a single use of a group’s or individual’s data, and what types of uses or processes require a “greater form of authorization”[101]. Certain data processes could require special affirmative consent that is not required for other, less intimate matters. Canada’s Office of the Privacy Commissioner released a privacy toolkit for organisations that provides some exceptions to the consent principle, one of which applies where data collection “is clearly in the individual’s interests and consent is not available in a timely way”[102]. Some therefore suggest that “if notice and consent are reserved for more appropriate uses, individuals might pay more attention when this mechanism is used”[103].

Another option for regulators is to consider developing and implementing a sticky privacy policies regime. This refers to “machine-readable policies [that] can stick to data to define allowed usage and obligations as it travels across multiple parties, enabling users to improve control over their personal information”[104]. Sticky privacy policies seem to alleviate the risk of repurposed, unanticipated uses of data, because users who consent to giving out their data also consent to how it is used thereafter. The counter-argument is that sticky policies place even greater obligations on users to decide how they would like their data used, not just at one point but for the long term. To expect organisations to state their purposes for future uses of individuals’ data, or to expect individuals to give informed consent to such uses, seems far-fetched from both perspectives.
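To make the idea concrete, the following is a minimal sketch of how a sticky policy might travel with a data record. All names here (StickyPolicy, PolicyBoundData, process) are hypothetical illustrations, not part of the cited proposal; the point is simply that any downstream party must check the attached policy before using the data.

```python
from dataclasses import dataclass, field

@dataclass
class StickyPolicy:
    """A machine-readable policy that travels with the data it governs."""
    allowed_purposes: set          # uses the data subject consented to
    obligations: list = field(default_factory=list)  # e.g. retention limits

@dataclass
class PolicyBoundData:
    """A data value bundled with its sticky policy."""
    value: str
    policy: StickyPolicy

def process(record: PolicyBoundData, purpose: str) -> str:
    # Every party that receives the record must honour the attached policy.
    if purpose not in record.policy.allowed_purposes:
        raise PermissionError(f"use for '{purpose}' was not consented to")
    return record.value

email = PolicyBoundData(
    value="alice@example.com",
    policy=StickyPolicy(allowed_purposes={"billing"},
                        obligations=["delete after 30 days"]),
)

print(process(email, "billing"))   # permitted purpose: succeeds
# process(email, "marketing")      # unconsented purpose: raises PermissionError
```

In a real regime the policy would be cryptographically bound to the data and enforced by trusted infrastructure rather than by each recipient’s goodwill; the sketch only illustrates the basic data structure.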

Still another solution draws from the noted scholar Helen Nissenbaum’s work on privacy. She argues that “the benchmark of privacy is contextual integrity”[105]: “Contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it”[106]. On this line of thinking, legislators should focus their attention on what constitutes appropriateness in particular contexts, although this could prove challenging as contexts merge and understandings of appropriateness shift with circumstances.

While there is little consensus on the numerous ways to focus regulatory attention on data processing and the uses of collected data, there is broader support for a shift away from consent, as exemplified by the Microsoft Privacy Summit:

“There was broad general agreement that privacy frameworks that rely heavily on individual notice and consent are neither sustainable in the face of dramatic increases in the volume and velocity of information flows nor desirable because of the burden they place on individuals to understand the issues, make choices, and then engage in oversight and enforcement.”[107]

I think Cate and Mayer-Schönberger offer the most fitting conclusion to this article, and a summary of the debate I have presented: “in short, ensuring individual control over personal data is not only an increasingly unattainable objective of data protection, but in many settings it is an undesirable one as well.”[108] We might very well be throwing the entire data protection regime under the bus.


[1] Gordon Rayner and Bill Gardner, “Men Must Prove a Woman Said ‘Yes’ under Tough New Rape Rules - Telegraph,” The Telegraph, January 28, 2015, sec. Law and Order, http://www.telegraph.co.uk/news/uknews/law-and-order/11375667/Men-must-prove-a-woman-said-Yes-under-tough-new-rape-rules.html.

[2] Legal Information Institute, “Implied Consent,” accessed August 25, 2015, https://www.law.cornell.edu/wex/implied_consent.

[3] European Parliament, Council of the European Union, Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, 1995, http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:31995L0046.

[4] See supra note 3.

[5] European Commission, “Stronger Data Protection Rules for Europe,” European Commission Press Release Database, June 15, 2015, http://europa.eu/rapid/press-release_MEMO-15-5170_en.htm.

[6] Council of the European Union, “Data Protection: Council Agrees on a General Approach,” June 15, 2015, http://www.consilium.europa.eu/en/press/press-releases/2015/06/15-jha-data-protection/.

[7] See supra note 6.

[8] Abraham L. Newman, Protectors of Privacy: Regulating Personal Data in the Global Economy (Ithaca, NY: Cornell University Press, 2008).

[9] See supra note 8, at 24.

[10] Ibid.

[11] 15 U.S.C. §1681.

[12] 5 U.S.C. § 552a.

[13] 18 U.S.C. § 2510-22.

[14] Federal Trade Commission, “Privacy Online: A Report to Congress,” June 1998, https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv-23a.pdf: 40.

[15] See supra note 14, at 8.

[16] Organisation for Economic Cooperation and Development, “2013 OECD Privacy Guidelines,” 2013, http://www.oecd.org/internet/ieconomy/privacy-guidelines.htm.

[17] Canadian Standards Association, “Canadian Standards Association Model Code,” March 1996, https://www.cippguide.org/2010/06/29/csa-model-code/.

[18] Mary Chlopecki, “The Property Rights Origins of Privacy Rights | Foundation for Economic Education,” August 1, 1992, http://fee.org/freeman/the-property-rights-origins-of-privacy-rights.

[19] See Pope v. Curl (1741), available here.

[20] See Prince Albert v. Strange (1849), available here.

[21] Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review 4, no. 5 (December 15, 1890): 193–220, doi:10.2307/1321160.

[22] See supra note 18.

[23] Ibid.

[24] See supra note 21.

[25] See for example, Richard Epstein, “Privacy, Property Rights, and Misrepresentations,” Georgia Law Review, January 1, 1978, 455. And Richard Posner, “The Right of Privacy,” Sibley Lecture Series, April 1, 1978, http://digitalcommons.law.uga.edu/lectures_pre_arch_lectures_sibley/22.

[26] See supra note 21, at 215.

[27] See supra note 21, at 218.

[28] Ibid.

[29] Adrienne W. Fawcett, “Q: Who Said: ‘A Man’s Home Is His Castle’?,” Chicago Tribune, September 14, 1997, http://articles.chicagotribune.com/1997-09-14/news/9709140446_1_castle-home-sir-edward-coke.

[30] Brendan Purves, “Castle Doctrine from State to State,” South Source, July 15, 2011, http://source.southuniversity.edu/castle-doctrine-from-state-to-state-46514.aspx.

[31] “Volenti Non Fit Injuria,” E-Lawresources, accessed August 25, 2015, http://e-lawresources.co.uk/Volenti-non-fit-injuria.php.

[32] Bryce Clayton Newell, “Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 16, 2013), http://papers.ssrn.com/abstract=2341182.

[33] Alan Westin, Privacy and Freedom (Ig Publishing, 2015).

[34] Helen Nissenbaum, “Privacy as Contextual Integrity,” Washington Law Review 79 (2004): 119.

[35] Ruth Gavison, “Privacy and the Limits of Law,” The Yale Law Journal 89, no. 3 (January 1, 1980): 421–71, doi:10.2307/795891: 423.

[36] Ibid.

[37] Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 4, 2012), http://papers.ssrn.com/abstract=2171018: 1888.

[38] Ibid, at 1889.

[39] Omer Tene and Jules Polonetsky, “Big Data for All: Privacy and User Control in the Age of Analytics,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, September 20, 2012), http://papers.ssrn.com/abstract=2149364: 261.

[40] See supra note 37, at 1881.

[41] Fred H. Cate and Viktor Mayer-Schönberger, “Notice and Consent in a World of Big Data - Microsoft Global Privacy Summit Summary Report and Outcomes,” Microsoft Global Privacy Summit, November 9, 2012, http://www.microsoft.com/en-us/download/details.aspx?id=35596: 3.

[42] See supra note 39.

[43] See for example, US Securities and Exchange Commission, “Corporation Finance Small Business Compliance Guides,” accessed August 26, 2015, https://www.sec.gov/info/smallbus/secg.shtml and Australian Securities & Investments Commission, “Compliance for Small Business,” accessed August 26, 2015, http://asic.gov.au/for-business/your-business/small-business/compliance-for-small-business/.

[44] See supra note 39.

[45] See supra note 41.

[46] See supra note 8, at 24.

[47] See for example, James Daley, “Don’t Waste Time Reading Terms and Conditions,” The Telegraph, September 3, 2014, and Robert Glancy, “Will You Read This Article about Terms and Conditions? You Really Should Do,” The Guardian, April 24, 2014, sec. Comment is free, http://www.theguardian.com/commentisfree/2014/apr/24/terms-and-conditions-online-small-print-information.

[48] See supra note 37, at 1886.

[49] Alex Hudson, “Is Small Print in Online Contracts Enforceable?,” BBC News, accessed August 26, 2015, http://www.bbc.com/news/technology-22772321.

[50] Aleecia M. McDonald and Lorrie Faith Cranor, “The Cost of Reading Privacy Policies,” I/S: A Journal of Law and Policy for the Information Society 4 (2008–2009): 541.

[51] See supra note 41, at 4.

[52] For Canada, see Office of the Privacy Commissioner of Canada, “Fact Sheet: Ten Tips for a Better Online Privacy Policy and Improved Privacy Practice Transparency,” October 23, 2013, https://www.priv.gc.ca/resource/fs-fi/02_05_d_56_tips2_e.asp. And Office of the Privacy Commissioner of Canada, “Privacy Toolkit - A Guide for Businesses and Organisations to Canada’s Personal Information Protection and Electronic Documents Act,” accessed August 26, 2015, https://www.priv.gc.ca/information/pub/guide_org_e.pdf.

For USA, see Federal Trade Commission, “Internet of Things: Privacy & Security in a Connected World,” Staff Report (Federal Trade Commission, January 2015), https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf.

[53] See supra note 37, at 1889.

[54] See supra note 39, at 261.

[55] Jakki Geiger, “The Surprising Link Between Hurricanes and Strawberry Pop-Tarts: Brought to You by Clean, Consistent and Connected Data,” The Informatica Blog - Perspectives for the Data Ready Enterprise, October 3, 2014, http://blogs.informatica.com/2014/03/10/the-surprising-link-between-strawberry-pop-tarts-and-hurricanes-brought-to-you-by-clean-consistent-and-connected-data/#fbid=PElJO4Z_kOu.

[56] Constance L. Hays, “What Wal-Mart Knows About Customers’ Habits,” The New York Times, November 14, 2004, http://www.nytimes.com/2004/11/14/business/yourmoney/what-walmart-knows-about-customers-habits.html.

[57] M. J. de Zwart, S. Humphreys, and B. Van Dissel, “Surveillance, Big Data and Democracy: Lessons for Australia from the US and UK,” UNSW Law Journal 37, no. 2 (2014), https://digital.library.adelaide.edu.au/dspace/handle/2440/90048: 722.

[58] Ibid.

[59] See supra note 41, at 3.

[60] Julie E. Cohen, “What Privacy Is For,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 5, 2012), http://papers.ssrn.com/abstract=2175406.

[61] See supra note 37, at 1901.

[62] Ibid.

[63] See supra note 37, at 1899.

[64] Jon Leibowitz, “So Private, So Public: Individuals, The Internet & The paradox of behavioural marketing” November 1, 2007, https://www.ftc.gov/sites/default/files/documents/public_statements/so-private-so-public-individuals-internet-paradox-behavioral-marketing/071031ehavior_0.pdf: 6.

[65] See supra note 5.

[66] See supra note 37, at 1898.

[67] Ibid.

[68] Ibid.

[69] Ibid.

[70] Ibid.

[71] See supra note 41, at 3.

[72] See supra note 39, at 261.

[73] Richard H. Thaler, “Making It Easier to Register as an Organ Donor,” The New York Times, September 26, 2009, http://www.nytimes.com/2009/09/27/business/economy/27view.html.

[74] Ibid.

[75] The Oxford Introductions to U.S. Law: Contracts, 1 edition (New York: Oxford University Press, 2010): 67.

[76] Francis M. Buono and Jonathan A. Friedman, “Maximizing the Enforceability of Click-Wrap Agreements,” Journal of Technology Law & Policy 4, no. 3 (1999), http://jtlp.org/vol4/issue3/friedman.html.

[77] North Carolina State University, “Clickwraps,” Software @ NC State Information Technology, accessed August 26, 2015, http://software.ncsu.edu/clickwraps.

[78] Ed Bayley, “The Clicks That Bind: Ways Users ‘Agree’ to Online Terms of Service,” Electronic Frontier Foundation, November 16, 2009, https://www.eff.org/wp/clicks-bind-ways-users-agree-online-terms-service.

[79] Ibid, at 2.

[80] Ibid.

[81] See Nguyen v. Barnes & Noble Inc., (9th Cir. 2014), available here.

[82] See Specht v. Netscape Communications Corp.,(2d Cir. 2002), available here.

[83] See supra note 78, at 2.

[84] See In Re: Zappos.com, Inc., Customer Data Security Breach Litigation, No. 3:2012cv00325: pg 8 line 23-26, available here.

[85] See Groff v. America Online, Inc., 1998, available here.

[86] Hotmail Corp. v. Van$ Money Pie, Inc., 1998, available here.

[87] ProCD Inc. v. Zeidenberg, (7th. Cir. 1996), available here.

[88] See supra note 78, at 1.

[89] See supra note 78, at 2.

[90] Ibid.

[91] Oliver Herzfeld, “Are Website Terms Of Use Enforceable?,” Forbes, January 22, 2013, http://www.forbes.com/sites/oliverherzfeld/2013/01/22/are-website-terms-of-use-enforceable/.

[92] Ibid.

[93] Ibid.

[94] See supra note 41, at 3.

[95] Christopher Kuner et al., “The Challenge of ‘big Data’ for Data Protection,” International Data Privacy Law 2, no. 2 (May 1, 2012): 47–49, doi:10.1093/idpl/ips003: 49.

[96] Ibid.

[97] See supra note 41, at 5.

[98] See supra note 57, at 723.

[99] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 1, 2013), http://papers.ssrn.com/abstract=2325784: 109.

[100] See supra note 41, at 13.

[101] See supra note 41, at 5.

[102] See supra note 52, Privacy Toolkit, at 14.

[103] See supra note 41, at 6.

[104] Siani Pearson and Marco Casassa Mont, “Sticky Policies: An Approach for Managing Privacy across Multiple Parties,” Computer, 2011.

[105] See supra note 34, at 138.

[106] See supra note 34, at 118.

[107] See supra note 41, at 5.

[108] See supra note 41, at 4.

CIS Comments and Recommendations to the Human DNA Profiling Bill, June 2015

by Elonnai Hickok, Vipul Kharbanda and Vanya Rakesh — last modified Sep 02, 2015 05:09 PM
The Centre for Internet & Society (CIS) submitted clause-by-clause comments on the Human DNA Profiling Bill that was circulated by the Department of Biotechnology on June 9, 2015.

The Centre for Internet and Society is a non-profit research organisation that works on policy issues relating to privacy, freedom of expression, accessibility for persons with diverse abilities, access to knowledge, intellectual property rights and openness. It engages in academic research to explore and affect the shape and form of the Internet and its relationship with society, with particular emphasis on South-South dialogues and exchange. The Centre for Internet and Society was also a member of the Expert Committee constituted in 2013 by the Department of Biotechnology to discuss the draft Human DNA Profiling Bill.

Missing aspects from the Bill

The Human DNA Profiling Bill, 2015 overlooks the following crucial aspects:

  • Objects Clause

An ‘objects clause’ in the main body of a statute, detailing the intention of the legislature and containing principles to inform its application, is an enforceable mechanism for giving direction to the statute and can be a formidable primary aid in statutory interpretation. [See, for example, section 83 of the Patents Act, 1970 that directly informed the Order of the Controller of Patents, Mumbai, in the matter of NATCO Pharma and Bayer Corporation in Compulsory Licence Application No. 1 of 2011.] Therefore, the Bill should incorporate an objects clause making clear that:

“DNA profiles merely estimate the identity of persons; they do not conclusively establish unique identity. Forensic DNA profiling should therefore only have probative value and not be considered conclusive proof.

The Act recognises that all individuals have a right to privacy that must be continuously weighed against efforts to collect and retain DNA and in order to protect this right to privacy the principles of notice, confidentiality, collection limitation, personal autonomy, purpose limitation and data minimization must be adhered to at all times.”

  • Collection and Consent

The Bill does not contain provisions specifying when DNA samples may be collected from individuals without consent (nor does it establish or refer to an authorisation procedure for such collection), when samples may be collected only with informed consent, and how and in what instances individuals may withdraw their consent. Whether DNA samples can be collected without the consent of the individual is a vexed issue, raising complex questions relating to individual privacy as well as the right against self-incrimination. The question of whether an accused can be made to give samples of blood, semen, etc., which had been at issue in a wide gamut of decisions in India, has finally been settled by section 53 of the Code of Criminal Procedure, which allows the collection of medical evidence from an accused and thus lays to rest claims based on the right against self-incrimination. However, issues remain concerning the right to privacy and its violation through the non-consensual collection of DNA samples. This issue needs to be addressed in the Act itself; leaving it unaddressed would only lead to a lack of clarity and protracted court cases. An illustration of this problem is the Bill’s allowance for the collection of intimate body samples. Stringent safeguards are needed here, since without them the collection of intimate body samples would be an outright infringement of privacy. Further, maintaining a database for convicts and suspects is one thing; collecting and storing intimate samples of individuals is a gross violation of citizens’ right to privacy and, without adequate consent and security mechanisms, stands at huge risk of being misused.

  • Privacy Safeguards

Presently, the Bill is being introduced without comprehensive privacy safeguards on issues such as consent, collection and retention, as is evident from the comments made below. Though the DNA Board is given the responsibility of recommending best practices pertaining to privacy (clause 13(l)), this is not adequate given that India does not have a comprehensive privacy legislation. Though section 43A and the associated Rules of the Information Technology Act would apply to the collection, use and sharing of DNA data by DNA laboratories (which would fall under the definition of ‘body corporate’ under the IT Act), the National and State Data Banks and the DNA Board would clearly not be bodies corporate under the IT Act and would not fall within the ambit of that provision or its Rules. Safeguards are needed to protect against the invasion of informational and physical privacy at the level of these State-controlled bodies. The fact that the Bill is to be introduced into Parliament prior to the enactment of a privacy legislation in India is significant: according to the Record Notes of the 4th Meeting of the Expert Committee, “the Expert Committee also discussed and emphasized that the Privacy Bill is being piloted by the Government. That Bill will over-ride all the other provisions on privacy issues in the DNA Bill.”

  • Lack of restriction on type of analysis to be performed

The Bill currently does not provide any restriction on the types of analysis that can be performed on a DNA sample or profile. This could allow for DNA samples to be analyzed for purposes beyond basic identification of an individual – such as for health, genetic, or racial purposes. As a form of purpose limitation the Bill should define narrowly the types of analysis that can be performed on a DNA sample.

  • Purpose Limitation

The Bill does not explicitly restrict the use of a DNA sample or DNA profile to the purpose it was originally collected and created for. This could allow for the re-use of samples and profiles for unintended purposes.

  • Annual Public Reporting

The Bill does not require the DNA Board to disclose publicly available information on an annual basis regarding the functioning and financial aspects of matters contained within the Bill. Such disclosure is crucial in ensuring that the public is able to make informed decisions. Categories that could be included in such reports include: the number of DNA profiles added to each index within the databank, the total number of DNA profiles contained in the database, the number of DNA profiles deleted from the database, the number of matches between crime scene DNA profiles and stored DNA profiles, the number of cases in which DNA profiles were used and the percentage in which they assisted in the final conclusion of the case, and the number and categories of DNA profiles shared with international entities.

  • Elimination Index

An elimination index containing the profiles of medical professionals, police, laboratory personnel, etc. working on a case is necessary in case they accidentally contaminate collected samples.

Clause by Clause Recommendations

As stated, the Human DNA Profiling Bill, 2015 is to regulate the use of DNA analysis of human body substances and profiles, and to establish the DNA Profiling Board to lay down standards for laboratories, the collection of human body substances, and the custody trail from collection to reporting, and also to establish a National DNA Data Bank.

Comment:

  1. As stated, the purpose of the Human DNA Profiling Bill is broadly to regulate the use of DNA analysis and establish a DNA Data Bank.  Despite this, the majority of provisions in the Bill pertain to the collection, use, access etc. of DNA samples and profiles for civil and criminal purposes. The result is an 'unbalanced Bill', with the majority of provisions focusing on issues related to forensic use. At the same time the Bill is not a comprehensive forensic bill, resulting in legislative gaps.
  2. Additionally, the Bill contains provisions beyond the stated purpose. These include:
  • Facilitating the creation of a Data Bank for statistical purposes (Clause 33(e))
  • Establishing state and regional level databanks in addition to a national level databank (Clause 24)
  • Developing procedure and providing for the international sharing of DNA profiles with foreign Governments, organizations, institutions, or agencies. (Clause 29)

Recommendation:

  • The Bill should ideally be limited to regulating the use of DNA samples and profiles for criminal purposes. If the scope remains broad, all purposes should be equally and comprehensively regulated.
  • The stated purpose of the Bill should address all aspects of the Bill. Provisions beyond the scope of the Bill should be removed.

Chapter 1: Preliminary

  • Clause 2: This clause defines the terms used in the Bill.

Comment: A number of terms are incomplete and some terms used in the Bill have not been included in the list of definitions.

Recommendation:

  • The definition of DNA Data bank manager - clause 2 (1)(g) - must be renamed as National DNA Data bank manager.
  • The definition of “DNA laboratory” in clause 2(1)(h) should refer to the specific clauses that empower the Central Government and State Governments to license and recognise DNA laboratories. This is a drafting error.
  • The definition of “DNA profile” in clause 2(1)(i) is too vague. Merely the results of an analysis of a DNA sample may not be sufficient to create an actual DNA profile. Further, the results of the analysis may yield DNA information that, because of incompleteness or lack of information, is inconclusive. These incomplete bits of information should not be recognised as DNA profiles. This definition should be amended to clearly specify the contents of a complete and valid DNA profile that contains, at least, numerical representations of 17 or more loci of short tandem repeats that are sufficient to estimate biometric individuality of a person. The definition of “DNA profile” does not restrict the analysis to forensic DNA profiles: this means additional information, such as health-related information could be analyzed and stored against the wishes of the individual, even though such information plays no role in solving crimes.
  • The term “known sample” that is defined in clause 2(1)(m) is not used anywhere outside the definitions clause and should be removed.
  • The definition of “offender” in clause 2(1)(q) is vague because it does not specify the offenses for which an “offender” needs to be convicted. It is also linked to an unclear definition of the term “under trial”, which does not specify the nature of pending criminal proceedings and, therefore, could be used to describe simple offenses such as, for example, failure to pay an electricity bill, which also attracts criminal penalties.
  • The term “proficiency testing” that is defined in clause 2(1)(t) is not used anywhere in the text of the DNA Bill and should be removed.
  • The definitions of “quality assurance”, “quality manual” and “quality system” serve no enforceable purpose since they are used only in relation to the DNA Profiling Board’s rule making powers under Chapter IX, clause 58. Their inclusion in the definitions clause is redundant. Accordingly, these definitions should be removed.
  • The term “suspect” defined in clause 2(1)(za) is vague and imprecise. The standard by which suspicion is to be measured, and by whom suspicion may be entertained – whether police or others, has not been specified. The term “suspect” is not defined in either the Code of Criminal Procedure, 1973 ("CrPC") or the Indian Penal Code, 1860 ("IPC").
  • The term volunteer defined in clause 2(zf) only addresses consent from the parent or guardian of a child or an incapable person. This term should be amended to include informed consent from any volunteer.

Chapter II: DNA Profiling Board

  • Clause 4: This clause addresses the composition of the DNA Profiling Board.

Comment: The size and composition of the Board staffed under clause 4 is extremely large. The number of members remains 15, as it was in the 2012 Bill.

Recommendation: Drawing from the experiences of other administrative and regulatory bodies in India, the size of the Board should be reduced to no more than five members. The Board must contain at least:

  • One ex-Judge or senior lawyer
  • Civil society – both institutional and non-institutional
  • Privacy advocates

Note: The reduction of the size of the Board was agreed upon by the Expert Committee from 16 members (2012 Bill) to 11 members. This recommendation has not been incorporated.

  • Clause 5(1): The clause specifies the term of the Chairperson of the DNA Profiling Board to be five years and also states that the person shall not be eligible for re-appointment or extension of the term so specified.

Comment: The Chairperson of the Board, first mentioned in clause 5(1), is nowhere duly and properly appointed under the Bill.

Recommendation: Clause 4 should be amended to mention the appointment of the Chairperson and other Members.

  • Clause 7: The clause requires members to react on a case-by-case basis to the business of the Board by excusing themselves from deliberations and voting where necessary.

Comment: This clause addresses the issue of conflict of interest only in narrow cases and provides no penalty if a member fails to adhere to the laid-out procedure.

Recommendation: The Bill should require members to make full and public disclosures of their real and potential conflicts of interest and the Chairperson must have the power to prevent such members from voting on interested matters. Failure to follow such anti-collusion and anti-corruption safeguards should attract criminal penalties.

  • Clause 12(5): The clause states that the Board shall have the power to co-opt such number of persons as it may deem necessary to attend the meetings of the Board and take part in its proceedings, but such persons will not have the right to vote.

Comment: While serving on the Expert Committee, CIS provided language regarding how the Board could consult with the public. This language has not been fully incorporated.

Recommendation: As per the recommendation of CIS, the following language should be adopted in the Bill: The Board, in carrying out its functions and activities, shall be required to consult with all persons and groups of persons whose rights and related interests may be affected or impacted by any DNA collection, storage, or profiling activity. The Board shall, while considering any matter under its purview, co-opt or include any person, group of persons, or organisation, in its meetings and activities if it is satisfied that that person, group of persons, or organisation, has a substantial interest in the matter and that it is necessary in the public interest to allow such participation. The Board shall, while consulting or co-opting persons, ensure that meetings, workshops, and events are conducted at different places in India to ensure equal regional participation and activities.

  • Clause 13: The clause lays down the functions to be performed by the DNA Profiling Board, including its role in the regulation of DNA Data Banks, DNA Laboratories, and the techniques to be adopted for the collection of DNA samples.

Comment: While serving on the Expert Committee, CIS recommended that the functions of the DNA Profiling Board should be limited to licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate and a few other limited functions necessary for a regulatory authority.

Furthermore, this clause delegates a number of functions to the Board that place it in the role of both manager and regulator for issues pertaining to DNA profiling, including the functioning of DNA Data Banks and DNA Laboratories, ethical concerns, privacy concerns, etc.

Recommendation: As per CIS’s recommendations, the functions of the Board should be limited to licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate, and a few other limited functions necessary for a regulatory authority.

Towards this, the Board should comprise separate Committees to address these different functions. At a minimum, there should be a Committee addressing regulatory issues pertaining to the functioning of Data Banks and Laboratories and an Ethics Committee to provide independent scrutiny of ethical issues. Additionally:

  • Clause 13(j) allows the Board to disseminate best practices concerning the collection and analysis of DNA samples to ensure quality and consistency. The process for collection of DNA samples and analysis should be established in the Bill itself or by regulations. Best practices are not enforceable and do not formalize a procedure.
  • Clause 13(q) allows the Board to establish a procedure for cooperation in criminal investigations between various investigation agencies within the country and with international agencies. This procedure should, at a minimum, be subject to oversight by the Ministry of External Affairs.

Chapter III: Approval of DNA Laboratories

  • Clause 15: This clause states that every DNA Laboratory must make an application to the Board for approval to undertake DNA profiling, and likewise for renewal.

Comment: Though the Bill requires DNA Laboratories to apply for approval to undertake DNA profiling, it does not clarify that a laboratory must receive approval before collecting or analysing DNA samples and profiles.

Recommendation: The Bill should clarify that all DNA Laboratories must receive approval for functioning prior to the collection or analysis of any DNA samples and profiles.

Chapter IV: Standards, Quality Control and Quality Assurance Obligations of DNA Laboratory and Infrastructure and Training

  • Clause 19: This clause defines the obligations of a DNA laboratory. Sub-section (d) maintains that one such obligation is the sharing of the 'DNA data' prepared and maintained by the laboratory with the State DNA Data Bank and the National DNA Data Bank.

Comment: ‘DNA Data’ is a new term that has not been defined under clause 2 of the Bill. It is thus unclear what data would be shared between State DNA Data Banks and the National DNA Data Bank: DNA samples? DNA profiles? Associated records? It is also unclear in what manner and on what basis the information would be shared.

Recommendation: The term ‘DNA Data’ should be defined to clarify what information will be shared between State and National DNA Data Banks. The flow of and access to data between the State DNA Data Bank and National DNA Data Bank should also be established in the Bill.

  • Clause 22: The clause lays down the measures to be adopted by a DNA Laboratory and 22(h) includes a provision requiring the conducting of annual audits according to prescribed standards.

Comment:

  • The definition of “audit” given in the Explanation to clause 22 is relevant for assessing training programmes and laboratory conditions. However, the term “audit” is subsequently used in an entirely different manner in Chapter VII, which relates to financial information and transparency.
  • The standards for the destruction of DNA samples have not been included within the list of measures that DNA laboratories must take.

Recommendation:

  • The definition of ‘audit’ must be amended or removed as it is being used in different contexts. The term “audit” has a well established use for financial information that does not require a definition.
  • Standards for the destruction of DNA samples should be developed and included as a measure DNA laboratories must take.

  • Clause 23: This clause lays down the sources for the collection of samples for the purpose of DNA profiling. 23(1)(a) includes collection from bodily substances and 23(1)(c) includes clothing and other objects. Explanation (b) provides a definition of 'intimate body sample'.

Comment:

  • Permitting the collection of DNA samples from bodily substances, clothing, and other objects allows for the broad collection of DNA samples without contextualizing such collection. In contrast, 23(1)(b), the scene of occurrence or scene of crime, limits the collection of samples to a specific context.
  • This clause also raises the issue of consent and invasion of privacy of an individual. If “intimate body samples” are to be taken of individuals, then this would be an invasion of the person’s right to bodily privacy if such collection is done without the person’s consent (except in the specific instance when it is done in pursuance of section 53 of the Criminal Procedure Code).

Recommendation:

  • Sources for the collection of DNA samples should be contextualized to prevent broad, unaccounted-for, or unregulated collection. Clauses (a) and (c) should be deleted and replaced with the contexts in which DNA collection would be permitted.
  • The Bill should specify the circumstances in which non-intimate samples can be collected and the process for doing so.
  • The Bill should specify that intimate body samples can only be taken with informed consent except as per section 53 of the Criminal Procedure Code.
  • The Bill should require that any individual that has a sample taken (intimate and non-intimate) is provided with notice of their rights and the future uses of their DNA sample and profile.

Chapter V: DNA Data Bank

  • Clause 24: This clause addresses the establishment of DNA Data Banks at the State and National level. 24(5) establishes that the National DNA Data Bank will receive data from State DNA Data Banks and store the approved DNA profiles as per regulations.

Comment:

  • As noted previously, ‘DNA Data’ is a new term that has not been defined in the Bill. It is thus unclear what data would be shared between State DNA data banks and the National DNA data bank - DNA samples? DNA profiles? associated records?
  • The process for sharing data between the State and National Data Banks is not defined.

Recommendation:

  • The term ‘DNA Data’ should be defined to clarify what information will be shared between State and National DNA Data Banks.
  • The process for the National DNA Data Bank receiving DNA data from State DNA Data Banks and DNA laboratories needs to be defined in the Bill or by regulation, including how frequently information will be shared.

  • Clause 25: This clause establishes standards for the maintenance of indices by DNA Data Banks. 25(1) states that every DNA Data Bank must maintain the prescribed indices for various categories of data, including indices for crime scenes, suspects, offenders, missing persons, unknown deceased persons, and volunteers, and such other indices as may be specified by regulation. 25(2) states that, in addition to the indices, the DNA Data Bank shall contain information regarding each DNA profile: either the identity of the person from whose bodily substance the profile was derived, in the case of a suspect or an offender, or the case reference number of the investigation associated with the bodily substance in other cases. 25(3) states that the indices maintained shall include information regarding the data based on the DNA profiling and the relevant records.

Comment:

  • 25(1): The creation of multiple indices cannot be justified and must be limited, since the collection of biological source material is an invasion of privacy that must be conducted only under strict conditions, when the potential harm to individuals is outweighed by the public good. This balance may only be struck when dealing with the collection and profiling of samples from certain categories of offenders. The implications of collecting and profiling DNA samples from corpses, suspects, missing persons, and others are vast. Specifically, a 'volunteer' index could possibly be used for racial, community, or religious profiling.
  • 25(2): This clause requires the names of individuals to be connected to their profiles, and hence accessible to persons having access to the Data Bank.
  • 25(3): The clause states that information related to DNA profiling and the relevant records will be stored in an index. Yet it is unclear what such information might be. This could allow inconsistencies in the data stored in an index and could allow unnecessary information to be stored.

Recommendation:

  • 25(1): Ideally, DNA Data Banks should be created for dedicated purposes. This would mean that a Data Bank for forensic purposes should contain only an offenders' index and a crime scene index, while a Data Bank for missing persons would contain only a missing persons index, and so on. If numerous indices are to be contained in one Data Bank, the Bill needs to recognize the sensitivity of each index, as well as the differences between indices, and lay down appropriate and strict conditions for the collection of data for each index, the addition of data to the index, and the use of, access to, and retention of data within the index.
  • 25(2): DNA profiles, once developed, should be maintained with complete anonymity and kept separate from the names of their owners. This amendment becomes even more important if we consider that an 'offender' may be convicted by a lower court, have his or her profile included in the Data Bank, and later be acquitted. Until the acquittal, the profile with its identifying information would still be in the Data Bank, which is an invasion of privacy.
  • 25(3): What information will be stored in indices should be clearly defined in the Bill and should be tailored appropriately to each category of index.

  • Clause 28: This clause addresses the comparison and communication of DNA profiles. 28(1) states that a DNA profile entered in the offenders' or crime scene index shall be compared by the DNA Data Bank Manager against profiles contained in the DNA Data Bank, and that the DNA Data Bank Manager will communicate such information to any court, tribunal, law enforcement agency, or approved DNA laboratory which he may consider appropriate for the purpose of investigation. 28(2) allows any information relating to a person's DNA profile contained in the suspects' index or offenders' index to be communicated to authorised persons.

Comment:

  • 28(1)(a-c) allows the DNA Data Bank Manager to communicate: (1) that the DNA profile is not contained in the Data Bank, and what information is not contained; (2) that the DNA profile is contained in the Data Bank, and what information is contained; and (3) that, in the opinion of the Manager, the DNA profile is similar to one stored in the Data Bank. These options of communication are problematic as they (1) allow all associated information to be communicated, even if such information is not necessary, and (2) allow the DNA Data Bank Manager to communicate that a profile is 'similar' without defining what 'similar' constitutes.
  • 28(1) only addresses the comparison of DNA profiles entered into the offenders' index or the crime scene index against all other profiles entered into the DNA Data Bank.
  • 28(1) gives the DNA Data Bank manager broad discretion in determining if information should be communicated and requires no accountability for such a decision.
  • 28(2) only addresses information in the suspect's and offender's index and does not address information in any other index.

Recommendation:

  • Rather than allowing broad searches across the entire database, the Bill should be clear about which profiles can be compared against which indices. Such distinctions must take into consideration whether a profile was taken with consent and what was consented to.
  • Ideally, the response from the DNA Data Bank Manager should be limited to a 'yes' or 'no', with further information revealed only on receipt of a court order.
  • The Bill should define what constitutes 'similar'.
  • A process for determining if information should be communicated should be established in the Bill and followed by the DNA Data Bank Manager. The Manager should also be held accountable through oversight mechanisms for such decisions. This is particularly important, as a DNA laboratory would be a private body.
  • Information stored in any index should be disclosed only to authorized parties.

  • Clause 29: This clause provides for the comparison and sharing of DNA profiles with foreign Governments, organisations, institutions, or agencies. 29(1) allows the DNA Data Bank Manager to run a comparison of the received profile against all indices in the Data Bank and communicate specified responses through the Central Bureau of Investigation.

Comment: This clause allows for international disclosures of the DNA profiles of Indians through a procedure that is to be established by the Board (see clause 13(q)).

Recommendation: The disclosure of the DNA profiles of Indians to international entities should be done via the Mutual Legal Assistance Treaty (MLAT) process, as this is the typical process followed when sharing information with international entities for law enforcement purposes.

  • Clause 30: This clause provides for the permanent retention of information pertaining to a convict in the offenders’ index and the expunging of such information in case of a court order establishing acquittal of a person, or the conviction being set aside.

Comment: This clause addresses only the retention and expunging of the records of a convict stored in the offenders' index upon the receipt of a court order or the conviction being set aside. This implies that records in all other indices, including volunteers, can be retained permanently. This clause also does not address situations where an individual's DNA profile is added to the Data Bank but the case never goes to court.

Recommendation: The Bill should establish retention and deletion standards for each index that it creates. Furthermore, the Bill should require the immediate destruction of DNA samples once a DNA profile for identification purposes has been created, with an exception for samples stored in the crime scene index.

Chapter VI: Confidentiality of and Access to DNA Profiles, Samples, and Records

  • Clause 33: This provision lays down the cases and the persons to which information pertaining to DNA profiles, samples and records stored in the DNA Data Bank shall be made available. Specifically, 33(e) permits disclosure for the creation and maintenance of a population statistics Data Bank.

Comment:

  • This clause addresses disclosure of information in the DNA Data Bank, but does not directly address the use of DNA samples or DNA profiles. This allows for the possibility of re-use of samples and profiles.
  • There is no limitation on the information that can be disclosed. The clause allows for any information stored in the Data Bank to be disclosed for a number of circumstances/to a variety of people.
  • There is no authorization process for the disclosure of such information. Of the circumstances listed, an authorization process is mentioned only for the disclosure of information in the case of investigations relating to civil disputes or other civil matters, with the concurrence of the court. This implies that there is no procedure for authorizing the disclosure of information for identification purposes in criminal cases, in judicial proceedings, for facilitating the prosecution and adjudication of criminal cases, for the purpose of taking defence by an accused in a criminal case, or for the creation and maintenance of a population statistics Data Bank.

Recommendation:

  • The Bill should establish an authorization process for the disclosure of information stored in a data bank. This process must limit the disclosure of information to what is necessary and proportionate for achieving the requested purpose.
  • Clause 33(e) should be deleted as the non-consensual disclosure of DNA profiles for the study of population genetics is specifically illegal. The use of the database for statistical purposes should be limited to purposes pertaining to understanding effectiveness of the databank.
  • Clause 33(f) should be deleted as it is not necessary for DNA profiles to be stored in a database to be useful for civil purposes. Instead, samples for civil purposes are needed only for the relevant case and the specified persons.
  • Clause 33(g) should be deleted as it allows the scope of cases in which DNA can be disclosed to be expanded as prescribed.

  • Clause 34: This clause allows access to information for operation, maintenance, and training.

Comment: This clause would allow individuals in training to access data stored in the database for training purposes. This places the security of the Data Bank and the data stored in it at risk.

Recommendation: Training of individuals should be conducted via simulation only.

  • Clause 35: This clause allows access to information in the DNA Data Bank for the purpose of a one-time keyboard search. A one-time keyboard search allows information from a DNA sample to be compared with information in the index without the information from the DNA sample being included in the index. The clause allows an authorized individual to carry out such a search on information obtained from a DNA sample lawfully collected for the purpose of a criminal investigation, except if the DNA sample was submitted for elimination purposes.

Comment: The purpose of this clause is unclear, as is its scope. The clause allows the sample to be compared against 'the index' without specifying which index. The clause also refers to 'information obtained from a DNA sample' rather than a profile. Thus, the clause appears to allow any information derived from a DNA sample collected for a criminal investigation to be compared against all data within the Data Bank, without recording such information. Such a comparison is vast in scope and open to abuse.

Recommendation: To ensure that this provision is not used for conducting searches outside the scope of the original purpose, only DNA profiles, rather than 'information derived from a sample', should be allowed to be compared; only the indices relevant to the sample should be compared; and the search should be authorized and justified.

  • Clause 36: This clause addresses the restriction of access to information in the crime scene index if the individual is a victim of a specified offense or has been eliminated as a suspect in an investigation.

Comment:

  • This clause only addresses restriction of access to the crime scene index and does not address restriction of access to other indices.
  • This clause only restricts access to the index for certain categories of individuals and for a specific status of a person. Oddly, the clause does not include authorization or rank as a means of determining or restricting access.

Recommendation:

  • This clause should be amended to lay down standards for restriction of access for all indices.
  • Access to all information in the Data Bank should be restricted by default, and permission should be based on authorization rather than the category or status of the individual.

  • Clause 38: This clause sets out a post-conviction right related to criminal procedure and evidence.

Comment: This clause would fundamentally alter the nature of India’s criminal justice system, which currently does not contain specific provisions for post-conviction testing rights.

Recommendation: This clause should be deleted and the issue of post-conviction rights related to criminal procedure and evidence referred to the appropriate legislation. Clause 38 is implicated by Article 20(2) of the Constitution of India and by section 300 of the CrPC. The principle of autrefois acquit that informs section 300 of the CrPC specifically deals with exceptions to the rule against double jeopardy that permit re-trials. [See, for instance, Sangeeta Mahendrabhai Patel (2012) 7 SCC 721.] The person must be duly accorded a right to know the authorized persons to whom information relating to his or her DNA profile contained in the offenders' index shall be communicated. Alternatively, this right could be limited only to accused persons whose trial is still at the stage of production of evidence in the Trial Court. This suggestion is made because, unless the right as it currently stands is limited in some manner, every convict with the means to engage a lawyer would ask for DNA analysis of the evidence in his or her case, thereby flooding the system with useless requests and risking a breakdown of the entire machinery.

Chapter VII: Finance, Accounts, and Audit

Clause 39: This clause allows the Central Government to make grants and loans to the DNA Board after due appropriation by Parliament.

Comment: This clause allows the Central Government to grant and loan money to the DNA Board, but does not require any proof or justification for the sum of money being given.

Recommendation: This clause should require a formal cost-benefit analysis and financial assessment prior to the giving of any grants or loans.

Chapter VIII: Offences and Penalties

Chapter IX: Miscellaneous

Clause 53: This clause protects the Central Government and the Members of the Board from suit, prosecution, or other legal proceedings for actions that they have taken in good faith.

Comment: Though it is important to take into consideration whether an action has been taken in good faith, absolving the Government and the Board of accountability for their actions leaves the individual little recourse. This is particularly true as the Central Government and the Board are given broad powers under the Bill.

Recommendation: If the Central Government and the Board are to be protected for actions taken in good faith, their powers should be limited. Specifically, they should not have the ability to widen the scope of the Bill.

Clause 57: This clause states that the Central Government will have the powers to make Rules for a number of defined issues.

Comment: 57(d) allows rules to be made regarding the use of the population statistics Data Bank created and maintained for the purposes of identification research and protocol development or quality control.

Recommendation: 57(d) should be deleted, as the creation and maintenance of a population statistics Data Bank for the purposes of identification research and protocol development or quality control is beyond the scope of the Bill.

  • Clause 58: This clause empowers the Board to make regulations regarding a number of aspects related to the Bill.

Comment: There are a number of functions for which the Board can make regulations that should instead be defined within the Bill itself, to ensure that the scope of the Bill does not expand without Parliamentary oversight and approval.

Recommendation: 58(2)(g) should be deleted as it allows the Board to create regulations for other relevant uses of DNA techniques and technologies; 58(2)(u) should be deleted as it allows the Board to add new categories of indices to Data Banks; and 58(2)(aa) should be deleted as it allows the Board to decide which other indices a DNA profile may be compared with in the case of sharing DNA profiles with foreign Governments, organizations, or institutions.

Clause 61: This clause states that no civil court will have jurisdiction to entertain any suit or proceeding in respect of any matter which the Board is empowered to determine and no injunction shall be granted.

Comment: This clause in practice will limit the recourse that individuals can take and will exclude the Board from the oversight of civil or criminal courts.

Recommendation: The power to collect, store, and analyse human DNA samples has wide-reaching consequences for the people whose samples are being utilised, especially if their samples are labeled in specific indices such as the 'offenders' index'. The individual should therefore have the right to approach a court of law to safeguard his or her rights. This provision barring the jurisdiction of the courts should accordingly be deleted.

Schedule

  • Schedule A: The schedule refers to section 33(f) which allows for disclosure of information in relation to DNA profiles, DNA samples, and records in a DNA Data Bank to be communicated in cases of investigations relating to civil disputes or other civil matters or offenses or cases listed in the schedule with the concurrence of the court.

Comment: As 33(f) requires the concurrence of the court for the disclosure of information, it is unclear what purpose the Schedule serves. If the Schedule is meant to serve as a guide to the court on appropriate instances for the disclosure of information stored in the DNA Data Bank, it is too general in listing entire Acts while at the same time too narrow in naming only specific Acts. Ideally, courts should rely on principles and the greater public interest in deciding whether disclosure of information in the DNA Data Bank is appropriate. At a minimum, these principles should include the necessity of the disclosure and the proportionality of the type and amount of information disclosed.

Recommendation: Since we have recommended the deletion of clause 33(f), as it is not necessary to databank DNA profiles for civil purposes, the Schedule should also be deleted.

  • Note: The Schedule differs drastically from previous drafts, from discussions held in the Expert Committee, and from recommendations agreed upon. As per the minutes of the Expert Committee meeting held on November 10th, 2014: “The Committee recommended incorporation of the comments received from the members of the Expert Committee appropriately in the draft Bill...Point no. 1 suggested by Mr. Sunil Abraham in the Schedule of the draft Bill to define the cases in which DNA samples can be collected without consent by incorporating point no. 1 (i.e. 'Any offence under the Indian Penal Code, 1860 if it is listed as a cognizable offence in Part I of the First Schedule of the Code of Criminal Procedure, 1973')”.

Download CIS submission here. See the cover letter here.

CIS Human DNA Profiling Bill 2015

by Prasad Krishna last modified Sep 02, 2015 05:04 PM

CIS_Human_DNA_Profiling_Bill_Comments.pdf — PDF document, 200 kB (204983 bytes)

Cover Letter for DNA Profiling Bill 2015

by Prasad Krishna last modified Sep 02, 2015 05:05 PM

CIS Cover Letter.pdf — PDF document, 105 kB (107663 bytes)

Data Flow in the Unique Identification Scheme of India

by Vidushi Marda last modified Sep 03, 2015 05:02 PM
This note analyses the data flow within the UID scheme and aims to highlight vulnerabilities at each stage. The data flow within the UID scheme can best be understood by first delineating the organizations involved in enrolling residents for Aadhaar. The UIDAI partners with various Registrars (usually departments of the central or state governments, along with some private-sector agencies such as LIC) through a Memorandum of Understanding for assistance with the enrollment process of the UID project.

Many thanks to Elonnai Hickok for her invaluable guidance, input, and feedback.


These Registrars then appoint Enrollment Agencies that set up enrollment centers and enroll residents by collecting the necessary data and sharing it with the UIDAI for de-duplication and issuance of an Aadhaar number. The data flow process of the UID is described below:[1]

Data Capture

  • Filling out an enrollment form – To enroll for an Aadhaar number, individuals are required to provide proof of address and proof of identity. These documents are verified by an official at the enrollment center.

Vulnerability: Though an official is responsible for verifying these documents, it is unclear how this verification is completed. It is possible for fraudulent proof of address and proof of identity to be verified and approved by this official.

  • The 'introducer' system: For individuals who do not have a proof of identity, proof of address, etc., the UIDAI has established an 'introducer' system. The introducer verifies that the individual is who they claim to be and that they live where they claim to live.

Vulnerability: The introducer is akin to the introducer concept in banking, except that here the introducer must be approved by the Registrar and need not know the person being enrolled. This leads to questions about the authenticity and validity of the data collected and verified by an 'introducer'. The Home Ministry, in 2012, indicated that this system must be reviewed.[2]

  • Categories of data for enrollment: The UIDAI has a standard enrollment form and list of documents required for enrollment. This includes: name, address, birth date, gender, proof of address, and proof of identity. Some MoUs (Memoranda of Understanding) permit Registrars to collect information in addition to what is required by the UIDAI. This could be any information the Registrar deems necessary, for any purpose.

Vulnerability: The fact that a Registrar may collect any information it deems necessary, for any purpose, raises concerns regarding (1) informed consent, as individuals are placed in a position of having to provide this information because it is coupled with the Aadhaar enrollment process; (2) unauthorized collection: though the MoU between the UIDAI and the Registrar authorizes the Registrar to collect additional information, if the information is personal in nature and the Registrar is a body corporate, it must be collected as per the Information Technology Rules, 2011 framed under section 43A, and it is unclear whether Registrars that are body corporates are collecting data in accordance with these rules; and (3) misuse, since Registrars are permitted to collect any data they deem necessary for any purpose.[3]

  • Verification of Resident’s Documents: True copies of original documents, after verification, are sent to the Registrar for “permanent storage.”[4]

Vulnerability: It is unclear what extent and form this storage takes. There is no clarity on who is responsible for the data once collected, and the permissible uses of such data are also unclear. The contracts between the UIDAI and the Registrars state that guidelines must be followed, while the guidelines state that “The documents are required to be preserved by Registrar till the UIDAI finalizes its document storage agency” and that “Registrars must ensure that the documents are stored in a safe and secure manner and protected from unauthorized access.”[5] The questions of what constitutes “unauthorized access” and “secure storage”, when data is transferred to the UIDAI, and when and why the UIDAI will access it remain unanswered. Moreover, there is nothing about deleting documents once the MoU lapses. The guidelines in question were also developed post facto.

  • Data collection for enrollment: After verification of proof of address and proof of identity, operators at the enrollment agency complete the data collection. This includes the digitization of enrollment forms and the collection of biometrics. Enrollment information is entered manually into computers running software provided by the UIDAI and then transferred to the UIDAI. Biometrics are collected through devices provided by third parties such as Accenture and L-1 Identity Solutions.

Vulnerability: After data is collected by enrollment operators, it is possible for data leakage to occur at the point of collection or during transfer to the Registrar and the UIDAI. Data operators are answerable not to the UIDAI but to a private agency, a fact which has been a cause of concern even within the government.[6] There have also been instances of sub-contracting, which further complicates accountability. Misuse[7] and loss of data are very real possibilities, and irregularities have been reported.[8] Because the collection devices are supplied by third parties (in many cases foreign ones), the data they capture is also available to those companies, which are not regulated by Indian law.

  • Import pre-enrolment data into Aadhaar enrollment client, syncing NPR/census data into the software: The National Population Register (NPR) enrolls usual residents and is governed by the Citizenship Rules, which prescribe a penalty for non-disclosure of information.

Vulnerability: Biometrics do not form part of the Citizenship Rules, 2003, which govern NPR data collection. Collecting biometrics without amending the citizenship laws therefore creates a worrying situation. The NPR hands over the information it collects to the UIDAI, and biometrics collected as part of the UIDAI process are included in the NPR, raising concerns about the legality and security of such data.

  • Resident’s consent: The form records “whether the resident has agreed to share the captured information with organizations engaged in delivery of welfare services.”

Vulnerability: This allows the UIDAI to use data in an almost unfettered fashion. The enrolment form reads, “I have no objection to the UIDAI sharing information provided by me to the UIDAI with agencies engaged in delivery of welfare services.” This is not informed consent: the clause is vague about what information will be shared and with whom. Why is it necessary for the UIDAI to share this information at all, when the organization is supposed to be only a passive intermediary? Sharing goes beyond the mandate of the UIDAI, which is only to issue and authenticate the number.

  • Biometric exceptions: The operator checks if the resident’s eyes/hands are amputated/missing and, after the Supervisor verifies the same, the record is marked as an exception and only the individual’s photograph is recorded.

Vulnerability: There has been widespread misuse of this clause, with data being fabricated to fall into this category, making it unreliable as a whole. In March 2013, 3.84 lakh numbers were cancelled because they were based on fraudulent use of the exception clause.[9]

  • Operator checks if resident wants an Aadhaar-enabled bank account: The UID project was touted as a scheme that would ensure access to benefits and subsidies provided through cash transfers, as well as enable financial inclusion. Subsequently, an Aadhaar-linked bank account was made essential to avail of these benefits. The operator at this point checks whether the resident would like to open such a bank account.

Vulnerability: The data provided at the time of linking the UID with a bank account cannot be corrected or retracted. Although the vision was financial inclusion, the scheme now poses a threat of exclusion.

  • Capturing biometrics: The UIDAI scheme includes assigning each individual a unique identification number after collecting their demographic and biometric information. One Time Passwords are used to manually override a situation in which biometric identification fails.[10] The UIDAI data collection process was revamped in 2012 to include best finger detection and a multiple-try method.[11]

Vulnerabilities: The collection process is not always accurate; for instance, 70% of the residents who enrolled in Salt Lake will have to re-enroll due to discrepancies at the time of enrollment.[12] Further, a large number of people in India are unable to give usable biometric information because, for example, manual labour has worn down their fingerprints or cataracts affect their eyes.
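The "best finger detection" and multiple-try capture mentioned above can be sketched as a simple selection over quality scores. This is only an illustrative sketch: the quality values, the `MIN_QUALITY` threshold, `MAX_TRIES` and the function name are assumptions, not the UIDAI's actual algorithm or parameters.

```python
# Illustrative sketch of a "best finger detection" / multiple-try capture loop.
# MIN_QUALITY and MAX_TRIES are hypothetical values, not UIDAI parameters.

MIN_QUALITY = 0.6   # assumed minimum acceptable capture quality, in [0, 1]
MAX_TRIES = 3       # assumed retry limit per finger

def best_finger(capture_attempts):
    """Pick the finger whose best capture has the highest quality score.

    capture_attempts maps a finger name to a list of quality scores,
    one per capture attempt. Returns (finger, score), or None when no
    finger meets the threshold (a candidate for a biometric exception).
    """
    best = None
    for finger, scores in capture_attempts.items():
        top = max(scores[:MAX_TRIES])  # only the allowed number of tries count
        if top >= MIN_QUALITY and (best is None or top > best[1]):
            best = (finger, top)
    return best

# Example: the right index finger yields the only usable capture.
attempts = {
    "left_thumb": [0.40, 0.55],
    "right_index": [0.72, 0.81],
    "right_thumb": [0.58],
}
print(best_finger(attempts))  # -> ('right_index', 0.81)
```

A `None` result corresponds to the biometric-exception path the text describes, where only a photograph is recorded.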

After such data is entered, the Operator shows such data to the Resident or Introducer or Head of the Family (as the case may be) for validation.

  • Operator Sign off – Each set of data needs to be verified by an Operator whose fingerprint is already stored in the system.

Vulnerability: Vesting the authority to sign off in an operator allows inaccurate or fraudulent data to be approved. For example, the issuance of Aadhaar numbers under the biometric-exception clause highlights the misuse and unreliability of this function.[13]

After this, the enrolment operator obtains the supervisor's sign-off for any exceptions that might exist, and the acknowledgement and consent for enrolment are stored. Any correction to specified data can be made within 96 hours.
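The 96-hour correction window amounts to a simple timestamp comparison. A minimal sketch, where the function and field names are hypothetical and only the 96-hour figure comes from the process documents:

```python
from datetime import datetime, timedelta

CORRECTION_WINDOW = timedelta(hours=96)  # per the enrolment process

def correction_allowed(enrolled_at, requested_at):
    """Return True if a correction request falls within 96 hours of enrolment."""
    return timedelta(0) <= requested_at - enrolled_at <= CORRECTION_WINDOW

enrolled = datetime(2013, 3, 1, 10, 0)
print(correction_allowed(enrolled, datetime(2013, 3, 4, 9, 0)))   # 71 h later -> True
print(correction_allowed(enrolled, datetime(2013, 3, 6, 10, 1)))  # >120 h later -> False
```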

Document Storage, Back up and Sync

After gathering and verifying all the information about the resident, the Enrolment Agency Operator stores photocopies of the resident's documents. These agencies also back up data “from time to time” (recommended to be twice a day) and maintain it for a minimum of 60 days. They also sync with the server every 7-10 days.
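The retention and sync schedule just described can be expressed as a pair of policy checks. The 60-day retention and 10-day sync gap come from the text; the function names and the idea of flagging overdue stations are illustrative assumptions:

```python
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=60)  # data kept for a minimum of 60 days
MAX_SYNC_GAP = timedelta(days=10)   # sync with the server every 7-10 days

def purge_eligible(backup_date, today):
    """A local backup may be purged only after the 60-day retention period."""
    return today - backup_date >= MIN_RETENTION

def sync_overdue(last_sync, today):
    """Flag a station whose last server sync exceeds the 10-day gap."""
    return today - last_sync > MAX_SYNC_GAP

today = date(2013, 3, 1)
print(purge_eligible(date(2012, 12, 1), today))  # 90 days old -> True
print(sync_overdue(date(2013, 2, 25), today))    # synced 4 days ago -> False
```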

Vulnerability: The security implications of third-party operators storing information are greatly exacerbated by the fact that these operators use technology and devices from companies that have close ties to intelligence agencies in other countries; L-1 Identity Solutions has close ties with America’s CIA, Accenture with French intelligence, etc.[14]

Transfer of Demographic and Biometric Data Collected to CIDR

“First mile logistics” include transferring data using the Secure File Transfer Protocol (SFTP) client provided by the UIDAI or through a “suitable carrier” such as India Post.

Vulnerability: There is no direct engagement between the UIDAI and the enrolling agencies; the Registrars, not the UIDAI, engage the private enrolment agencies. Further, the scope of people authorized to collect information, the information that can be collected, how such information is stored, etc., are all vaguely defined. In 2009, a notification claimed that the UIDAI owns the database,[15] but there is no indication of how it may be used or how the UIDAI would respond to instances of identity fraud.

Data De-duplication and Aadhaar Generation at CIDR

On receiving biometric information, de-duplication is carried out to ensure that each individual is issued only one UID number.
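Conceptually, de-duplication means matching each new biometric record against every stored one and rejecting near-duplicates. A toy sketch of the idea follows; the cosine-similarity measure, the threshold and the feature vectors are illustrative assumptions, since real systems use specialised fingerprint and iris matchers:

```python
import math

MATCH_THRESHOLD = 0.95  # hypothetical similarity above which two records match

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def deduplicate(new_vector, enrolled):
    """Return the ID of a matching enrolled record, or None if this is a new person.

    `enrolled` maps an already-issued (illustrative) ID to a feature vector."""
    for uid, vector in enrolled.items():
        if cosine_similarity(new_vector, vector) >= MATCH_THRESHOLD:
            return uid  # duplicate found: do not issue a second number
    return None

enrolled = {"9999-0001": [0.9, 0.1, 0.4], "9999-0002": [0.1, 0.8, 0.2]}
print(deduplicate([0.89, 0.12, 0.41], enrolled))  # near-identical -> 9999-0001
print(deduplicate([0.5, 0.5, 0.5], enrolled))     # no close match -> None
```

The threshold choice is exactly where the quality and rejection problems discussed below arise: set it too low and duplicates slip through; too high and genuine enrolments are rejected.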

Vulnerability:

  • This de-duplication is carried out by private companies, some of which are not of Indian origin and are thus not bound by Indian law. The volume of Aadhaar numbers rejected for quality or technical reasons is also a cause for worry; the count reached 9 crores in May 2015.[16]
  • The MoUs promise Registrars access to the information contained in the Aadhaar letter, even though individuals are assured that the letter is sent only to them.[17]
  • General compliance and de-duplication has been an issue, with over 34,000 people being issued more than one Aadhaar number,[18] and innumerable examples of faulty Aadhaar cards being issued.[19]

[1] Enrolment Process Essentials: UIDAI (December 13, 2012), http://nictcsc.com/images/Aadhaar%20Project%20Training%20Module/English%20Training%20Module/module2_aadhaar_enrolment_process17122012.pdf

[2] UIDAI to review biometric data collection process of 60 crore resident Indians: P Chidambaram, Economic Times, (Jan 31, 2012), http://articles.economictimes.indiatimes.com/2012-01-31/news/31010619_1_biometrics-uidai-national-population-register.

[3] See: an MoU signed between the UIDAI and the Government of Madhya Pradesh. Also see: Usha Ramanathan, “States as handmaidens of UIDAI”, The Statesman (August 8, 2013).

[4]http://nictcsc.com/images/Aadhaar%20Project%20Training%20Module/English%20Training%20Module/module2_aadhaar_enrolment_process17122012.pdf

[5] Document Storage Guidelines for Registrars – Version 1.2, https://uidai.gov.in/images/mou/D11%20Document%20Storage%20Guidelines%20for%20Registrars%20final%2005082010.pdf

[6] Arindham Mukherjee, Lola Nayar, Aadhaar, A Few Basic Issues, Outlook India, (December 5, 2011), http://dataprivacylab.org/TIP/2011sept/India4.pdf.

[7] Aadhaar: UIDAI probing several cases of misuse of personal data, The Hindu, (April 29, 2012), http://www.thehindubusinessline.com/economy/aadhar-uidai-probing-several-cases-of-misuse-of-personal-data/article3367092.ece.

[8] Harsimran Julka, UIDAI wins court battle against HCL technologies, The Economic Times, (October 4, 2011), http://articles.economictimes.indiatimes.com/2011-10-04/news/30242553_1_uidai-bank-guarantee-hp-and-ibm.

[9] Chetan Chauhan, UIDAI cancels 3.84 lakh fake Aadhaar numbers, The Hindustan Times, (December 26, 2012), http://www.hindustantimes.com/newdelhi/uidai-cancels-3-84-lakh-fake-aadhaar-numbers/article1-980634.aspx.

[10] Usha Ramanathan, “Inclusion project that excludes the poor”, The Statesman (July 4, 2013).

[11] UIDAI to Refresh Data Collection Process, Zee News, (February 7, 2012) http://zeenews.india.com/news/delhi/uidai-to-refresh-data-collection-process_757251.html.

[12] Snehal Sengupta, Queue up again to apply for Aadhaar, The Telegraph, (February 27, 2015), http://www.telegraphindia.com/1150227/jsp/saltlake/story_5642.jsp#.VayjDZOqqko

[13] Chauhan, supra note 9.

[14] Usha Ramanathan, Three Supreme Court Orders Later, What’s the Deal with Aadhaar? Yahoo News, (April 13, 2015), https://in.news.yahoo.com/three-supreme-court-orders-later--what-s-the-deal-with-aadhaar-094316180.html.

[15] Usha Ramanathan, “Threat of Exclusion and of Surveillance”, The Statesman (July 2, 2013).

[16] Over 9 Crore Aadhaar enrolments rejected by UIDAI, Zee News (May 8, 2015).

[17] Usha Ramanathan, “States as handmaidens of UIDAI”, The Statesman (August 8, 2013).

[18] Surabhi Agarwal, Duplicate Aadhar numbers within estimate, Live Mint (March 5, 2013).

[19] Usha Ramanathan, “Outsourcing enrolment, gathering dogs and trees”, The Statesman (August 7, 2013).

The seedy underbelly of revenge porn

by Prasad Krishna last modified Sep 27, 2015 02:25 PM
Intimate photos posted by angry exes are becoming part of an expanding online body of dirty work.

The article by Sandhya Soman was published in the Times of India on August 23, 2015.


Three lakh 'Likes' aren't easy to come by. But Geeta isn't gloating. She's livid, and waiting for the day a video-sharing site will take down the popular clip of her having sex with her vengeful ex-husband. "Every other day somebody calls or messages to say they've seen me," says Geeta.

She is not alone. Two weeks ago, law student Shrutanjaya Bhardwaj Whatsapped women he knew asking if any of them had come across cases of online sexual harassment. In a few hours, his phone was filled with tales of harassment by ex-boyfriends and strangers. Instances ranged from strangers publishing morphed photographs on Facebook, to ex-husbands and boyfriends circulating intimate photos and videos on porn sites. Of the 40 responses, around 25 were cases of abuse by former partners. "I have heard friends talking about the problem, but never realized it was this bad," says Bhardwaj.

These days, revenge is best served online - it travels faster and has potential for greater damage. But despite the widespread nature of the crime, many targets hesitate to complain for fear of being shamed and blamed. "A 15-year-old girl is going to worry about how her parents will react if she talks about it," says Chinmayi Arun, research director, Centre for Communication Governance at Delhi National Law University. There is also fear of harassment by the police, says Rohini Lakshane, researcher, Centre for Internet and Society. Worst of all is the waiting. "Even if a police complaint is filed, it takes ages to find out who shot it, who uploaded it and where it is circulated. Such content is mirrored across many sites," she says.

Geeta is familiar with the routine. Her harassment started with photographs sent to family, friends and colleagues. After an acrimonious divorce, several videos were released in 2013. "There were some 25-30 videos on various sites.

After an FIR was filed, the police wrote to websites and some of the links were removed," says Geeta, who has been flagging content on a popular site, which has not yet responded to her privacy violation report. "My face is seen clearly on it. People even come up to me in restaurants saying they've seen it. How do I get on with my life?" asks a distraught Geeta. She also recently filed an affidavit supporting the controversial porn ban PIL in a last-ditch effort to erase the abuse that began after her divorce.

The cyber cell officer in charge of her case says he had got websites to shut down several URLs but was thwarted by the repeal of section 66A of the IT Act that dealt with offensive messages sent electronically. When asked why section 67 (cyber pornography) of the same act and various sections in the criminal law couldn't be used, the officer says that only 66A is applicable to the evidence he has. "I asked for more links and she sent them to me. We'll see if other sections can be applied," he says. Lawyers and activists argue that existing laws, such as sections 354A (sexual harassment), 354C (voyeurism), 354D (stalking) and 509 (outraging modesty) of the IPC, are good enough.

Though there are no official statistics for what is popularly referred to as 'revenge' porn, there is a flood of such images online. Lakshane, who studied consent in amateur pornography for the NGO-run EroTICs India project in 2014, found everything from clandestinely shot clips to exhibitionist ones where faces are blurred or cropped.

Social activist Sunita Krishnan has raised the red flag over several video clips, including two that show gang rape, which were circulated on Whatsapp. Some of the content she came across showed familiarity between the man and woman, indicating an existing relationship. In one clip, the man says: "How dare you go with that fellow. What you did it to him, do it to me."

Most home-grown clips end up on desi sites with servers abroad, making it difficult to take down content. Some do have a policy of asking for consent of people in the frame. But Lakshane, who wanted to test this policy, says when she approached one website that has servers abroad saying that she had a sexually explicit video, the reply was a one-liner asking her to send it. "They didn't ask for any consent emails," she says. In lieu of payment, they offered her a free account on another file-sharing site, which seemed to partner with the site. With no financial links to those submitting videos, sites like these make money out of subscriptions from consumers, or ads.

A few months ago, the CBI arrested a man from Bengaluru for uploading porn clips, using high-end editing software and cameras. Kaushik Kuonar allegedly headed a syndicate and was supposed to be behind the rape clips reported by Krishnan. "I am skeptical of the idea of amateur porn being randomly available across the Internet. There seem to be people like the man in Bengaluru who are apparently sourcing, distributing and making money out of it," says Chinmayi Arun. "He had 474 clips, including some of rape," adds Krishnan.

Social media companies, meanwhile, say they're working with authorities to prevent such violations. A Facebook spokesperson says the company removes content that violates its community standards. It also works with the women and child development ministry to help women stay safe online. Google, Microsoft, Twitter and Reddit have promised to remove links to revenge porn on request, while countries like Japan and Israel have made it illegal.

In India, the National Commission for Women started a consultation on online harassment but is yet to submit a report. In the absence of clarity, activists like Krishnan endorse the banning of porn sites. Not all agree with sweeping solutions. Lakshane says sometimes a court order helps to get tech companies to act faster on requests as in the case of a 2012 sex tape scandal where Google removed search results to 360 web pages. Also, the term 'revenge' porn, she says, is a misnomer as the videos are meant to shame women. "These are not movies where actors get paid. Somebody else is making money off this gross violation of privacy."

Human DNA Profiling Bill 2012 v/s 2015 Bill

by Vanya Rakesh last modified Sep 06, 2015 02:10 PM
This entry compares the provisions of the Human DNA Profiling Bill, 2012 with those of the 2015 Bill.

A comparison of changes that have been introduced in the Human DNA Profiling Bill, June 2015.

  • Definitions:

1. 2012 Bill: The definition of "analytical procedure" was included under clause 2 (1) (a) and defined as an orderly, step-by-step procedure designed to ensure operational uniformity.

2015 Bill: This definition has been included under the Explanation under clause 22 which provides for measures to be taken by DNA Laboratory.

2. 2012 Bill: The definition of "audit" was included under clause 2 (1) (b), defined as an inspection used to evaluate, confirm or verify activity related to quality.

2015 Bill: This definition has been included under the Explanation under clause 22 which provides for measures to be taken by DNA Laboratory.

3. 2012 Bill: There was no definition of "bodily substance".

2015 Bill: Clause 2(1) (b) defines bodily substance to be any biological material of or from a body of the person (whether living or dead) and includes intimate/non-intimate body samples as well.

4. 2012 Bill: The definition of "calibration" was included under clause 2 (1) (d) in the previous Bill.

2015 Bill: The definition has been removed from the definition clause and has been included as an explanation under clause 22.

5. 2012 Bill: Previously "DNA Data Bank" was defined under clause 2(1)(h) as a consolidated DNA profile storage and maintenance facility, whether in computerized or other form, containing the indices as mentioned in the Bill.

2015 Bill: In this version, however, the definition has been shortened under clause 2(1) (f) to mean a DNA Data Bank established under clause 24.

6. 2012 Bill: Previously, a "DNA Data Bank Manager" was defined under clause 2(1) (i) as the person responsible for supervision, execution and maintenance of the DNA Data Bank.

2015 Bill: In the new Bill, it is defined under clause 2(1) (g) as a person appointed under clause 26.

7. 2012 Bill: Under clause 2(1) (j), "DNA laboratory" was defined as any laboratory established to perform DNA procedures.

2015 Bill: Under clause 2(1) (h), "DNA laboratory" is now defined as any laboratory established to perform DNA profiling.

8. 2012 Bill: "DNA procedure" was defined under clause 2(1) (k) as a procedure to develop a DNA profile for use in the applicable instances as specified in the Schedule.

2015 Bill: This definition has been removed from the Bill.

9. 2012 Bill: There was no definition of "DNA profiling".

2015 Bill: DNA profiling has been defined under clause 2(1) (j) as a procedure to develop a DNA profile for human identification.

10. 2012 Bill: "DNA testing" was defined under clause 2(1) (n) as the identification and evaluation of biological evidence using DNA technologies for use in the applicable instances.

2015 Bill: This definition has been removed.

11. 2012 Bill: "Forensic material" was defined under clause 2(1) (o) as biological material of or from the body of a person, living or dead, representing an intimate or non-intimate body sample.

2015 Bill: This definition has been subsumed under the definition of "bodily substance" in clause 2(1) (b).

12. 2012 Bill: "Intimate body sample" was defined under clause 2(1) (q).

2015 Bill: This has been removed from the definitions clause and included as an explanation under clause 23, which addresses the sources and manner of collection of samples for DNA profiling.

13. 2012 Bill: "Intimate forensic procedure" was defined under clause 2(1) (r).

2015 Bill: This has been removed from the definitions clause and included as an explanation under clause 23.

14. 2012 Bill: "Non-intimate body sample" was defined under clause 2(1) (v).

2015 Bill: The definition has been removed from the definitions clause and included as an Explanation under clause 23.

15. 2012 Bill: "Non-intimate forensic procedure" was defined under clause 2(1) (w).

2015 Bill: The definition has been removed from the definitions clause and included as an Explanation under clause 23.

16. 2012 Bill: "Undertrial" was defined under clause 2(1) (zk) as a person against whom a criminal proceeding is pending in a court of law.

2015 Bill: Under clause 2(1) (zc), the definition now covers a person against whom charges have been framed for a specified offence in a court of law.

  • DNA Profiling Board:

1. 2012 Bill: Under clause 4 (a), the Bill stated that a renowned molecular biologist must be appointed as the Chairperson.

2015 Bill: Under clause 4 addressing Composition of the Board, the Bill states that the Board shall consist of a Chairperson who shall be appointed by the Central Government and must have at least fifteen years' experience in the field of biological sciences.

2. 2012 Bill: Under clause 4 (i), the Chairman of National Bioethics Committee of Department of Biotechnology, Government of India was to be included as a member under the DNA Profiling Board.

2015 Bill: This member has been removed from the composition.

3. 2012 Bill: Under clause 4 (m), the requisite experience of the person drawn from the field of genetics was not specified.

2015 Bill: Under clause 4 (m), it is now stated that such a person must have a minimum of twelve years' experience in the field.

4. 2012 Bill: The requisite experience of the two people drawn from the field of biological sciences was not specified under clause 4 (l).

2015 Bill: Under clause 4 (l), it is now stated that these two people must have a minimum of twelve years' experience in the field.

5. The following members have been included in the 2015 Bill-

i. Chairman of National Human Rights Commission or his nominees, as an ex-officio member under clause 4 (a).

ii. Secretary to Government of India, Ministry of Law and Justice or his nominees (not below rank of Joint Secretary), as an ex-officio member under clause 4 (b).

6. 2012 Bill: Under clause 5, the term of the members was not uniform and varied for all members.

2015 Bill: The term of people from the field of biological sciences and the person from the field of genetics has been states to be five years from the date of their entering upon the office, and would be eligible for re-appointment for not more than 2 consecutive terms.

Also, the age of a Chairperson or a member cannot exceed seventy years.

The term of members under clauses (c), (f), (h), and (i) of clause 4 is 3 years and for others the term shall continue as long as they hold the office.

  • Chief Executive Officer:

2012 Bill: Earlier it was stated in the Bill under clause 10 (3) that such a person should be a scientist with understanding of genetics and molecular biology.

2015 Bill: The Bill states under clause 11 (3) that the CEO shall be a person possessing qualifications and experience in science or as specified under regulations. The specific experience has been removed.

A new clause, 12(5), addresses the power of the Board to co-opt persons to attend its meetings and take part in the proceedings; however, such persons shall not have voting rights. They shall be entitled to specified allowances for attending the meetings.

  • Officers and Other Employees of Board:

2012 Bill: The Bill stated under clause 11 (3) that the Board may appoint consultants required to assist in the discharge of its functions on such terms and conditions as may be specified by the regulations.

2015 Bill: The 2015 Bill states under clause 12 (3) that the Board may appoint experts to assist in discharging its functions and may hold consultations with people whose rights may be affected by DNA profiling.

  • Functions of the Board:

2012 Bill: 26 functions were stated in the 2012 Bill.

2015 Bill: The number of functions has been reduced to 22, with a few changes based on the recommendations of the Expert Committee.

  • Power of Board to withdraw approval:

2015 Bill: The circumstances in which the Board can withdraw its approval have not been changed from the 2012 Bill (previously under clause 16). There is an addition to the list, provided under clause 17 (1) (d), whereby the Board can also withdraw its approval if the DNA laboratory fails to comply with any directions issued by the DNA Profiling Board or any regulatory authority under any other Act.

  • Obligations of DNA Laboratory:

2015 Bill: There is an addition to the list of obligations to be undertaken by a DNA laboratory under clause 19 (d). The laboratory has an additional obligation to share the DNA data prepared and maintained by it with the State DNA Data Bank and the National DNA Data Bank.

  • Qualification and experience of Head, technical and managerial staff and employees of DNA Laboratory:

2012 Bill: The previous Bill clearly mandated under clause 19 (2) that the Head of every DNA laboratory possess a Doctorate in Life Sciences from a recognised University, with knowledge and understanding of the foundations of molecular genetics as applied to DNA work, and such other qualifications as may be specified by regulations made by the Board.

2015 Bill: The provision has been generalized and provides under clause 20 (1) that such a person must possess the specified educational qualifications and experience.

  • Measures to be taken by DNA Laboratory:

2012 Bill: In the previous Bill, there were separate clauses with regard to security, minimization of contamination, evidence control system, validation process, analytical procedure, equipment calibration and maintenance, audits of laboratory to be followed by a DNA Laboratory.

2015 Bill: In the 2015 Bill, these measures to be adopted by DNA Laboratory have been included under one clause itself-clause 22.

  • Infrastructure and training:

2012 Bill: The specific provisions regarding infrastructure, fee, recruitment, training and installing of security system in the DNA Laboratory were present in the Bill under clauses 28-31.

2015 Bill: These provisions have been removed from the 2015 Bill.

  • Sources and manner of collection of samples for DNA profiling:

2012 Bill: Part II of the Schedule in the Bill provided for sources and manner of collection of samples for DNA Profiling.

The sources include tissue and skeletal remains, and already preserved body fluids and other samples.

Also, it provided for a list of the manner in which the profiling can be done:

(1) Medical Examination (2) Autopsy examination (3) Exhumation

Also, provision for collection of intimate and non-intimate body samples was provided as an Explanation.

2015 Bill: Under Clause 23, the sources include bodily substances and other sources as specified in Regulations. The other sources remain unchanged.

Also, provision for collection of intimate and non-intimate body samples is addressed in clause 23(2).

The explanation to the provision states what would be implied by the terms medical practitioner, intimate body sample, intimate forensic procedure, non-intimate body sample and non-intimate forensic procedure.

  • DNA Data Bank:

- Establishment:

2012 Bill: The Bill did not specify any location for establishment of the National DNA Data Bank.

2015 Bill: The Bill states under clause 24 (1) that the Central Government shall establish a National DNA Data Bank at Hyderabad.

-Maintenance of indices of DNA Data Bank:

2012 Bill: Apart from the DNA profiles, every DNA Data Bank was to contain the identity of the person from whose body the substances were taken, in the case of a profile in the offenders' index, under clause 32 (6) (a).

2015 Bill: Clause 25 (2) (a) states that the DNA Data Bank shall contain the identity for the suspects' or offenders' index.

  • DNA Data Bank Manager:

2012 Bill: The Bill stated under clause 33 (1) that a DNA Data Bank Manager shall be appointed for conducting all operations of the National DNA Data Bank. The functions were not specific.

2015 Bill: The Bill states specifically under clause 26 (1) that a DNA Data Bank Manager shall be appointed for the purposes of execution, maintenance and supervision of the National DNA Data Bank.

- Qualification:

2012 Bill: In the previous Bill, it was stated under clause 33 (3) that the DNA Data Bank Manager must be a scientist with understanding of computer applications and statistics.

2015 Bill: The Bill states under clause 26 (2) that the DNA Data Bank Manager must possess an educational qualification in science and such experience as prescribed by the regulations.

  • Officers and other employees of the National DNA Data Bank:

2012 Bill: The Bill stated under clause 34 (3) that the Board may appoint consultants required to assist in the discharge of the functions of the DNA Data Banks.

2015 Bill: The Bill provides under clause 27 (3) that the Board may appoint experts required to assist in the discharge of the functions of the DNA Data Banks.

  • Comparison and Communication of DNA profiles:

2015 Bill: The new Bill specifically addresses the comparison and communication of DNA profiles with those in the offenders' or crime scene index under clause 28 (1). Also, there is an additional provision under clause 29 (3) which states that the National DNA Data Bank Manager may, through the Central Bureau of Investigation and on the request of a court, tribunal, law enforcement agency or DNA laboratory, communicate a DNA profile to the Government of a foreign State, an international organization or an institution of Government.

  • Use of DNA profiles and DNA samples and records:

2012 Bill: The Bill provided under clause 39 that all DNA profiles, samples and records would be used solely for the purpose of facilitating identification of the perpetrator of an offence listed under the Schedule. The proviso to this provision allowed such samples to be used to identify victims of accidents or disasters, or missing persons, or for any purpose in a civil dispute.

2015 Bill: The Bill restricts the use of all DNA profiles, samples and records solely to the purpose of facilitating identification of a person under the Act, under clause 32.

  • DNA Profiling Board Fund:

2012 Bill: The Bill stated under clause 47 (2) that the financial power for the application of monies of the Fund shall be delegated to the Board in such manner as may be prescribed and as may be specified by the regulations made by the Board.

Also, the Bill stated that the Fund shall be applied for meeting remuneration requirements to be paid to the consultants under clause 47 (3) (c).

2015 Bill: This provision has not been included in the Bill. Also, the Bill does not include the provision of paying the remuneration to the experts from the Fund.

  • Delegation of Powers:

2012 Bill: The Bill provided under clause 61 that the Board may delegate its powers and functions to the Chairperson or any other Member or officer of the Board, subject to such conditions as may be necessary.

2015 Bill: This provision has not been included in the 2015 Bill.

  • Powers of Board to make rules:

2012 Bill: The Bill provided for an exhaustive list consisting of 33 powers listed under clause 65.

2015 Bill: The Bill provides for a list of 27 powers of the Board under clause 57.

  • Schedule:

2012 Bill: The list of offences for which human DNA profiling would be applicable included any law as may be specified by the regulations made by the Board.

2015 Bill: This provision has been removed from the 2015 Bill.

Responsible Data Forum: Discussion on the Risks and Mitigations of releasing Data

by Vanya Rakesh last modified Sep 06, 2015 02:29 PM

The Responsible Data Forum hosted a discussion on 26 August 2015 on the risks and mitigations of releasing data.

The discussion centred on what measures can adequately mitigate the risks to people and communities when data is prepared for release or sharing.

The discussion covered the following concerns:

  • What is risk? Risks in releasing development data and PII
  • What kinds of risks are there?
  • Risk to whom?
  • Risks in dealing with PII, discussed by way of several examples
  • What is missing from the world?

First, whoever creates a dataset bears the responsibility of ensuring that no harm comes to the people connected to it; a balance must be struck between good use of the data on one hand and protecting data subjects, sources and managers on the other.

To answer what risk is, it was defined as the “probability of something happening multiplied by the resulting cost or benefit if it does” (Oxford English Dictionary). Risk is therefore based on a cost or benefit, a probability, and a subject. Assessing probability means considering all possible risks in terms of how much harm would result and how likely that harm is to occur.
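
The definition above amounts to a simple expected-harm calculation. A minimal sketch follows; all probabilities and cost figures are purely illustrative assumptions, not values from the discussion:

```python
# Illustrative expected-risk calculation: risk = probability x cost.
# All outcomes, probabilities and cost figures below are hypothetical.

def expected_risk(probability: float, cost: float) -> float:
    """Expected harm of a single adverse outcome."""
    return probability * cost

# Hypothetical adverse outcomes of releasing a dataset:
# (description, probability, cost in arbitrary harm units)
outcomes = [
    ("re-identification of a data subject", 0.05, 100.0),
    ("leak of the raw data in transit",     0.01, 500.0),
]

total = sum(expected_risk(p, c) for _, p, c in outcomes)
print(total)  # 0.05*100 + 0.01*500 = 10.0
```

In practice the hard part is estimating the probabilities and costs, which is why the discussion stresses local context; the arithmetic itself is trivial.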

An example in this context came from Syria, where bakeries were bombed because their locations were known, making them easy targets. Against this backdrop it was discussed how local context is an important consideration for any secure data release mechanism.

Another example of bad practice was the leak of information in the Ashley Madison case, in the wake of which several people committed suicide.

  • Kinds of risk:
  1. Physical harm:

The next point of discussion was regarding kinds of the physical risks to data subjects when there is release/sharing of data related to them. Some of them were:

  i. security issues
  ii. hate speech
  iii. voter issues
  iv. police action

Hence PII goes both ways- where some choose to run the risk of PII being identified; on the other hand some run the risk of being identified as the releaser of information.

  2. Legal harms: to illustrate the legal harms posed in releasing or sharing data, an example was discussed of an image-marking exercise of a military camp, wherein people joined in, marked military equipment, and people from that country were discovered.
  3. Reputational harm, primarily as an organization.
  4. Privacy breach, which can lead to all sorts of harms.
  • Risk to whom?

At risk are the data subjects, as well as:

  i. Data collectors
  ii. The data processing team
  iii. The person releasing the data
  iv. The person using the data

The likelihood of risk ranges from low through medium to high. At worst, we as a community are at risk.

  • PII:

- Any data which can be used to identify a specific individual. Such information includes not only names, addresses or phone numbers, but also data sets that do not identify an individual in themselves yet can do so in combination with other data.

For example, in some places a social security number must be shared for an HIV+ status check-up; hence, one needs to be aware of the environment the data sets feed into. In another situation, where a small population needs to be identified by street, village or town for the purpose of religion, even such a data set can put people at risk.

Hence, awareness of the demographics is important: ascertain how many people reside in that place, be aware of the environment, and accordingly decide what data set should be made.

- Another way to mitigate risks at the time of release or sharing of data is partial release to some groups only, such as academic researchers or the data subjects themselves.

- Different examples were discussed to show how irresponsible release of data has affected data subjects and why work is needed to mitigate the harms caused in such cases.

For example, in the New York City taxi case, data about every taxi ride was released, including pickup and drop-off locations, times and fares. Re-identification from such data becomes especially problematic when, say, someone is visiting strip clubs, and this necessitates protecting people against such insinuation.

This shows how data sets can enable re-identification even when none is intended. Hence, the actors involved must understand their responsibilities when collecting or releasing data and mitigate the associated risks accordingly.
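
The re-identification mechanism at work in cases like this can be sketched with a toy example: an attacker links a "de-identified" release against auxiliary knowledge using quasi-identifiers (here, pickup time and place). All records and values below are fabricated for illustration:

```python
# Toy re-identification: linking a pseudonymised data release to
# auxiliary knowledge via quasi-identifiers. All data is fabricated.

released_rides = [
    {"ride_id": "a91f", "pickup_time": "2013-07-01T23:15", "pickup_loc": "W 27th St", "fare": 18.5},
    {"ride_id": "b02c", "pickup_time": "2013-07-01T09:05", "pickup_loc": "5th Ave",   "fare": 7.0},
]

# Attacker's side knowledge, e.g. from a photograph or news report:
# a known person was seen hailing a taxi at this time and place.
observation = {"pickup_time": "2013-07-01T23:15", "pickup_loc": "W 27th St"}

matches = [r for r in released_rides
           if r["pickup_time"] == observation["pickup_time"]
           and r["pickup_loc"] == observation["pickup_loc"]]

# A unique match re-identifies the ride, exposing everything else in
# the record (fare, drop-off location, and so on).
print(len(matches), matches[0]["ride_id"])  # 1 a91f
```

The point is that no field in the release names anyone; the combination of ordinary-looking fields with outside knowledge does the identifying.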

- A concern was raised over the collection and processing of information on genetic diseases in a small population, since it is practically impossible to guarantee that the information of the data subjects will not be released, exposed or re-identified. Experts will make their best efforts, but realistically people cannot be guaranteed that they will not be identified, so informing people of such risks is crucial. One suggested way of mitigating risk is involving the people concerned and letting them know; awareness of the potential impact of a data breach or identification is very important.
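
One formal way to reason about the small-population concern is k-anonymity: before release, every combination of quasi-identifiers should be shared by at least k records. A minimal sketch follows (the records are invented, and k-anonymity alone is not a sufficient safeguard):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the chosen quasi-identifiers.

    A result of 1 means at least one record is unique on those fields
    and hence potentially re-identifiable.
    """
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Fabricated health records for a small town.
records = [
    {"age_band": "30-39", "village": "A", "condition": "X"},
    {"age_band": "30-39", "village": "A", "condition": "Y"},
    {"age_band": "60-69", "village": "B", "condition": "X"},  # unique combination
]

k = k_anonymity(records, ["age_band", "village"])
print(k)  # 1: the 60-69/village-B record is unique within this release
```

In a small population almost every combination of attributes tends to be unique, which is exactly why a guarantee of non-identifiability cannot honestly be given.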

- Another factor for consideration is the context in which the information was collected, since that context can change over time. For example, many human rights funders want information on their websites changed or removed as contexts, circumstances and situations change. Here too, the risks associated with the collection and release of data become important because of changing contexts.

  • What is missing from the world?

Though the recognition of risks is under way and is an ongoing process, what is missing from the world are uniform guidelines, rules or laws. There are no policies for informed consent or for mitigating risks collectively in a uniform manner. The principles of necessity, proportionality and informed consent must be adopted.

Connected Choices

by Melissa Hathaway — last modified Sep 09, 2015 01:26 AM

Modern societies are in the middle of a strategic, multi-dimensional competition for money, power and control over all aspects of the Internet and the Internet economy. Ms. Hathaway will discuss the increasing pace of discord and the competing interests that are unfolding in the current debate concerning the control and governance of the Internet and its infrastructure. Some countries are more prepared for and committed to winning tactical battles than others on the road to asserting themselves as an Internet power. Some are acutely aware of what is at stake; the question is whether they will be the master or the victim of these multi-layered power struggles as subtle and not-so-subtle connected choices are being made. Understanding this debate requires an appreciation of the entangled economic, technical, regulatory, political, and social interests implicated by the Internet. Those states that are prepared for and understand the many facets the Internet presents will likely end up on top.

Anonymity in Cyberspace

by Sunil Abraham last modified Sep 09, 2015 01:31 AM

While security threats require one to be identifiable in cyberspace, the need for privacy and for freedom of speech without being targeted calls for anonymous browsing and the ability to express oneself without being identified. Where do we draw the line, and how do we balance the two? The group will dwell on the need for anonymity in various sectors such as government, commerce and employment. Apart from security and privacy, the presentation will also cover social and technological perspectives.

DIDP Request #11: NETmundial Principles

by Aditya Garg — last modified Sep 14, 2015 03:08 PM
The Centre for Internet & Society (CIS) followed up on the implementation of the NETmundial Principles that ICANN has been endorsing by sending them a second request under their Documentary Information Disclosure Policy. This request and their response have been described in this blog post.

22 July 2015

To:

Mr. Fadi Chehade, CEO and President

Mr. Steve Crocker, Chairman of the Board

Mr. Cherine Chalaby, Chair, Finance Committee of the Board

Mr. Xavier Calvez, Chief Financial Officer

Sub: Details of documents within ICANN regarding implementation of NETmundial Principles and documents modified within ICANN as a result of the same

It is our understanding that ICANN is one of the founding members of the NETmundial Initiative, and hence it has been credited in the public forum for championing the Initiative.[1]

Mr. Fadi Chehade, CEO and President of ICANN, has maintained that it is time for the global community to act and implement the Principles set forth in the initiative.[2]

ICANN itself, in response to one of our earlier requests, has acknowledged that "NETmundial Principles are high-level statements that permeate through the work of any entity – particularly a multistakeholder entity like ICANN."[3]

We, therefore, request all existing documents within ICANN which represent its efforts to implement the NETmundial Principles in its working. Additionally, we would also like to request all the documents which were modified as a result of ICANN’s support of the NETmundial Initiative, highlighting the modifications so made.

We look forward to the receipt of this information within the stipulated period of 30 days. Please feel free to contact us in the event of any doubts regarding our queries.

Thank you very much.

Warm regards,
Aditya Garg,
1st Year, National Law University, Delhi for Centre for Internet & Society

ICANN Response

ICANN in their response pointed to an earlier DIDP request that we had sent in, and replied along the same lines. They brought to our attention that ICANN was not responsible for the implementation of the NETmundial Principles, despite being one of the founding members of the Initiative. They reiterated their earlier statement that ICANN is not the “…home for implementation of the NETmundial Principles or the evolution of multistakeholder participation in Internet governance.” They have failed to provide us with documentary proof of the implementation of these principles, and have only pointed to statements which indicate a potential prospective adoption of the said initiative[4]; the responses have been near-identical to those for the earlier DIDP request, which you can find here.

Further, ICANN claims that the information we seek falls within the scope of the exceptions to disclosure they lay down, as it is not within their operational activities, an explanation that fails to satisfy us. As always, they have used the wide scope of their non-disclosure policy to avoid providing us with the requisite information.

The request can be found here, and ICANN’s response has been linked here.


[1]. See McCarthy, I’m Begging You To Join, The Register (12 December 2014), http://www.theregister.co.uk/2014/12/12/im_begging_you_to_join_netmundial_initiative_gets_desperate/

[2]. See NETmundial Initiative Goes Live, Global Internet Community Invited to Participate (Press Release), https://www.netmundial.org/press-release-1

[3]. See Response to Documentary Information Disclosure Policy Request No. 20141228-1-NETmundial, https://www.icann.org/en/system/files/files/cis-netmundial-response-27jan15-en.pdf

[4]. Such as Objective 4.3 of their Strategic Five Year Plan. “Demonstrate leadership by implementing best practices in multistakeholder mechanisms within the distributed Internet governance ecosystem while encouraging all stakeholders to implement the principles endorsed at NETmundial” at https://www.icann.org/en/system/files/files/strategic-plan-2016-2020-10oct14-en.pdf

DIDP Request #12: Revenues

by Aditya Garg — last modified Sep 14, 2015 03:32 PM
The Centre for Internet & Society (CIS) sought information from ICANN on their revenue streams by sending them a second request under their Documentary Information Disclosure Policy. This request and their response have been described in this blog post.

CIS Request

22 July 2015

To:

Mr. Cherine Chalaby, Chair, Finance Committee of the Board

Mr. Xavier Calvez, Chief Financial Officer

Mr. Samiran Gupta, ICANN India

All other members of Staff involved in accounting and financial tasks

Sub: Raw data with respect to granular income/revenue statements of ICANN from 1999-2011

We would like to thank ICANN for their prompt response to our earlier requests. We appreciate that the granular Revenue Details for FY14 have been posted online.[1] We also appreciate that a similar document has been posted for FY13.[2]

We hope that one for FY12 will be posted soon, as noted by you in your response to our Request No. 20141222-1.[3]

As also noted by you in the same response, similar reports cannot be prepared for FY99 to FY11 since “[i]t would be extremely time consuming and overly burdensome to cull through the raw data in order to compile the reports for the prior years”.[4]

Additionally, it was also mentioned that the “relevant information is available in other public available documents”.[5]

Hence, we would like to request the raw data for years FY99 to FY11, for our research on accountability and transparency mechanisms in Internet governance, specifically of ICANN. Additionally, we would also like to request the links to such public documents where the information is available.

We look forward to the receipt of this information within the stipulated period of 30 days. Please feel free to contact us in the event of any doubts regarding our queries.
Thank you very much.
Warm regards,
Aditya Garg,  
I Year, National Law University, Delhi
For Centre for Internet & Society
W: http://cis-india.org

ICANN Response

ICANN referred to our earlier DIDP request (see here), in which we had sought a detailed report of their granular income and revenue statements from 1999-2014. They refused to disclose the data on the grounds that it would be ‘time consuming’ and ‘overly burdensome’, which is a ground for refusal under their exceptions to disclosure.

Our request may be found here, and their response is linked to here.


[1]. See FY14 Revenue Detail By Source, https://www.icann.org/en/system/files/files/fy2014-revenue-source-01may15-en.pdf.

[2]. See FY13 Revenue Detail By Source, https://www.icann.org/en/system/files/files/fy2013-revenue-source-01may15-en.pdf

[3]. See Response to Documentary Information Disclosure Policy Request No. 20141222-1, https://www.icann.org/en/system/files/files/cis-response-21jan15-en.pdf.

[4]. Id.

[5]. See Response to Documentary Information Disclosure Policy Request No. 20141222-1, https://www.icann.org/en/system/files/files/cis-response-21jan15-en.pdf.

India’s digital check

by Sunil Abraham last modified Sep 15, 2015 02:55 PM
All nine pillars of Digital India directly correlate with policy research conducted at the Centre for Internet and Society, where I have worked for the last seven years. This allows our research outputs to speak directly to the priorities of the government when it comes to digital transformation.

The article was originally published by DNA on July 8, 2015.


Broadband Highways and Universal Access to Mobile Connectivity: The first two pillars have been combined in this paragraph because they both require spectrum policy and governance fixes. Shyam Ponappa, a distinguished fellow at our Centre, calls for the leveraging of shared spectrum and also shared backhaul infrastructure. Plurality in spectrum management, e.g. unlicensed spectrum, should be promoted for accelerating backhaul or last-mile connectivity, and also for community or local government broadband efforts. Other ideas that have been considered by Ponappa include getting state-owned telcos to exit completely from the last mile and focus only on running an open access backhaul through Bharat Broadband Limited. Network neutrality regulations are also required to mitigate free speech, diversity and competition harms as ISPs and TSPs innovate with business models such as zero-rating.

Public Internet Access Programme: Continuing investments into Common Service Centres (CSCs) for almost a decade may be questionable, and therefore a citizens’ audit should be undertaken to determine how the programme may be redesigned. The reinventing of post offices is very welcome; however, public libraries are also in urgent need of reinvention. CSCs, post offices and public libraries should all leverage long-range WiFi for Internet and intranet access, empowering BYOD [Bring Your Own Device] users. Applications will take time to develop, and therefore the immediate emphasis should be on locally caching Indic-language content. State Public Library Acts need to be amended to allow for the borrowing of digital content. Flat-fee licensing regimes must be explored to increase access to knowledge and culture. Commons-based peer production efforts like Wikipedia and Wikisource need to be encouraged.

e-Governance: Reforming Government through Technology: DeitY, under the leadership of free software advocate Secretary RS Sharma, has accelerated adoption and implementation of policies supporting non-proprietary approaches to intellectual property in e-governance. Policies exist and are being implemented for free and open source software, open standards and electronic accessibility for the disabled. The proprietary software lobby headed by Microsoft and industry associations like NASSCOM have tried to undermine these policies but have failed so far.

The government should continue to resist such pressures. Universal adoption of electronic signatures within government so that there is a proper audit trail for all communications and transactions should be made an immediate priority. Adherence to globally accepted data protection principles such as minimisation via “form simplification and field reduction” for Digital India should be applauded. But on the other hand the mandatory requirement of Aadhaar for DigiLocker and eSign amounts to contempt of the Supreme Court order in this regard.

e-Kranti — Electronic Delivery of Services: The 41 mission mode projects listed are within the top-down planning paradigm, with a high risk of failure — the funds reserved for these projects should instead be converted into incentives for those public, private and public-private partnerships that accelerate adoption of e-governance. The dependency on the National Informatics Centre (NIC) for implementation of e-governance needs to be reduced; SMEs need to be able to participate in the development of e-governance applications. The funds allocated to DeitY for this area have also produced a draft bill for Electronic Services Delivery. This bill was supposed to give RTI-like teeth to e-governance services by requiring each government department and ministry to publish service level agreements [SLAs] for each of their services and by prescribing punitive action for responsible institutions and individuals in the event of non-compliance with the SLAs.

Information for All: The open data community and the Right to Information movement in India are not happy with the rate of implementation of National Data Sharing and Accessibility Policy (NDSAP). Many of the datasets on the Open Data Portal are of low value to citizens and cannot be leveraged commercially by enterprise. Publication of high-value datasets needs to be expedited by amending the proactive disclosure section of the Right to Information Act 2005.

Electronics Manufacturing: Mobile patent wars have begun in India with seven big-ticket cases filed at the Delhi High Court. Our Centre has written an open letter to the previous minister for HRD and the current PM requesting them to establish a device-level patent pool with a compulsory license of 5%, thereby replicating India’s success at becoming the pharmacy of the developing world and the lead provider of generic medicines through the enabling patent policy established in the 1970s. In a forthcoming paper with Prof Jorge Contreras, my colleague Rohini Lakshané will map around fifty thousand patents associated with mobile technologies. We estimate around a billion USD would be collected in royalties for the rights-holders, whilst eliminating legal uncertainties for manufacturers of mobile technologies.

IT for Jobs: Centralised, top-down, government-run human resource development programmes are not useful. Instead, the government needs to focus on curriculum reform and restructuring of the education system. Mandatory introduction of free and open source software will give Indian students the opportunity to learn by reading world-class software. They will then grow up to become computer scientists rather than computer operators. All projects at academic institutions should be contributions to existing free software projects — these projects could be global or national, e.g. a local government’s e-governance application. The budget allocated for this pillar should instead be used to incentivise research by giving micro-grants and prizes to students who make key software contributions, publish in peer-reviewed academic journals or participate in competitions. This would be a more systemic approach to dealing with the skills and knowledge deficit amongst Indian software professionals.

Early Harvest Programmes: Many of the ideas here are very important. For example, secure email for government officials — if this was developed and deployed in a decentralised manner it would prevent future surveillance of the Indian government by the NSA. But a few of the other low-hanging fruit identified here don’t really contribute to governance. For example, biometric attendance for bureaucrats is just glorified bean-counting — it does not really contribute to more accountability, transparency or better governance.


The author works for the Centre for Internet and Society which receives funds from Wikimedia Foundation that has zero-rating alliances with telecom operators in many countries across the world

Sustainable Smart Cities India Conference 2015, Bangalore

by Vanya Rakesh last modified Sep 21, 2015 02:24 AM
Nispana Innovative Platforms organized a Sustainable Smart Cities India Conference 2015, in Bangalore on 3rd and 4th September, 2015. The event saw participation from people across various sectors including Government Representatives from Ministries, Municipalities, Regulatory Authorities, as well as Project Management Companies, Engineers, Architects, Consultants, Handpicked Technology Solution Providers and Researchers. National and International experts and stakeholders were also present to discuss the opportunities and challenges in creating smart and responsible cities as well as citizens, and creating a roadmap for converting the smart cities vision into a reality that is best suited for India.

The objective of the conference was to discuss the meaning of a smart city, the promises made, the challenges and possible solutions for implementation of ideas by transforming Indian Cities towards a Sustainable and Smart Future.

Smart Cities Mission

Considering the pace of rapid urbanization in India, it has been estimated that the urban population would rise by more than 400 million people by the year 2050[1] and would contribute nearly 75% to India’s GDP by the year 2030. It has been realized that to foster such growth, well planned cities are of utmost importance. For this, the Indian government has come up with a Smart Cities initiative to drive economic growth and improve the quality of life of people by enabling local area development and harnessing technology, especially technology that leads to Smart outcomes.

Initially, the Mission aims to cover 100 cities across the country (which have been shortlisted on the basis of a Smart Cities Proposal prepared by each city) and its duration will be five years (FY2015-16 to FY2019-20). The Mission may be continued thereafter in the light of an evaluation to be done by the Ministry of Urban Development (MoUD), incorporating the learnings into the Mission. This initiative aims to focus on area-based development in the form of redevelopment, or developing new areas (Greenfield), to accommodate the growing urban population and ensure comprehensive planning to improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged.[2]

What is being done?

The Smart Cities Mission will be operated as a Centrally Sponsored Scheme (CSS), and the Central Government proposes to give financial support to the Mission to the extent of Rs. 48,000 crore over five years, i.e. on average Rs. 100 crore per city per year. The Government has come up with two missions for the purpose of achieving urban transformation: the Atal Mission for Rejuvenation and Urban Transformation (AMRUT) and the Smart Cities Mission. The vision is to preserve India’s traditional architecture, culture and ethnicity while implementing modern technology to make cities livable, use resources in a sustainable manner and create an inclusive environment. Additionally, Foreign Direct Investment regulations have been relaxed to invite foreign capital and expertise into the Smart Cities Mission.

What is a Smart City?

Over the two-day conference, various speakers shared a common sentiment that the Governments’ mission does not clearly define what encompasses the idea of a Smart City. There is no universally accepted definition of a Smart City and its conceptualization varies from city to city and country to country.

The broad global consensus is that a smart city is one which is livable, sustainable and inclusive. Hence, it would mean a city which has mobility, healthcare, smart infrastructure, smart people, traffic management, efficient waste and resource management, etc.

There is also a global debate at the United Nations regarding developmental goals. One of these goals is gender equality, which is very important for the smart city initiative. Accordingly, a smart city must be one where women live free from violence, are enabled to participate, and are economically empowered.

Promises

The promises of the Smart City mission include:

  • a sustainable future and a reduced carbon footprint
  • adequate water supply
  • assured electricity supply
  • proper sanitation, including solid waste management
  • efficient urban mobility and public transport
  • affordable housing, especially for the poor
  • robust IT connectivity and digitalization
  • good governance, especially e-Governance and citizen participation
  • a sustainable environment
  • safety and security of citizens, particularly women, children and the elderly
  • health and education

The vision is to preserve country’s traditional architecture, culture & ethnicity while implementing modern technology. It was discussed how the Smart City Mission is currently attracting global investment, will create new job opportunities, improve communications and infrastructure, decrease pollution and ultimately improve the quality of living.

Challenges

The main challenges for implementation of these objectives are with respect to housing, dealing with existing cities and adopting the idea of retro-fitting.

Further challenges include eradicating urban poverty, controlling environmental degradation, formulating a fool-proof plan, building proper waste management mechanisms, widening roads but not at the cost of pedestrians and cyclists, and building cities which are inclusive and cater to the needs of women, children and disabled people.

Some of the top challenges will include devising a fool-proof plan to develop smart cities, meaningful public-private partnership, increasing renewable energy use, water supply, effective waste management, traffic management, meeting power demand, urban mobility, ICT connectivity, e-governance, etc., while preparing for the new threats that can emerge with the implementation of these new technologies.

What needs to be done?

The following suggestions were made by the experts to successfully implement government’s vision of creating successful smart cities in India.

  • Focus on the 4 P’s: Public-Private-People Partnership since people very much form a part of the cities.
  • Integration of organizations, government bodies, and the citizens. The Government can opt for a sentiment analysis.
  • Active participation by state governments since Land is a state subject under the Constitution. There must be a detailed framework to monitor the progress and the responsibilities must be clearly demarcated.
  • Detailed plans, policies and guidelines
  • Strengthen big data initiatives
  • Resource maximization
  • Make citizens smart by informing them and creating awareness
  • Need for competent people to run the projects
  • Visionary leadership
  • Create flexible and shared spaces for community development.

National/International case studies

Several national and international case studies were discussed to highlight practical challenges, enabling the selected Indian cities to learn from others’ mistakes or to include successful schemes in their planning from the inception.

  • Amsterdam Smart City: Described as a global village transformed into a smart city by involving its people, whose views were sought to make the plan a success. The role of big data and open data was heavily emphasized. It was also suggested that responsibilities be aligned across central, state and district governments to avoid overlapping functions. The city adopted smart grid integration to build intelligent infrastructure and subsidized initiatives to make the city livable.
  • GIFT City, Gujarat: An ICT-based, sustainable, strategically situated greenfield development. One of its major features is a utility tunnel for providing repair services, the top of which can be used as a walking/jogging track. The city has smart fire-safety measures, wide roads to manage traffic, and smart regulations.
  • Tel Aviv Smart City, Israel: Dubbed the Mediterranean cool city, with young and free-spirited people, it comprises a creative class with the 3 T’s: talent, technology and tolerance. The city welcomes startups and focuses on G2G, G2C and C2C initiatives, adopting technologically equipped programmes for effective governance and community building.

Participation

The event saw participation from people across various sectors, including government representatives from ministries, municipalities and regulatory authorities, as well as project management companies, engineers, architects, consultants, handpicked technology solution providers and researchers.

  • Foundation for Futuristic Cities: The conference saw participation from this think tank based out of Hyderabad working on establishing vibrant smart cities for a vibrant India. They are currently working on developing a "Smart City Protocol" for Indian cities collaborating with Technology, Government and Corporate partners by making a framework for Smart Cities, Big Data and predictive analytics for safe cities, City Sentiment Analysis, Situation Awareness Tools and mobile Apps for better city life by way of Hackathons and Devthons.
  • Centre for SMART cities, Bangalore: This is a research organization which aims to address the challenge of collaborating and sharing knowledge, resources and best practices that exist both in the private sector and governments/municipal bodies in a usable form and format.
  • BDP – India (Studio Leader – Urbanism): The Organization is based out of Delhi and is involved in providing services relating to master planning, urbanism, design and landscape design. The team includes interior designers, engineers, urbanists, sustainability experts, lighting designers, etc. The vision is to help build and create high quality, effective and inspiring built spaces.
  • UN Women: A United Nations organization working on gender equality, women’s empowerment and the elimination of discrimination. It strives to strengthen the rights of women by working with women, men, feminists, women’s networks, governments, local authorities and civil society to create national strategies that advance gender equality in line with national and international priorities. The UN negotiated the 2030 Agenda for Sustainable Development in August 2015 (to be formally adopted by world leaders in September 2015); it features 17 sustainable development goals, one of which is the achievement of gender equality and the empowerment of all women and girls.
  • Elematic India Pvt. Ltd.: The Company is a leading supplier of precast concrete technology worldwide providing smart solutions for concrete buildings to help enable build smart cities with safe infrastructure.

Conclusion

The event discussed in great detail what a smart city would look like in a country like India, where every city has different demographics, needs and resources.

The participants shared the understanding that a city is gauged not by its length and width, but by the broadness of its vision and the height of its dreams. The initiative of creating smart cities would echo across the country as a whole, not remain limited to urban centers. Hence, the plan must be inclusive in implementation, and right from its inception the people and their needs must be given due consideration to make it a success. The question of the road ahead, of how exactly this would happen, was on many minds. The first step, as the experts suggested, is to involve the citizens: inform them, take their suggestions, and plan the project for every city accordingly.

While focusing on cities made better by human ingenuity and technology, and on building mechanisms for housing, commerce, transportation and utilities, it must not be forgotten that technology is timely, but culture is timeless. Cities must not be faceless; community spaces must be built with walkable areas and smart utilization of limited resources. It must also be ensured that cities cater not only to the needs of the elite and skilled population, but also to less privileged communities. Adequate urban mapping must be done to ensure placement of community facilities such as restrooms, trash bins and information kiosks.

A story shared from personal experience by an expert architect working on green infrastructure was instrumental in setting the tone of the conference and is bound to stay with many of the participants. The architect’s son, a small child from Baroda, left his father speechless when he asked why there were no butterflies in the big city of Mumbai, when he used to play with butterflies every morning in his hometown in Gujarat. The incident was genuinely thought-provoking, and it left every architect, government representative and engineer wondering: before we set out to build smart cities with technologically equipped infrastructure and utilities, can we, as a country, come together and build a smart city with butterflies? Can we pay equal attention to sustainability, the environment and the requirements of the community in the smart city envisioned by the Government, so as to make the city livable and inclusive?

Questions that I, as a participant, am left with are:

  • Building a greenfield project is comparatively easier than upgrading existing cities into smart ones, which requires planning and optimum utilization of resources. The role of local bodies needs to be strengthened, which would primarily require a skilled workforce from planning through execution. What, then, must be done to make current cities “smarter”, and how can ordinary citizens be encouraged and funded to redefine and prioritize local needs?
  • The conference touched upon the need for a well-planned policy framework to govern the smart cities; what was missing, however, was a discussion of the kinds of policies every city would require to ensure governance and monitor operations. Chalking out well-thought-out urban policies is the first step towards implementing the project, and it deserves deliberate attention.
  • The Government’s plans seem to cater to the needs of only a handful of sections of society; they must also focus on the safety of women, chalk out initiatives to build basic utilities like public toilets, and plan infrastructure keeping disabled individuals in mind.

This is of paramount importance, since the Government must consider who the potential inhabitants of these future smart cities would be and what their particular needs are. Before cities are made better through technology, there is a requirement for more toilets as a basic utility. Thus, instead of treating technological advancement as the sole foundation for making people’s lives easier, cities must provide accessible utilities in order to become livable. What measures, then, would the Government and the other bodies involved take to ensure that these urban enclaves do not overlook the underprivileged?

Another issue that went unnoticed during the two-day event concerned the fundamental rights of individuals within the city, for example the right to privacy, the right to access services and utilities, and the right to security. These basic rights must be given due recognition by smart city developers in order to uphold the spirit of internationally accepted human rights principles. It is therefore important to ask how these future cities are going to address the rights of their people.

Apart from plans for waste management, another important factor that must not be overlooked is sustainability: maximizing available resources in the best possible ways, and adopting techniques to halt the fast-paced degradation of the environment.

The conference could have suggested more concrete measures, such as rainwater harvesting and better sewage management in existing cities.

The importance of big data in building smart cities was also emphasized by many experts. However, the regulation of the data being generated and released was not discussed. Big data analytics involves massive streams of data, which require regulation and control over their use and generation to ensure the information is not misused. How, then, would these cities govern big data techniques so as to make infrastructure and utilities technologically efficient on the one hand, while using these large data sets in a monitored fashion on the other?

Answers to these crucial questions would have brought a great deal of clarity to the minds of the officials, planners and potential residents of India’s smart cities.


[1] 2014 revision of the World Urbanization Prospects, United Nations, Department of Economic and Social Affairs, July 2014, Available at : http://www.un.org/en/development/desa/publications/2014-revision-world-urbanization-prospects.html

[2] Smart Cities, Mission Statement and Guidelines, Ministry of Urban Development, Government of India, June 2015, Available at : http://smartcities.gov.in/writereaddata/SmartCityGuidelines.pdf

Peering behind the veil of ICANN’s DIDP (I)

by Padmini Baruah — last modified Oct 15, 2015 02:42 AM
One of the key elements of the process of enhancing democracy and furthering transparency in any institution which holds power is open access to information for all the stakeholders. This is critical to ensure that there is accountability for the actions of those in charge of a body which utilises public funds and carries out functions in the public interest.

As the body which “coordinates the Internet Assigned Numbers Authority (IANA) functions, which are key technical services critical to the continued operations of the Internet's underlying address book, the Domain Name System (DNS)”,[1] ICANN is central to regulating the Internet (a public good if there ever was one), and it is therefore vital that its decision-making processes, financial flows, and operations be open to public scrutiny. ICANN itself echoes the same belief, upholding “a proven commitment to accountability and transparency in all of its practices”,[2] which is captured in its By-Laws and Affirmation of Commitments. In furtherance of this, ICANN has created its own Documentary Information Disclosure Policy (DIDP), in which it promises to “ensure that information contained in documents concerning ICANN's operational activities, and within ICANN's possession, custody, or control, is made available to the public unless there is a compelling reason for confidentiality.”[3]

ICANN has a vast array of documents that are already in the public domain, listed here. These include annual reports, budgets, registry reports, speeches, operating plans, correspondence, etc. However, its Documentary Information Disclosure Policy falls short of international standards for information disclosure. In this piece, I focus on ICANN’s defined conditions for non-disclosure of information, which seem to undercut the very transparency the DIDP process aims to uphold. The obvious comparison is with the right-to-information laws that governments the world over have enacted in furtherance of democracy. While ICANN cannot be equated to a democratically elected government, it nonetheless exercises sufficient regulatory power over the functioning of the Internet to owe a similar degree of information to all stakeholders in the internet community. I therefore examine ICANN’s conditions for non-disclosure and compare them to the analogous exclusions in India’s Right to Information Act, 2005.

ICANN’s Defined Conditions for Non-Disclosure versus Exclusions in Indian Law:

ICANN, in its DIDP, identifies a lengthy list of conditions as sufficient grounds for non-disclosure of information. One of the most important indicators of a strong transparency law is said to be minimal exclusions.[4] However, as seen from the comparison below, ICANN’s exclusions are extensive and vast, and this has been a barrier to the free flow of information. An analysis of ICANN's responses to DIDP requests (available here) shows that the conditions for non-disclosure have been invoked in over 50 of the 85 requests responded to (as of September 11, 2015); in other words, the majority of the requests ICANN receives are subjected to the non-disclosure policies.

In contrast, an analysis of India’s Right to Information Act, considered to be among the better drafted transparency laws of the world, reveals a much narrower list of exclusions that come in the way of a citizen obtaining any kind of information sought. The table below compares the two lists:

Each numbered entry below pairs ICANN’s condition for non-disclosure[5] with the analogous provision (if any) of India’s Right to Information Act, followed by a brief analysis.

1. ICANN: Information provided by or to a government or international organization which was to be kept confidential or would materially affect ICANN’s relationship with the body concerned.
   India: Information whose disclosure would prejudicially affect the sovereignty and integrity of India; the security, "strategic, scientific or economic" interests of the State; or relations with a foreign State; or which would lead to incitement of an offence[6]; and information received in confidence from a foreign government.[7]
   Analysis: The threshold for both bodies is fairly similar for this exclusion.

2. ICANN: Internal (staff/Board) information that, if disclosed, would or would be likely to compromise the integrity of ICANN's deliberative and decision-making process.
   India: Cabinet papers, including records of deliberations of the Council of Ministers, Secretaries and other officers, provided that such decisions, the reasons thereof, and the material on the basis of which the decisions were taken shall be made public after the decision has been taken and the matter is complete or over (unless subject to these exemptions).[8]
   Analysis: The Indian law is far more transparent, as it ultimately allows the records of internal deliberation to be made public after the decision is taken.

3. ICANN: Information related to the deliberative and decision-making process between ICANN, its constituents, and/or other entities with which ICANN cooperates that, if disclosed, would or would be likely to compromise the integrity of the deliberative and decision-making process.
   India: No similar provision in Indian law.
   Analysis: This is an additional restriction that ICANN introduces on top of the one above, which is itself quite broad.

4. ICANN: Records relating to an individual's personal information.
   India: Information which relates to personal information the disclosure of which has no relationship to any public activity or interest, or which would cause unwarranted invasion of the privacy of the individual (provided that information which cannot be denied to Parliament or a State Legislature shall not be denied under this exemption).[9]
   Analysis: Again, the Indian law contains a proviso for information with a “relationship to any public activity or interest”.

5. ICANN: Proceedings of internal appeal mechanisms and investigations.
   India: Information which has been expressly forbidden to be published by any court of law or tribunal, or the disclosure of which may constitute contempt of court.[10]
   Analysis: While ICANN prohibits the disclosure of all such proceedings, in India the exemption extends only to information that the court prohibits from being made public.

6. ICANN: Information provided to ICANN by a party that, if disclosed, would or would be likely to materially prejudice the commercial interests, financial interests, and/or competitive position of such party, or was provided to ICANN pursuant to a nondisclosure agreement or a nondisclosure provision within an agreement.
   India: Information, including commercial confidence, trade secrets or intellectual property, the disclosure of which would harm the competitive position of a third party, unless the competent authority is satisfied that the larger public interest warrants disclosure.[11]
   Analysis: Fairly similar in both lists.

7. ICANN: Confidential business information and/or internal policies and procedures.
   India: No separate provision in Indian law; this is encapsulated in the provision mentioned above.
   Analysis: Fairly similar in both lists.

8. ICANN: Information that, if disclosed, would or would be likely to endanger the life, health, or safety of any individual or materially prejudice the administration of justice.
   India: Information, the disclosure of which would endanger the life or physical safety of any person, or identify the source of information or assistance given in confidence for law enforcement or security purposes.[12]
   Analysis: Fairly similar in both lists.

9. ICANN: Information subject to any kind of privilege, or which might prejudice any investigation.
   India: Information, the disclosure of which would cause a breach of privilege of Parliament or a State Legislature[13]; and information which would impede the process of investigation or the apprehension or prosecution of offenders.[14]
   Analysis: Fairly similar in both lists.

10. ICANN: Drafts of all correspondence, reports, documents, agreements, contracts, emails, or any other forms of communication.
    India: No similar provision in Indian law.
    Analysis: This exclusion is absent from Indian law, and it is so broadly worded that it stands in the way of full transparency.

11. ICANN: Information that relates in any way to the security and stability of the Internet.
    India: No similar provision in Indian law.
    Analysis: This is perhaps necessary given ICANN’s role as the IANA Functions Operator. However, given the large public interest in this matter, there should be some proviso making information in this regard available to the public as well.

12. ICANN: Trade secrets and commercial and financial information not publicly disclosed by ICANN.
    India: Information, including commercial confidence, trade secrets or intellectual property, the disclosure of which would harm the competitive position of a third party, unless the competent authority is satisfied that the larger public interest warrants disclosure.[15]
    Analysis: Fairly similar in both cases.

13. ICANN: Information requests which are not reasonable; which are excessive or overly burdensome; with which compliance is not feasible; or which are made with an abusive or vexatious purpose or by a vexatious or querulous individual.
    India: No similar provision in Indian law.
    Analysis: Of all the DIDP exclusions, this is the most loosely worded. Its terms are not defined, and its extreme subjectivity means it can be used to deflect virtually any request made of ICANN. What amounts to ‘reasonable’? Whom is the process going to ‘burden’? What lens does ICANN use to define a ‘vexatious’ purpose? Where do we look for answers?

14. ICANN: No similar provision in ICANN’s DIDP.
    India: Information available to a person in his fiduciary relationship, unless the competent authority is satisfied that the larger public interest warrants disclosure.[16]

15. ICANN: No similar provision in ICANN’s DIDP.
    India: Information access to which would involve an infringement of copyright subsisting in a person other than the State.[17]

Thus, the net cast by the DIDP exclusions is vaster even than that of a democratic state’s transparency law. Clearly, the exclusions above have effectively allowed ICANN to dodge most of the requests coming its way. One can only hope that ICANN realises that these exclusions stand in the way of the transparency it is so committed to, and does away with this unreasonably wide range of exclusions on the road to the IANA Transition.


[1] https://www.icann.org/resources/pages/welcome-2012-02-25-en

[2] https://www.icann.org/resources/accountability

[3] https://www.icann.org/resources/pages/didp-2012-02-25-en

[4] Shekhar Singh, India: Grassroots Initiatives, in The Right to Know 19, 44 (Ann Florini ed., 2007)

[5] In a proviso, ICANN’s DIDP states that all these exemptions can be overridden if the larger public interest is higher. However, this has not yet been reflected in their responses to any DIDP requests.

[6] Section 8(1)(a), Right to Information Act, 2005.

[7] Section 8(1)(f), Right to Information Act, 2005.

[8] Section 8(1)(i), Right to Information Act, 2005.

[9] Section 8(1)(j), Right to Information Act, 2005.

[10] Section 8(1)(b), Right to Information Act, 2005.

[11] Section 8(1)(d), Right to Information Act, 2005.

[12] Section 8(1)(g), Right to Information Act, 2005.

[13] Section 8(1)(c), Right to Information Act, 2005.

[14] Section 8(1)(h), Right to Information Act, 2005.

[15] Section 8(1)(d), Right to Information Act, 2005.

[16] Section 8(1)(e), Right to Information Act, 2005.

[17] Section 9, Right to Information Act, 2005.

Hits and Misses With the Draft Encryption Policy

by Sunil Abraham last modified Sep 26, 2015 04:46 PM
Most encryption standards are open standards. They are developed by open participation in a publicly scrutable process by industry, academia and governments in standard setting organisations (SSOs) using the principles of “rough consensus” – sometimes established by the number of participants humming in unison – and “running code” – a working implementation of the standard. The open model of standards development is based on the Free and Open Source Software (FOSS) philosophy that “many eyes make all bugs shallow”.

The article was published in the Wire on September 26, 2015.


This model has largely been a success, but as Edward Snowden’s revelations have shown, the US, with its large army of mathematicians, has managed to compromise some of the standards developed under public and peer scrutiny. Once a standard is developed, its success or failure depends on voluntary adoption by various sections of the market: the private sector, government (since in most markets the scale of public procurement can shape the market) and end-users. This process of voluntary adoption usually results in the best standards rising to the top. Mandates on high-quality encryption standards and minimum key sizes are an excellent idea within the government context, to ensure that state, military, intelligence and law enforcement agencies are protected from foreign surveillance and from traitors within. In other words, these mandates are based on a national security imperative.

However, similar mandates for corporations and ordinary citizens are based on a diametrically opposite imperative: surveillance. These mandates therefore usually require standards that governments can compromise, typically via brute force (wherein supercomputers generate and attempt every possible key), and smaller key lengths, since the smaller the key length, the quicker the supercomputers can break in. Unlike the mandates for state, military, intelligence and law enforcement agencies, these mandates interfere with the market-based voluntary adoption of standards and are therefore examples of inappropriate regulation that will undermine the security and stability of information societies.
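The exponential cost of brute force is easy to see in miniature. The sketch below is illustrative only: the toy cipher, the known-plaintext setup and the 16-bit key are my own assumptions, not anything from the draft policy. It recovers a deliberately short key by exhaustive search; every additional key bit doubles the work.

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy cipher for illustration only: XOR the plaintext with a
    # keystream derived from the key. Not a real cipher.
    stream = hashlib.sha256(key).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def brute_force(ciphertext: bytes, known_plaintext: bytes, key_bits: int):
    # Try every possible key of the given length, counting attempts.
    for candidate in range(2 ** key_bits):
        key = candidate.to_bytes((key_bits + 7) // 8, "big")
        if toy_encrypt(key, known_plaintext) == ciphertext:
            return key, candidate + 1
    raise ValueError("key not found")

secret_key = (12345).to_bytes(2, "big")   # a 16-bit key: only 65,536 possibilities
ciphertext = toy_encrypt(secret_key, b"attack at dawn")

recovered, attempts = brute_force(ciphertext, b"attack at dawn", 16)
print(recovered == secret_key)  # → True
print(attempts)                 # → 12346
```

A 40-bit keyspace has about a trillion keys, within reach of modest hardware; a 128-bit keyspace is 2^88 times larger, which is why mandated maximum key lengths serve surveillance rather than security.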

Plain-text storage requirement

First, the draft policy mandates that Business to Business (B2B) and Consumer to Consumer (C2C) users store the equivalent plain text (decrypted versions) of their encrypted communications and stored data for 90 days from the date of transaction. This requirement is impossible to comply with, for three reasons. Foremost, encryption for web sessions is based on dynamically generated keys, and users are often not even aware that their interactions with web servers (including webmail such as Gmail and Yahoo Mail) are encrypted. Next, from a usability perspective, this would require additional manual steps that no one has time for in their daily use of technology. Finally, the plain-text store would become a honeypot for attackers. In effect, this requirement is as good as saying “don’t use encryption”.
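The point about dynamically generated session keys can be made concrete with a toy Diffie-Hellman exchange. The parameters below are my own choices for illustration (real TLS uses standardized large groups or elliptic curves): each side generates a fresh random secret per session, the shared key exists only in memory, and there is no stored key from which anyone could later reproduce plain text.

```python
import secrets

# Toy parameters: a Mersenne-prime modulus and small generator,
# chosen for illustration only. Real deployments use vetted groups.
P = 2 ** 127 - 1
G = 3

def fresh_session_half():
    # Each endpoint picks a new random secret for every session.
    secret = secrets.randbelow(P - 2) + 1
    public = pow(G, secret, P)
    return secret, public

client_secret, client_public = fresh_session_half()
server_secret, server_public = fresh_session_half()

# Both sides derive the same session key from the other's public value.
client_key = pow(server_public, client_secret, P)
server_key = pow(client_public, server_secret, P)
print(client_key == server_key)  # → True

# A new session yields a fresh, unrelated key: nothing persistent to log.
next_secret, next_public = fresh_session_half()
```

Because the per-session secrets are discarded when the connection closes, a "store the plain text for 90 days" mandate would require rebuilding the application layer around the cryptography, not merely flipping a configuration switch.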

Second, the policy mandates that B2C services and “service providers located within and outside India, using encryption” shall provide readable plain text, along with the corresponding encrypted information, using the same software/hardware used to produce the encrypted information, when demanded in line with the provisions of the laws of the country. From the perspective of lawful interception and targeted surveillance, it is indeed important that corporations cooperate with Indian intelligence and law enforcement agencies in a manner compliant with international and domestic human rights law. However, there are three circumstances in which this is unworkable: 1) when the service providers are FOSS communities like the Tor Project, which do not retain any user data and, as far as we know, do not cooperate with any government; 2) when the service provider offers consumers solutions based on end-to-end encryption and therefore does not hold the private keys required for decryption; and 3) when the Indian market is too small for a foreign provider to take requests from the Indian government seriously.

Where it is technically possible for the service provider to cooperate with Indian law enforcement and intelligence, greater compliance can be ensured by Indian participation in multilateral and multi-stakeholder internet governance policy development to ensure greater harmonisation of substantive and procedural law across jurisdictions. Options here for India include reform of the Mutual Legal Assistance Treaty (MLAT) process and standardisation of user data request formats via the Internet Jurisdiction Project.

Regulatory design

Governments do not have unlimited regulatory capability or capacity. They have to be conservative when designing regulation so that a high degree of compliance can be ensured. The draft policy mandates that citizens use only the encryption algorithms and key sizes that “will be prescribed by the government through notification from time to time.” This would be near impossible to enforce, given the burgeoning multiplicity of encryption technologies available and the number of citizens who will come online in the coming years. Similarly, the mandates that “service providers located within and outside India…must enter into an agreement with the government”, that “vendors of encryption products shall register their products with the designated agency of the government” and that “vendors shall submit working copies of the encryption software / hardware to the government along with professional quality documentation, test suites and execution platform environments” would be impossible to enforce, for two reasons: cloud-based providers will not submit their software, since they would want to protect their intellectual property from competitors; and smaller and non-profit service providers may not comply, since they cannot be threatened with bans or blocking orders.

This approach to regulation is inspired by licence-raj thinking, and its enforcement requires capability and capacity that we do not have. It would be more appropriate to take a “harms”-based approach, wherein the government targets only those corporations that do not comply with legitimate law enforcement and intelligence requests for user data and interception of communication.

Also, while the “Technical Advisory Committee” is the appropriate mechanism to ensure that policies remain technologically neutral, it does not appear that the annexure of the draft policy, i.e. “Draft Notification on modes and methods of Encryption prescribed under Section 84A of Information Technology Act 2000”, has been properly debated by technical experts. According to my colleague Pranesh Prakash, “of the three symmetric cryptographic primitives that are listed – AES, 3DES, and RC4 – one, RC4, has been shown to be a broken cipher.”
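The observation about RC4 being broken can be demonstrated empirically. The snippet below implements standard RC4 and reproduces one well-known statistical flaw, the Mantin-Shamir second-byte bias: across many random keys, the second keystream byte is zero about twice as often as an ideal cipher would allow. (The trial count is my own choice, and this is just one of several published weaknesses.)

```python
import os

def rc4_keystream(key: bytes, n: int) -> bytes:
    # Standard RC4 key-scheduling algorithm (KSA)...
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # ...followed by the pseudo-random generation algorithm (PRGA).
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Count how often the *second* keystream byte is zero across random keys.
trials = 50_000
zero_hits = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
rate = zero_hits / trials
print(rate)  # ≈ 0.0078 (about 1/128); an ideal cipher would give ≈ 0.0039 (1/256)
```

A cipher whose output is this distinguishable from random leaks information about every message it protects, which is why standards bodies have since prohibited RC4 outright.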

The draft policy also doesn’t take into account the security requirements of the IT, ITES, BPO and KPO industries that handle foreign intellectual property and personal information that is protected under European or American data protection law. If clients of these Indian companies feel that the Indian government would be able to access their confidential information, they will take their business to competing countries such as the Philippines.

And the good news is…

On the other hand, the second objective of the policy, which encourages “wider usage of digital Signature by all entities including Government for trusted communication, transactions and authentication”, is laudable, though it should ideally have been a mandate for all government officials, as this would ensure non-repudiation: officials would not be able to deny authorship of their communications or of the approvals they grant for the applications and files they process.

Second, the setting up of “testing and evaluation infrastructure for encryption products” is also long overdue. The initiation of “research and development programs … for the development of indigenous algorithms and manufacture of indigenous products” is slightly utopian because it will be a long time before indigenous standards are as good as the global state of the art but also notable as an important start.

The more important step for the government is to ensure high quality Indian participation in global SSOs and contributions to global standards. This has to be done through competition and market-based mechanisms wherein at least a billion dollars from the last spectrum auction should be immediately spent on funding existing government organisations, research organisations, independent research scholars and private sector organisations. These decisions should be made by peer-based committees and based on publicly verifiable measures of scientific rigour such as number of publications in peer-reviewed academic journals and acceptance of “running code” by SSOs.

Additionally the government needs to start making mathematics a viable career in India by either employing mathematicians directly or funding academic and independent research organisations who employ mathematicians. The basis of all encryptions standards is mathematics and we urgently need the tribe of Indian mathematicians to increase dramatically in this country.

Cyber 360 Agenda

by Prasad Krishna last modified Oct 02, 2015 03:41 PM

PDF document icon Agenda & Speakers - Cyber 360 conference-1.pdf — PDF document, 886 kB (907878 bytes)

Open Governance and Privacy in a Post-Snowden World : Webinar

by Vanya Rakesh last modified Oct 04, 2015 11:09 AM
On 10th September 2015, the OGP Support Unit, the Open Government Guide, and the World Bank held a webinar on “Open Governance and Privacy in a Post-Snowden World” presented by Carly Nyst, Independent consultant and former Legal Director of Privacy International and Javier Ruiz, Policy Director of Open Rights Group. This is a summary of the key issues that were discussed by the speakers and the participants.

See Open Governance and Privacy in a Post-Snowden World


Summary

The webinar discussed how government surveillance has become a key issue of the 21st century, thanks in large part to Edward Snowden. The main concern raised was what a democracy should look like in the present day: should the state’s use of technology enable state surveillance or an open government? Typically, a balance must be struck between the privacy of the individual and the security of the state, particularly as privacy also concerns the social rights and collective interests of citizens.

At the international level, the right to privacy has been recognized as a basic human right and an enabler of other individual freedoms. This right encapsulates protection of personal data where citizens have the authority to choose whether to share or reveal their personal data or not. Due to technological advancement that has enabled collection, storage and sharing of personal data, the right to privacy and data protection frameworks have become of utmost importance and relevance with regard to open government efforts. Therefore, it is important for Governments to be transparent in handling sensitive data that they collect and use.

Many countries have also introduced laws to balance the right to privacy and the right to information. The role of the private sector and of NGOs involved in enabling an open and transparent government must also be duly addressed at the national level.

Key Questions:

  • Why should the government release information?

There are multiple reasons for doing so, including:

  • Research and public policy (healthcare, social issues, economics, national statistics, census, etc.)

  • Transparency and accountability (politicians, registers, public expenses, subsidies, fraud, court records, education)

  • Public participation and public services (budgets, anti-corruption, engagement, and e-governance)

However, all these have certain risks and privacy implications:

  1. Risk of identification of individuals: Any individual whose information is released faces the risk of identification, followed by issues like identity theft, discrimination, stigmatization or repression. Normally, the solution would be anonymization of the data; however, this is not an absolute solution. Privacy laws can generally cope with such risks, but with pseudonymous data it becomes difficult to prevent re-identification.
  2. Profiling of social categories which can lead to discrimination: In such a situation, policies and other legislations regulating the use of data and providing remedy for violations can help.
  3. Exploitation and unfair/unethical use of information: When assessing the potential exploitation of information, it is useful to consider who is going to benefit from its release. For example, in the UK, with respect to the release of health data, the main concern is that people and companies will benefit commercially from the information released, despite the result potentially being improved drugs and treatments.
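The point about pseudonymous data is worth making concrete. The sketch below (entirely hypothetical data and names, not drawn from the webinar) illustrates a simple "linkage attack": even after names are replaced with pseudonyms, quasi-identifiers such as postal code, birth year and sex can be joined against a public register to re-identify each individual.

```python
# Hypothetical pseudonymised dataset: names removed, quasi-identifiers kept.
pseudonymised_health_records = [
    {"pseudonym": "A17", "zip": "560001", "birth_year": 1980, "sex": "F", "diagnosis": "diabetes"},
    {"pseudonym": "B42", "zip": "560034", "birth_year": 1975, "sex": "M", "diagnosis": "asthma"},
]

# Hypothetical public register (e.g. an electoral roll) with the same quasi-identifiers.
public_register = [
    {"name": "Asha Rao", "zip": "560001", "birth_year": 1980, "sex": "F"},
    {"name": "Vikram Shah", "zip": "560034", "birth_year": 1975, "sex": "M"},
]

def link(records, register):
    """Join the two datasets on quasi-identifiers alone; a unique match re-identifies."""
    matches = {}
    for r in records:
        candidates = [p["name"] for p in register
                      if (p["zip"], p["birth_year"], p["sex"]) ==
                         (r["zip"], r["birth_year"], r["sex"])]
        if len(candidates) == 1:  # exactly one candidate: the pseudonym is broken
            matches[r["pseudonym"]] = candidates[0]
    return matches

print(link(pseudonymised_health_records, public_register))
# Each pseudonym resolves to a named individual, exposing the attached diagnosis.
```

This is why removing direct identifiers alone is not an absolute solution: the defence has to address the uniqueness of quasi-identifier combinations, not just the names.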
  • What are the Solutions?

The webinar also discussed potential solutions to the questions and challenges posed. For example, when commitments under the Open Government Partnership are considered, privacy legislation must also be proposed. Further, key stakeholders must commit to taking pro-active measures to reduce informational asymmetries between the state and citizens. To reduce the risks, measures must be taken to publish what information the State holds and what the Government knows about its citizens. For example, in the UK, the civil society network is pressing for the national plan to commit the government to publicising how it shares data and to maintaining a centralised view of how information is handled and used.

The Open Government Guide provides for Illustrative Commitments like enactment of data protection legislation, establishing programmes for awareness and assessment of their impact, giving citizens control of their personal information and the right to redress when that information is misused, etc.

Surveillance

The issue of surveillance and the role of privacy in an open government context was also discussed.  The need for creating a balance between the legitimate interest of national security and the privacy of individuals was emphasized. With the rise of digital technologies, many governmental measures pertaining to surveillance intervene in individual privacy. There are many forms of surveillance and this has serious privacy implications, especially in developing countries. For example:

  1. Communications surveillance
  2. Visual surveillance
  3. Travel surveillance

This raises the question: When is surveillance legitimate and when must it be allowed?

The International Principles on the Application of Human Rights to Communications Surveillance acts as a soft law and tries to set out what a good surveillance system looks like by ensuring that governments are in compliance with international human rights law.

In essence, surveillance does not in itself violate privacy; however, there must be a clear and foreseeable legal framework laying out the circumstances in which the government has the power to collect data, so that individuals can foresee when they might be under surveillance.

Also, a competent judicial authority must be established to oversee surveillance and keep a check on executive power by placing restrictions on privacy invasions. The actions of the government must be proportionate, and the harm caused by surveillance must not outweigh its benefits.

Role of openness in a “mass surveillance” state

Surveillance measures undertaken by governments are increasingly secretive. The European Court of Human Rights has held that secret surveillance may undermine democracy under the cloak of protecting it. Hence, open government and openness work towards protecting privacy, not undermining it.

To balance government surveillance measures with privacy, there is a need to: publish the laws regulating such powers; publish transparency reports about surveillance, interception and access to communications data; reform legislation relating to surveillance by state agencies to ensure it complies with human rights; and establish safeguards to ensure that new technologies used for surveillance and interception respect the right to privacy.

Conclusion

The conclusion one can draw is that privacy concerns have gained importance in today’s data-driven world. The main question that needs to be answered is whether governments should adopt surveillance measures or an open government.

Considering the equal importance of national security and the privacy of individuals, a balance must be crafted between the two. This could possibly be done by enacting clear and foreseeable laws outlining the scope of government surveillance on the one hand, and informing citizens about such measures on the other. The establishment of a competent judicial authority to keep a check on government actions is also suggested, to work out the delicate balance between surveillance and privacy.

The Legal Validity of Internet Bans: Part I

by Geetha Hariharan and Padmini Baruah — last modified Oct 08, 2015 11:18 AM
In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. The State governments have relied on Section 144, Code of Criminal Procedure 1973 to institute such bans. Despite a legal challenge, the Gujarat High Court found no infirmity in this exercise of power in a recent order. We argue that it is Section 69A of the Information Technology Act 2000, and the Website Blocking Rules, which set out the legal provision and procedure empowering the State to block access to the Internet (if at all it is necessary), and not Section 144, CrPC.

 

 

In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. In Gujarat, the State government banned access to mobile Internet (data services) citing breach of peace during the Hardik Patel agitation. In Godhra in Gujarat, mobile Internet was banned as a precautionary measure during Ganesh visarjan. In Kashmir, mobile Internet was banned for three days or more because the government feared that people would share pictures of slaughter of animals during Eid on social media, which would spark unrest across the state.

Can State or Central governments impose a ban on Internet access? If the State or its officials anticipate disorder or a disturbance of ‘public tranquility’, can Internet access through mobiles be banned? According to a recent order of the Gujarat High Court: Yes; Section 144 of the Code of Criminal Procedure, 1973 (“CrPC”) empowers the State government machinery to impose a temporary ban.

But the Gujarat High Court’s order neglects the scope of Section 69A, IT Act, and wrongly finds that the State government can exercise blocking powers under Section 144, CrPC. In this post and the next, we argue that it is Section 69A of the Information Technology Act, 2000 (“IT Act”) which is the legal provision empowering the State to block access to the Internet (including data services), and not Section 144, CrPC. Section 69A covers blocks to Internet access, and since it is a special law dealing with the Internet, it prevails over the general Code of Criminal Procedure.

Moreover, the blocking powers must stay within constitutional boundaries prescribed in, inter alia, Article 19 of the Constitution. Blocking powers are, therefore, subject to the widely-accepted tests of legality (foresight and non-arbitrariness), legitimacy of the grounds for restriction of fundamental rights and proportionality, calling for narrowly tailored restrictions causing minimum disruptions and/or damage.

In Section I of this post, we set out a brief record of the events that preceded the blocking of access to data services (mobile Internet) in several parts of Gujarat. Then in Section II, we summarise the order of the Gujarat High Court, dismissing the petition challenging the State government’s Internet-blocking notification under Section 144, CrPC. In the next post, we examine the scope of Section 69A, IT Act to determine whether it empowers the State and Central government agencies to carry out blocks on Internet access through mobile phones (i.e., data services such as 2G, 3G and 4G) under certain circumstances. We submit that Section 69A does, and that Section 144, CrPC cannot be invoked for this purpose.

I. The Patidar Agitation in Gujarat:

This question arose in the wake of agitation in Gujarat in the Patel community. The Patels or Patidars are politically and economically influential in Gujarat, with several members of the community holding top political, bureaucratic and industrial positions. In the last couple of months, the Patidars have been agitating, demanding to be granted status as Other Backward Classes (OBC). OBC status would make the community eligible for reservations and quotas in educational institutions and for government jobs.

Towards this demand, the Patidars organised multiple rallies across Gujarat in August 2015. The largest rally, called the Kranti Rally, was held in Ahmedabad, Gujarat’s capital city, on August 25, 2015. Hardik Patel, a leader of the agitation, reportedly went on hunger strike seeking that the Patidars’ demands be met by the government, and was arrested as he did not have permission to stay on the rally grounds after the rally. While media reports vary, it is certain that violence and agitation broke out after the rally. Many were injured, some lost their lives, property was destroyed, businesses suffered; the army was deployed and curfew imposed for a few days across the State.

In addition to other security measures, the State government also imposed a ban on mobile Internet services across different parts of Gujarat. Reportedly, Hardik Patel had called for a state-wide bandh over Whatsapp. The police cited “concerns of rumour-mongering and crowd mobilisation through Whatsapp” as a reason for the ban, which was instituted under Section 144, Code of Criminal Procedure, 1973 (“CrPC”). In most of Gujarat, the ban lasted six days, from August 25 to 31, 2015, while it continued in Ahmedabad and Surat for longer.

II. The Public Interest Litigation:

A public interest petition was filed before the Gujarat High Court, challenging the mobile Internet ban. Though the petition was dismissed at the preliminary stage by Acting Chief Justice Jayant Patel and Justice Anjaria by an oral order delivered on September 15, 2015, the legal issues surrounding the ban are important and the order calls for some reflection.

In the PIL, the petitioner prayed that the Gujarat High Court declare that the notification under Section 144, CrPC, which blocked access to mobile Internet, is “void ab initio, ultra vires and unconstitutional” (para 1 of the order). The ban, argued the petitioner, violated Articles 14, 19 and 21 of the Constitution by being arbitrary and excessive, violating citizens’ right to free speech and causing businesses to suffer extensive economic damage. In any event, the power to block websites was specifically granted by Section 69A, IT Act, and so the government’s use of Section 144, CrPC to institute the mobile Internet block was legally impermissible. Not only this, but the government’s ban was excessive in that mobile Internet services were completely blocked; had the government’s concerns been about social media websites like Whatsapp or Facebook, the government could have suspended only those websites using Section 69A, IT Act. And so, the petitioner prayed that the Gujarat High Court issue a writ “permanently restraining the State government from imposing a complete or partial ban on access to mobile Internet/broadband services” in Gujarat.

The State Government saw things differently, of course. At the outset, the government argued that there was “sufficient valid ground for exercise of power” under Section 144, CrPC, to institute a mobile Internet block (para 4 of the order). Had the blocking notification not been issued, “peace could not have been restored with the other efforts made by the State for the maintenance of law and order”. The government stressed that Section 144, CrPC notifications were generally issued as a “last resort”, and in any case, the Internet had not been shut down in Gujarat; broadband and WiFi services continued to be active throughout. Since the government was the competent authority to evaluate law-and-order situations and appropriate actions, the Court ought to dismiss the petition, the State prayed.

The Court agreed with the State government, and dismissed the petition without issuing notice (para 9 of the order). The Court examined two issues in its order (very briefly):

  1. The scope and distinction between Section 144, CrPC and Section 69A, IT Act, and whether the invocation of Section 144, CrPC to block mobile Internet services constituted an arbitrary exercise of power;
  2. The proportionality of the blocking notification (though the Court doesn’t use the term ‘proportionality’).

We will examine the Court’s reading of Section 69A, IT Act and Section 144, CrPC, to see whether their fields of operation are in fact different.

 

Acknowledgements: We would like to thank Pranesh Prakash, Japreet Grewal, Sahana Manjesh and Sindhu Manjesh for their invaluable inputs in clarifying arguments and niggling details for these two posts.


Geetha Hariharan is a Programme Officer with Centre for Internet & Society. Padmini Baruah is in her final year of law at the National Law School of India University, Bangalore (NLSIU) and is an intern at CIS.

The Legal Validity of Internet Bans: Part II

by Geetha Hariharan and Padmini Baruah — last modified Oct 08, 2015 11:17 AM
In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. The State governments have relied on Section 144, Code of Criminal Procedure 1973 to institute such bans. Despite a legal challenge, the Gujarat High Court found no infirmity in this exercise of power in a recent order. We argue that it is Section 69A of the Information Technology Act 2000, and the Website Blocking Rules, which set out the legal provision and procedure empowering the State to block access to the Internet (if at all it is necessary), and not Section 144, CrPC.

As we saw earlier, the Gujarat High Court held that Section 144, CrPC empowers the State apparatus to order blocking of access to data services. According to the Court, Section 69A, IT Act can be used to block certain websites, while under Section 144, CrPC, the District Magistrate can direct telecom companies like Vodafone and Airtel, who extend the facility of Internet access. In effect, the High Court agreed with the State government’s argument that the scope of Section 69A, IT Act covers only blocking of certain websites, while Section 144, CrPC grants a wider power.

This is what the Court said (para 9 of the order):

“If the comparison of both the sections in the field of operations is made, barring certain minor overlapping more particularly for public order [sic], one can say that the area of operation of Section 69A is not the same as that of Section 144 of the Code. Section 69A may in a given case also be exercised for blocking certain websites, whereas under Section 144 of the Code, directions may be issued to certain persons who may be the source for extending the facility of internet access. Under the circumstances, we do not find that the contention raised on behalf of the petitioner that the resort to only Section 69A was available and exercise of power under Section 144 of the Code was unavailable, can be accepted.” (emphases ours)

We submit that the High Court’s reasoning failed to examine the scope of Section 69A, IT Act thoroughly. Section 69A does, in fact, empower the government to order blocking of access to data services, and it is a special law. Importantly, it sets forth a procedure that State governments, union territories and the Central Governments must follow to order blocks on websites or data services.

I. Special Law Prevails Over General Law

The IT Act, 2000 is a special law dealing with matters relating to the Internet, including offences and security measures. The CrPC is a general law of criminal procedure.

When a special law and a general law cover the same subject, the special law supersedes the general law. This is a settled legal principle, and several decisions of the Supreme Court attest to it. To take an example, in Maya Mathew v. State of Kerala, (2010) 3 SCR 16 (18 February 2010), there was a contention between the Special Rules for Kerala State Homoeopathy Services and the general Rules governing state and subordinate services. The Supreme Court held that when a special law and a general law both govern a matter, the Court should try to interpret them harmoniously as far as possible. But if the intention of the legislature is that one law should prevail over the other, and this intention is made clear expressly or impliedly, then the Court should give effect to it.

On the basis of this principle, let’s take a look at the IT Act, 2000. Section 81, IT Act expressly states that the provisions of the IT Act shall have overriding effect, notwithstanding anything inconsistent with any other law in force. Moreover, in the Statement of Objects and Reasons of the IT (Amendment) Bill, 2006, the legislature clearly notes that amendments inserting offences and security measures into the IT Act are necessary given the proliferation of the Internet and e-transactions, and the rising number of offences. These indicate expressly the legislature’s intention for the IT Act to prevail over general laws like the CrPC in matters relating to the Internet.

Now, we will examine whether the IT Act empowers the Central and State governments to carry out complete blocks on access to the Internet or data services, in the event of emergencies. If the IT Act does cover such a situation, then the CrPC should not be used to block data services. Instead, the IT Act and its Rules should be invoked.

II. Section 69A, IT Act Allows Blocks on Internet Access

Section 69A(1), IT Act says:

“Where the Central Government or any of its officer specially authorised by it in this behalf is satisfied that it is necessary or expedient so to do, in the interest of sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above, it may subject to the provisions of sub-section (2) for reasons to be recorded in writing, by order, direct any agency of the Government or intermediary to block for access by the public or cause to be blocked for access by the public any information generated, transmitted, received, stored or hosted in any computer resource.” (emphasis ours)

Essentially, Section 69A says that the government can block (or cause to be blocked) for access by the public, any information generated, transmitted, etc. in any computer resource, if the government is satisfied that such a measure is in the interests of public order.

Does this section allow the government to institute bans on Internet access in Gujarat? To determine this, we will examine each underlined term from above.

Access: Section 2(1)(a), IT Act defines access as “...gaining entry into, instructing or communicating with… resources of a computer, computer system or computer network”.

Computer resource: Section 2(1)(k), IT Act defines computer resource as “computer, computer system, computer network...”

Information: Section 2(1)(v), IT Act defines information as “includes… data, message, text, images, sound, voice...”

So ‘blocking for access’ under Section 69A includes preventing gaining entry or communicating with the resources of a computer, computer system or computer network, and it includes blocking communication of data, message, text, images, sound, etc. Now two questions arise:

(1) Do 2G and 3G services, broadband and Wifi fall within the definition of ‘computer network’?

Computer network: Section 2(1)(j), IT Act defines computer network as “inter-connection of one or more computers or computer systems or communication device…” by “...use of satellite, microwave, terrestrial line, wire, wireless or other communication media”.

(2) Do mobile phones that can connect to the Internet (we say smartphones for simplicity) fall within the definition of ‘computer resource’?

Communication device: Section 2(1)(ha), IT Act defines communication device as “cell phones, personal digital assistance or combination of both or any other device used to communicate, send or transmit any text, video, audio or image”.

So a cell phone is a communication device. A computer network is an inter-connection of communication devices by wire or wireless connections, and a computer network is also a computer resource. Blocking of access under Section 69A, IT Act therefore includes gaining entry into or communicating with the resources of a computer network, which is an interconnection of communication devices, including smartphones. Add to this the fact that any information (data, message, text, images, sound, voice) can be blocked, and the conclusion seems clear.

The power to block access to Internet services (including data services) can be found within Section 69A, IT Act itself, the special law enacted to cover matters relating to the Internet. Not only this, the IT Act envisages emergency situations when blocking powers may need to be invoked.

III. Section 69A Permits Blocking in Emergency Situations

Section 69A, IT Act doesn’t act in isolation. The Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“Blocking Rules”) operate together with Section 69A(1).

Rule 9 of the Blocking Rules deals with blocking of information in cases of emergency. It says that in cases of emergency, when “no delay is acceptable”, the Designated Officer (DO) shall examine the request for blocking. If it is within the scope of Section 69A(1) (i.e., within the grounds of public order, etc.), then the DO can submit the request to the Secretary, Department of Electronics and Information Technology (DeitY). If the Secretary is satisfied of the need to block during the emergency, then he may issue a reasoned order for blocking, in writing as an interim measure. The intermediaries do not need to be heard in such a situation.

After a blocking order is issued during an urgent situation, the DO must bring the blocking request to the Committee for Examination of Request constituted under Rule 7, Blocking Rules. There is also a review process, by a Review Committee that meets every two months to evaluate whether blocking directions are in compliance with Section 69A(1) [Rule 14].

We submit, therefore, that the Gujarat High Court erred in holding that Section 144, CrPC is the correct legal provision to enable Internet bans. Not only does Section 69A, IT Act cover blocking of access to Internet services, but it also envisages blocking in emergency situations. As a special law for matters surrounding the Internet, Section 69A should prevail over the general law provision of Section 144, CrPC.

 

Acknowledgements: We would like to thank Pranesh Prakash, Japreet Grewal, Sahana Manjesh and Sindhu Manjesh for their invaluable inputs in clarifying arguments and niggling details for these two posts.


Geetha Hariharan is a Programme Officer with Centre for Internet & Society. Padmini Baruah is in her final year of law at the National Law School of India University, Bangalore (NLSIU) and is an intern at CIS.

GSMA Conference Invite

by Prasad Krishna last modified Oct 14, 2015 01:49 AM

Conference Invite.pdf — PDF document, 68 kB (70004 bytes)

Participants of I&J Meeting in Berlin

by Prasad Krishna last modified Oct 14, 2015 02:49 AM

PARTICIPANTS - I&J MEETING BERLIN 8.-9.10.2015.pdf — PDF document, 131 kB (134278 bytes)

Agenda of I&J Meeting in Berlin

by Prasad Krishna last modified Oct 14, 2015 02:52 AM

AGENDA - I&J MEETING BERLIN 8.-9.10.2015-2.pdf — PDF document, 96 kB (99176 bytes)

Contestations of Data, ECJ Safe Harbor Ruling and Lessons for India

by Jyoti Panday last modified Oct 14, 2015 02:40 PM
The European Court of Justice has invalidated a European Commission decision, which had previously concluded that the 'Safe Harbour Privacy Principles' provide adequate protections for European citizens’ privacy rights in the transfer of personal data between the European Union and the United States. The inadequacies of the framework are not news for the European Commission, and action by the ECJ has been a long time coming. The ruling raises important questions about how the claims of citizenship are being negotiated in the context of the internet, and how increasingly the contestations of personal data are being employed in the discourse.

The European Court of Justice (ECJ) has invalidated a European Commission (EC) decision1 which had previously concluded that the 'Safe Harbor Privacy Principles'2 provide adequate protections for European citizens’ privacy rights3 in the transfer of personal data between the European Union and the United States. This challenge stems from the claim that public law enforcement authorities in America obtain personal data from organisations in safe harbour for incompatible and disproportionate purposes, in violation of the Safe Harbour Privacy Principles. The court's judgment follows the advice of the Advocate General of the Court of Justice of the European Union (CJEU), who recently opined4 that US practices allow for large-scale collection and transfer of personal data belonging to EU citizens without them benefiting from or having access to judicial protection under US privacy laws. The inadequacies of the framework are not news for the Commission, and action by the ECJ has been a long time coming. The ruling raises important questions about how increasingly the contestations of personal data are being employed in asserting claims of citizenship in the context of the internet.

As the highest court in Europe, the ECJ's decisions are binding on all member states. With this ruling the ECJ has effectively restrained US firms from indiscriminate collection and sharing of European citizens’ data on American soil. The implications of the decision are significant, because it shifts the onus of evaluating protections of personal data for EU citizens from the 4,400 companies5 subscribing to the system onto EU privacy watchdogs. Most significantly, in addressing the rights of a citizen against an established global brand, the judgement goes beyond political and legal opinion to challenge the power imbalance that exists with reference to US based firms.

Today, the free movement of data across borders is a critical factor in facilitating trade, financial services, governance, manufacturing, health and development. However, to consider the ruling as merely a clarification of transatlantic mechanisms for data flows misstates the real issue. At the heart of the judgment is the assessment of whether US firms apply the tests of ‘necessity and proportionality’ in the collection and surveillance of data for national security purposes. Application of the necessity and proportionality test to national security exceptions under safe harbor has been a sticking point that has stalled the renegotiation of the agreement underway between the Commission and the American data protection authorities.6

For EU citizens the stakes in the case are even higher: while their right to privacy is enshrined under EU law, they have no administrative or judicial means of redress if their data is used for reasons they did not intend. In the EU, citizens accessing and agreeing to the use of US-based firms are presented with a false choice between accessing benefits and giving up their fundamental right to privacy. In other words, by seeking that governments and private companies provide better data protection for EU citizens, and in restricting collection of personal data on a generalised basis without objective criteria, the ruling is effectively an assertion of ‘data sovereignty’. The term ‘data sovereignty’, while lacking a firm definition, refers to a spectrum of approaches adopted by different states to control data generated in or passing through national internet infrastructure.7 Underlying the ruling is the growing policy divide between US and EU privacy and data protection standards, which may lead to what is referred to as the balkanization8 of the internet in the future.

US-EU Data Protection Regime

The safe harbor pact between the EU and US was negotiated in the late 1990s as an attempt to bridge the different approaches to online privacy. Privacy is addressed in the EU as a fundamental human right while in the US it is defined under terms of consumer protection, which allow trade-offs and exceptions when national security seems to be under threat. In order to address the lower standards of data protection prevalent in the US, the pact facilitates data transfers from EU to US by establishing certain safeguards equivalent to the requirements of the EU data protection directive. The safe harbor provisions include firms undertaking not to pass personal information to third parties if the EU data protection standards are not met and giving users right to opt out of data collection.9

The agreement was due to be renewed by May 2015,10 and while negotiations have been ongoing for two years, EU discontent with safe harbour came to the fore following the Edward Snowden revelations of collection and monitoring facilitated by large private companies for the PRISM program, and after the announcement of the TransAtlantic Trade and Investment Partnership (TTIP).11 EU member states have mostly stayed silent, as they run their own surveillance programs, oftentimes in cooperation with the NSA. EU institutions cannot intervene in matters of national security; however, they do have authority on data protection matters. European Union officials and Members of Parliament have expressed shock and outrage at the surveillance programs unveiled by Snowden's 2013 revelations. Most recently, following the CJEU Advocate General’s opinion, 50 Members of the European Parliament (MEPs) sent a strongly worded letter to the US Congress, hitting back at claims of ‘digital protectionism’ emanating from the US.12 In no uncertain terms, the letter clarified that the EU has different ideas on privacy, platforms, net neutrality, encryption, Bitcoin, zero-days and copyright, and will seek to improve and change any proposal from the EC in the interest of its citizens and of all people.

Towards Harmonization

In November 2013, in an attempt to minimize the loss of trust following the Snowden revelations, the European Commission (EC) published recommendations in its report on 'Rebuilding Trust in EU-US Data Flows'.13 The recommendations revealed two critical initiatives at the EU level: first, the revision of the EU-US safe harbor agreement,14 and second, the adoption of the 'EU-US Umbrella Agreement',15 a framework for data transfer for the purpose of investigating, detecting, or prosecuting crime, including terrorism. The Umbrella Agreement was recently initialed by EU and US negotiators, and it addresses only the exchange of personal data between law enforcement agencies.16 The Agreement has gained momentum in the wake of recent cases around the territorial duties of providers, enforcement jurisdiction and data localisation.17 However, the adoption of the Umbrella Agreement depends on the US Congress adopting the Judicial Redress Act (JRA) into law.18

Judicial Redress Act

The JRA is a key reform that the EC is pushing for in an attempt to close the gap between the privacy rights and remedies available to US citizens and those extended to EU citizens, including by allowing EU citizens to sue in American courts. The JRA seeks to extend certain protections under the Privacy Act to records shared by the EU and other designated countries with US law enforcement agencies for the purpose of investigating, detecting, or prosecuting criminal offenses. The JRA's protections would extend to records shared under the Umbrella Agreement, and while the Act does include civil remedies for violations of data protection, as noted by the Center for Democracy and Technology, the present framework does not provide citizens of EU countries with redress on par with that which US persons enjoy under the Privacy Act.19

For example, the measures outlined under the JRA would apply only to countries that have concluded appropriate privacy protection agreements for data sharing for investigations and that 'efficiently share' such information with the US. Countries that do not have agreements with the US cannot seek these protections, leaving the personal data of their citizens open to collection and misuse by US agencies. Further, the arrangement leaves the determination of 'efficient sharing' in the hands of US authorities, and countries could lose protection if they do not comply with information-sharing requests promptly. Moreover, JRA protections do not extend to persons from non-designated countries, nor to records shared for purposes other than law enforcement, such as intelligence gathering. The JRA is further weakened by allowing heads of agencies to exercise their discretion to seek exemption from the Act and opt out of compliance.

Taken together, the JRA, the Umbrella Agreement and the renegotiated Safe Harbor Agreement all need considerable improvement. It is worth noting that the EU's acceptance of the redundancy of existing agreements, and its insistence on the independence of national data protection authorities in investigating and enforcing national laws, as demonstrated in the Schrems and Weltimmo20 cases, point to accelerated developments in the broader EU privacy landscape.

Consequences

The ECJ Safe Harbor ruling will have far-reaching consequences for the online industry. Often, costly government rulings solidify the market dominance of big companies: as high regulatory costs restrict the entry of small and medium businesses into the market, competition is gradually wiped out. Further, complying with high standards of data protection means that US firms handling European data will need to consider alternative legal means of transferring personal data. This could include evolving 'model contracts' binding them to EU data protection standards. As Schrems points out, “Big companies don’t only rely on safe harbour: they also rely on binding corporate rules and standard contractual clauses.”21

The ruling is good news for European consumers, who can now approach a national regulator to investigate suspicions of data mishandling. EU data protection regulators may be inundated with requests from companies seeking authorization of new contracts and with consumer complaints. Some are concerned that the ruling puts a dent in the globalized flow of data,22 effectively requiring data localization in Europe.23 Others have pointed out that it is unclear how the decision sits with other trade treaties, such as the TPP, that ban data localisation.24 While the implications of the decision will take some time to play out, what is certain is that US companies will have to restructure the management, storage and use of data. The ruling has also created an impetus for India to push for reforms that protect its citizens from harms by US firms and improve trade relations with the EU.

The Opportunity for India

Multiple data flows take place over the Internet simultaneously, and the resulting ubiquity of data transfers exposes individuals to privacy risks. Data processing has also grown in economic importance, as businesses collect and correlate data using analytic tools to create new demand, establish relationships and generate revenue from their services. The primary concern of the Schrems case may be the protection of the rights of EU citizens, but by seeking to extend these rights and ensure compliance in other jurisdictions, the case touches upon many underlying contestations around data and sovereignty.

Last year, Mr Ram Narain, India's Head of Delegation to the Working Group Plenary at the ITU, stressed that “respecting the principle of sovereignty of information through network functionality and global norms will go a long way in increasing the trust and confidence in use of ICT.”25 In the absence of the recognition of privacy as a right, and without measures or avenues empowering citizens to seek redressal against the misuse of data, the demand for data sovereignty rings hollow. The kind of framework that empowered an ordinary citizen in the EU to approach the highest court seeking redressal for the presumed overreach of a foreign government, and for harms abetted by private corporations, simply does not exist in India. Securing citizens' data in other jurisdictions and from other governments begins with establishing protection regimes within the country.

The Indian government has also stepped up efforts to restrict the transfer of data out of India, including by pushing private companies to open data centers in India.26 Negotiating data localisation does not, however, restrict the power of private corporations to use data in broad ways, including tailoring ads and promoting products. Data transfers also affect any organisation with international operations, for example global multinationals that need to coordinate employee data and information. Companies like Facebook, Google and Microsoft transfer and store data belonging to Indian citizens, and it is worth remembering that the National Security Agency (NSA) would have access to this data through the servers of such private companies. With no existing measures to restrict such indiscriminate access, the ruling points to the need for India to evolve strong protection mechanisms. Finally, the lack of such measures also has an economic impact: a recent Nasscom-Data Security Council of India (DSCI) survey27 pegs the revenue losses incurred by the Indian IT-BPO industry at $2-2.5 billion for a sample of 15 companies. DSCI has further estimated that the outsourcing business could grow by $50 billion per annum once India is granted “data secure” status by the EU.28 The EU's refusal to grant such status is understandable given the high standard of privacy incorporated in the European Union Data Protection Directive, a standard to which India does not yet match up. The lack of this status prevents the flow of data that is vital to the Digital India vision, and also affects the service industry by restricting the flow of sensitive information, such as patient records, to India.

Data and information structures are controlled and owned by private corporations, and networks transcend national borders; the foremost emphasis therefore needs to be on improving national frameworks. While enforcement mechanisms such as the Mutual Legal Assistance Treaty (MLAT) process and other methods of international cooperation may seem respectful of international borders and principles of sovereignty,29 for users who live in undemocratic or oppressive regimes such agreements pose a considerable risk. Data is also increasingly stored across multiple jurisdictions, so applying a purely data-location lens to protection measures may be too narrow. Further, it should be noted that when companies begin taking data storage decisions based on legal considerations, the speed and reliability of services will suffer.30 Any future regime must reflect the challenges of data transfers taking place across legal and economic spaces that are not identical and may be in opposition. Fundamentally, the protection of privacy will always act as a barrier to the free flow of information; even so, as the Schrems ruling shows, not having adequate privacy protections can also restrict the flow of data, as has been the case for India.

The time is right for India to appoint a data controller and put in place national frameworks based on a nuanced understanding of the issues involved in applying jurisdiction to govern users and their data. Establishing better protection measures will not only build trust and enhance the ability of users to control data about themselves; it is also essential for sustaining the economic and social value generated from data collection. Suggestions for such frameworks have been considered previously by the Group of Experts on Privacy constituted by the Planning Commission.31 By incorporating transparency into mechanisms for data and access requests, and by premising requests on established necessity and proportionality, the Indian government can lead the way in data protection standards. This will give the Indian government more teeth to challenge and address both the dangers of theft of data stored on servers located outside India and the indiscriminate access arising from business terms and conditions that grant such rights to third parties.

1 Commission Decision 2000/520/EC of 26 July 2000 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the safe harbour privacy principles and related frequently asked questions issued by the US Department of Commerce (notified under document number C(2000) 2441), Official Journal L 215, 25/08/2000, pp. 0007-0047: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000D0520:EN:HTML

2 Safe Harbour Privacy Principles Issued by the U.S. Department of Commerce on July 21, 2000 http://www.export.gov/safeharbor/eu/eg_main_018475.asp

4 Advocate General’s Opinion in Case C-362/14 Maximillian Schrems v Data Protection Commissioner Court of Justice of the European Union, Press Release, No 106/15 Luxembourg, 23 September 2015 http://curia.europa.eu/jcms/upload/docs/application/pdf/2015-09/cp150106en.pdf

5 Jennifer Baker, ‘EU desperately pushes just-as-dodgy safe harbour alternatives’, The Register, October 7, 2015 http://www.theregister.co.uk/2015/10/07/eu_pushes_safe_harbour_alternatives/ 

6 Draft Report, General Data Protection Regulation, Committee on Civil Liberties, Justice and Home Affairs, European Parliament, 2009-2014 http://www.europarl.europa.eu/meetdocs/2009_2014/documents/libe/pr/922/922387/922387en.pdf

7 Dana Polatin-Reuben, Joss Wright, ‘An Internet with BRICS Characteristics: Data Sovereignty and the Balkanisation of the Internet’, University of Oxford, July 7, 2014 https://www.usenix.org/system/files/conference/foci14/foci14-polatin-reuben.pdf

8 Sasha Meinrath, The Future of the Internet: Balkanization and Borders, Time, October 2013 http://ideas.time.com/2013/10/11/the-future-of-the-internet-balkanization-and-borders/

9 Safe Harbour Privacy Principles, Issued by the U.S. Department of Commerce, July 2001 http://www.export.gov/safeharbor/eu/eg_main_018475.asp

10 Facebook case may force European firms to change data storage practices, The Guardian, September 23, 2015 http://www.theguardian.com/us-news/2015/sep/23/us-intelligence-services-surveillance-privacy

11 Privacy Tracker, US-EU Safe Harbor Under Pressure, August 2, 2013 https://iapp.org/news/a/us-eu-safe-harbor-under-pressure

12 Kieren McCarthy, Privacy, net neutrality, security, encryption ... Europe tells Obama, US Congress to back off, The Register, 23 September, 2015 http://www.theregister.co.uk/2015/09/23/european_politicians_to_congress_back_off/

13 Communication from the Commission to the European Parliament and the Council, Rebuilding Trust in EU-US Data Flows, European Commission, November 2013 http://ec.europa.eu/justice/data-protection/files/com_2013_846_en.pdf

14 Safe Harbor on trial in the European Union, Access Blog, September 2014 https://www.accessnow.org/blog/2014/11/13/safe-harbor-on-trial-in-the-european-union

15 European Commission - Fact Sheet Questions and Answers on the EU-US data protection "Umbrella agreement", September 8, 2015 http://europa.eu/rapid/press-release_MEMO-15-5612_en.htm 

16 McGuire Woods, ‘EU and U.S. reach “Umbrella Agreement” on data transfers’, Lexology, September 14, 2015 http://www.lexology.com/library/detail.aspx?g=422bca41-2d54-4648-ae57-00d678515e1f

17 Andrew Woods, Lowering the Temperature on the Microsoft-Ireland Case, Lawfare September, 2015 https://www.lawfareblog.com/lowering-temperature-microsoft-ireland-case

18 Jens-Henrik Jeppesen, Greg Nojeim, ‘The EU-US Umbrella Agreement and the Judicial Redress Act: Small Steps Forward for EU Citizens’ Privacy Rights’, October 5, 2015 https://cdt.org/blog/the-eu-us-umbrella-agreement-and-the-judicial-redress-act-small-steps-forward-for-eu-citizens-privacy-rights/

19 Ibid 18.

20 Landmark ECJ data protection ruling could impact Facebook and Google, The Guardian, 2 October, 2015 http://www.theguardian.com/technology/2015/oct/02/landmark-ecj-data-protection-ruling-facebook-google-weltimmo

21 Julia Powles, Tech companies like Facebook not above the law, says Max Schrems, The Guardian, October 9, 2015 http://www.theguardian.com/technology/2015/oct/09/facebook-data-privacy-max-schrems-european-court-of-justice

22 Adam Thierer, Unintended Consequences of the EU Safe Harbor Ruling, The Technology Liberation Front, October 6, 2015 http://techliberation.com/2015/10/06/unintended-consequenses-of-the-eu-safe-harbor-ruling/#more-75831

23 Anupam Chander, Tweeted ECJ #schrems ruling may effectively require data localization within Europe, https://twitter.com/AnupamChander/status/651369730754801665

24 Lokman Tsui, Tweeted, “If the TPP bans data localization, but the ECJ ruling effectively mandates it, what does that mean for the internet?” https://twitter.com/lokmantsui/status/651393867376275456

26 Sounak Mitra, Xiaomi bets big on India despite problems, Business Standard, December 2014 http://www.business-standard.com/article/companies/xiaomi-bets-big-on-india-despite-problems-114122201023_1.html

27 Neha Alawadi, Ruling on data flow between EU & US may impact India’s IT sector, Economic Times, October 7, 2015 http://economictimes.indiatimes.com/articleshow/49250738.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

28 Pranav Menon, Data Protection Laws in India and Data Security: Impact on India-EU Free Trade Agreement, CIS Access to Knowledge, 2011 http://cis-india.org/a2k/blogs/data-security-laws-india.pdf

29 Surendra Kumar Sinha, India wants Mutual Legal Assistance treaty with Bangladesh, Economic Times, October 7, 2015 http://economictimes.indiatimes.com/articleshow/49262294.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

30 Pablo Chavez, Director, Public Policy and Government Affairs, Testifying before the U.S. Senate on transparency legislation, November 3, 2013 http://googlepublicpolicy.blogspot.in/2013/11/testifying-before-us-senate-on.htm 

31 Report of the Group of Experts on Privacy (Chaired by Justice A P Shah, Former Chief Justice, Delhi High Court), Planning Commission, October 2012 http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf
Peering behind the veil of ICANN's DIDP (II)

by Padmini Baruah — last modified Oct 15, 2015 03:14 AM
In a previous blog post, I introduced ICANN’s Documentary Information Disclosure Policy (“DIDP”) and its extremely broad grounds for non-disclosure. In this short post, I analyse every DIDP request that ICANN has ever responded to, in order to point out the flaws in the policy that urgently need to be remedied.

Read the previous blog post here. Every DIDP request that ICANN has ever responded to can be accessed here.


The table here is a comprehensive breakdown of all the DIDP requests that ICANN has responded to. It is to be read with this document, which contains a numbered list of the non-disclosure exceptions outlined in ICANN’s policy. What we sought to scrutinize was the number of times ICANN has provided satisfactory information, the number of times it has denied information, and the grounds for doing so. What we found was alarming:

  1. Of a total of 91 requests (as of 13/10/2015), ICANN has fully and positively responded to only 11.
  2. It has responded partially to 47 of 91 requests, with some amount of information (usually that which is available as public records).
  3. It has not responded at all to 33 of 91 requests.
  4. The Non-Disclosure Clause (1)[1] has been invoked 17 times.
  5. The Non-Disclosure Clause (2)[2] has been invoked 39 times.
  6. The Non-Disclosure Clause (3)[3] has been invoked 31 times.
  7. The Non-Disclosure Clause (4)[4] has been invoked 5 times.
  8. The Non-Disclosure Clause (5)[5] has been invoked 34 times.
  9. The Non-Disclosure Clause (6)[6] has been invoked 35 times.
  10. The Non-Disclosure Clause (7)[7] has been invoked once.
  11. The Non-Disclosure Clause (8)[8] has been invoked 22 times.
  12. The Non-Disclosure Clause (9)[9] has been invoked 30 times.
  13. The Non-Disclosure Clause (10)[10] has been invoked 10 times.
  14. The Non-Disclosure Clause (11)[11] has been invoked 12 times.
  15. The Non-Disclosure Clause (12)[12] has been invoked 18 times.

This data is disturbing because it reveals that ICANN has, in practice, been able to deflect most requests for information. It has regularly invoked the clauses covering its internal processes and discussions with stakeholders, as well as the clauses protecting confidential business information and the financial interests of third parties (together over 50% of all non-disclosure invocations; see chart below), to avoid providing information on pertinent matters such as its compliance audits and reports of abuse to registrars. We believe that even if ICANN is legally a private entity, and not held to the same standards as a state, it nonetheless plays the role of regulating an enormous public good, namely the Internet. There is therefore a great onus on ICANN to be far more open with the information it provides.

Finally, it is extremely disturbing that ICANN has extended full disclosure to only 12% of the requests it has received. An astonishing 88% of requests have been denied, either partly or in full. It is clear that ICANN is failing to uphold the transparency it claims to stand for, and this needs to be remedied at the earliest.
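The percentages above follow directly from the counts listed earlier. A minimal sketch reproduces them; note that treating clauses (2), (3), (5) and (6) as the "internal deliberations and business confidentiality" grouping is my own reading of the grouping referred to above, not a breakdown given in the post:

```python
# Counts taken from the figures quoted in this post (as of 13/10/2015).
full, partial, none_at_all = 11, 47, 33
total = full + partial + none_at_all  # 91 requests in all

# Times each of the 12 non-disclosure clauses was invoked.
invocations = {1: 17, 2: 39, 3: 31, 4: 5, 5: 34, 6: 35,
               7: 1, 8: 22, 9: 30, 10: 10, 11: 12, 12: 18}

full_pct = round(100 * full / total)                       # fully disclosed
denied_pct = round(100 * (partial + none_at_all) / total)  # denied, partly or in full

# Share of invocations from the internal-deliberation and
# business-confidentiality clauses (2, 3, 5 and 6) --
# one plausible reading of the grouping discussed above.
grouped = sum(invocations[c] for c in (2, 3, 5, 6))
grouped_pct = round(100 * grouped / sum(invocations.values()))

print(full_pct, denied_pct, grouped_pct)  # -> 12 88 55
```

Under this grouping, those four clauses account for 139 of the 254 total invocations, i.e. roughly 55%, consistent with the "over 50%" figure cited above.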

Pie Chart 1


 

Pie Chart 2


[1]Information provided by or to a government or international organization, or any form of recitation of such information, in the expectation that the information will be kept confidential and/or would or likely would materially prejudice ICANN's relationship with that party

[2]Internal information that, if disclosed, would or would be likely to compromise the integrity of ICANN's deliberative and decision-making process by inhibiting the candid exchange of ideas and communications, including internal documents, memoranda, and other similar communications to or from ICANN Directors, ICANN Directors' Advisors, ICANN staff, ICANN consultants, ICANN contractors, and ICANN agents

[3]Information exchanged, prepared for, or derived from the deliberative and decision-making process between ICANN, its constituents, and/or other entities with which ICANN cooperates that, if disclosed, would or would be likely to compromise the integrity of the deliberative and decision-making process between and among ICANN, its constituents, and/or other entities with which ICANN cooperates by inhibiting the candid exchange of ideas and communications

[4]Personnel, medical, contractual, remuneration, and similar records relating to an individual's personal information, when the disclosure of such information would or likely would constitute an invasion of personal privacy, as well as proceedings of internal appeal mechanisms and investigations

[5]Information provided to ICANN by a party that, if disclosed, would or would be likely to materially prejudice the commercial interests, financial interests, and/or competitive position of such party or was provided to ICANN pursuant to a nondisclosure agreement or nondisclosure provision within an agreement

[6]Confidential business information and/or internal policies and procedures

[7]Information that, if disclosed, would or would be likely to endanger the life, health, or safety of any individual or materially prejudice the administration of justice

[8]Information subject to the attorney–client privilege, attorney work product privilege, or any other applicable privilege, or disclosure of which might prejudice any internal, governmental, or legal investigation

[9]Drafts of all correspondence, reports, documents, agreements, contracts, emails, or any other forms of communication

[10]Information that relates in any way to the security and stability of the Internet, including the operation of the L Root or any changes, modifications, or additions to the root zone

[11]Trade secrets and commercial and financial information not publicly disclosed by ICANN

[12]Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual

Comments on the Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10)

by Geetha Hariharan last modified Oct 16, 2015 02:44 AM
On 9 October 2015, the Zero Draft of the UN General Assembly's Overall Review of implementation of WSIS Outcomes was released. Comments were sought on the Zero Draft from diverse stakeholders. The Centre for Internet & Society's response to the call for comments is below.

These comments were prepared by Geetha Hariharan with inputs from Sumandro Chattapadhyay, Pranesh Prakash, Sunil Abraham, Japreet Grewal and Nehaa Chaudhari. Download the comments here.


  1. The Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (“Zero Draft”) is divided into three sections: (A) ICT for Development; (B) Internet Governance; (C) Implementation and Follow-up. CIS’ comments follow the same structure.
  2. The Zero Draft is a commendable document, covering crucial areas of growth and challenges surrounding the WSIS. It makes detailed references to development-related challenges, noting the persistent digital divide and the importance of universal access, innovation and investment, and of legal and regulatory environments conducive to the same. It also takes note of financial mechanisms, without which principles would remain toothless. Issues surrounding Internet governance, particularly net neutrality, privacy and the continuation of the IGF, are included in the Zero Draft.
  3. However, we believe that references to these issues are inadequate for making progress on existing challenges. Issues surrounding ICT for Development and Internet Governance have scarcely changed in the past ten years. Though we may laud the progress achieved so far, the challenges of universal access and connectivity, the digital divide, insufficient funding, diverse and conflicting legal systems surrounding the Internet, the gender divide and online harassment all persist. Moreover, the working of the IGF and the process of Enhanced Cooperation, both laid down with great anticipation in the Tunis Agenda, have been found wanting.
  4. These need to be addressed more clearly and strongly in the Zero Draft. In light of these shortcomings, we suggest the following changes to the Zero Draft, in the hope that they are accepted.
    A. ICT for Development
  5. Paragraphs 16-21 elaborate upon the digital divide: both the progress made and the challenges that remain. While the Zero Draft recognizes the disparities in access to the Internet among countries, between men and women, and in the languages of Internet content, it fails to attend to two issues.
  6. First, accessibility for persons with disabilities continues to be an immense challenge. Since the mandate of the WSIS involves universal access and the bridging of the digital divide, it is necessary that the Zero Draft take note of this continuing challenge.
  7. We suggest the insertion of Para 20A after Para 20:
    “20A. We draw attention also to the digital divide adversely affecting the accessibility of persons with disabilities. We call on all stakeholders to take immediate measures to ensure accessibility for persons with disabilities by 2020, and to enhance their capacity and access to ICTs.”
  8. Second, while the digital divide among the consumers of ICTs has decreased since 2003-2005, the digital production divide goes unmentioned. The developing world continues to have fewer producers of technology compared to their sheer concentration in the developed world – so much so that countries like India are currently pushing for foreign investment through missions like ‘Digital India’. Of course, the Zero Draft refers to the importance of private sector investment (Para 31). But it fails to point out that currently, such investment originates from corporations in the developed world. For this digital production divide to disappear, restrictions on innovation – restrictive patent or copyright regimes, for instance – should be removed, among other measures. Equitable development is the key.
  9. Ongoing negotiations of plurilateral agreements such as the Trans-Pacific Partnership (TPP) go unmentioned in the Zero Draft. This is shocking. The TPP has been criticized for its excessive leeway and support for IP rightsholders, even as it incorporates only non-binding commitments on the rights of users (see Clause QQ.G.17 on copyright exceptions and limitations, QQ.H.4 on damages and QQ.C.12 on ccTLD WHOIS, https://wikileaks.org/tpp-ip3/WikiLeaks-TPP-IP-Chapter/WikiLeaks-TPP-IP-Chapter-051015.pdf). Plaudits for progress made on the digital divide would be lip service if such agreements were not denounced.
  10. Therefore, we propose the addition of Para 20B after Para 20:
    “20B. We draw attention also to the digital production divide among countries, recognizing that domestic innovation and production are instrumental in achieving universal connectivity. Taking note of recent negotiations surrounding restrictive and unbalanced plurilateral trade agreements, we call on stakeholders to adopt policies to ensure globally equitable development, removing restrictions on innovation and conducive to fostering domestic and local production.”
  11. Paragraph 22 of the Zero Draft acknowledges that “school curriculum requirements for ICT, open access to data and free flow of information, fostering of competition, access to finance”, etc. have “in many countries, facilitated significant gains in connectivity and sustainable development”.
  12. This is, of course, true. However, as Para 23 also recognises, access to knowledge, data and innovation have come with large costs, particularly for developing countries like India. These costs are heightened by a lack of promotion and adoption of open standards, open access, open educational resources, open data (including open government data), and other free and open source practices. These can help alleviate costs, reduce duplication of efforts, and provide an impetus to innovation and connectivity globally.
  13. Not only this, but the implications of open access to data and knowledge (including open government data), and responsible collection and dissemination of data are much larger in light of the importance of ICTs in today’s world. As Para 7 of the Zero Draft indicates, ICTs are now becoming an indicator of development itself, as well as being a key facilitator for achieving other developmental goals. As Para 56 of the Zero Draft recognizes, in order to measure the impact of ICTs on the ground – undoubtedly within the mandate of WSIS – it is necessary that there be an enabling environment to collect and analyse reliable data. Efforts towards the same have already been undertaken by the United Nations in the form of “Data Revolution for Sustainable Development”. In this light, the Zero Draft rightly calls for enhancement of regional, national and local capacity to collect and conduct analyses of development and ICT statistics (Para 56). Achieving the central goals of the WSIS process requires that such data is collected and disseminated under open standards and open licenses, leading to creation of global open data on the ICT indicators concerned.
  14. As such, we suggest that following clause be inserted as Para 23A to the Zero Draft:

“23A. We recognize the importance of access to open, affordable, and reliable technologies and services, open access to knowledge, and open data, including open government data, and encourage all stakeholders to explore concrete options to facilitate the same.”

15. Paragraph 30 of the Zero Draft laments “the lack of progress on the Digital Solidarity Fund”, and calls “for a review of options for its future”.

16. The Digital Solidarity Fund was established with the objective of “transforming the digital divide into digital opportunities for the developing world” through voluntary contributions [Para 28, Tunis Agenda]. It was an innovative financial mechanism to help bridge the digital divide between developed and developing countries. This divide continues to exist, as the Zero Draft itself recognizes in Paragraphs 16-21.

17. Given the persistent digital divide, a “call for review of options” as to the future of the Digital Solidarity Fund is inadequate to enable developing countries to achieve parity with developed countries. A stronger and more definite commitment is required.

18. As such, we suggest the following language in place of the current Para 30:

“30. We express concern at the lack of progress on the Digital Solidarity Fund, welcomed in Tunis as an innovative financial mechanism of a voluntary nature, and we call for voluntary commitments from States to revive and sustain the Digital Solidarity Fund.”

19. Paragraph 31 of the Zero Draft recognizes the importance of “legal and regulatory frameworks conducive to investment and innovation”. This is eminently laudable. However, a broader vision is more compatible with paving the way for affordable and widespread access to devices and technology necessary for universal connectivity.

20. We suggest the following additions to Para 31:

“31. We recognise the critical importance of private sector investment in ICT access, content and services, and of legal and regulatory frameworks conducive to local investment and expansive, permissionless innovation.”

B. Internet Governance

21. Paragraph 32 of the Zero Draft recognizes the “general agreement that the governance of the Internet should be open, inclusive, and transparent”. Para 37 takes into account “the report of the CSTD Working Group on improvements to the IGF”. Para 37 also affirms the intention of the General Assembly to extend the life of the IGF by (at least) another 5 years, and acknowledges the “unique role of the IGF”.

22. The IGF is, of course, unique and crucial to global Internet governance. In the last 10 years, major strides have been made among diverse stakeholders in beginning and sustaining conversations on issues critical to Internet governance. These include issues such as human rights, inclusiveness and diversity, universal access to connectivity, emerging issues such as net neutrality, the right to be forgotten, and several others. Through its many arms like the Dynamic Coalitions, the Best Practices Forums, Birds-of-a-Feather meetings and Workshops, the IGF has made it possible for stakeholders to connect.

23. However, the constitution and functioning of the IGF have not been without criticism and controversy. Foremost among the criticisms is the IGF’s evident lack of outcome-orientation, which continues to be debated. Second, the composition and functioning of the MAG, particularly its transparency, have come under the microscope several times. One of the suggestions of the CSTD Working Group on Improvements to the IGF concerned the structure and working methods of the Multistakeholder Advisory Group (MAG). The Working Group recommended that the “process of selection of MAG members should be inclusive, predictable, transparent and fully documented” (Section II.2, Clause 21(a), Page 5 of the Report).

24. Transparency in the structure and working methods of the MAG is critical to the credibility and impact of the IGF. The functioning of the IGF depends, in large part, on the MAG. The UN Secretary General established the MAG, and it advises the Secretary General on the programme and schedule of the IGF meetings each year (see <http://www.intgovforum.org/cms/mag/44-about-the-mag>). Under its Terms of Reference, the MAG decides the main themes and sub-themes for each IGF, sets or modifies the rules of engagement, organizes the main plenary sessions, coordinates workshop panels and speakers, and crucially, evaluates the many submissions it receives to choose from amongst them the workshops for each IGF meeting. The content of each IGF, then, is in the hands of the MAG.

25. But the MAG is not inclusive or transparent. The MAG itself has lamented its opaque ‘black box approach’ to nomination and selection. Also, CIS’ research has shown that the process of nomination and selection of the MAG continues to be opaque. When CIS sought information on the nominators of the MAG, the IGF Secretariat responded that this information would not be made public (see <http://cis-india.org/internet-governance/blog/mag-analysis>).

26. Further, our analysis of MAG membership shows that since 2006, 26 persons have served for 6 years or more on the MAG. This is astounding, since under the MAG Terms of Reference, MAG members are nominated for a term of 1 year. This one-year term is “automatically renewable for 2 more consecutive years”, but such renewal is contingent on an evaluation of MAG members’ engagement in their activities (see <http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference>). In accordance with their Terms of Reference, MAG members ought not to serve for more than 3 consecutive years. Yet out of 182 MAG members, around 62 have served beyond the 3-year terms designated by their Terms of Reference (see <http://cis-india.org/internet-governance/blog/mag-analysis>).

27. Not only this, but our research showed that 36% of all MAG members since 2006 have hailed from the Western European and Others Group (see <http://cis-india.org/internet-governance/blog/mag-analysis>). This indicates a lack of inclusiveness, though the MAG is certainly more inclusive than other I-Star organisations such as ICANN.

28. Tackling these infirmities within the MAG would go a long way in ensuring that the IGF lives up to its purpose. Therefore, we suggest the following additions to Para 37:

“37. We acknowledge the unique role of the Internet Governance Forum (IGF) as a multistakeholder platform for discussion of Internet governance issues, and take note of the report and recommendations of the CSTD Working Group on improvements to the IGF, which was approved by the General Assembly in its resolution, and ongoing work to implement the findings of that report. We reaffirm the principles of openness, inclusiveness and transparency in the constitution, organisation and functioning of the IGF, and in particular, in the nomination and selection of the Multistakeholder Advisory Group (MAG). We extend the IGF mandate for another five years with its current mandate as set out in paragraph 72 of the Tunis Agenda for the Information Society. We recognize that, at the end of this period, progress must be made on Forum outcomes and participation of relevant stakeholders from developing countries.”

29. Paragraphs 32-37 of the Zero Draft make mention of “open, inclusive, and transparent” governance of the Internet. They fail, however, to take note of the lack of inclusiveness and diversity in Internet governance organisations – extending across the representation, participation and operations of these organisations. In many cases, mention of inclusiveness and diversity becomes tokenism, or a formal (but not operational) principle. In substantive terms, the developing world is pitifully represented in standards organisations and in ICANN, and policy discussions in organisations like ISOC occur largely in cities like Geneva and New York. For example, the ‘diversity’ mailing list of the IETF has very low traffic. Within ICANN, 307 of the 672 registries listed in ICANN’s registry directory are based in the United States, while 624 of the 1010 ICANN-accredited registrars are US-based. Not only this, but 80% of the responses received by ICANN during the ICG’s call for proposals came from men. A truly global, open, inclusive and transparent governance of the Internet must not be so skewed.

30. We propose, therefore, the addition of a Para 37A after Para 37:

“37A. We draw attention to the challenges surrounding diversity and inclusiveness in organisations involved in Internet governance, and call upon these organisations to take immediate measures to ensure diversity and inclusiveness in a substantive manner.”

31. Paragraph 36 of the Zero Draft notes that “a number of member states have called for an international legal framework for Internet governance.” But it makes no reference to ICANN or to the importance of the ongoing IANA transition to global Internet governance. ICANN’s monopoly over several critical Internet resources was one of the key drivers of the WSIS in 2003-2005. Unfortunately, this focus seems to have shifted entirely. Openness, inclusiveness, transparency and globality become misnomers when ICANN – and in effect, the United States – continues to have a monopoly over critical Internet resources. The allocation and administration of these resources should be decentralized and distributed, and should not be within the disproportionate control of any one jurisdiction.

32. Therefore, we suggest the following Para 37A after Para 37:

“37A. We affirm that the allocation, administration and policy involving critical Internet resources must be inclusive and decentralized, and call upon all stakeholders and in particular, states and organizations responsible for essential tasks associated with the Internet, to take immediate measures to create an environment that facilitates this development.”

33. Paragraph 43 of the Zero Draft encourages “all stakeholders to ensure respect for privacy and the protection of personal information and data”. But the Zero Draft inadvertently leaves out the report of the Office of the UN High Commissioner for Human Rights on digital privacy, ‘The right to privacy in the digital age’ (A/HRC/27/37). This report, adopted by the Human Rights Council in June 2014, affirms the importance of the right to privacy in our increasingly digital age, and offers crucial insight into recent erosions of privacy. It is both fitting and necessary that the General Assembly take note of and affirm the said report in the context of digital privacy.

34. We offer the following suggestion as an addition to Para 43:

“43. We emphasise that no person shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home, or correspondence, consistent with countries’ applicable obligations under international human rights law. In this regard, we acknowledge the report of the Office of the UN High Commissioner for Human Rights, ‘The right to privacy in the digital age’ (A/HRC/27/37, 30 June 2014), and take note of its findings. We encourage all stakeholders to ensure respect for privacy and the protection of personal information and data.”

35. Paragraphs 40-44 of the Zero Draft state that communication is a fundamental human need, reaffirming Article 19 of the Covenant on Civil and Political Rights, with its attendant narrow limitations. The Zero Draft also underscores the need to respect the independence of the press. Particularly, it reaffirms the principle that the same rights that people enjoy offline must also be protected online.

36. Further, in Para 31, the Zero Draft recognizes the “critical importance of private sector investment in ICT access, content, and services”. This is true, of course, but corporations also play a crucial role in facilitating the freedom of speech and expression (and all other related rights) on the Internet. As the Internet is led largely by the private sector in the development and distribution of devices, protocols and content-platforms, corporations play a major role in facilitating – and sometimes, in restricting – human rights online. They are, in sum, intermediaries without whom the Internet cannot function.

37. Given this, it is essential that the outcome document of the WSIS+10 Overall Review recognize and affirm the role of the private sector, and crucially, its responsibilities to respect and protect human rights online.

38. We suggest, therefore, the insertion of the following paragraph Para 42A, after Para 42:

“42A. We recognize the critical role played by corporations and the private sector in facilitating human rights online. We affirm, in this regard, the responsibilities of the private sector set out in the Report of the Special Representative of the Secretary General on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31 (21 March 2011), and encourage policies and commitments towards respect and remedies for human rights.”

C. Implementation and Follow-up

39. Para 57 of the Zero Draft calls for a review of the WSIS Outcomes, and leaves a blank space inviting suggestions for the year of the review. How often, then, should the review of implementation of the WSIS+10 Outcomes take place?

40. It is true, of course, that reviews of the implementation of WSIS Outcomes are necessary to take stock of progress and challenges. However, we caution against annual, biennial or other such closely-spaced reviews, owing to concerns surrounding budgetary allocations.

41. Reviews of implementation of outcomes (typically followed by an Outcome Document) come at considerable cost, budgeted and met through contributions (sometimes voluntary) from states. Were reviews too closely spaced, budgets that ought ideally to be utilized to bridge digital divides and ensure universal connectivity, particularly for developing states, would be misspent on reviews. Moreover, closely-spaced reviews would provide only superficial quantitative assessments of progress, and would not throw light on longer-term or qualitative impacts.

Comments on the Zero Draft of the UN General Assembly

by Prasad Krishna last modified Oct 16, 2015 02:41 AM

Final_CIS_Comments_UNGA_WSIS_Zero_Draft.pdf — PDF document, 478 kB (490106 bytes)

CyFy Agenda

by Prasad Krishna last modified Oct 16, 2015 03:01 AM

CyFyAgendaFinal.pdf — PDF document, 190 kB (195156 bytes)

The 'Global Multistakeholder Community' is Neither Global Nor Multistakeholder

by Pranesh Prakash last modified Nov 03, 2016 10:42 AM
CIS research shows how Western, male, and industry-driven the IANA transition process actually is.


In March 2014, the US government announced that they were going to end the contract they have with ICANN to run something called the Internet Assigned Numbers Authority (IANA), and hand over control to the “global multistakeholder community”. They insisted that the plan for transition had to come through a multistakeholder process and have stakeholders “across the global Internet community”.

Analysis of the process since then shows that the “global multistakeholder community” that converges at ICANN has not actually represented the disparate interests and concerns of different stakeholders. CIS research has found that the discussions around the IANA transition have not been driven by the “global multistakeholder community”, but mostly by men from industry in North America and Western Europe.

CIS analysed the five main mailing lists where the IANA transition plan was formulated: ICANN’s ICG Stewardship and CCWG Accountability lists; the IETF’s IANAPLAN list; and the NRO’s IANAXFER and CRISP lists. What we found was quite disheartening.

  • A total of 239 individuals participated cumulatively, across all five lists.
  • Only 98 substantively contributed to the final shape of the ICG proposal, taking a count of 20 mails (admittedly, an arbitrary cut-off) as a substantive contribution; 12 of these 98 were ICANN staff, some of whom were largely performing administrative functions.

We then looked at the diversity among these substantive contributors along three axes: gender, stakeholder grouping, and region. We relied on public records, including GNSO SOI statements, and extensive searches on the Web. Given this methodology, there may be inadvertent errors, but the findings are so stark that a few errors would not materially affect them.

  • 2 in 5 (39 of 98, or 40%) were from a single country: the United States of America.
  • 4 in 5 (77 of 98) were from countries which are part of the WEOG UN grouping (which includes Western Europe, US, Canada, Israel, Australia, and New Zealand), which only has developed countries.
  • None were from the Eastern European Group (which includes Russia), and only 5 of 98 were from all of GRULAC (the Latin American and Caribbean Group).
  • 4 in 5 (77 of 98) were male and 21 were female.
  • 4 in 5 (76 of 98) were from industry or the technical community, and only 4 (or 1 in 25) were identifiable as primarily speaking on behalf of governments.
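The proportions above reduce to simple grouping and counting over a table of participant attributes. A minimal sketch of that kind of tally, using hypothetical participant records (the `share` helper and the sample data are illustrative assumptions; CIS's actual dataset, compiled from GNSO SOIs and public web records, is not reproduced here):

```python
# Hypothetical participant records for illustration only; the real
# dataset behind the figures above is not included in this post.
participants = [
    {"country": "US", "region": "WEOG", "gender": "male", "group": "industry"},
    {"country": "IN", "region": "Asia-Pacific", "gender": "female", "group": "civil society"},
    {"country": "US", "region": "WEOG", "gender": "male", "group": "technical"},
    {"country": "DE", "region": "WEOG", "gender": "male", "group": "industry"},
]

def share(records, key, value):
    """Fraction of records whose `key` attribute equals `value`."""
    count = sum(1 for r in records if r[key] == value)
    return count / len(records)

print(f"US share:   {share(participants, 'country', 'US'):.0%}")
print(f"WEOG share: {share(participants, 'region', 'WEOG'):.0%}")
print(f"Male share: {share(participants, 'gender', 'male'):.0%}")
```

With the real 98-contributor dataset, the same computation yields the 40% (US), 79% (WEOG), and 79% (male) figures reported above.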

This also shows that the process has utterly failed in achieving the recommendation of Paragraph 6 of the …

  • 3 in 5 registrars are from the United States of America (624 out of 1010, as of March 2014, according to ICANN's accredited registrars list), with only 0.6% being from the 54 countries in Africa (7 out of 1010).

  • 45% of all the registries are from the United States of America! (307 out of 672 registries listed in ICANN’s registry directory in August 2015.)
  • 66% (34 of 51) of the Business Constituency at ICANN are from a single country: the United States of America. (N.B.: This page doesn’t seem to be up-to-date.)
  • This shows that businesses from the United States of America continue to dominate ICANN to a very significant degree, which is also reflected in the nature of the dialogue within ICANN, including the fact that the proposal that came out of the ICANN ‘global multistakeholder community’ on the IANA transition proposes a clause requiring the ‘IANA Functions Operator’ to be a US-based entity. For more on that issue, see this post on the jurisdiction issue at ICANN (or rather, on the lack of a jurisdiction issue at ICANN).

    Policy Brief: Oversight Mechanisms for Surveillance

    by Elonnai Hickok last modified Nov 24, 2015 06:09 AM

    Download the PDF


    Introduction

    Across jurisdictions, the need for effective and relevant oversight mechanisms (coupled with legislative safeguards) for state surveillance has been highlighted by civil society, academia, citizens and other key stakeholders.[1] A key part of oversight of state surveillance is the accountability of intelligence agencies. This has been recognized at the international level: the Organization for Economic Co-operation and Development, the United Nations, the Organization for Security and Co-operation in Europe, the Parliamentary Assembly of the Council of Europe, and the Inter-Parliamentary Union have all recognized that intelligence agencies need to be subject to democratic accountability.[2] Since 2013, the need for oversight has received particular attention in light of the information disclosed through the 'Snowden Revelations'.[3] Some countries, such as the US, Canada, and the UK, have regulatory mechanisms for the oversight of state surveillance and the intelligence community, while many other countries, India included, have piecemeal oversight mechanisms in place. The existence of regulatory mechanisms for state surveillance does not necessarily equate to effective oversight, and piecemeal mechanisms, depending on how they are implemented, could be more effective than comprehensive ones. This policy brief seeks to explore the purpose of oversight mechanisms for state surveillance, different forms of mechanisms, and what makes a mechanism effective and comprehensive. The brief also reviews different oversight mechanisms from the US, UK, and Canada and provides recommendations for ways in which India can strengthen its present oversight mechanisms for state surveillance and the intelligence community.

    What is the purpose and what are the different components of an oversight mechanism for State Surveillance?

    The International Principles on the Application of Human Rights to Communications Surveillance, developed through a global consultation with civil society groups, industry, and international experts, recommend that public oversight mechanisms for state surveillance be established to ensure transparency and accountability of Communications Surveillance. To achieve this, mechanisms should have the authority to:

    • Access all potentially relevant information about State actions, including, where appropriate, access to secret or classified information;
    • Assess whether the State is making legitimate use of its lawful capabilities;
    • Evaluate whether the State has been comprehensively and accurately publishing information about the use and scope of Communications Surveillance techniques and powers in accordance with its Transparency obligations;
    • Publish periodic reports and other information relevant to Communications Surveillance;
    • Make public determinations as to the lawfulness of those actions, including the extent to which they comply with these Principles.[4]

    What can inform oversight mechanisms for state surveillance?

    The development of effective oversight mechanisms for state surveillance can be informed by a number of factors including:

    • Rapidly changing technology – how can mechanisms adapt, account for, and evaluate perpetually changing intelligence capabilities?
    • Expanding surveillance powers – how can mechanisms evaluate and rationalize the use of expanding agency powers?
    • Tensions around secrecy, national interest, and individual rights – how can mechanisms respect, recognize, and uphold multiple competing interests and needs, including an agency's need for secrecy, the government's need to protect national security, and citizens' need to have their constitutional and fundamental rights upheld?
    • The structure, purpose, and goals of specific intelligence agencies and circumstances – how can mechanisms be sensitive and attuned to the structure, purpose, and functions of differing intelligence agencies and circumstances?

    These factors lead to further questions around:

    • The purpose of an oversight mechanism: Is an oversight mechanism meant to ensure effectiveness of an agency? Perform general reviews of agency performance? Supervise the actions of an agency? Hold an agency accountable for misconduct?
    • The structure of an oversight mechanism: Is it internal? External? A combination of both? How many oversight mechanisms should agencies be held accountable to?
    • The functions of an oversight mechanism: Is an oversight mechanism meant to inspect? Evaluate? Investigate? Report?
    • The powers of an oversight mechanism: What extent of access should an oversight mechanism have to the internal workings of security agencies and law enforcement to carry out due diligence? What extent of legal backing should an oversight mechanism have to hold agencies legally accountable?

    What oversight mechanisms for State Surveillance exist in India?

    In India, the oversight 'ecosystem' for state surveillance comprises:

    1. Review committee: Under the Indian Telegraph Act 1885 and the Rules issued thereunder (Rule 419A), a Central Review Committee consisting of the Cabinet Secretary, the Secretary of Legal Affairs to the Government of India, and the Secretary of the Department of Telecommunications to the Government of India is responsible for meeting on a bi-monthly basis and reviewing the legality of interception directions. The review committee has the power to revoke directions and order the destruction of intercepted material.[5] This review committee is also responsible for evaluating interception, monitoring, and decryption orders issued under section 69 of the Information Technology Act 2000,[6] and orders for the monitoring and collection of traffic data under section 69B of the Information Technology Act 2000.[7]
    2. Authorizing Authorities: The Secretary in the Ministry of Home Affairs of the Central Government is responsible for authorizing requests for the interception, monitoring, and decryption of communications issued by central agencies.[8] The Secretary in charge of the Home Department is responsible for authorizing requests for the interception, monitoring, and decryption of communications from state level agencies and law enforcement.[9] The Secretary to the Government of India in the Department of Information Technology under the Ministry of Communications and Information Technology is responsible for authorizing requests for the monitoring and collection of traffic data.[10] Any officer not below the rank of Joint Secretary to the Government of India, who has been authorised by the Union Home Secretary or the State Home Secretary in this behalf, may authorize the interception of communications in case of an emergency.[11] A Commissioner of Police, District Superintendent of Police or Magistrate may issue requests for stored data to any postal or telegraph authority.[12]
    3. Administrative authorities: India does not have an oversight mechanism for intelligence agencies, but agencies do report to different authorities. For example, the Intelligence Bureau reports to the Home Minister; the Research and Analysis Wing is under the Cabinet Secretariat and reports to the Prime Minister; the Joint Intelligence Committee (JIC), National Technical Research Organisation (NTRO) and Aviation Research Centre (ARC) report to the National Security Adviser; and the National Security Council Secretariat, under the NSA, serves the National Security Council.[13]

    It is important to note that though India has a Right to Information Act, most security agencies are exempt from the purview of the Act,[14] as is disclosure of any information that falls under the purview of the Official Secrets Act 1923.[15] The Official Secrets Act does not provide a definition of an 'official secret' and instead protects information pertaining to national security, the defence of the country, information affecting friendly relations with foreign states, etc.[16] Information in India is designated as classified in accordance with the Manual of Departmental Security Instructions, which is circulated by the Ministry of Home Affairs. According to the Public Records Rules 1997, “'classified records' means the files relating to the public records classified as top-secret, confidential and restricted in accordance with the procedure laid down in the Manual of Departmental Security Instruction circulated by the Ministry of Home affairs from time to time”.[17] Bi-annually, officers evaluate and de-classify classified information and share the same with the national archives.[18] In response to questions raised in the Lok Sabha on the 5th of May 2015 regarding whether the Official Secrets Act, 1923 will be reviewed, the number of classified files stored with the Government under the Act, and whether the Government has any plans to declassify some of the files, the Ministry of Home Affairs clarified that a committee consisting of the Secretaries of the Ministry of Home Affairs, the Department of Personnel and Training, and the Department of Legal Affairs has been established to examine the provisions of the Official Secrets Act, 1923, particularly in light of the Right to Information Act, 2005. The Ministry of Home Affairs also clarified that the classification and declassification of files is done by each Government Department as per the Manual of Departmental Security Instructions, 1994, and thus there is no 'central database of the total number of classified files'.[19]

    How can India's oversight mechanism for state surveillance be clarified?

    Though these mechanisms establish a basic framework for an oversight mechanism for state surveillance in India, there are aspects of this framework that could be clarified and there are ways in which the framework could be strengthened.

    Aspects of the present review committee that could be clarified:

    1. Powers of the review committee: Beyond having the authority to declare that orders for interception, monitoring, decryption, and collection of traffic data are not within the scope of the law and to order the destruction of any collected information, what powers does the review committee have? Does the committee have the power to compel agencies to produce additional or supporting evidence? Does the committee have the power to compel information from the authorizing authority?
    2. Obligations of the review committee: The review committee is required to 'record its findings' as to whether the interception orders issued are in accordance with the law. Is there a standard set of questions or information that must be addressed by the committee when reviewing an order? Does the committee only review the content of the order, or does it also review the implementation of the order? Beyond recording its findings, are there any additional reporting obligations that the review committee must fulfill?
    3. Accountability of the review committee: Does the review committee answer to a higher authority? Does it have to submit its findings to other branches of the government, such as Parliament? Is there a mechanism to ensure that the review committee does indeed meet every two months and review all orders issued under the relevant sections of the Indian Telegraph Act 1885 and the Information Technology Act 2000?

    Proposed oversight mechanisms in India

    Oversight mechanisms can help with avoiding breaches of national security by ensuring efficiency and effectiveness in the functioning of security agencies. The need for oversight of state surveillance is not new in India. In 1999 the Union Government constituted a Committee with the mandate of reviewing the events leading up to Pakistani aggression in Kargil and recommending measures towards ensuring national security. Though the Kargil Committee was addressing surveillance from the perspective of gathering information on external forces, there are parallels in the lessons learned for state surveillance. Among other findings, in its Report the Committee found a number of limitations in the system for collection, reporting, collation, and assessment of intelligence. The Committee also found that there was a lack of oversight for the intelligence community in India, resulting in no mechanisms for tasking the agencies, monitoring their performance and overall functioning, and evaluating the quality of their work.

    The Committee also noted that such a mechanism is a standard feature in jurisdictions across the world. The Committee emphasized this need from an economic perspective: without oversight, the Government and the nation have no way of evaluating whether or not they are receiving value for their money. The Committee recommended a review of the intelligence system with the objective of remedying such deficiencies.[20]

    In 2000 a Group of Ministers was established to review the security and intelligence apparatus of the country. In their report issued to the Prime Minister, the Group of Ministers recommended the establishment of an Intelligence Coordination Group for the purpose of providing oversight of intelligence agencies at the Central level. Specifically the Intelligence Coordination Group would be responsible for:

    • Allocating resources to the intelligence agencies
    • Considering annual reviews on the quality of inputs
    • Approving the annual tasking for intelligence collection
    • Overseeing the functions of intelligence agencies
    • Examining national estimates and forecasts[21]

    Past critiques of the Indian surveillance regime have included the fact that intelligence agencies do not come under the purview of any overseeing mechanism, including Parliament, the Right to Information Act 2005, or the Comptroller and Auditor General of India.

    In 2011, Manish Tewari, at the time a Member of Parliament from Ludhiana, drafted a Private Member's Bill, “The Intelligence Services (Powers and Regulation) Bill”, which proposed stand-alone statutory regulation of intelligence agencies. In doing so it sought to establish an oversight mechanism for intelligence agencies within and outside of India. The Bill was never introduced in Parliament.[22] Broadly, the Bill sought to establish: a National Intelligence and Security Oversight Committee, which would oversee the functioning of intelligence agencies and submit an annual report to the Prime Minister; a National Intelligence Tribunal for the purpose of investigating complaints against intelligence agencies; an Intelligence Ombudsman for overseeing and ensuring the efficient functioning of agencies; and a legislative framework regulating intelligence agencies.[23]

    Proposed policy in India has also explored the possibility of coupling surveillance regulation and oversight with privacy regulation and oversight. In 2011 the Right to Privacy Bill was drafted by the Department of Personnel and Training. The Bill proposed to establish a “Central Communication Interception Review Committee” for the purpose of reviewing orders for interception issued under the Telegraph Act. The Bill also sought to establish an authorization process for surveillance undertaken by following a person, through CCTVs, or through other electronic means.[24] In contrast, the 2012 Report of the Group of Experts on Privacy, which provided recommendations for a privacy framework for India, recommended that the Privacy Commissioner exercise broad oversight functions with respect to interception/access, audio and video recordings, the use of personal identifiers, and the use of bodily or genetic material.[25]

    A 2012 report by the Institute for Defence Studies and Analyses (IDSA) titled “A Case for Intelligence Reforms in India” highlights at least four 'gaps' in intelligence that have resulted in breaches of national security: zero intelligence, inadequate intelligence, inaccurate intelligence, and excessive intelligence – particularly in light of additional technical and open source inputs.[26] In some cases, an oversight mechanism could help remediate these gaps. The Report recommends the following steps towards an oversight mechanism for Indian intelligence:

    • Establishing an Intelligence Coordination Group (ICG) that will exercise oversight functions for the intelligence community at the Central level. This could include overseeing the functions of the agencies, the quality of their work, and their finances.
    • Enacting legislation defining the mandates, functions, and duties of intelligence agencies.
    • Holding intelligence agencies accountable to the Comptroller & Auditor General to ensure financial accountability.
    • Establishing a Minister for National Security & Intelligence for exercising administrative authority over intelligence agencies.
    • Establishing a Parliamentary Accountability Committee for oversight of intelligence agencies through parliament.
    • Defining the extent to which intelligence agencies can be held accountable to reply to requests pertaining to violations of privacy and other human rights issued under the Right to Information Act.

    Highlighting the importance of accountable surveillance frameworks, in 2015 Santosh Jha, Director General in India's Ministry of External Affairs, stated at the UN General Assembly that the global community needs “to create frameworks so that Internet surveillance practices motivated by security concerns are conducted within a truly transparent and accountable framework.”[27]

    In what ways can India's mechanisms for state surveillance be strengthened?

    Building upon the recommendations from the Kargil Committee, the Report from the Group of Ministers, the Report of the Group of Experts on Privacy, the Draft Privacy Bill 2011, and the IDSA report, ways in which the framework for oversight of state surveillance in India could be strengthened include:

    • Oversight to enhance public understanding, debate, accountability, and democratic governance: State surveillance is unique in that it is enabled with the objective of protecting a nation's security. Yet, to do so, it requires citizens to trust the actions taken by intelligence agencies and to permit possible access into their personal lives, and possible activities that might infringe on their constitutional rights (such as freedom of expression), for the larger outcome of security. Because of this, oversight mechanisms for state surveillance must balance protecting national security with some form of accountability to the public.
    • Independence of oversight mechanisms: Given the Indian context, it is particularly important that an oversight mechanism for surveillance powers and the intelligence community be independent of, and capable of withstanding, political interference. Indeed, the majority of cases regarding illegal interceptions that have reached the public sphere pertain to the surveillance of political figures and political turf wars.[28] Furthermore, though the current Review Committee established under the Indian Telegraph Act does not have a member from the Ministry of Home Affairs (the Ministry responsible for authorizing interception requests), it is unclear how independent this committee is from the authorizing Ministry. To ensure unbiased oversight, it is important that oversight mechanisms are independent.
    • Legislative regulation of intelligence agencies: Currently, intelligence agencies are provided surveillance powers through the Information Technology Act and the Telegraph Act, but beyond the National Investigation Agency Act, which establishes the National Investigation Agency, there is no legal mechanism creating, regulating, and overseeing the intelligence agencies that use these powers. In the 'surveillance ecosystem' this creates a policy vacuum, where an agency is enabled through law with a surveillance power and provided a procedure to follow, but is not held legally accountable for the effective, ethical, and legal use of that power. To ensure legal accountability for the use of surveillance techniques, it is important that intelligence agencies are created through legislation that includes oversight provisions.
    • Comprehensive oversight of all intrusive measures: Currently the Review Committee established under the Telegraph Act is responsible for the evaluation of orders for the interception, monitoring, and decryption of communications and the collection of traffic data. The Review Committee is not responsible for reviewing the implementation or effectiveness of such orders, nor for reviewing orders for access to stored information or other forms of electronic surveillance. This situation is a result of: (1) present oversight mechanisms not having comprehensive mandates; (2) different laws in India enabling different levels of access without providing a harmonized oversight mechanism; and (3) Indian law not formally addressing and regulating emerging surveillance technologies and techniques. To be effective, oversight mechanisms must be comprehensive in mandate and scope.
    • Establishment of a tribunal or redress mechanism: India currently does not have a specified means for individuals to seek redress for unlawful surveillance or surveillance that they feel has violated their rights. Thus, individuals must take any complaint to the courts. The downsides of such a system include that the judiciary might not be equipped to make determinations regarding the violation; that the court system in India is overwhelmed, and due process is therefore slow; and that, given the sensitive nature of the topic, courts might not be able to immediately access relevant documentation. To ensure redress, it is important that a tribunal or a redress mechanism with appropriate powers is established to address complaints or violations pertaining to surveillance.
    • Annual reporting by security agencies, law enforcement, and service providers: Information regarding orders for surveillance, and their implementation, is not disclosed by the government or by service providers in India.[29] Indeed, service providers are required by law to maintain the confidentiality of orders for the interception, monitoring, or decryption of communications and the monitoring or collection of traffic data. At a minimum, an oversight mechanism should receive annual reports from security agencies, law enforcement, and service providers with respect to the surveillance undertaken. Edited versions of these reports could be shared with Parliament and the public.
    • Consistent and mandatory reviews of relevant legislation: Though committees have been established to review various legislation and policy pertaining to state surveillance, the time frame for these reviews is not clearly defined by law. These reviews should take place on a consistent and publicly stated time frame. Furthermore, legislation enabling surveillance in India does not require review and assessment for relevance, adequacy, necessity, and proportionality after a certain period of time. Mandating that legislation regulating surveillance be reviewed on a consistent basis is important in ensuring that its provisions remain relevant, proportionate, adequate, and necessary.
    • Transparency of classification and declassification processes and centralization of declassified records: Currently, the Ministry of Home Affairs establishes the process that government departments must follow for classifying and declassifying information. This process is not publicly available, and declassified information is stored only with the respective department. For transparency purposes, it is important that the process for classification of records be made public and that classification of information take place only in exceptional cases. Furthermore, declassified records should be stored centrally and made easily accessible to the public.
    • Executive and administrative orders establishing agencies and surveillance projects should be in the public domain: Intelligence agencies and surveillance projects in India are typically enabled through executive orders. For example, NATGRID was established via an executive order, but this order is not publicly available. As a form of transparency and accountability to the public, it is important that executive orders establishing an agency or a surveillance project be made available to the public to the extent possible.
    • Oversight of surveillance should incorporate privacy and cyber/national security: Increasingly, issues of surveillance, privacy, and cyber security are interlinked. Any move to establish an oversight mechanism for surveillance and the intelligence community must incorporate and take into consideration privacy and cyber security. This could mean that an oversight mechanism for surveillance in India works closely with CERT-In and a potential privacy commissioner, or that the oversight mechanism contains internal expertise in these areas to ensure that they are adequately considered.
    • Oversight by design: Just as the concept of privacy by design promotes the ideal that principles of privacy are built into devices, processes, services, organizations, and regulation from the outset, oversight mechanisms for state surveillance should also be built in from the outset of surveillance projects and enabling legislation. In the past, this has not been the practice in India. The National Intelligence Grid was an intelligence system that sought to link twenty-one databases together, making such information easily and readily accessible to security agencies, but the oversight of such a system was never defined.[30] Similarly, the Centralized Monitoring System was conceptualized to automate and internalize the process of intercepting communications by allowing security agencies to intercept communications directly and bypass the service provider.[31] Despite amendments to the Telecom Licenses providing for the technical components of this project, oversight of the project, or of security agencies directly accessing information, has yet to be defined.[32]

    Examples of oversight mechanisms for state surveillance: the United States, the United Kingdom, and Canada

    United States

    In the United States the oversight 'ecosystem' for state surveillance is made up of:

    The Foreign Intelligence Surveillance Court

    The U.S. Foreign Intelligence Surveillance Court (FISC) is the predominant oversight mechanism for state surveillance and oversees and authorizes the actions of the Federal Bureau of Investigation and the National Security Agency.[33] The Court was established by the Foreign Intelligence Surveillance Act (FISA) 1978 and is governed by Rules of Procedure, the current Rules having been formulated in 2010.[34] The Court is empowered to ensure compliance with the orders that it issues, and the government is obligated to inform the Court if orders are breached.[35] FISA allows individuals who receive an order from the Court to challenge it,[36] and public filings are available on the Court's website.[37] Additionally, organizations including the American Civil Liberties Union[38] and the Electronic Frontier Foundation have filed motions with the Court for the release of records.[39] Similarly, Google has approached the Court for the ability to publish aggregate information regarding the FISA orders that the company receives.[40]

    Government Accountability Office

    The U.S. Government Accountability Office (GAO) is an independent office that works for Congress and conducts audits and investigations, provides recommendations, and issues legal decisions and opinions with regard to federal spending of taxpayer money by the government and associated agencies, including the Defense Department, the FBI, and Homeland Security.[41] The head of the GAO is the Comptroller General of the United States, who is appointed by the President. The GAO will initiate an investigation if requested by congressional committees or subcommittees or if required under public law or committee reports. The GAO has reviewed topics relating to Homeland Security, Information Security, Justice and Law Enforcement, National Defense, and Telecommunications.[42] For example, in June 2015 the GAO completed an investigation and report on the “Foreign Terrorist Organization Process and U.S. Agency Enforcement Actions”[43] and an investigation on “Cyber Security: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies”.[44]

    Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence

    The U.S. Senate Select Committee on Intelligence is a standing committee of the U.S. Senate with the mandate to review intelligence activities and programs and ensure that these are in line with the Constitution and other relevant laws. The Committee is also responsible for submitting to the Senate appropriate proposals for legislation, and for reporting to the Senate on intelligence activities and programs.[45] The House Permanent Select Committee on Intelligence holds similar jurisdiction. The House Permanent Select Committee is committed to secrecy and cannot disclose classified information except when authorized to do so. Such an obligation does not exist for the Senate Select Committee on Intelligence, which can disclose classified information publicly on its own.[46]

    Privacy and Civil Liberties Oversight Board (PCLOB)

    The Privacy and Civil Liberties Oversight Board (PCLOB) was established by the Implementing Recommendations of the 9/11 Commission Act of 2007 and is located within the executive branch.[47] The objective of the PCLOB is to ensure that the Federal Government's actions to combat terrorism are balanced against privacy and civil liberties. Towards this, the Board has the mandate to review and analyse anti-terrorism measures taken by the executive and ensure that such actions are balanced with privacy and civil liberties, and to ensure that privacy and civil liberties are adequately considered in the development and implementation of anti-terrorism laws, regulations, and policies.[48] The Board is responsible for developing principles to guide why, whether, when, and how the United States conducts surveillance for authorized purposes. Additionally, officers of eight federal agencies must submit reports to the PCLOB regarding the reviews that they have undertaken, the number and content of complaints received, and a summary of how each complaint was handled. In order to fulfill its mandate, the Board is authorized to access all relevant records, reports, audits, reviews, documents, papers, recommendations, and classified information. The Board may also interview and take statements from necessary personnel, and may request the Attorney General to issue subpoenas on the Board's behalf to individuals outside of the executive branch.[49]

    To the extent possible, the Reports of the Board are made public. Recommendations that the Board made in its 2015 Report include: end the NSA's bulk telephone records program; add additional privacy safeguards to the bulk telephone records program; enable the FISC to hear independent views on novel and significant matters; expand opportunities for appellate review of FISC decisions; take advantage of existing opportunities for outside legal and technical input in FISC matters; publicly release new and past FISC and FISCR decisions that involve novel legal, technical, or compliance questions; publicly report on the operation of the FISC Special Advocate program; permit companies to disclose information about their receipt of FISA production orders and disclose more detailed statistics on surveillance; inform the PCLOB of FISA activities and provide relevant congressional reports and FISC decisions; begin to develop principles for transparency; and disclose the scope of surveillance authorities affecting US citizens.[50]

    The Wiretap Report

    The Wiretap Report is an annual compilation of information provided by federal and state officials regarding applications for orders to intercept wire, oral, or electronic communications, including the offenses under investigation, the types and locations of interception devices, and the costs and duration of authorized intercepts.[51] When submitting information for the report, a judge will include the name and jurisdiction of the prosecuting official who applied for the order, the criminal offense under investigation, the type of intercept device used, the physical location of the device, and the duration of the intercept. Prosecutors provide information on the cost of the intercept, the number of days the intercept device was in operation, the number of persons whose communications were intercepted, the number of intercepts, and the number of incriminating intercepts recorded. Results of the interception orders, such as arrests, trials, convictions, and the number of motions to suppress evidence, are also noted in the prosecutor reports. The Report is submitted to Congress, is legally required under Title III of the Omnibus Crime Control and Safe Streets Act of 1968, and is issued by the Administrative Office of the United States Courts.[52]

    United Kingdom

    The Intelligence and Security Committee (ISC) of Parliament

    The Intelligence and Security Committee was established by the Intelligence Services Act 1994. Members are appointed by the Prime Minister, and the Committee reports directly to the Prime Minister. Additionally, the Committee submits annual reports to Parliament. In carrying out its work, the Committee can take evidence from cabinet ministers, senior officials, and the public.[53] The most recent report of the Committee is the 2015 “Report on Privacy and Security”.[54] Members of the Committee are subject to the Official Secrets Act 1989 and have access to classified material when carrying out investigations.[55]

    Joint Intelligence Committee (JIC)

    The Joint Intelligence Committee is located in the Cabinet Office and is broadly responsible for overseeing the national intelligence organizations and providing advice to the Cabinet on issues related to security, defense, and foreign affairs. The JIC is itself overseen by the Intelligence and Security Committee.[56]

    The Interception of Communications Commissioner

    The Interception of Communications Commissioner is appointed by the Prime Minister under the Regulation of Investigatory Powers Act 2000 for the purpose of reviewing surveillance conducted by intelligence agencies, police forces, and other public authorities. Specifically, the Commissioner inspects the interception of communications, the acquisition and disclosure of communications data, the interception of communications in prisons, and unintentional electronic interception.[57] The Commissioner submits an annual report to the Prime Minister. The Reports of the Commissioner are publicly available.[58]

    The Intelligence Services Commissioner

    The Intelligence Services Commissioner is an independent office-holder appointed by the Prime Minister and legally empowered through the Regulation of Investigatory Powers Act (RIPA) 2000. The Commissioner provides independent oversight of the use of surveillance by the UK intelligence services.[59] Specifically, the Commissioner is responsible for reviewing authorized interception orders and the actions and performance of the intelligence services.[60] The Commissioner is also responsible for providing assistance to the Investigatory Powers Tribunal, submitting annual reports to the Prime Minister on the discharge of the Commissioner's functions, and advising the Home Office on the need to extend the Terrorism Prevention and Investigation Measures regime.[61] To these ends, the Commissioner conducts in-depth audits of interception orders to ensure that the surveillance is within the scope of the law, that it was necessary for a legally established reason, that it was proportionate, that the information accessed was justified by the privacy invaded, and that the surveillance was authorized by the appropriate official. The Commissioner also conducts 'site visits' to ensure that orders are being implemented as per the law.[62] Notably, the Intelligence Services Commissioner does not take up any matter that falls within the remit of the Interception of Communications Commissioner. The Commissioner has access to any information he feels is necessary to carry out his investigations. The Reports of the Intelligence Services Commissioner are publicly available.[63]

    Investigatory Powers Tribunal

    The Investigatory Powers Tribunal is a court which investigates complaints of unlawful surveillance by public authorities or intelligence/law enforcement agencies.[64] The Tribunal was established under the Regulation of Investigatory Powers Act 2000 and has a range of oversight functions to ensure that public authorities and agencies act in compliance with the Human Rights Act 1998.[65] The Tribunal is specifically an avenue of redress for anyone who believes that they have been a victim of unlawful surveillance under RIPA or of wider human rights infringements under the Human Rights Act 1998. The Tribunal can reach seven possible outcomes for any application: found in favour of the complainant, no determination in favour of the complainant, frivolous or vexatious, out of time, out of jurisdiction, withdrawn, or no valid complaint.[66] The Tribunal has the authority to receive and consider evidence in any form, even if inadmissible in an ordinary court.[67] Where possible, cases are available on the Tribunal's website. Decisions by the Tribunal cannot be appealed, but can be challenged in the European Court of Human Rights.[68]

    Canada

    In Canada the oversight 'ecosystem' for state surveillance includes:

    Security Intelligence Review Committee

    The Security Intelligence Review Committee is an independent body that is accountable to the Parliament of Canada and reports on the Canadian Security Intelligence Service (CSIS).[69] Members of the Security Intelligence Review Committee are appointed by the Prime Minister of Canada. The Committee conducts reviews on a proactive basis and investigates complaints, and its members have access to classified information in order to conduct reviews. The Committee submits an annual report to Parliament, of which an edited version is made publicly available. The 2014 Report, titled “Lifting the Shroud of Secrecy”,[70] includes reviews of CSIS's activities, reports on complaints and subsequent investigations, and recommendations.

    Office of the Communications Security Establishment Commissioner

    The Communications Security Establishment Commissioner conducts independent reviews of Communications Security Establishment (CSE) activities to evaluate whether they are within the scope of Canadian law.[71] The Commissioner submits a report to Parliament on an annual basis and has a number of powers, including the power to subpoena documents and personnel.[72] If the Commissioner believes that the CSE has not complied with the law, the Commissioner must report this to the Attorney General of Canada and to the Minister of National Defence. The Commissioner may also receive information from persons bound to secrecy if they deem it to be in the public interest to disclose such information.[73] The Commissioner is also responsible for verifying that the CSE does not surveil Canadians and for promoting measures to protect the privacy of Canadians.[74] When conducting a review, the Commissioner has the ability to examine records, receive briefings, interview relevant personnel, assess the veracity of information, listen to intercepted voice recordings, observe CSE operators and analysts to verify their work, and examine CSE electronic tools, systems, and databases to ensure compliance with the law.[75]

    Office of the Privacy Commissioner

    The Office of the Privacy Commissioner of Canada (OPC) oversees the implementation of and compliance with the Privacy Act and the Personal Information Protection and Electronic Documents Act.[76]

    The OPC is an independent body that has the authority to investigate complaints regarding the handling of personal information by government and private companies, but can only comment on the activities of security and intelligence agencies. For example, in 2014 the OPC issued the report “Checks and Controls: Reinforcing Privacy Protection and Oversight for the Canadian Intelligence Community in an Era of Cyber Surveillance”.[77] The OPC can also provide testimony to Parliament and other government bodies.[78] For example, the OPC has made appearances before the Senate Standing Committee on National Security and Defence on Bill C-51.[79] The OPC cannot conduct joint audits or investigations with other bodies.[80]

    Annual Interception Reports

    Under the Criminal Code of Canada, regional governments must issue annual interception reports. The reports must include the number of individuals affected by interceptions, the average duration of interceptions, the types of crimes investigated, the number of cases brought to court, and the number of individuals notified that interception had taken place.[81]

    Conclusion

    The presence of multiple and robust oversight mechanisms for state surveillance does not necessarily correlate with effective oversight. The oversight mechanisms in the UK, Canada, and the U.S. have all been criticised. For example, the Canadian regime has been characterized as becoming weaker since it removed one of its key oversight mechanisms, the Inspector General of the Canadian Security Intelligence Service, which was responsible for certifying that the Service was in compliance with the law.[82]

    Other weaknesses highlighted in the Canadian regime include the fact that different oversight bodies do not have the authority to share information with each other, and that transparency reports do not cover many new forms of surveillance.[83] Oversight mechanisms in the U.S., on the other hand, have been criticized as opaque[84] or as lacking the political support needed to be effective.[85] The UK oversight mechanism has been criticized for not requiring judicial authorization of surveillance requests, for having opaque laws, and for not providing a strong right of redress for affected individuals.[86] These critiques demonstrate that a number of factors must come together for an oversight mechanism to be effective. Public transparency and accountability to decision-making bodies such as Parliament or Congress can help ensure the effectiveness of oversight mechanisms; they are also steps towards providing the public with the means to debate issues related to state surveillance in an informed manner, and towards giving different bodies within the government the ability to hold the state accountable for its actions.


      [1]. For example, “Public Oversight” is one of the thirteen Necessary and Proportionate principles on state communications surveillance developed by civil society and academia globally, which should be incorporated by states into communication surveillance regimes. The principles can be accessed here: https://en.necessaryandproportionate.org/

      [2]. Hans Born and Ian Leigh, “Making Intelligence Accountable. Legal Standards and Best Practice for Oversight of Intelligence Agencies.” Pg. 13. 2005. Available at: http://www.prsindia.org/theprsblog/wp-content/uploads/2010/07/making-intelligence.pdf. Last accessed: August 6, 2015.

      [3]. For example, this point was made in the context of the UK. For more information see: Nick Clegg, 'Edward Snowden's revelations made it clear: security oversight must be fit for the internet age,”. The Guardian. March 3rd 2014. Available at: http://www.theguardian.com/commentisfree/2014/mar/03/nick-clegg-snowden-security-oversight-internet-age. Accessed: July 27, 2015.

      [4]. International Principles on the Application of Human Rights to Communications Surveillance. Available at: https://en.necessaryandproportionate.org/

      [5]. Sub Rules (16) and (17) of Rule 419A, Indian Telegraph Rules, 1951. Available at:http://www.dot.gov.in/sites/default/files/march2007.pdf Note: This review committee is responsible for overseeing interception orders issued under the Indian Telegraph Act and the Information Technology Act.

      [6]. Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information Rules 2009. Definition q. Available at: http://dispur.nic.in/itact/it-procedure-interception-monitoring-decryption-rules-2009.pdf

      [7]. Information Technology (Procedure and safeguard for Monitoring and Collecting Traffic Data or Information Rules, 2009). Definition (n). Available at: http://cis-india.org/internet-governance/resources/it-procedure-and-safeguard-for-monitoring-and-collecting-traffic-data-or-information-rules-2009

      [8]. This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act. Section 2, Indian Telegraph Act 1885 and Section 4, Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information) Rules, 2009

      [9]. This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act. Section 2, Indian Telegraph Act 1885 and Section 4, Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information) Rules, 2009


    Do we need a Unified Post Transition IANA?

    by Pranesh Prakash, Padmini Baruah and Jyoti Panday — last modified Oct 27, 2015 12:46 AM
    As we stand at the threshold of the IANA Transition, we at CIS find that there has been little discussion on the question of how the transition will manifest. The question we wanted to raise was whether there is any merit in dividing the three IANA functions – names, numbers and protocols – given that there is no real technical stability to be gained from a unified Post Transition IANA. The analysis of this idea has been detailed below.

    The Internet Architecture Board, in a 2011 submission to the NTIA, claimed that splitting the IANA functions would not be desirable.[1] The IAB notes, “There exists synergy and interdependencies between the functions, and having them performed by a single operator facilitates coordination among registries, even those that are not obviously related,” and adds that the IETF makes certain policy decisions relating to names and numbers as well, so it is useful to have a single body. But the IAB does not explain why having a single point of contact for all this correspondence, rather than three, makes any difference: surely what matters is cooperation and coordination. Just as the IETF, ICANN and the NRO being different entities does not harm the Internet, splitting the IANA function relating to each entity will not harm it either. Instead, it will aid stability by making each community responsible for running its own registries, rather than relying on a single point of failure: ICANN and/or the “PTI”.

    A number of commentators have supported this viewpoint in the past: Bill Manning of University of Southern California’s ISI (who has been involved in DNS operations since DNS started), Paul M. Kane (former Chairman of CENTR's Board of Directors), Jean-Jacques Subrenat (who is currently an ICG member), Association française pour le nommage Internet en coopération (AFNIC), the Internet Governance Project, InternetNZ, and the Coalition Against Domain Name Abuse (CADNA).

    The Internet Governance Project stated: “IGP supports the comments of Internet NZ and Bill Manning regarding the feasibility and desirability of separating the distinct IANA functions. Structural separation is not only technically feasible, it has good governance and accountability implications. By decentralizing the functions we undermine the possibility of capture by governmental or private interests and make it more likely that policy implementations are based on consensus and cooperation.”[2]

    Similarly, CADNA, in its 2011 submission to the NTIA, notes that in the current climate of technical innovation and exponential expansion of the Internet community, specialisation of the IANA functions would result in their being better executed. The argument is also that delegating the technical and administrative functions among other capable entities (such as the IETF and IAB for protocol parameters, or a neutral international organisation with an understanding of address-space protocols in place of the RIRs) would ensure accountability in Internet operations. Given that the IANA functions are mainly registry-maintenance functions, they can to a large extent be automated. However, a single system of automation would not fit all three.
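Since the functions are essentially registry maintenance, a minimal sketch may help illustrate why they lend themselves to automation. This is purely illustrative (the class and the example entries are hypothetical, not actual IANA tooling): a registry is, at bottom, a validated table of assignments, and each community could run such a register under its own policy.

```python
from dataclasses import dataclass, field

@dataclass
class Registry:
    """A toy model of an IANA-style registry: a validated table of assignments.

    Real registries (names, numbers, protocol parameters) each have distinct
    record formats and update policies, which is why a single automation
    pipeline would not fit all three.
    """
    name: str
    entries: dict = field(default_factory=dict)

    def assign(self, key: str, holder: str) -> None:
        # Core maintenance rule: never silently overwrite an existing assignment.
        if key in self.entries:
            raise ValueError(f"{key} is already assigned in {self.name}")
        self.entries[key] = holder

# Each community could automate its own register under its own policy.
ports = Registry("service-ports")
ports.assign("8080/tcp", "http-alt")
```

The point of the sketch is that the hard part is the per-registry validation policy, not the bookkeeping itself.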

    Instead of a single institution serving three masters, it is better for the functions to be separated. Most importantly, if one of the current customers wishes to shift its contract to another IANA functions operator, even where it is not limited by contract, it is limited by the institutional design, since iana.org serves as a central repository. This limitation did not exist, for instance, when the IETF decided to enter into a new contract for the RFC Editor role. This transition presents the best opportunity to cleave the functions logically and make each community responsible for the functioning of its own registries, with the IETF, which is mostly funded by ISOC, taking on the responsibility of handling the residual registries, along with a discussion about the .ARPA and .INT TLDs.

    From the above discussion, three main points emerge:

    • Splitting of the IANA functions allows for technical specialisation leading to greater efficiency of the IANA functions.
    • Splitting of the IANA functions allows for more direct accountability, and no concentration of power.
    • Splitting of the IANA functions allows for ease of shifting any one of the {names, numbers, protocol parameters} IANA functions operators without affecting the legal structure of the other IANA function operators.

    [1]. IAB response to the IANA FNOI, July 28, 2011. Available at: https://www.iab.org/wp-content/IAB-uploads/2011/07/IANA-IAB-FNOI-2011.pdf

    [2]. Internet Governance Project. Comments of the Internet Governance Project on the NTIA's "Request for Comments on the Internet Assigned Numbers Authority (IANA) Functions" (Docket # 110207099-1099-01), February 25, 2011. Available at: http://www.ntia.doc.gov/federal-register-notices/2011/request-comments-internet-assigned-numbers-authority-iana-functions

    Connected Trouble

    by Sunil Abraham last modified Oct 28, 2015 04:47 PM
    The internet of things phenomenon is based on a paradigm shift: from thinking of the internet merely as a means to connect individuals, corporations and other institutions, to an internet where all devices in (insulin pumps and pacemakers), on (wearable technology) and around (domestic appliances and vehicles) human beings are connected.

    The guest column was published in The Week, issue dated November 1, 2015.


    Proponents of IoT are clear that the network effects, efficiency gains, and scientific and technological progress unlocked would be unprecedented, much like the internet itself.

    Privacy and security are two sides of the same coin: you cannot have one without the other. The age of IoT is going to be less secure thanks to big data. Globally accepted privacy principles, articulated in privacy and data protection laws across the world, are in conflict with the big data ideology. As a consequence, the age of the internet of things is going to be less stable, secure and resilient. Three privacy principles are violated by most IoT products and services.

    Data minimisation

    According to this privacy principle, the less personal information about the data subject that is collected and stored by the data controller, the better protected the data subject's right to privacy. But big data by definition requires more volume, more variety and more velocity, and IoT products usually collect a lot of data, thereby multiplying risk.
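As a toy illustration of the principle (the field names and payload are hypothetical, not drawn from any real product), a data controller practising minimisation declares up front the only fields its stated purpose needs, and drops everything else before storage:

```python
# Hypothetical sketch of data minimisation: the controller declares the
# only fields its purpose (step counting) requires, and everything else
# a device sends is discarded before storage.
ALLOWED_FIELDS = {"device_id", "step_count", "timestamp"}

def minimise(payload: dict) -> dict:
    """Keep only the fields the declared purpose actually requires."""
    return {k: v for k, v in payload.items() if k in ALLOWED_FIELDS}

raw = {
    "device_id": "abc-123",
    "step_count": 5231,
    "timestamp": "2015-10-28T09:00:00Z",
    "gps_trace": [(12.97, 77.59)],  # not needed for step counting
    "contacts": ["+91-0000000000"], # collecting this multiplies risk
}
stored = minimise(raw)
assert "gps_trace" not in stored and "contacts" not in stored
```

The less that survives this filter, the less there is to breach, subpoena or repurpose later.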

    Purpose limitation

    This privacy principle is a consequence of the data minimisation principle. If only the bare minimum of personal information is collected, then it can only be put to a limited number of uses; putting it to uses beyond those would harm the data subject. IoT innovators and entrepreneurs are trying to rapidly increase features, efficiency gains and convenience. They therefore do not know what future purposes their technology will be put to, and, again by definition, resist the principle of purpose limitation.

    Privacy by design

    Data protection regulation requires that products and services be secure and protect privacy by design, not as a superficial afterthought. IoT products are increasingly being built by startups that are disrupting markets and taking down large technology incumbents. The trouble, however, is that most of these startups do not have sufficient internal security expertise, and in their tearing hurry to take products to market, many IoT products may not be comprehensively tested or audited from a privacy perspective.

    There are other cyber security principles and internet design principles that the IoT phenomenon disregards, further compromising the security and privacy of users.

    Centralisation

    Most of the network effects that IoT products contribute to require centralisation of the data collected from users and their devices. For instance, if users of a wearable physical activity tracker would like to use gamification to keep each other motivated during exercise, the vendor of that device has to collect and store information about all its users. Since some users wear the devices at all times, these central stores accumulate highly granular data that can also be used to inflict privacy harms.

    Decentralisation was a key design principle when the internet was first built. The argument was that you can never take down a decentralised network by bombing any of its nodes. Unfortunately, because of the rise of internet monopolies like Google, the age of cloud computing and the success of social media giants, the internet is increasingly becoming centralised and is therefore much more fragile than it used to be. IoT is going to make this worse.

    Complexity

    The more complex a particular technology is, the more fragile and vulnerable it tends to be. This is not a strict rule, but it usually holds, because more complex technology needs more quality control, more testing and more fixes. IoT raises complexity exponentially, because the devices being connected are complex themselves and were not originally engineered to be connected to the internet. The networks they constitute are nothing like the internet as we have known it, which consisted of clients, web servers, chat servers, file servers and database servers, usually quite removed from the physical world. Compromised IoT devices, on the other hand, could be used to inflict direct harm on life and property.

    Death of the air gap

    The things that will be connected to the internet were previously separated from it by an air gap. This kept them secure, but also less useful and usable. In other words, the very act of connecting devices that were previously unconnected will expose them to a range of attacks. Security- and privacy-related laws, standards, audits and enforcement measures are the best way to address these potential pitfalls. Governments, privacy commissioners and data protection authorities across the world need to act so that the privacy of people and the security of our information society are protected.

    Breaking Down ICANN Accountability: What It Is and What the Internet Community Wants

    by Ramya Chandrasekhar last modified Nov 05, 2015 03:29 PM
    At the recent ICANN conference held in Dublin (ICANN54), one issue that was rehashed and extensively deliberated was ICANN's accountability and means to enhance the same. In light of the impending IANA stewardship transition from the NTIA to the internet's multi-stakeholder community, accountability of ICANN to the internet community becomes that much more important. In this blog post, some aspects of the various proposals to enhance ICANN's accountability have been deconstructed and explained.

    The Internet Corporation for Assigned Names and Numbers, known as ICANN, is a private not-for-profit organization registered in California. Among other functions, it is tasked with carrying out the IANA functions[1], pursuant to a contract between the US Government (through the National Telecommunications and Information Administration, or NTIA) and itself. This means that, as of now, the US Government exercises legal oversight over ICANN with regard to the discharge of these IANA functions.[2]

    However, in 2014, the NTIA decided to completely hand over stewardship of the IANA functions to the internet's ‘global multistakeholder community’. But the USG put down certain conditions before this transition could be effected, one of which was to ensure that proper accountability mechanisms exist within ICANN.[3]

    The reason for this was that the internet community feared that, if these accountability concerns were not addressed, ICANN would, post the IANA transition, turn into a FIFA-esque organization with no one to keep it in check.[4]

    To address these concerns, the Cross Community Working Group on Enhancing ICANN Accountability (CCWG-Accountability) has come up with reports that propose certain changes to the structure and functioning of ICANN.

    In light of the discussions that took place at ICANN54 in Dublin, this blog post summarizes some of these proposals: those pertaining to the Independent Review Process or IRP (explained below), as well as the various accountability models that are the subject of extensive debate both on and off the internet.

    Building Blocks Identified by the CCWG-Accountability

    The CCWG-Accountability put down four “building blocks”, as they call them, on which all their work is based. One of these is the Independent Review Process (or IRP), a mechanism by which internal complaints, whether by individuals or by SOs/ACs[5], are addressed. However, the current version of the IRP is criticized as an inefficient mechanism of dispute resolution.[6]

    And thus the CCWG-Accountability proposed a variety of amendments to the same.

    Another building block that the CCWG-Accountability identified is the need for an “empowered internet community”, meaning more engagement between the ICANN Board and the internet community, as well as increased oversight by the community over the Board. As of now, the USG acts as the oversight entity. Post the IANA transition, however, the community feels it should step in and have an increased say in decisions taken by the ICANN Board.

    As part of empowering the community, the CCWG-Accountability identified five core areas in which the community needs to possess some kind of powers or rights. These areas are: review and rejection of the ICANN budget, strategic plans and operating plans; review, rejection and/or approval of standard bylaws as well as fundamental bylaws; review and rejection of Board decisions pertaining to the IANA functions; appointment and removal of individual directors on the Board; and recall of the entire Board itself. It is with regard to the kind of powers and rights to be vested in the community that a variety of accountability models have been proposed, by both the CCWG-Accountability and the ICANN Board. Of all these models, discussion is now primarily centered on three: the Sole Member Model (SMM), the Sole Designator Model (SDM) and the Multistakeholder Empowerment Model (MEM).

    What is the IRP?

    The Independent Review Process or IRP is the dispute resolution mechanism by which complaints and/or objections by individuals with regard to Board resolutions are addressed. Article 4 of the ICANN bylaws lays down the specifics of the IRP. As of now, a standing panel of six to nine arbitrators is constituted, from which a panel is selected to hear each complaint. The primary criticism of the current version of the IRP, however, is the restricted scope of issues on which the panel passes decisions.[7]

    The bylaws explicitly state that the panel needs to focus on a set of procedural questions while hearing a complaint, such as whether the Board acted in good faith or exercised due diligence in passing the disputed resolution.

    Changes Proposed by the Internet Community to Enhance the IRP

    To tackle this and other concerns with the existing version of the IRP, the CCWG-Accountability proposed a slew of changes in the second draft proposal that it released in August this year. It proposed that the IRP arbitral panel hear complaints and decide matters on both procedural grounds (as it does now) and substantive grounds. In addition, it proposed broadening who has standing to initiate an IRP, to include individuals, groups and other entities. Further, it proposed a more precedent-based method of dispute resolution, wherein a panel refers to and uses decisions passed by past panels in arriving at its own decision.

    At the 19th October “Enhancing ICANN-Accountability Engagement Session” that took place in Dublin as part of ICANN54, the mechanism to initiate an IRP was explained by Thomas Rickert, CCWG Co-Chair.[8]

    Briefly, the modified process is as follows -

    • An objection may be raised by any individual, even a non-member.
    • This individual needs to find an SO or an AC that shares the objection.
    • A “pre-call” or remote meeting between all the SOs and ACs is scheduled, to see if the objection receives the prescribed threshold of approval from the community.
    • If this threshold is met, dialogue is undertaken with the Board, to see if the objection is sustained by the Board.
    • If this dialogue also fails, then IRP can be initiated.
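The escalation steps above can be sketched as a simple decision function. This is only a toy model of the proposed flow (the threshold value and return messages are hypothetical; the CCWG proposal specifies no such mechanics in code):

```python
def irp_escalation(supporting_groups: int, threshold: int,
                   board_sustains_objection: bool) -> str:
    """Toy model of the proposed IRP initiation path.

    supporting_groups: number of SOs/ACs sharing the individual's objection.
    threshold: hypothetical community-approval threshold checked on the pre-call.
    board_sustains_objection: whether dialogue with the Board resolves the matter.
    """
    if supporting_groups < 1:
        return "objection lapses: no SO/AC shares it"
    if supporting_groups < threshold:
        return "objection lapses: threshold not met on pre-call"
    if board_sustains_objection:
        return "resolved in dialogue with the Board"
    return "IRP initiated"

# An objection backed by enough SOs/ACs, unresolved by the Board, goes to IRP.
print(irp_escalation(supporting_groups=4, threshold=3,
                     board_sustains_objection=False))  # IRP initiated
```

The key design point the sketch captures is that the IRP is a last resort: every earlier stage can end the dispute without formal proceedings.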

    The question of which “enforcement model” empowers the community arises after an IRP has been initiated, in the event that the community receives an unfavourable decision through the IRP or that the ICANN Board refuses to implement the IRP decision. All the “enforcement models” thus retain the IRP as the primary method of internal dispute resolution.

    The direction that the CCWG-Accountability has taken with regard to enhancing the IRP is heartening, and these proposals have received broad support from the community. What remains to be seen is whether the Board will fully implement them, along with all the other proposals made by the CCWG.

    Enforcement – An Overview of the Different Models

    In addition to trying to enhance the existing dispute resolution mechanism, the CCWG-Accountability also came up with a variety of “enforcement models” by which the internet community would be vested with certain powers. In response, the ICANN Board came up with a counterproposal, called the MEM.

    Below is a summary of the powers vested in the community under the SMM, the SDM and the MEM.

    Review/reject the budget, strategic plans and operating plans; review/reject Board decisions with regard to the IANA functions:

    • SMM: The Sole Member has the reserved power to reject the budget, etc., up to two times. The member also has standing to enforce bylaw restrictions on the budget, etc.
    • SDM: The Sole Designator can only trigger Board consultations if opposition to the budget, etc., exists, and the bylaws specify how many times such a consultation can be triggered. The designator only possesses standing to enforce this consultation.
    • MEM: The community can reject the budget up to two times. The Board is required by the bylaws to reconsider the budget after such a rejection, by consulting with the community. If still no change is made, the community can initiate a process to recall the Board.

    Review/reject amendments to standard bylaws and fundamental bylaws:

    • SMM: The Sole Member has the right to veto these changes, and also has standing to enforce this right under the relevant California law.
    • SDM: The Sole Designator can also veto these changes; however, there is ambiguity regarding the designator's standing to enforce this right.
    • MEM: No veto power is granted to any SO or AC. Each SO and AC evaluates whether it wants to voice the objection; if a certain threshold of agreement is reached, then, as per the bylaws, the Board cannot go ahead with the amendment.

    Appointment and removal of individual ICANN directors:

    • SMM: The Sole Member can appoint and remove individual directors based on direction from the applicable Nominating Committee.
    • SDM: The Sole Designator can likewise appoint and remove individual directors based on direction from the applicable Nominating Committee.
    • MEM: The SOs/ACs cannot appoint individual directors, but they can initiate a process for their removal. However, directors can only be removed for breach of, or on the basis of, certain clauses in a “pre-service letter” that they sign.

    Recall of the ICANN Board:

    • SMM: The Sole Member has the power to recall the Board, and has standing to enforce this right in California courts.
    • SDM: The Sole Designator also has the power to recall the Board; however, there is ambiguity regarding standing to enforce this right.
    • MEM: The community is not vested with the power to recall the Board. However, in some scenarios, if pre-service letters are triggered simultaneously, something similar to a recall of the Board can occur.

    A Critique of these Models

    SMM:

    The Sole Member Model (or SMM) was discussed and adopted in the second draft proposal, released in August 2015. It is the simplest and most feasible of the membership-based models, and has received substantial support from the internet community. The SMM proposes only one amendment to the ICANN bylaws - a move from having no members to having a single member - while ICANN itself retains its character as a non-profit public-benefit corporation under Californian law.

    This “sole member” will be the community as a whole, represented by the various SOs and ACs. The SOs and ACs require no separate legal personhood to be part of this “sole member”; they can participate directly. This participation is to be effected through a voting system, explained in the second draft, which allocates the maximum number of votes each SO and AC can cast. This ensures that an SO/AC does not have to cast a unanimous vote, and that each differing opinion within an SO/AC is given equal weight.

    SDM:

    A slightly modified and watered-down version of the SMM, proposed by the CCWG-Accountability as an alternative to it, is the “Sole Designator Model” or SDM. This model requires an amendment to the ICANN bylaws by which certain SOs/ACs are assigned “designator” status. By virtue of this status, they may exercise certain rights: the right to recall the Board in certain scenarios and the right to veto budgets and strategic plans.

    However, there is some uncertainty in Californian law regarding who can be a designator - only an individual, or an entity as well. Whether unincorporated associations such as the SOs and ACs can be “designators” under the law is therefore a question without a clear answer yet.

    Most discussion of the SDM has centred on the designator being vested with the power to “spill”, i.e. remove, all the members of the ICANN Board. The designator holds this power as a last-resort mechanism for the community's voice to be heard. However, an interesting point raised in one of the accountability sessions at ICANN54 was the almost negligible probability of this course of action ever being taken, i.e. of the Board actually being “spilled”. So while in theory this model seems to vest the community with massive power, in practice, because the right to “spill” the Board may never be invoked, the SDM is a weak enforceability model.

    Other Variants of the Designator Model:

    The CCWG-Accountability, in both its first and second reports, also discussed variants of the designator model. A generic SO/AC Designator model was discussed in the first draft. The Enhanced SO/AC Designator model, discussed in the second draft, functions along similar lines; however, under it only those SOs and ACs that want to become designators apply to do so, as opposed to the mandatory sole designator under the SDM.

    After the CCWG-Accountability released its second draft and the ICANN Board released its counter-proposal (see below for the ICANN Board's proposal), discussion was mostly directed towards the SMM and the MEM. However, discussion of the designator model was recently revived by members of the ALAC at ICANN54 in Dublin, who unanimously issued a statement supporting the SDM.[9] Following this, many more in the community have expressed support for adopting the designator model.[10]

    MEM:

    The Multi-stakeholder Enforcement Model or MEM was the ICANN Board's counter-model to the models put forth by the CCWG-Accountability, specifically the SMM. However, there is no clarity with regard to the specifics of this model; in fact, the vagueness surrounding the model is one of the biggest criticisms of it.

    The CCWG-Accountability accounts for the possible consequences of implementing every model through a mechanism known as “stress tests”. The Board's proposal, on the other hand, rejects the SMM due to its “unintended consequences”, but does not provide any clarity on what these consequences are, or on what the problems with the SMM actually are.[11]

    In addition, many are opposed to the Board's proposal in general because, unlike the SMM, it was not created by the community and is therefore not reflective of the community's views.[12]

    Instead, the Board's solution is to propose a counter-model that does not in fact fix the existing problems of accountability.

    What is known of the MEM, gathered primarily from an FAQ published on the ICANN community forum, is this: the community, through the various SOs and ACs, can challenge only those actions of the Board that contradict the Fundamental Bylaws, through binding arbitration. The arbitration panel will be decided by the Board, and the arbitration itself will be financed by ICANN. Further, this process will not replace the existing Independent Review Process or IRP, but will run in parallel with it.

    Even this small snippet of the MEM is filled with problems: concerns have been raised regarding the neutrality of the arbitral panel and the ability to challenge the award itself.[13]

    Further, the MEM seems to be in direct opposition to ICANN's ‘gold standard’ multi-stakeholder model. Essentially, ICANN is no more accountable under the MEM than it is now, which has elicited severe opposition from the community.

    What is interesting to note about all these models is that they are premised on ICANN continuing to remain within the jurisdiction of the United States. Even more surprising is that hardly anyone questions this premise. At ICANN54 the issue did receive a small amount of traction, enough for an ad-hoc committee to be set up to address these jurisdictional concerns, but not much more. The only option now is to wait and see what this ad-hoc committee, as well as the CCWG-Accountability through its third draft proposal to be released later this year, comes up with.


    [1]. The IANA functions or the technical functions are the name, number and protocol functions with regard to the administration of the Domain Name System or the DNS.

    [2]. http://www.theguardian.com/technology/2015/sep/21/icann-internet-us-government

    [3]. http://www.theregister.co.uk/2015/10/19/congress_tells_icann_quit_escaping_accountability/?page=1

    [4]. http://www.theguardian.com/technology/2015/sep/21/icann-internet-us-government

    [5]. SOs are Supporting Organizations and ACs are Advisory Committees. They form part of ICANN’s operational structure.

    [6]. Leon Sanchez (ALAC member from the Latin American and Caribbean Region) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see page 5) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [7]. Leon Sanchez (ALAC member from the Latin American and Caribbean Region) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see page 5) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [8]. Thomas Rickert (GNSO-appointed CCWG co-chair) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see pages 15-16) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [9]. http://www.brandregistrygroup.org/alac-throws-spanner-in-icann-accountability-discussions

    [10]. http://www.theregister.co.uk/2015/10/22/internet_community_icann_accountability/

    [11]. http://www.theregister.co.uk/2015/09/07/icann_accountability_latest/

    [12]. http://www.circleid.com/posts/20150923_empire_strikes_back_icann_accountability_at_the_inflection_point/

    [13]. http://www.internetgovernance.org/2015/09/06/icann-accountability-a-three-hour-call-trashes-a-year-of-work/

    Bios and Photos of Speakers for Big Data in the Global South International Workshop

    by Prasad Krishna last modified Nov 06, 2015 02:01 AM

    Bios&Photos_BigDataWorkshop.pdf — PDF document, 1825 kB (1869456 bytes)

    Comments on the Draft Outcome Document of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10)

    by Geetha Hariharan last modified Nov 18, 2015 06:33 AM
    Following the comment-period on the Zero Draft, the Draft Outcome Document of the UN General Assembly's Overall Review of implementation of WSIS Outcomes was released on 4 November 2015. Comments were sought on the Draft Outcome Document from diverse stakeholders. The Centre for Internet & Society's response to the call for comments is below.

     

    The WSIS+10 Overall Review of the Implementation of WSIS Outcomes, scheduled for December 2015, comes as a review of the WSIS process initiated in 2003-05. At the December summit of the UN General Assembly, the WSIS vision and the mandate of the IGF are to be discussed. The Draft Outcome Document, released on 4 November 2015, is a step towards the final outcome document for the summit. Comments were sought on the Draft Outcome Document. Our comments are below.

    1. The Draft Outcome Document of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (“the current Draft”) stands considerably altered from the Zero Draft. With references to development-related challenges, the Zero Draft covered areas of growth and challenges of the WSIS. It noted the persisting digital divide, the importance of innovation and investment, and of conducive legal and regulatory environments, and the inadequacy of financial mechanisms. Issues crucial to Internet governance such as net neutrality, privacy and the mandate of the IGF found mention in the Zero Draft.
    2. The current Draft retains these, and adds to them. Some previously-omitted issues such as surveillance, the centrality of human rights and the intricate relationship of ICTs to the Sustainable Development Goals, now stand incorporated in the current Draft. This is most commendable. However, the current Draft still lacks teeth with regard to some of these issues, and fails to address several others.
    3. In our comments to the Zero Draft, CIS had called for these issues to be addressed. We reiterate our call in the following paragraphs.

    (1) ICT for Development

    4. In the current Draft, paragraphs 14-36 deal with ICTs for development. While the draft contains rubrics like ‘Bridging the digital divide’, ‘Enabling environment’, and ‘Financial mechanisms’, the following issues are unaddressed:
    (a) equitable development for all;
    (b) accessibility to ICTs for persons with disabilities;
    (c) access to knowledge and open data.

    Equitable development

    5. In the Geneva Declaration of Principles (2003), two goals are set forth as the Declaration's “ambitious goal”: (a) the bridging of the digital divide; and (b) equitable development for all (¶17). The current Draft speaks in detail about bridging the digital divide, but the goal of equitable development is conspicuously absent. At WSIS+10, when the WSIS vision evolves towards the creation of inclusive ‘knowledge societies’, equitable development should be both a key principle and a goal to stand by.
    6. Indeed, inequitable development underscores the persistence of the digital divide. The current Draft itself refers to several instances of inequitable development: for example, the uneven production capabilities and deployment of ICT infrastructure and technology in developing countries, landlocked countries, small island developing states, countries under occupation or suffering natural disasters, and other vulnerable states; the lack of adequate financial mechanisms in vulnerable parts of the world; and the variably affordable (or in many cases, unaffordable) spread of ICT devices, technology and connectivity.
    7. What underlies these challenges is the inequitable and uneven spread of ICTs across states and communities, including in their production, capacity-building, technology transfers, gender-concentrated adoption of technology, and inclusiveness.
    8. As such, it is essential that the WSIS+10 Draft Outcome Document reaffirm our commitment to equitable development for all peoples, communities and states.
    9. We suggest the following inclusion in paragraph 5 of the current Draft:
    “5. We reaffirm our common desire and commitment to the WSIS vision to build an equitable, people-centred, inclusive, and development-oriented Information Society…”

    Accessibility for persons with disabilities

    10. Paragraph 13 of the Geneva Declaration of Principles (2003) pledges to “pay particular attention to the special needs of marginalized and vulnerable groups of society” in the forging of an Information Society. Particularly, ¶ 13 recognises the special needs of older persons and persons with disabilities.

    11. Moreover, ¶ 31 of the Geneva Declaration of Principles calls for the special needs of persons with disabilities, and also of disadvantaged and vulnerable groups, to be taken into account while promoting the use of ICTs for capacity-building. Accessibility for persons with disabilities is thus core to bridging the digital divide – as important as bridging the gender divide in access to ICTs.

    12. Not only this, but the WSIS+10 Statement on the Implementation of WSIS Outcomes (June 2014) also reaffirms the commitment to “provide equitable access to information and knowledge for all… including… people with disabilities”, recognizing that it is “crucial to increase the participation of vulnerable people in the building process of Information Society…” (¶8).

    13. In our previous submission, CIS had suggested language drawing attention to this. Now, the current Draft only acknowledges that “particular attention should be paid to the specific ICT challenges facing… persons with disabilities…” (paragraph 11). It acknowledges also that now, accessibility for persons with disabilities constitutes one of the core elements of quality (paragraph 22). However, there is a glaring omission of a call to action, or a reaffirmation of our commitment to bridging the divide experienced by persons with disabilities.

    14. We suggest, therefore, the addition of the following language as paragraph 24A to the current Draft. Sections of this suggestion are drawn from ¶8 of the WSIS+10 Statement on the Implementation of WSIS Outcomes.

    "24A. Recalling the UN Convention on the Rights of Persons with Disabilities, the Geneva Principles paragraphs 11, 13, 14 and 15, Tunis Commitment paragraphs 20, 22 and 24, and reaffirming the commitment to providing equitable access to information and knowledge for all, building ICT capacity for all and confidence in the use of ICTs by all, including youth, older persons, women, indigenous and nomadic peoples, people with disabilities, the unemployed, the poor, migrants, refugees and internally displaced people and remote and rural communities, it is crucial to increase the participation of vulnerable people in the building process of the Information Society and to make their voice heard by stakeholders and policy-makers at different levels. It can allow the most fragile groups of citizens worldwide to become an integrated part of their economies and also raise awareness among the target actors of existing ICT solutions (such as tools for e-participation, e-government, e-learning applications, etc.) designed to make their everyday life better. We recognise the need for continued extension of access to ICTs for people with disabilities and vulnerable people, especially in developing countries and among marginalized communities, and reaffirm our commitment to promoting and ensuring accessibility for persons with disabilities. In particular, we call upon all stakeholders to honour and meet the targets set out in Target 2.5.B of the Connect 2020 Agenda that enabling environments ensuring accessible telecommunication/ICT for persons with disabilities should be established in all countries by 2020.”

    Access to knowledge and open data

    15. The Geneva Declaration of Principles dedicates a section to access to information and knowledge (B.3). It notes, in ¶26, that a “rich public domain” is essential to the growth of Information Society. It urges that public institutions be strengthened to ensure free and equitable access to information (¶26), and also that assistive technologies and universal design can remove barriers to access to information and knowledge (¶25). Particularly, the Geneva Declaration advocates the use of free and open source software, in addition to proprietary software, to meet these ends (¶27).

    16. It was also recognized in the WSIS+10 Statement on the Implementation of WSIS Outcomes (‘Challenges-during implementation of Action Lines and new challenges that have emerged’) that there is a need to promote access to all information and knowledge, and to encourage open access to publications and information (C, ¶¶9 and 12).

    17. In our previous submission, CIS had highlighted the importance of open access to knowledge thus: “…the implications of open access to data and knowledge (including open government data), and responsible collection and dissemination of data are much larger in light of the importance of ICTs in today’s world. As Para 7 of the Zero Draft indicates, ICTs are now becoming an indicator of development itself, as well as being a key facilitator for achieving other developmental goals. As Para 56 of the Zero Draft recognizes, in order to measure the impact of ICTs on the ground – undoubtedly within the mandate of WSIS – it is necessary that there be an enabling environment to collect and analyse reliable data. Efforts towards the same have already been undertaken by the United Nations in the form of ‘Data Revolution for Sustainable Development’. In this light, the Zero Draft rightly calls for enhancement of regional, national and local capacity to collect and conduct analyses of development and ICT statistics (Para 56). Achieving the central goals of the WSIS process requires that such data is collected and disseminated under open standards and open licenses, leading to creation of global open data on the ICT indicators concerned.”

    18. This crucial element is missing from the current Draft of the WSIS+10 Outcome Document. Of course, the current Draft notes the importance of access to information and the free flow of data. But it stops short of endorsing and advocating the importance of access to knowledge and of free and open source software, which are essential to fostering competition and innovation and diversity of consumer/user choice, and to ensuring universal access.

    19. We suggest the following addition – of paragraph 23A to the current Draft:

    "23A. We recognize the need to promote access for all to information and knowledge, open data, and open, affordable, and reliable technologies and services, while respecting individual privacy, and to encourage open access to publications and information, including scientific information and in the research sector, and particularly in developing and least developed countries.”

    (2) Human Rights in Information Society

    20. The current Draft recognizes that human rights have been central to the WSIS vision, and reaffirms that rights offline must be protected online as well. However, the current Draft omits to recognise the role played by corporations and intermediaries in facilitating access to and use of the Internet.

    21. In our previous submission, CIS had noted that since “the Internet is led largely by the private sector in the development and distribution of devices, protocols and content-platforms, corporations play a major role in facilitating – and sometimes, in restricting – human rights online”.

    22. We reiterate our suggestion for the inclusion of paragraph 43A to the current Draft:

    "43A. We recognize the critical role played by corporations and the private sector in facilitating human rights online. We affirm, in this regard, the responsibilities of the private sector set out in the Report of the Special Representative of the Secretary General on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31 (21 March 2011), and encourage policies and commitments towards respect and remedies for human rights.”

    (3) Internet Governance

    The support for multilateral governance of the Internet

    23. While the section on Internet governance is not considerably altered from the Zero Draft, there is one large substantive change in the current Draft. The current Draft states that the governance of the Internet should be “multilateral, transparent and democratic, with full involvement of all stakeholders” (¶50). Previously, the Zero Draft recognized “the general agreement that the governance of the Internet should be open, inclusive, and transparent”.

    24. A return to purely ‘multilateral’ Internet governance would be regressive. Governments are, without doubt, crucial to Internet governance. As scholarship and experience have both shown, governments have played a substantial role in shaping the Internet as it is today, whether in the availability of content, the spread of infrastructure, or licensing and regulation. However, these were and remain contentious spaces.

    25. As such, it is essential to recognize that a plurality of governance models serve the Internet, in which the private sector, civil society, the technical community and academia play important roles. We recommend returning to the language of the zero draft in ¶32: “open, inclusive and transparent governance of the Internet”.

    Governance of Critical Internet Resources

    26. It is curious that the section on Internet governance, in both the Zero Draft and the current Draft, makes no reference to ICANN, and in particular to the ongoing transition of IANA stewardship and the discussions surrounding the accountability of ICANN and the IANA operator. The stewardship of critical Internet resources, such as the root, is crucial to the evolution and functioning of the Internet. Today, ICANN and a few other institutions have a monopoly over the management of, and policy-formulation for, several critical Internet resources.

    27. While the WSIS in 2003-05 considered this a troubling issue, this focus seems to have shifted entirely. ‘Open’, ‘inclusive’, ‘transparent’ and ‘global’ are misnomers for the Internet so long as ICANN – and in effect, the United States – continues to have a monopoly over critical Internet resources. The allocation and administration of these resources should be decentralized and distributed, and should not be within the disproportionate control of any one jurisdiction.

    28. Therefore, we reiterate our suggestion to add paragraph 53A after Para 53:

    "53A. We affirm that the allocation, administration and policy involving critical Internet resources must be inclusive and decentralized, and call upon all stakeholders and in particular, states and organizations responsible for essential tasks associated with the Internet, to take immediate measures to create an environment that facilitates this development.”

    Inclusiveness and Diversity in Internet Governance

    29. The current Draft, in ¶52, recognizes that there is a need to “promote greater participation and engagement in Internet governance of all stakeholders…”, and calls for “stable, transparent and voluntary funding mechanisms to this end.” This is most commendable.

    30. The issue of inclusiveness and diversity in Internet governance is crucial: today, Internet governance organisations and platforms suffer from a lack of inclusiveness and diversity, extending across their representation, participation and operations. As CIS submitted previously, the mention of inclusiveness and diversity in many cases amounts to tokenism, or to a formal (but not operational) principle.

    31. As we submitted before, the developing world is pitifully represented in standards organisations and in ICANN, and policy discussions in organisations like ISOC occur largely in cities like Geneva and New York. For example, 307 of the 672 registries listed in ICANN's registry directory are based in the United States, as are 624 of the 1010 ICANN-accredited registrars.

    32. Not only this, but 80% of the responses received by ICANN during the ICG's call for proposals came from men. A truly global, open, inclusive and transparent governance of the Internet must not be so skewed. Representation must include not only those from developing countries, but must also extend across genders and communities.

    33. We propose, therefore, the addition of a paragraph 51A after Para 51:

    "51A. We draw attention to the challenges surrounding diversity and inclusiveness in organisations involved in Internet governance, including in their representation, participation and operations. We note with concern that the representation of developing countries, of women, persons with disabilities and other vulnerable groups, is far from equitable and adequate. We call upon organisations involved in Internet governance to take immediate measures to ensure diversity and inclusiveness in a substantive manner.”

     


    Prepared by Geetha Hariharan, with inputs from Sunil Abraham and Japreet Grewal. All comments submitted towards the Draft Outcome Document may be found at this link.

    Summary Report Internet Governance Forum 2015

    by Jyoti Panday last modified Nov 30, 2015 10:47 AM
    Centre for Internet and Society (CIS), India participated in the Internet Governance Forum (IGF) held at Poeta Ronaldo Cunha Lima Conference Center, Joao Pessoa in Brazil from 10 November 2015 to 13 November 2015. The theme of IGF 2015 was ‘Evolution of Internet Governance: Empowering Sustainable Development’. Sunil Abraham, Pranesh Prakash & Jyoti Panday from CIS actively engaged and made substantive contributions to several key issues affecting internet governance at the IGF 2015. The issue-wise detail of their engagement is set out below.

    INTERNET GOVERNANCE

    I. The Multi-stakeholder Advisory Group to the IGF organised a discussion on Sustainable Development Goals (SDGs) and the Internet Economy in the Main Meeting Hall from 9:00 am to 12:30 pm on 11 November 2015. The discussions at this session focused on the importance of Internet Economy enabling policies and eco-systems for the fulfilment of different SDGs. Several concerns were addressed, relating to internet entrepreneurship, effective ICT capacity building, protection of intellectual property within and across borders, and the availability of local applications and content. The panel also discussed the need to identify the SDGs where internet-based technologies could make the most effective contribution. Sunil Abraham contributed to the panel discussions by addressing the issue of development and promotion of local content and applications. The list of speakers included:

    1. Lenni Montiel, Assistant-Secretary-General for Development, United Nations

    2. Helani Galpaya, CEO LIRNEasia

    3. Sergio Quiroga da Cunha, Head of Latin America, Ericsson

    4. Raúl L. Katz, Adjunct Professor, Division of Finance and Economics, Columbia Institute of Tele-information

    5. Jimson Olufuye, Chairman, Africa ICT Alliance (AfICTA)

    6. Lydia Brito, Director of the Office in Montevideo, UNESCO

    7. H.E. Rudiantara, Minister of Communication & Information Technology, Indonesia

    8. Daniel Sepulveda, Deputy Assistant Secretary, U.S. Coordinator for International and Communications Policy at the U.S. Department of State  

    9. Deputy Minister, Department of Telecommunications and Postal Services, Republic of South Africa

    10. Sunil Abraham, Executive Director, Centre for Internet and Society, India

    11. H.E. Junaid Ahmed Palak, Information and Communication Technology Minister of Bangladesh

    12. Jari Arkko, Chairman, IETF

    13. Silvia Rabello, President, Rio Film Trade Association

    14. Gary Fowlie, Head of Member State Relations & Intergovernmental Organizations, ITU

    A detailed description of the workshop is available here: http://www.intgovforum.org/cms/igf2015-main-sessions

    A transcript of the workshop is available here: http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2327-2015-11-11-internet-economy-and-sustainable-development-main-meeting-room

    A video of the session, Internet Economy and Sustainable Development, is available here: https://www.youtube.com/watch?v=D6obkLehVE8

    II. Public Knowledge organised a workshop on The Benefits and Challenges of the Free Flow of Data in Workshop Room 5 from 11:00 am to 12:00 pm on 12 November 2015. The discussions in the workshop focused on the benefits and challenges of the free flow of data, and on concerns relating to data-flow restrictions and ways to address them. Sunil Abraham contributed to the panel discussions by addressing the issue of jurisdiction over data on the internet. The panel for the workshop included the following:

    1. Vint Cerf, Google

    2. Lawrence Strickling, U.S. Department of Commerce, NTIA

    3. Richard Leaning, European Cyber Crime Centre (EC3), Europol

    4. Marietje Schaake, European Parliament

    5. Nasser Kettani, Microsoft

    6. Sunil Abraham, CIS India

    A detailed description of the workshop is available here: http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    A transcript of the workshop is available here: http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2467-2015-11-12-ws65-the-benefits-and-challenges-of-the-free-flow-of-data-workshop-room-5

    Video link: https://www.youtube.com/watch?v=KtjnHkOn7EQ

    III. Article 19 and Privacy International organised a workshop on Encryption and Anonymity: Rights and Risks in Workshop Room 1 from 11:00 am to 12:30 pm on 12 November 2015. The workshop fostered a discussion about the latest challenges to the protection of anonymity and encryption, and about ways in which law enforcement demands could be met while ensuring that individuals still enjoy strong encryption and unfettered access to anonymity tools. Pranesh Prakash contributed to the panel discussions by addressing concerns about existing South Asian regulatory frameworks on encryption and anonymity, and by emphasizing the need for pervasive encryption. The panel for this workshop included the following:

    1. David Kaye, UN Special Rapporteur on Freedom of Expression

    2. Juan Diego Castañeda, Fundación Karisma, Colombia

    3. Edison Lanza, Organisation of American States Special Rapporteur

    4. Pranesh Prakash, CIS India

    5. Ted Hardie, Google

    6. Elvana Thaci, Council of Europe

    7. Professor Chris Marsden, Oxford Internet Institute

    8. Alexandrine Pirlot de Corbion, Privacy International

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2407-2015-11-12-ws-155-encryption-and-anonymity-rights-and-risks-workshop-room-1

    Video link available here https://www.youtube.com/watch?v=hUrBP4PsfJo

     IV. Chalmers & Associates organised a session on A Dialogue on Zero Rating and Network Neutrality at the Main Meeting Hall from 2:00 pm to 4:00 pm on 12 November, 2015. The Dialogue provided access to expert insight on zero-rating and a full spectrum of diverse views on this issue. The Dialogue also explored alternative approaches to zero rating such as use of community networks. Pranesh Prakash provided a detailed explanation of harms and benefits related to different approaches to zero-rating. The panellists for this session were the following.

    1. Jochai Ben-Avie, Senior Global Policy Manager, Mozilla, USA

    2. Igor Vilas Boas de Freitas, Commissioner, ANATEL, Brazil

    3. Dušan Caf, Chairman, Electronic Communications Council, Republic of Slovenia

    4. Silvia Elaluf-Calderwood, Research Fellow, London School of Economics, UK/Peru

    5. Belinda Exelby, Director, Institutional Relations, GSMA, UK

    6. Helani Galpaya, CEO, LIRNEasia, Sri Lanka

    7. Anja Kovacs, Director, Internet Democracy Project, India

    8. Kevin Martin, VP, Mobile and Global Access Policy, Facebook, USA

    9. Pranesh Prakash, Policy Director, CIS India

    10. Steve Song, Founder, Village Telco, South Africa/Canada

    11. Dhanaraj Thakur, Research Manager, Alliance for Affordable Internet, USA/West Indies

    12. Christopher Yoo, Professor of Law, Communication, and Computer & Information Science, University of Pennsylvania, USA

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/igf2015-main-sessions

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2457-2015-11-12-a-dialogue-on-zero-rating-and-network-neutrality-main-meeting-hall-2

     V. The Internet & Jurisdiction Project organised a workshop on Transnational Due Process: A Case Study in MS Cooperation at Workshop Room 4 from 11:00 am to 12:00 pm on 13 November, 2015. The workshop discussion focused on the challenges in developing an enforcement framework for the internet that guarantees transnational due process and legal interoperability. The discussion also focused on innovative approaches to multi-stakeholder cooperation such as issue-based networks, inter-sessional work methods and transnational policy standards. The panellists for this discussion were the following.

    1. Anne Carblanc, Head of Division, Directorate for Science, Technology and Industry, OECD

    2. Eileen Donahoe, Director Global Affairs, Human Rights Watch

    3. Byron Holland, President and CEO, CIRA (Canadian ccTLD)

    4. Christopher Painter, Coordinator for Cyber Issues, US Department of State

    5. Sunil Abraham, Executive Director, CIS India

    6. Alice Munyua, Lead dotAfrica Initiative and GAC representative, African Union Commission

    7. Will Hudson, Senior Advisor for International Policy, Google

    8. Dunja Mijatovic, Representative on Freedom of the Media, OSCE

    9. Thomas Fitschen, Director for the United Nations, for International Cooperation against Terrorism and for Cyber Foreign Policy, German Federal Foreign Office

    10. Hartmut Glaser, Executive Secretary, Brazilian Internet Steering Committee

    11. Matt Perault, Head of Policy Development, Facebook

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2475-2015-11-13-ws-132-transnational-due-process-a-case-study-in-ms-cooperation-workshop-room-4

    Video link Transnational Due Process: A Case Study in MS Cooperation available here https://www.youtube.com/watch?v=M9jVovhQhd0

     VI. The Internet Governance Project organised a meeting of the Dynamic Coalition on Accountability of Internet Governance Venues at Workshop Room 2 from 14:00 – 15:30 on 12 November, 2015. The coalition brought together panelists to highlight the challenges in developing an accountability framework for internet governance venues, including setting up standards and developing a set of concrete criteria. Jyoti Panday provided the perspective of civil society on why accountability is necessary in internet governance processes and organizations. The panelists for this workshop included the following.

    1. Robin Gross, IP Justice

    2. Jeanette Hofmann, Director Alexander von Humboldt Institute for Internet and Society

    3. Farzaneh Badiei, Internet Governance Project

    4. Erika Mann, Managing Director Public Policy, Facebook, and Board of Directors, ICANN

    5. Paul Wilson, APNIC

    6. Izumi Okutani, Japan Network Information Center (JPNIC)

    7. Keith Drazek, Verisign

    8. Jyoti Panday, CIS

    9. Jorge Cancio, GAC representative

    Detailed description of the workshop is available here http://igf2015.sched.org/event/4c23/dynamic-coalition-on-accountability-of-internet-governance-venues?iframe=no&w=&sidebar=yes&bg=no

    Video link https://www.youtube.com/watch?v=UIxyGhnch7w

     VII. Digital Infrastructure Netherlands Foundation organized an open forum at Workshop Room 3 from 11:00 – 12:00 on 10 November, 2015. The open forum discussed the increase in government engagement with “the internet” to protect citizens against crime and abuse and to protect economic interests and critical infrastructures. It brought together panelists to present ideas about an agenda for the international protection of ‘the public core of the internet’ and to collect and discuss ideas for the formulation of norms and principles and for the identification of practical steps towards that goal. Pranesh Prakash participated in the open forum. The speakers included the following.

    1. Bastiaan Goslings, AMS-IX, NL

    2. Pranesh Prakash, CIS, India

    3. Marilia Maciel, FGV, Brazil

    4. Dennis Broeders, NL Scientific Council for Government Policy

    Detailed description of the open forum is available here http://schd.ws/hosted_files/igf2015/3d/DINL_IGF_Open%20Forum_The_public_core_of_the_internet.pdf

    Video link available here https://www.youtube.com/watch?v=joPQaMQasDQ

    VIII. UNESCO, Council of Europe, Oxford University, Office of the High Commissioner on Human Rights, Google, and the Internet Society organised a workshop on hate speech and youth radicalisation at Room 9 on Thursday, November 12. UNESCO shared the initial outcomes of its commissioned research on online hate speech, including practical recommendations on combating online hate speech by understanding the challenges, mobilizing civil society, lobbying the private sector and intermediaries, and educating individuals in media and information literacy. The workshop also discussed how to help empower youth to address online radicalization and extremism, and realize their aspirations to contribute to a more peaceful and sustainable world. Sunil Abraham provided his inputs. The speakers included the following.

    1. Chaired by Ms Lidia Brito, Director for UNESCO Office in Montevideo

    2. Frank La Rue, Former Special Rapporteur on Freedom of Expression

    3. Lillian Nalwoga, President ISOC Uganda and rep CIPESA, Technical community

    4. Bridget O’Loughlin, CoE, IGO

    5. Gabrielle Guillemin, Article 19

    6. Iyad Kallas, Radio Souriali

    7. Sunil Abraham, Executive Director, Centre for Internet and Society, Bangalore, India

    8. Eve Salomon, global Chairman of the Regulatory Board of RICS

    9. Javier Lesaca Esquiroz, University of Navarra

    10. Representative GNI

    11. Remote Moderator: Xianhong Hu, UNESCO

    12. Rapporteur: Guilherme Canela De Souza Godoi, UNESCO

    Detailed description of the workshop is available here http://igf2015.sched.org/event/4c1X/ws-128-mitigate-online-hate-speech-and-youth-radicalisation?iframe=no&w=&sidebar=yes&bg=no

    Video link to the panel is available here https://www.youtube.com/watch?v=eIO1z4EjRG0

     INTERMEDIARY LIABILITY

    IX. Electronic Frontier Foundation, Centre for Internet and Society India, Open Net Korea and Article 19 collaborated to organize a workshop on the Manila Principles on Intermediary Liability at Workshop Room 9 from 11:00 am to 12:00 pm on 13 November 2015. The workshop elaborated on the Manila Principles, a high-level framework of best practices and safeguards for content restriction practices and for addressing intermediary liability for third-party content. The workshop saw participants engaged in overlapping projects on content restriction practices come together to give feedback and highlight recent developments across liability regimes. Jyoti Panday laid down the key details of the Manila Principles framework in this session. The panelists for this workshop included the following.

    1. Kelly Kim, Open Net Korea

    2. Jyoti Panday, CIS India

    3. Gabrielle Guillemin, Article 19

    4. Rebecca MacKinnon, on behalf of UNESCO

    5. Giancarlo Frosio, Center for Internet and Society, Stanford Law School

    6. Nicolo Zingales, Tilburg University

    7. Will Hudson, Google

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2423-2015-11-13-ws-242-the-manila-principles-on-intermediary-liability-workshop-room-9

    Video link available here https://www.youtube.com/watch?v=kFLmzxXodjs

     ACCESSIBILITY

    X. Dynamic Coalition on Accessibility and Disability and Global Initiative for Inclusive ICTs organised a workshop on Empowering the Next Billion by Improving Accessibility at Workshop Room 6 from 9:00 am to 10:30 am on 13 November, 2015. The discussion focused on the need for, and ways of, removing accessibility barriers which prevent over one billion potential users from benefiting from the Internet, including for essential services. Sunil Abraham spoke about the lack of compliance of existing ICT infrastructure with well-established accessibility standards, specifically relating to accessibility barriers in the disaster management process. He discussed the barriers faced by persons with physical or psychosocial disabilities. The panelists for this discussion were the following.

    1. Francesca Cesa Bianchi, G3ICT

    2. Cid Torquato, Government of Brazil

    3. Carlos Lauria, Microsoft Brazil

    4. Sunil Abraham, CIS India

    5. Derrick L. Cogburn, Institute on Disability and Public Policy (IDPP) for the ASEAN(Association of Southeast Asian Nations) Region

    6. Fernando H. F. Botelho, F123 Consulting

    7. Gunela Astbrink, GSA InfoComm

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2438-2015-11-13-ws-253-empowering-the-next-billion-by-improving-accessibility-workshop-room-3

    Video Link Empowering the next billion by improving accessibility https://www.youtube.com/watch?v=7RZlWvJAXxs

     OPENNESS

    XI. A workshop on FOSS & a Free, Open Internet: Synergies for Development was organized at Workshop Room 7 from 2:00 pm to 3:30 pm on 13 November, 2015. The discussion focused on the increasing risk to the openness of the internet and to the ability of present and future generations to use technology to improve their lives. The panel shared different perspectives on the future co-development of FOSS and a free, open Internet; the threats that are emerging; and ways for communities to surmount these. Sunil Abraham emphasised the importance of free software, open standards, open access and access to knowledge, noted the lack of this mandate in the draft outcome document for the upcoming WSIS+10 review, and called for its inclusion. Pranesh Prakash further contributed to the discussion by emphasizing the need for free and open source software with end-to-end encryption and traffic-level encryption based on open standards, which are decentralized and work through federated networks. The panellists for this discussion were the following.

    1. Satish Babu, Technical Community, Chair, ISOC-TRV, Kerala, India

    2. Judy Okite, Civil Society, FOSS Foundation for Africa

    3. Mishi Choudhary, Private Sector, Software Freedom Law Centre, New York

    4. Fernando Botelho, Private Sector, heads F123 Systems, Brazil

    5. Sunil Abraham, CIS India

    6. Pranesh Prakash, CIS India

    7. Nnenna Nwakanma, World Wide Web Foundation

    8. Yves MIEZAN EZO, Open Source strategy consultant

    9. Corinto Meffe, Advisor to the President and Directors, SERPRO, Brazil

    10. Frank Coelho de Alcantara, Professor, Universidade Positivo, Brazil

    11. Caroline Burle, Institutional and International Relations, W3C Brazil Office and Center of Studies on Web Technologies

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2468-2015-11-13-ws10-foss-and-a-free-open-internet-synergies-for-development-workshop-room-7

    Video link available here https://www.youtube.com/watch?v=lwUq0LTLnDs



    WhatsApps with fireworks, apps with diyas: Why Diwali needs to go beyond digital

    by Nishant Shah last modified Nov 23, 2015 01:27 PM
    The idea of a 'digital' Diwali reduces our social relationships to a ledger of give and take. The last fortnight, I have been bombarded with advertisements selling the idea of a “Digital Diwali”. We have become so used to the idea that everything that is digital is modern, better and more efficient.


    The article was published in the Indian Express on November 22, 2015.


    I have WhatsApp messages with exploding fireworks, singing greeting cards that chant mystic sounding messages, an app that turns my smartphone into a flickering diya, another app that remotely controls the imitation LED candles on my windows, an invitation to Skype in for a puja at a friend’s house 3,000 km away, and the surfeit of last minute shopping deals, each one offering a dhamaka of discounts.

    However, to me, the digitality of Diwali is beyond the surface level of seductive screens and one-click shopping, or messages of love and apps of light. Think of Diwali as sharing the fundamental logic that governs the digital — the logic of counting. As we explode with joy this festive season, we count our blessings, our loved ones, the gifts and presents that we exchange. If we are on the new Fitbit trend, we count the calories we consume and burn as we make our way through parties where it is important to see and be seen, compare and contrast, connect with all the people who could be thought of as friends, followers, connectors, or connections.

    While there is no denying that there is a sociality that the festival brings in, there is also a cruel algebra of counting that comes along with it. It is no surprise that as we celebrate the victory of good over evil and right over wrong, we also simultaneously bow our heads to the goddess of wealth in this season.

    Look beyond the glossy surface of Diwali festivities, and you realise that it is exactly like the digital. Digital is about counting. It is right there in the name — digits refer to numbers. Or digits refer to fingers — these counting appendages which we can manipulate and flex in order to achieve desired results. At the core of digital systems is the logic of counting, and counting, as anybody will tell us, is not a benign process. What gets counted, gets accounted for, thus producing a ledger of give and take which often becomes the measure of our social relationships.

    I remember, as a child, my mother meticulously making a note of every gift or envelope filled with money that ever came our way from the relatives, so that there would be precise and exact reciprocation. I am certain that there is now an app which can keep track of these exchanges. I am not suggesting that these occasions of gifting are merely mercenary, but they are embodiments of finely calibrated values and worth of relationships defined by proximity, intimacy, hierarchy and distance. The digital produces and works on a similar algorithm, which is often as inscrutable and opaque as the unspoken codes of the Diwali ledger.

    There is something else that happens with counting. The only things that can have value are things that can be counted. I don’t know which ledger counts the coming together of my very distributed family for an evening of chatting, talking, sharing lives and laughter. I don’t know how anybody would reciprocate that one late night when a cousin came to our home and spent hours with my younger brother making a rangoli to surprise the rest of us. I have no idea how they will ever reciprocate gifts that one of the younger kids made at school for all the members of the family.

    Diwali is about these things, but in the digital system, these are things that cannot be counted. And within the digital system, things that cannot be counted are things that get discounted. They become unimportant. They become noise, or rubbish. Our social networks are counting systems that might notice the low frequency of my connections with my extended family but they cannot quantify the joy I hear in the voice of my grandmother when I call her from a different time-zone to catch up with her. Digital systems can only deal with things with value and not their worth.

    I do want to remind myself that there is more to this occasion than merely counting. And for once, I want to go beyond the digital, where my memories of the past and the expectations of the future are not shaped by the digital systems of counting and quantifying. Instead, I want Diwali to be analogue. I shall still be mediating my collectivity with the promises of connectivity, but I want to think of this moment as beyond the logics and logistics of counting that codify our social transactions and take such a central location in our personal functioning. This Diwali, I am rooting for a post-digital Diwali, that accounts for all those things that cannot be counted, but are sometimes the only things that really count.

    CIS Submission on CCWG-Accountability 2nd Draft Proposal on Work Stream 1 Recommendations

    by Pranesh Prakash last modified Nov 23, 2015 02:58 PM
    The Centre for Internet & Society (CIS) submitted the below to ICANN's CCWG-Accountability.

    The CCWG Accountability proposal is longer than many countries' constitutions.  Given that, we will keep our comments brief, addressing a very limited set of the issues in very broad terms.

    Human Rights

    ICANN is unique in many ways.  It is a global regulator that has powers of taxation to fund its own operation.  ICANN is not a mere corporation. For such a regulator, ensuring fair process (what is often referred to as "natural justice") as well as substantive human rights (such as the freedom of expression, the right against discrimination, the right to privacy, and cultural diversity) is important.  Given this, and the narrow framing of "free expression and the free flow of information" in Option 1, we believe Option 2 is preferable.

    Diversity

    We are glad that diversity is being recognized as an important principle.  As we noted during the open floor session at ICANN49: [We are] extremely concerned about the accountability of ICANN to the global community.  Due to various decisions made by the US government relating to ICANN's birth, ICANN has had a troubled history with legitimacy.  While it has managed to gain and retain the confidence of the technical community, it still lacks political legitimacy due to its history.  The NTIA's decision has presented us an opportunity to correct this.

    However, ICANN can't hope to do so without going beyond the current ICANN community, which while nominally being 'multistakeholder' and open to all, grossly under-represents those parts of the world that aren't North America and Western Europe.

    Of the 1010 ICANN-accredited registrars, 624 are from the United States, and 7 from the 54 countries of Africa.  In a session yesterday, a large number of the policies that favour entrenched incumbents from richer countries were discussed.  But without adequate representation from poorer countries, and adequate representation from the rest of the world's Internet population, there is no hope of changing these policies.

    This is true not just of the business sector, but of all the 'stakeholders' that are part of global Internet policymaking, whether they follow the ICANN multistakeholder model or another.  A look at the board members of the Internet Architecture Board, for instance, would reveal how skewed the technical community can be, whether in terms of geographic or gender diversity.

    Without greater diversity within the global Internet policymaking communities, there is no hope of equity, respect for human rights — civil, political, cultural, social and economic — and democratic functioning, no matter how 'open' the processes seem to be, and no hope of ICANN accountability either.

    Meanwhile, there are those who are concerned that diversity should not prevail over skill and experience.  Those who have the greatest skill and experience will be those who are insiders in the ICANN system.  To believe that being an insider in the ICANN system ought to be privileged over diversity is wrong.  A call for diversity isn't just political correctness.  It is essential for legitimacy of ICANN as a globally-representative body, and not just one where the developed world (primarily US-based persons) makes policies for the whole globe, which is what it has so far been.  Of course, this cannot be corrected overnight, but it is crucial that this be a central focus of the accountability initiative.

    Jurisdiction, Membership Models and Voting Rights

    The Sole-Member Community Mechanism (SMCM) that has been proposed seems in large part the best manner of dealing with accountability issues available under Californian law relating to public benefit corporations, and is the lynchpin of the whole accountability mechanism under Work Stream 1.

    However, the jurisdictional analysis laid down in 11.3 will only be completed post-transition, as part of Work Stream 2. Thus the SMCM may not necessarily be the best model under a different legal jurisdiction. It would be useful to discuss the dependency between these more clearly.  In this vein, it is essential that Article XVIII Section 1 not be designated a fundamental bylaw.  Further, it would be useful to add that for some limited aspects of the transition (such as IANA functioning), ICANN should seek to enter into a host country agreement to provide legal immunity, thus providing a qualification to para 125 ("ICANN accountability requires compliance with applicable legislation, in jurisdictions where it operates.") since the IANA functions operator ought not to be forced by a country to refuse requests made by, for example, North Korea.

    It should also be noted that accountability needs independence, which may be of two kinds: independence of financial source, and independence of appointment.  From what one could gather from the CCWG proposal, the Independent Review Panel will be funded by the budget the ICANN Board prepares, while the appointment process is still unclear.

    One of the most important accountability mechanisms with regard to the IANA functions is that of changing the IANA Functions Operator.  As per the CWG Stewardship's current proposal, the "Post-Transition IANA" won't be an entity that is independent of ICANN.  If the PTI's governance is permanently made part of ICANN's fundamental bylaws (as an affiliate controlled by ICANN), how is it proposed that the IFO be moved from PTI to some other entity if the IANA Functions Review Team so decides? Additionally, for such an important function, the composition of the IFRT should not be left unspecified.

    While it is welcome that a separation is proposed between the IANA budget and budget for rest of ICANN's functioning, the current discussion around budgets seems to be based on the assumption that all IANA functions will be funded by ICANN, whereas if the IANA functions are separated, each community might fund it separately.  That provides two levels of insulation to IANA functions operator(s): separate sources of operational revenue, as well as separate budgets within ICANN.

    It should be noted that there have been some responses that express concern about the shifting of existing power structures within ICANN through some of the proposed alternative voting allocations in the SMCM. However, rather than present arguments as to why these shifts would be beneficial or harmful for ICANN's overall accountability, these responses seem to assume that shift from the current power structures are harmful.  This is an unfounded assumption and cannot be a valid reason, nor can speculation of how the United States Congress will behave be a valid reason for rejecting an otherwise valid proposal.  If there are harms, they ought to be clearly articulated: shifts from the status quo and fear of the US Congress aren't valid harms.  Thus, while it is important to consider how different voting rights models might change the status quo while arriving at any judgments, that cannot be the sole criterion for judgment of its merits.  Further, as the French government notes:

    [T]he French Government still considers that linking Stress Test 18 to a risk of capture of ICANN by governments and NTIA’s requirement that no “government-led or intergovernmental organization solution would be acceptable”, makes no sense. . . . Logically, the risk of capture of ICANN by governments in the future is as low as it is now and in any case, it cannot lead to a “government-led or intergovernmental organization solution”.

    While dealing with the question of relative voting proportions, the community must remember that not all parts of the world are as developed with regard to the domain name industry and civil society as North America, Western Europe, and other developed regions, and thus may not find adequate representation via the SOs.  In many parts of the world, civil society organizations — especially those focussed on Internet governance and domain name policies — are non-existent.  Thus a system that privileges the SOs to the exclusion of other components of a multistakeholder governance model would not be representative or diverse.  A multistakeholder model cannot disproportionately represent business interests over all other interests.

    In this regard, the comments of former ICANN Chairperson, Rod Beckstrom, at ICANN43 ought to be recalled:

    ICANN must be able to act for the public good while placing commercial and financial interests in the appropriate context . . . How can it do this if all top leadership is from the very domain name industry it is supposed to coordinate independently?

    As Kieren McCarthy points out about ICANN:

    The Board does have too many conflicted members
    The NomCom is full of conflicts
    There are not enough independent voices within the organization

    Reforms in these ought to be as crucial to accountability as the membership model.

    The current mechanisms for ensuring transparency, such as the DIDP process, are wholly inadequate.  We have summarized our experience with the DIDP process, and how often we were denied information on baseless grounds in this table.

    Predictive Policing: What is it, How it works, and its Legal Implications

    by Rohan George — last modified Nov 24, 2015 04:31 PM
    This article reviews literature surrounding big data and predictive policing and provides an analysis of the legal implications of using predictive policing techniques in the Indian context.

    Introduction

    For the longest time, humans have been obsessed with prediction. Perhaps the most well-known oracle in history, Pythia, the infallible Oracle of Delphi, was said to predict future events in hysterical outbursts on the seventh day of the month, inspired by the god Apollo himself. This fascination with informing ourselves about future events has hardly subsided. What has changed, however, are the methods we employ to do so. The development of Big data technologies, for one, has found radical applications in many parts of life as we know it, including enhancing our ability to make accurate predictions about the future.

    One notable application of Big data to prediction caters to another basic need since the dawn of human civilisation: the need to protect our communities and cities. The word 'police' itself originates from the Greek word 'polis', which means city. The melding of these two concepts, prediction and policing, has come together in the practice of predictive policing, which is the application of computer modelling to historical crime data and metadata to predict future criminal activity[1]. In the subsequent sections, I will attempt an introduction to predictive policing and explain some of the main methods within its domain. Because of the disruptive nature of these technologies, it will also be prudent to expand on the implications predictive technologies have for justice, privacy protections and protections against discrimination, among others.

    In introducing the concept of predictive policing, my first step is to give a short explanation of current predictive analytics techniques, because these are the techniques which are applied in a law enforcement context as predictive policing.

    What is predictive analytics

    Facilitated by the availability of big data, predictive analytics uses algorithms to recognise data patterns and predict future outcomes[2]. Predictive analytics encompasses data mining, predictive modeling, machine learning, and forecasting[3]. Predictive analytics also relies heavily on machine learning and artificial intelligence approaches[4]. The aim of such analysis is to identify relationships among variables that may not be immediately apparent using hypothesis-driven methods.[5] In the mainstream media, one of the most infamous stories about the use of predictive analysis comes from the USA, regarding the department store Target and its data analytics practices[6]. Target mined data from the purchasing patterns of people who signed onto its baby registry. From this it was able to predict approximately when customers might be due and target advertisements accordingly. In the noted story, Target was so successful that it predicted a young customer's pregnancy before her father knew she was pregnant.[7]
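    The kind of pattern-based scoring the Target story describes can be sketched in miniature. The item names and weights below are entirely invented for illustration (Target's actual model is not public); the point is only the shape of the technique: items whose purchase historically correlates with an outcome each contribute to a cumulative score, and a high score triggers a prediction.

```python
# Toy sketch of purchase-pattern scoring (invented weights, not Target's model).
# Each item carries a weight reflecting how strongly its purchase has
# historically correlated with the outcome being predicted.
signal_weights = {
    "unscented lotion": 0.25,
    "prenatal vitamins": 0.75,
    "cotton balls": 0.125,
    "beer": -0.5,
}

def outcome_score(basket):
    """Sum the weights of the items in a shopping basket; unknown items score 0."""
    return sum(signal_weights.get(item, 0.0) for item in basket)

# A basket crossing some chosen threshold would trigger targeted advertising.
print(outcome_score(["unscented lotion", "prenatal vitamins"]))  # prints 1.0
```

    In practice the weights themselves would be learned from historical purchase data rather than hand-set, but the scoring step works the same way.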

    Examples of predictive analytics

    • Predicting the success of a movie based on its online ratings[8]
    • Many universities, sometimes in partnership with other firms, use predictive analytics to provide course recommendations to students, track student performance, personalise curricula for individual students and foster networking between students.[9]
    • Predictive Analysis of Corporate Bond Indices Returns[10]

    Relationship between predictive analytics and predictive policing

    The same techniques used in many of the predictive methods mentioned above find application in some predictive policing methods. However, two important points need to be raised:

    First, predictive analytics is actually a subset of predictive policing. While the steps in creating a predictive model (defining a target variable, exposing the model to training data, selecting appropriate features and finally running the predictive analysis [11]) may be the same in a policing context, there are other methods that can be used to predict crime but which do not rely on data mining. These techniques may instead combine other approaches, such as some of those detailed below, with historical crime data to generate predictions.

    Second, in her article "Policing by Numbers: Big Data and the Fourth Amendment"[12], Joh categorises three main applications of big data in policing: predictive policing, domain awareness systems and genetic data banks. Genetic data banks refer to large databases of DNA collected as part of the justice system. Issues arise when the DNA collected is repurposed to conduct familial searches, instead of being used to corroborate identity; familial searches may have disproportionate impacts on minority races. Domain awareness systems use various computer software and other digital surveillance tools, such as Geographical Information Systems [13] or more illicit ones such as Black Rooms[14], to "help police create a software-enhanced picture of the present, using thousands of data points from multiple sources within a city" [15]. I believe Joh was very accurate in separating predictive policing from domain awareness systems, especially when it comes to analysing the implications of the various applications of big data in policing.

    In such an analysis of the implications of using predictive policing methods, the issues surrounding predictive technologies often get conflated with larger issues about the application of big data to law enforcement. That opens the debate up to questions about overly intrusive evidence gathering and mass surveillance systems, which, though used along with predictive technology, are not themselves predictive in nature. In this article, I aim to concentrate on the specific implications that arise due to predictive methods.

    One important point regarding the impact of predictive policing is how the insights that predictive policing methods offer are used. There is much support for the idea that predictive policing does not replace traditional policing methods, but augments them. A RAND Corporation report on predictive policing specifically cites one myth about the practice: that "the computer will do everything for you"[16]. In reality, police officers still need to act on the recommendations provided by the technologies.

    What is Predictive policing?

    Predictive policing is the "application of analytical techniques-particularly quantitative techniques-to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions".[17] It is important to note that the use of data and statistics to inform policing is not new. Indeed, even twenty years ago, before the deluge of big data we have today, law enforcement agencies such as the New York Police Department (NYPD) were already using crime data in a major way. In order to keep track of crime trends, the NYPD used the software CompStat[18] to map "crime statistics along with other indicators of problems, such as the locations of crime victims and gun arrests"[19]. Senior officers used the information provided by CompStat to monitor crime trends on a daily basis, and such monitoring became an instrumental way to track the performance of police agencies[20]. CompStat has since seen application in many other jurisdictions [21].

    But what is new is the amount of data available for collection, as well as the ease with which organisations can analyse and draw insightful results from that data. Specifically, new technologies allow for far more rigorous interrogation of data and wide-ranging applications, including adding greater accuracy to the prediction of future incidence of crime.

    Predictive Policing methods

    Some methods of predictive policing involve the application of known standard statistical methods, while others involve modifying these standard techniques. Predictive techniques that forecast future criminal activities can be framed around six analytic categories. These categories overlap, in the sense that multiple techniques are combined to create actual predictive policing software, and similar theories of criminology undergird many of the methods, but categorising them in this way helps clarify the concept of predictive policing. The basis for the categorisation below comes from a RAND Corporation report entitled 'Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations' [22], which is a comprehensive and detailed contribution to scholarship in this nascent area.

    Hot spot analysis: Methods involving hot spot analysis attempt to "predict areas of increased crime risk based on historical crime data"[23]. The premise behind such methods lies in the adage that "crime tends to be lumpy" [24]. Hot spot analysis seeks to map out previous incidences of crime in order to anticipate where future crime may occur.
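As a minimal sketch of the idea (the coordinates below are entirely hypothetical), hot spot analysis can be as simple as binning historical incident locations into grid cells and ranking the cells by count:

```python
from collections import Counter

def hot_spots(incidents, cell_size=100, top_n=3):
    """Bin (x, y) incident coordinates (in metres) into square grid
    cells and return the cells with the most historical incidents."""
    counts = Counter((x // cell_size, y // cell_size) for x, y in incidents)
    return counts.most_common(top_n)

# Hypothetical historical incident coordinates for one precinct
incidents = [(120, 340), (130, 350), (125, 345), (900, 200), (110, 330)]
print(hot_spots(incidents, top_n=1))  # → [((1, 3), 4)]
```

Real systems weight recent incidents more heavily and typically use kernel density estimation rather than raw counts, but the underlying idea of ranking locations by historical crime density is the same.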

    Regression methods: A regression aims to find relationships between independent variables (factors that may influence criminal activity) and a dependent variable that one aims to predict, such as the incidence of crime. Hence, this method tracks more variables than just crime history.
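To illustrate with made-up numbers (not real crime statistics), a one-variable least-squares regression might relate a hypothetical "leading indicator" count to later violent crime in an area:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: ys ≈ slope*xs + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical data: minor offences recorded (x) vs violent crimes a year later (y)
minor = [2, 5, 7, 9, 12]
violent = [1, 3, 4, 6, 8]
a, b = fit_line(minor, violent)
print(round(a * 10 + b, 1))  # predicted violent crimes where 10 minor offences occurred → 6.5
```

In practice such models use many independent variables (demographic, economic, environmental), but the fitting principle is the same.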

    Data mining techniques: Data mining attempts to recognise patterns in data and use them to make predictions about the future. The data mining methods used in policing vary chiefly in the algorithms used to mine the data, which depend on the nature of the data the predictive model was trained on and will interrogate in the future. Two broad categories of algorithms commonly used are clustering algorithms and classification algorithms:

    · Clustering algorithms "form a class of data mining approaches that seek to group data into clusters with similar attributes" [25]. One example of clustering algorithms is spatial clustering algorithms, which use geospatial crime incident data to predict future hot spots for crime[26].

    · Classification algorithms "seek to establish rules assigning a class or label to events"[27]. These algorithms use training data sets "to learn the patterns that determine the class of an observation"[28]. The patterns identified by the algorithm are then applied to future data, and where applicable, the algorithm will recognise similar patterns in that data. This can be used, for example, to make predictions about future criminal activity.
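As a toy illustration of the classification idea (the features and labels below are invented, not drawn from any real system), a one-nearest-neighbour classifier assigns a new incident the label of the most similar past incident:

```python
def classify(point, training):
    """1-nearest-neighbour: return the label of the training example
    closest to `point` by squared Euclidean distance."""
    closest = min(training,
                  key=lambda ex: sum((a - b) ** 2 for a, b in zip(point, ex[0])))
    return closest[1]

# Hypothetical training set: (hour of day, km from highway) -> incident type
training = [((23, 0.5), "vehicle theft"), ((22, 0.8), "vehicle theft"),
            ((14, 3.0), "shoplifting"), ((15, 2.5), "shoplifting")]
print(classify((21, 1.0), training))  # → vehicle theft
```

Production systems use far richer feature sets and more sophisticated learners, but the pattern-matching logic, learn classes from labelled history and apply them to new observations, is the one described above.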

    Near-repeat methods: Near-repeat methods work off the assumption that future crimes will take place close in time and location to current crimes. Hence, it could be postulated that areas of high crime will experience more crime in the near future[29]. This involves the use of a 'self-exciting' algorithm, very similar to the algorithms used to model earthquake aftershocks [30]. The premise undergirding such methods is very similar to that of hot spot analysis.
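A crude sketch of the 'self-exciting' idea (hypothetical dates and an arbitrary decay rate, far simpler than the actual aftershock-style models) makes each past crime contribute a risk that decays exponentially with elapsed time, so recent crimes dominate:

```python
import math

def near_repeat_risk(past_days, today, decay=0.2):
    """Each past crime (given as a day number) contributes risk that
    decays exponentially as time passes since the crime."""
    return sum(math.exp(-decay * (today - d)) for d in past_days if d <= today)

# Hypothetical burglary dates (day numbers) in one neighbourhood
print(round(near_repeat_risk([1, 3, 10, 12], today=14), 2))  # → 1.3
```

A full self-exciting model would also account for spatial proximity and a background rate; the decay term above is only the temporal kernel.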

    Spatiotemporal analysis: Spatiotemporal analysis uses the "environmental and temporal features of the crime location" [31] as the basis for predicting future crime. By combining the spatiotemporal features of the crime area with crime incident data, police can use the resultant information to predict the location and time of future crimes. Examples of factors that may be considered include the timing of crimes, weather, distance from highways, time from payday and many more.

    Risk terrain analysis: Risk terrain analysis considers other factors that are useful in predicting crime, such as "the social, physical, and behavioural factors that make certain areas more likely to be affected by crime"[32].

    The various methods listed above are used, often together, to predict where and when a crime may take place, or even its potential victims. The unifying thread relating these methods is their dependence on historical crime data.

    Examples of predictive policing:

    Most uses of predictive policing that have been studied and reviewed in scholarly work come from the USA, though I will detail one case study from Derbyshire, UK. Below is a collation of deployments that put the methods described above into practice.

    Hot spot analysis in Sacramento: In February 2011, the Sacramento Police Department began using hot spot analysis, along with research on the optimal patrol time needed to act as a sufficient deterrent, to inform how it patrols high-risk areas. This policy was aimed at preventing serious crimes by patrolling the predicted hot spots. In places where there was such patrolling, serious crimes fell by a quarter, with no significant increase in such crimes in surrounding areas[33].

    Data mining and hot spot mapping in Derbyshire, UK: The Safer Derbyshire Partnership, a group of law enforcement agencies and municipal authorities, sought to identify juvenile crime hotspots[34]. It used MapInfo software to combine "multiple discrete data sets to create detailed maps and visualisations of criminal activity, including temporal and spatial hotspots" [35]. This information informed law enforcement about how to optimally deploy its resources.

    Regression models in Pittsburgh: Researchers used reports from the Pittsburgh Bureau of Police about violent crimes and "leading indicator" [36] crimes, crimes that were relatively minor but which could be a sign of potential future violent offences. The researchers analysed areas with violent crimes, which served as the dependent variable, to test whether violent crimes in certain areas could be predicted from the leading indicator data. Of the 93 significant violent crime areas that were studied, 19 were successfully predicted by the leading indicator data.[37]

    Risk terrain modelling analysis in Morris County, New Jersey: Police in Morris County used risk terrain analysis to tackle violent crimes and burglaries. They considered five inputs in their model: "past burglaries, the address of individuals recently arrested for property crimes, proximity to major highways, the geographic concentration of young men and the location of apartment complexes and hotels." [38] Morris County law enforcement officials linked the significant reductions in violent and property crime to their use of risk terrain modelling[39].

    Near-repeat and hot spot analysis by the Santa Cruz Police Department: The department uses PredPol software, which applies Mohler's algorithm [40] to a database with five years' worth of crime data to assess the likelihood of future crime occurring in geographic areas within the city. Before going on shift, officers receive information identifying the 15 areas with the highest probability of crime[41]. The initiative has been cited as being very successful at reducing burglaries, and has also been used in Los Angeles and Richmond, Virginia[42].

    Data mining and spatiotemporal analysis to predict future criminal activities in Chicago: Officers in the Chicago Police Department made visits to people their software predicted were likely to be involved in violent crimes[43], guided by an algorithm-generated "Heat List"[44]. The inputs used in the predictions include some types of arrest records, gun ownership, social networks[45] (police analysis of social networking is also a rising trend in predictive policing[46]) and, more generally, the type of people one is acquainted with [47], among others, but the full list of factors is not public. Police officers visit the homes of people on the list (or sometimes mail them letters) to offer social services or deliver warnings about the consequences of offending. Based in part on the information provided by the algorithm, officers may provide people on the Heat List with information about vocational training programs or warnings about how federal law provides harsher punishments for reoffending[48].

    Predictive policing in India

    In this section, I map out some of the developments in the field of predictive policing within India. On the whole, predictive policing is still very new in India, with Jharkhand being the only state that appears to already have concrete plans in place to introduce predictive policing.

    Jharkhand Police

    The Jharkhand police began developing their IT infrastructure, such as a Geographic Information System (GIS) and a server room, when they received funding of Rs 18.5 crore from the Ministry of Home Affairs[49]. The Open Group on E-governance (OGE), founded as a collaboration between the Jharkhand Police and the National Informatics Centre[50], is now a multi-disciplinary group which takes on different IT-related projects[51]. With regard to predictive policing, some members of OGE began developing data mining software in 2013 that will scan digitised online records. The emerging crime trends "can be a building block in the predictive policing project that the state police want to try."[52]

    The Jharkhand Police was also reported in 2012 to be in the final stages of forming a partnership with IIM-Ranchi[53]. The Jharkhand police reportedly aimed to tap into IIM's advanced business analytics skills [54], skills that can be very useful in a predictive policing context. Mr Pradhan suggested that "predictive policing was based on intelligence-based patrol and rapid response"[55] and that it could go a long way towards dealing with the threat of Naxalism in Jharkhand[56].

    However, in Jharkhand, the emphasis appears to be on developing a massive domain awareness system, collecting data and creating new ways to present that data to officers on the ground, rather than on architecting and using predictive policing software. For example, the Jharkhand police now have in place "a Naxal Information System, Crime Criminal Information System (to be integrated with the CCTNS) and a GIS that supplies customised maps that are vital to operations against Maoist groups"[57]. The Jharkhand police's "Crime Analytics Dashboard" [58] shows the incidence of crime by type and location and presents it in an accessible portal, providing up-to-date information that undoubtedly raises the situational awareness of officers. Arguably, the domain awareness systems taking shape in Jharkhand will pave the way for predictive policing methods to be applied in the future. These systems and hot spot maps seem to be the start of a new age of policing in Jharkhand.

    Predictive Policing Research

    One promising idea for predictive policing in India comes from research conducted by Lavanya Gupta and others entitled "Predicting Crime Rates for Predictive Policing"[59], which was a submission for the Gandhian Young Technological Innovation Award. The research uses regression modelling to predict future crime rates. Drawing from First Information Reports (FIRs) of violent crimes (murder, rape, kidnapping etc.) from the Chandigarh Police, the team attempted "to extrapolate annual crime rate trends developed through time series models. This approach also involves correlating past crime trends with factors that will influence the future scope of crime, in particular demographic and macro-economic variables" [60]. The researchers used early crime data as the training data for their model, which, after testing, achieved an accuracy of around 88.2%.[61] On the face of it, ideas like this could be the starting point for the introduction of predictive policing into India.

    Nor do the rest of India's law enforcement bodies appear to be lagging behind. At the 44th All India Police Science Congress, held in Gandhinagar, Gujarat in March this year, one of the themes for discussion was the "Role of Preventive Forensics and latest developments in Voice Identification, Tele-forensics and Cyber Forensics"[62]. Mr A K Singh (Additional Director General of Police, Administration), the chairman of the event, also said in an interview that a round table of DGs (Directors General of Police) would be held at the conference to discuss predictive policing[63]. Perhaps predictive policing in India may not be that far from reality.

    CCTNS and the building blocks of Predictive policing

    The Ministry of Home Affairs conceived of the Crime and Criminals Tracking and Network System (CCTNS) as part of its national e-Governance plans. According to the website of the National Crime Records Bureau (NCRB), CCTNS aims to develop "a nationwide networked infrastructure for evolution of IT-enabled state-of-the-art tracking system around 'investigation of crime and detection of criminals' in real time" [64].

    Plans for predictive policing seem to be in the works, but the first steps needed across India's police forces involve digitising data collection by the police, as well as connecting law enforcement agencies. The NCRB's website describes the current possibility of exchanging information between neighbouring police stations, districts or states as "next to impossible"[65]. The aim of CCTNS is precisely to address this gap and to integrate and connect the segregated law enforcement arms of the state in India, which would be a foundational step in any initiative to apply predictive methods.

    What are the implications of using predictive policing? Lessons from the USA

    Despite the moves by law enforcement agencies to adopt predictive policing, the implications of predictive policing methods are far from clear. This section examines these implications for the administration of justice and the use of predictive methods in law, as well as their impact on individual privacy. It frames the existing debates surrounding these issues and aims to apply the underlying principles to an Indian context.

    Justice, Privacy & IV Amendment

    Two key concerns about how predictive policing methods may be used by law enforcement relate to how insights from predictive policing methods are acted upon and how courts interpret them. In the USA, this issue may find its place within the scope of IV Amendment jurisprudence. The IV Amendment states that all citizens are "secure from unreasonable searches and seizures of property by the government"[66]. In this sense, the IV Amendment forms the basis for search and surveillance law in the USA.

    A central aspect of IV Amendment jurisprudence is drawn from Katz v. United States. In Katz, the FBI attached a microphone to the outside of a public phone booth to record the conversations of Charles Katz, who was making phone calls related to illegal gambling. The court ruled that such actions constituted a search within the meaning of the IV Amendment. The ruling affirmed constitutional protection of all areas where someone has a "reasonable expectation of privacy"[67].

    Later cases have provided useful tests for situations where government surveillance tactics may or may not be lawful, depending on whether they violate one's reasonable expectation of privacy. For example, in United States v. Knotts, the court held that "police use of an electronic beeper to follow a suspect surreptitiously did not constitute a Fourth Amendment search"[68]. In fact, some argue that the Supreme Court's reasoning in such cases suggests that "any 'scientific enhancement' of the senses used by the police to watch activity falls outside of the Fourth Amendment's protections if the activity takes place in public"[69]. This reasoning is based on the third party doctrine, which holds that "if you voluntarily provide information to a third party, the IV Amendment does not preclude the government from accessing it without a warrant"[70]. The clearest exposition of this reasoning was in Smith v. Maryland, where the presiding judges noted that "this Court consistently has held that a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties"[71].

    However, the third party doctrine has seen some challenge in recent times. In United States v. Jones, it was ruled that the government's warrantless GPS tracking of Jones's vehicle 24 hours a day for 28 days violated his Fourth Amendment rights[72]. Though the majority ruling was that warrantless GPS tracking constituted a search, it was in a concurring opinion written by Justice Sonia Sotomayor that such intrusive warrantless surveillance was said to infringe one's reasonable expectation of privacy. As Newell reflected on Sotomayor's opinion,

    "Justice Sotomayor stated that the time had come for Fourth Amendment jurisprudence to discard the premise that legitimate expectations of privacy could only be found in situations of near or complete secrecy. Sotomayor argued that people should be able to maintain reasonable expectations of privacy in some information voluntarily disclosed to third parties"[73].

    She said that the court's current reasoning on what constitutes reasonable expectations of privacy in information disclosed to third parties, such as email or phone records or even purchase histories, is "ill-suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks"[74].

    Predictive policing vs. Mass surveillance and Domain Awareness Systems

    However, there is an important distinction to be drawn between these cases and evidence from predictive policing, which has to do with the differing nature of the evidence collection. Arguably, what we see from Jones and other cases is that the use of mass surveillance and domain awareness systems (drawing on Joh's categorisation, mentioned above, of domain awareness systems as distinct from predictive policing) could potentially encroach on one's reasonable expectation of privacy. However, I think that predictive policing and the possible implications for justice associated with it, its predictive harms, are quite distinct from what has been heard by courts thus far.

    The reason the risks of predictive harms are distinct from the privacy harms originating from information gathering relates to the nature of predictive policing technologies and how they are used. It is highly unlikely that the evidence submitted by the state to indict an offender will be mainly predictive in nature. For example, would it be possible to convict an accused person solely on the premise that he was predicted to be highly likely to commit a crime, and that subsequently he did? The legal standard of proving guilt beyond a reasonable doubt [75] can hardly be met solely on predictive evidence, for a multitude of reasons. Predictive policing methods can, at most, be said to inform police about the risk of someone committing a crime or of crime happening at a certain location, as demonstrated above.

    Predictive policing and Criminal Procedure

    It may therefore pay to analyse how predictive policing may be used across the various processes within the criminal justice system. In fact, in an analysis of the various stages of criminal procedure, from opening an investigation to gathering evidence, followed by arrest, trial, conviction and sentencing, we see that as the individual becomes subject to more serious incursions or sanctions by the state, a higher standard of certainty about wrongdoing and a higher burden of proof are required to legitimise that particular action.

    Hence, at the more advanced stages of the criminal justice process, such as seeking arrest warrants or trial, it is very unlikely that predictive policing on its own can have a tangible impact, because predictive evidence is probability based: it aims to calculate the risk of future crime occurring based on statistical analysis of past crime data[76]. While extremely useful, probabilities on their own will not come remotely close to meeting the legal standard of proving 'guilt beyond reasonable doubt'. It is at the earlier stages of the criminal justice process that predictive policing might see more widespread application, in terms of applying for search warrants and searching suspicious people while on patrol.

    In fact, in the law enforcement context, prediction as a concept is not new to justice. Both courts and law enforcement officials already make predictions about the future likelihood of crimes. In the case of issuing warrants, the IV Amendment provides that law enforcement officials must show that the potential search is based "upon probable cause"[77] in order for a judge to grant a warrant. In Brinegar v. United States, probable cause was defined as existing "where the facts and circumstances within the officers' knowledge, and of which they have reasonably trustworthy information, are sufficient in themselves to warrant a belief by a man of reasonable caution that a crime is being committed" [78]. Again, this legal standard seems too high for predictive evidence to meet.

    However, the police also have an important role to play in preventing crimes by looking out for potential crimes while on patrol or while conducting surveillance. When the police stop a civilian on the road to search him, reasonable suspicion must be established. This standard of reasonable suspicion was defined most clearly in Terry v. Ohio, which required police to "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion"[79]. Therefore, "reasonable suspicion that 'criminal activity may be afoot' is at base a prediction that the facts and circumstances warrant the reasonable prediction that a crime is occurring or will occur"[80]. Despite the assertion that "there are as of yet no reported cases on predictive policing in the Fourth Amendment context"[81], examining the impact of predictive policing on the doctrine of reasonable suspicion could be very instructive in understanding the implications for justice and privacy [82].

    Predictive Policing and Reasonable Suspicion

    Ferguson's insightful contribution to this area of scholarship involves the identification of existing areas where prediction already takes place in policing, and analogising them into a predictive policing context[83]. These three areas are: responding to tips, profiling, and high crime areas (hot spots).

    Tips

    Tips are pieces of information shared with the police by members of the public. Tips, either anonymous or from known police informants, often predict the future actions of certain people and require the police to act on that information. The precedent for understanding the role of tips in probable cause comes from Illinois v. Gates[84]. It was held that "an informant's 'veracity,' 'reliability,' and 'basis of knowledge'-remain 'highly relevant in determining the value'"[85] of the said tip. Anonymous tips need to be detailed, timely and individualised enough[86] to justify reasonable suspicion [87]. And when an informant is known to be reliable, that prior reliability may justify reasonable suspicion even where the basis of knowledge is weak[88].

    Ferguson argues that whereas predictive policing cannot provide individualised tips, it is possible to consider reliable tips about certain areas as a parallel to predictive policing[89]. And since the courts had shown a preference for reliability even in the face of a weak basis in knowledge, it is possible to see the reasonable suspicion standard change in its application[90]. It also implies that IV protections may be different in places where crime is predicted to occur [91].

    Profiling

    Despite the negative connotations and controversial overtones of the word, profiling is already a method commonly used by law enforcement. For example, after a crime has been committed and general features of the suspect have been identified by witnesses, police often stop civilians who fit this description. Another example of profiling is common in combating drug trafficking[92], where agents keep track of travellers at airports to watch for suspicious behaviour. Based on their experience of common traits which distinguish drug traffickers from regular travellers (a profile), agents may search travellers who fit the profile[93]. In United States v. Sokolow[94], the court "recognized that a drug courier profile is not an irrelevant or inappropriate consideration that, taken in the totality of circumstances, can be considered in a reasonable suspicion determination" [95]. Similar lines of thinking could be employed in observing people exchanging small amounts of money in an area known for high levels of drug activity, conceiving predictive actions as a form of profile[96].

    It is valid to consider predictive policing as a form of profiling[97], but Ferguson argues that the predictive policing context means this 'new form' of profiling could change IV Amendment analysis. The premise behind such an argument lies in the fact that a prediction made by an algorithm about a potentially high risk of crime in a certain area could be taken in conjunction with observations of ordinarily innocuous events. Read in the totality of circumstances, these two threads may justify individual reasonable suspicion [98]. For example, a man looking into cars at a parking lot may not by itself justify reasonable suspicion, but taken together with a prediction of a high risk of car theft in that locality, it may well do so. It is this impact of predictive policing, influencing the analysis of reasonable suspicion in the totality of circumstances, that may represent new implications for courts looking at IV Amendment protections.

    Profiling, Predictive Policing and Discrimination

    The above sections have already made the point that law enforcement agencies utilise profiling methods in their operations. Also, as the sections on how predictive analytics works and on methods of predictive policing make clear, predictive policing incorporates the development of profiles for predicting future criminal activity. Concerns that predictive models generate potentially discriminatory predictions are therefore very serious, and need addressing. Potential discrimination may be either overt, though that is far less likely, or unintended. A valuable case study that sheds light on such discriminatory data mining practices can be found in US labour law, where it was shown how predictive models could be discriminatory at various stages, from conceptualising the model and training it with training data to eventually selecting inappropriate features to search for [99]. It is also possible for data scientists to (intentionally or not) use proxies for identifiers like race, income level, health condition and religion. Barocas and Selbst argue that "the current distribution of relevant attributes-attributes that can and should be taken into consideration in apportioning opportunities fairly-are demonstrably correlated with sensitive attributes" [100]. Hence, what may result is unintended discrimination, as the subjective and implicit biases of predictive models are reflected in predicted decisions, or discrimination that is not even accounted for in the first place. While I have not found any case law where courts have examined such situations in a criminal context, at the very least law enforcement agencies need to be aware of these possibilities and guard against any form of discriminatory profiling.

    However, Ferguson argues that "the precision of the technology may in fact provide more protection for citizens in broadly defined high crime areas"[101]. This is because the label of a 'high-crime area' may no longer apply to large areas but instead to very specific sites of criminal activity. Previously labelled areas of high crime, such as entire neighbourhoods, may no longer be scrutinised in such detail. Instead, police may be more precise in locating and policing areas of high crime, such as an individual street corner or a particular block of flats rather than an entire locality.

    Hot Spots

    Courts have also considered the existence of notoriously 'high-crime' areas as part of evaluating reasonable suspicion[102]. This was seen in Illinois v. Wardlow[103], where the "high crime nature of an area can be considered in evaluating the officer's objective suspicion"[104]. Many cases have since applied this reasoning without scrutinising the predictive value of such a label. In fact, Ferguson asserts that such labelling has questionable evidential value[105]. He uses the facts of the Wardlow case itself to challenge the 'high crime area' factor, citing the reasoning of one of the judges in the case:

    "While the area in question-Chicago's District 11-was a low-income area known for violent crimes, how that information factored into a predictive judgment about a man holding a bag in the afternoon is not immediately clear."[106]

    Especially because "the most basic models of predictive policing rely on past crimes"[107], predictive policing methods like hot spot or spatiotemporal analysis and risk terrain modelling may help to gather data and build models about high crime areas. Furthermore, the mathematical rigour of the predictive modelling could help clarify the term 'high crime area'. As Ferguson argues, "courts may no longer need to rely on the generalized high crime area terminology when more particularized and more relevant information is available"[108].
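    The "most basic" past-crimes model mentioned above can be sketched in a few lines (an illustrative toy, not any department's actual software): bucket historical incident coordinates into grid cells and rank cells by count, yielding far smaller 'high crime' candidates than a whole neighbourhood.

```python
# A minimal hot-spot sketch: rank grid cells by historical incident counts.
from collections import Counter

def hot_spots(incidents, cell_size=1.0, top=2):
    """Bucket (x, y) incident coordinates into square grid cells and return
    the cells with the most past incidents - the naive 'high crime' candidates."""
    cells = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return [cell for cell, _ in cells.most_common(top)]

# Toy data: four incidents cluster in one cell, one incident lies elsewhere.
past = [(0.2, 0.3), (0.7, 0.1), (0.5, 0.9), (3.1, 3.4), (0.4, 0.2)]
print(hot_spots(past))
```

Shrinking `cell_size` is what produces Ferguson's "more particularized" labels: the flagged unit becomes a street corner-sized cell rather than an entire locality, though at the cost of sparser, noisier counts per cell.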

    Summary

    Ferguson synthesises four themes which encapsulate reasonable suspicion analysis:

    1. Predictive information is not enough on its own. Instead, it is "considered relevant to the totality of circumstances, but must be corroborated by direct police observation"[109].
    2. The prediction must also "be particularized to a person, a profile, or a place, in a way that directly connects the suspected crime to the suspected person, profile, or place"[110].
    3. It must also be detailed enough to distinguish a person or place from others not the focus of the prediction [111].
    4. Finally, predicted information becomes less valuable over time. Hence it must be acted on quickly or be lost [112].
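    Purely as an illustration, the four themes can be modelled as a conjunctive checklist (the field names and the 24-hour staleness threshold are invented for this sketch; no court has reduced the doctrine to fixed parameters):

```python
# Illustrative model of Ferguson's four themes as a checklist; thresholds
# are assumptions for the sketch, not doctrine.
from dataclasses import dataclass

@dataclass
class Prediction:
    particularized: bool   # theme 2: tied to a specific person, profile, or place
    distinguishing: bool   # theme 3: detailed enough to single out the target
    age_hours: float       # theme 4: predictions lose value over time

def supports_reasonable_suspicion(p: Prediction, corroborated: bool,
                                  max_age_hours: float = 24.0) -> bool:
    """A prediction alone never suffices (theme 1): it must be corroborated
    by direct observation and satisfy the other three themes."""
    return (corroborated and p.particularized and p.distinguishing
            and p.age_hours <= max_age_hours)
```

The conjunctive structure captures the summary's key point: the prediction is only one factor, and it contributes nothing without corroborating observation.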

    Conclusions from America

    The main conclusion to draw from the parallels between existing predictions in Fourth Amendment law and predictive policing is that "predictive policing will impact the reasonable suspicion calculus by becoming a factor within the totality of circumstances test"[113]. Naturally, this reaffirms the imperative for predictive techniques to collect reliable data[114] and analyse it transparently[115]. Moreover, since predictive methods become part of the reasonable suspicion calculus, courts need to be able to analyse the predictive process in order to evaluate the reliability of the data and the processes used. This has implications for how hearings may be conducted, for how legal adjudicators may need to be trained, and more. Another important concern is that the model of predictive information plus police corroboration or direct observation[116] may mean that in areas predicted to have a low risk of crime, the reasonable suspicion doctrine works against law enforcement, as less effort may be paid to patrolling those areas as a result of the predictions.

    Implications for India

    While there have been no Indian cases directly involving predictive policing methods, it would be prudent to examine the parts of Indian law that would inform the calculus on the lawfulness of using them. A useful lens is the observation that prediction is not in itself a novel concept in justice, and is already used by courts and law enforcement in numerous circumstances.

    Criminal Procedure in Non-Warrant Contexts

    The most logical way to begin analysing the legal implications of predictive policing in India is to identify parallels between American and Indian criminal procedure, specifically instances where 'reasonable suspicion' or some analogous requirement exists for justifying police searches.

    In non-warrant scenarios, the conditions for officers to conduct a warrantless search are found in Section 165 of the Criminal Procedure Code (Cr PC). For clarity, I have stated Section 165(1) in full:

    "Whenever an officer in charge of a police station or a police officer making an investigation has reasonable grounds for believing that anything necessary for the purposes of an investigation into any offence which he is authorised to investigate may be found in any place within the limits of the police station of which he is in charge, or to which he is attached, and that such thing cannot in his opinion be otherwise obtained without undue delay, such officer may, after recording in writing the grounds of his belief and specifying in such writing, so far as possible, the thing for which search is to be made, search, or cause search to be made, for such thing in any place within the limits of such station." [117]

    However, India differs from the USA in that its Cr PC also allows the police to arrest individuals without a warrant. As observed in Gulab Chand Upadhyaya vs State Of U.P., "Section 41 Cr PC gives the power to the police to arrest without warrant in cognizable offences, in cases enumerated in that Section. One such case is of receipt of a 'reasonable complaint' or 'credible information' or 'reasonable suspicion'"[118]. As above, I have stated Section 41(1) and subsection (a) in full:

    "41. When police may arrest without warrant.

    (1) Any police officer may without an order from a Magistrate and without a warrant, arrest any person-

    (a) who has been concerned in any cognizable offence, or against whom a reasonable complaint has been made, or credible information has been received, or a reasonable suspicion exists, of his having been so concerned"[119]

    In analysing the above sections of Indian criminal procedure from a predictive policing angle, one may find both similarities and differences between the proposed American approach and possible Indian approaches to interpreting or incorporating predictive policing evidence.

    Similarity of 'reasonable suspicion' requirement

    For one, the requirement of "reasonable grounds" or "reasonable suspicion" appears analogous to the American doctrine of reasonable suspicion. This suggests that the concepts used in forming reasonable suspicion, requiring the police to "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion"[120], may also be useful in the Indian context.

    One case which sheds light on an Indian interpretation of reasonable suspicion or grounds is State of Punjab v. Balbir Singh[121]. In that case, the court observes a requirement for "reason to believe that such an offence under Chapter IV has been committed and, therefore, an arrest or search was necessary as contemplated under these provisions"[122] in the context of Sections 41 and 42 of The Narcotic Drugs and Psychotropic Substances Act, 1985[123]. In examining the requirement of having "reason to believe", the court draws on Partap Singh (Dr) v. Director of Enforcement, Foreign Exchange Regulation Act[124], where the judge observed that "the expression 'reason to believe' is not synonymous with subjective satisfaction of the officer. The belief must be held in good faith; it cannot be merely a pretence..."[125]

    In light of this, the judge in Balbir Singh remarked that "whether there was such reason to believe and whether the officer empowered acted in a bona fide manner, depends upon the facts and circumstances of the case and will have a bearing in appreciation of the evidence"[126]. The standard considered in Balbir Singh and Partap Singh differs from the 'reasonable suspicion' or 'reasonable grounds' standard of Sections 41 and 165 of the Cr PC, but the discussion helps to inform our analysis of the idea of reasonableness in law enforcement actions. Of importance was the court's requirement of something more than mere "pretence", as well as a belief held in good faith. This suggests that American reasoning about reasonable suspicion may be at least somewhat similar to how Indian courts would view reasonable suspicion or reasonable grounds in the context of predictive policing, and that predictive evidence could likewise come to form part of the reasonable suspicion calculus in India.

    Difference in judicial treatment of illegally obtained evidence - Indian lack of exclusionary rules

    However, the apparent similarity in how police in America and India may act in non-warrant situations - guided by the idea of reasonable suspicion - is only a linguistic veneer. Despite the existence of conditions governing searches without a warrant, I believe that Indian courts currently provide far less protection against unlawful use of predictive technologies. The main premise behind this argument is that Indian courts refuse to exclude evidence obtained in breach of the conditions set out in the Cr PC. In place of evidentiary safeguards stands a line of cases in which courts routinely admit unlawfully or illegally obtained evidence. Without protection against unlawfully gathered evidence being treated as relevant by courts, any regulations on search, or conditions to be met before a search is lawful, become ineffective. Evidence may simply enter the courtroom through a backdoor.

    In the USA, this is, by and large, not the case. Although there are exceptions, exclusionary rules prevent the admission of evidence obtained in violation of the Constitution[127]. "The exclusionary rule applies to evidence gained from an unreasonable search or seizure in violation of the Fourth Amendment"[128]. Mapp v. Ohio[129] set the precedent for excluding unconstitutionally gathered evidence, ruling that "all evidence obtained by searches and seizures in violation of the Federal Constitution is inadmissible in a criminal trial in a state court"[130].

    Any such evidence which then leads law enforcement to new information may also be excluded, under the "fruit of the poisonous tree" doctrine[131] established in Silverthorne Lumber Co. v. United States[132]. The doctrine's metaphor suggests that if the source of certain evidence is tainted, so is the 'fruit', or anything derived from that unconstitutional evidence. One such application was Beck v. Ohio[133], where the court overturned a petitioner's conviction because the evidence used to convict him was obtained via an unlawful arrest.

    In India's context, however, there is very little protection against the admission and use of unlawfully gathered evidence. There is in fact a line of cases - both cases dealing specifically with the rules of the Indian Cr PC and cases from other contexts - which follow and develop this reasoning of allowing illegally obtained evidence.

    One case to pay attention to is State of Maharashtra v. Natwarlal Damodardas Soni, in which the Anti-Corruption Bureau searched the house of the accused after receiving a tip. The police "had powers under the Code of Criminal Procedure to search and seize this gold if they had reason to believe that a cognizable offence had been committed in respect thereof"[134]. Justice Sarkaria, delivering the judgement, observed that even if, for argument's sake, the search was illegal, "then also, it will not affect the validity of the seizure and further investigation"[135]. The judge drew reasoning from Radhakishan v. State of U.P.[136], a case involving a postman from whose house certain undelivered postal items were recovered. As the judge in Radhakishan noted:

    "So far as the alleged illegality of the search is concerned, it is sufficient to say that even assuming that the search was illegal the seizure of the articles is not vitiated. It may be that where the provisions of Sections 103 and 165 of the Code of Criminal Procedure, are contravened the search could be resisted by the person whose premises are sought to be searched. It may also be that because of the illegality of the search the court may be inclined to examine carefully the evidence regarding the seizure. But beyond these two consequences no further consequence ensues." [137]

    Shyam Lal Sharma v. State of M.P.[138] was also drawn upon, where it was held that "even if the search is illegal being in contravention with the requirements of Section 165 of the Criminal Procedure Code, 1898, that provision ceases to have any application to the subsequent steps in the investigation"[139].

    Even in Gulab Chand Upadhyay, mentioned above, the presiding judge contended that even "if arrest is made, it does not require any, much less strong, reasons to be recorded or reported by the police. Thus so long as the information or suspicion of cognizable offence is "reasonable" or "credible", the police officer is not accountable for the discretion of arresting or no arresting"[140].

    A more complete articulation of the receptiveness of Indian courts to admit illegally gathered evidence can be seen in the aforementioned Balbir Singh. The judgement aimed to:

    "dispose of one of the contentions that failure to comply with the provisions of Cr PC in respect of search and seizure even up to that stage would also vitiate the trial. This aspect has been considered in a number of cases and it has been held that the violation of the provisions particularly that of Sections 100, 102, 103 or 165 Cr PC strictly per se does not vitiate the prosecution case. If there is such violation, what the courts have to see is whether any prejudice was caused to the accused and in appreciating the evidence and other relevant factors, the courts should bear in mind that there was such a violation and from that point of view evaluate the evidence on record."[141]

    The judges then consulted a series of authorities on the failure to comply with provisions of the Cr PC:

    1. State of Punjab v. Wassan Singh[142]: "irregularity in a search cannot vitiate the seizure of the articles"[143].
    2. Sunder Singh v. State of U.P.[144]: "irregularity cannot vitiate the trial unless the accused has been prejudiced by the defect and it is also held that if reliable local witnesses are not available the search would not be vitiated."[145]
    3. Matajog Dobey v.H.C. Bhari[146]: "when the salutory provisions have not been complied with, it may, however, affect the weight of the evidence in support of the search or may furnish a reason for disbelieving the evidence produced by the prosecution unless the prosecution properly explains such circumstance which made it impossible for it to comply with these provisions."[147]
    4. R v. Sang[148]: "reiterated the same principle that if evidence was admissible it matters not how it was obtained."[149] Lord Diplock, one of the Lords adjudicating the case, observed that "however much the judge may dislike the way in which a particular piece of evidence was obtained before proceedings were commenced, if it is admissible evidence probative of the accused's guilt "it is no part of his judicial function to exclude it for this reason". [150] As the judge in Balbir Singh quoted from Lord Diplock, a judge "has no discretion to refuse to admit relevant admissible evidence on the ground that it was obtained by improper or unfair means. The court is not concerned with how it was obtained."[151]

    The body of case law presented above gives a clear image of the courts' willingness to admit and consider illegally obtained evidence. The lack of safeguards against the admission of unlawful evidence is important from the standpoint of preventing the excessive or unlawful use of predictive policing methods. The affronts to justice and privacy, as well as the risks of profiling, become magnified when law enforcement uses predictive methods not merely to augment its policing techniques but to replace some of them. The efficacy and expediency offered by predictive policing need to be balanced against the competing interest of ensuring the rule of law and due process. In the Indian context, it seems courts rarely consider this competing interest.

    Naturally, weighing which approach is better depends on a multitude of criteria: context, practicality, societal norms and more. It also draws on existing debates in administrative law about the role of courts, which may emphasise protecting individuals and preventing excessive state power (red light theory) or emphasise efficiency in the governing process, with courts assisting the state to achieve policy objectives (green light theory)[152].

    A practical response may be that India should embrace both elements and balance them appropriately, although what constitutes an appropriate balance may vary. Some claim that this balance already exists in India. Evidence for such a claim may come from R.M. Malkani v. State of Maharashtra[153], where the court considered whether an illegally tape-recorded conversation could be admissible. In its reasoning, the court drew from Kuruma, Son of Kanju v. R.[154], noting that

    "if evidence was admissible it matters not how it was obtained. There is of course always a word of caution. It is that the Judge has a discretion to disallow evidence in a criminal case if the strict rules of admissibility would operate unfairly against the accused. That caution is the golden rule in criminal jurisprudence"[155].

    While this discretion exists in India at least in principle, in practice the cases presented above show that judges rarely exercise it to bar the admission of illegally obtained evidence, or of evidence obtained in a manner that infringed the provisions governing search or arrest in the Cr PC. Indeed, the safeguards needed to keep law enforcement practices, including predictive policing techniques, in check may be better served by a greater focus on reconsidering the admissibility of unlawfully gathered evidence. If not, evidence which should otherwise be inadmissible may find its way into consideration through existing legal backdoors.

    Risk of discriminatory predictive analysis

    Regarding the risk of discriminatory profiling, Article 15 of India's Constitution[156] states that "the State shall not discriminate against any citizen on grounds only of religion, race, caste, sex, place of birth or any of them"[157]. The existence of constitutional protection against such forms of discrimination suggests that India will be able to guard against overtly discriminatory predictive policing. However, as mentioned before, predictive analytics often discriminates institutionally, "whereby unconscious implicit biases and inertia within society's institutions account for a large part of the disparate effects observed, rather than intentional choices"[158]. As in most jurisdictions, preventing these forms of discrimination is much harder. In a jurisdiction whose courts are already receptive to admitting illegally obtained evidence, the risk of discriminatory data mining or prejudiced algorithms being used by police is magnified. Because the discrimination may be unintentional, it may be even harder for evidence from discriminatory predictive methods to be scrutinised or, where applicable, dismissed by the courts.

    Conclusion for India

    One thing that is eminently clear from the analysis above is that Indian courts have had no experience with predictive policing cases, because the technology itself is still at a nascent stage in India. There is a long way to go before predictive policing is used on a scale similar to that of the USA, for example.

    But even in places where predictive policing is used much more prominently, there is little precedent for observing how courts may view it. Ferguson's method of locating situations analogous to predictive policing which courts have already considered is one notable approach, but even this does not provide a complete answer. His main conclusion, that predictive policing will affect the reasonable suspicion calculus, or in India's case contribute in some way to 'reasonable grounds', is perhaps the most applicable.

    What provides more cause for concern in India's context, however, are the limited protections against the use of unlawfully gathered evidence. The lack of 'exclusionary rules' like those present in the US amplifies the various risks of predictive policing, because individuals have little means of redress where predictive policing is used unjustly against them.

    Yet the promise of predictive policing remains undeniably attractive for India. The successes predictive policing methods seem to have had in the US and UK, coupled with the more efficient allocation of law enforcement resources that can follow from adopting them, evidence this point. The government recognises this and seems to be laying the foundation and basic digital infrastructure required to utilize predictive policing optimally. One ought also to ask whether it is even within the courts' purview to decide what kinds of policing methods are permissible by evaluating the nature of evidence. There is a case to be made for the legislative arm of the state to provide direction on how predictive policing is to be used in India. Perhaps the law must also evolve with changes in technology, especially if courts are to scrutinise predictive policing methods themselves.


    [1] Joh, Elizabeth E. "Policing by Numbers: Big Data and the Fourth Amendment." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, February 1, 2014. http://papers.ssrn.com/abstract=2403028.

    [2] Tene, Omer, and Jules Polonetsky. "Big Data for All: Privacy and User Control in the Age of Analytics." Northwestern Journal of Technology and Intellectual Property 11, no. 5 (April 17, 2013): 239.

    [3] Datta, Rajbir Singh. "Predictive Analytics: The Use and Constitutionality of Technology in Combating Homegrown Terrorist Threats." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 1, 2013. http://papers.ssrn.com/abstract=2320160.

    [4] Johnson, Jeffrey Alan. "Ethics of Data Mining and Predictive Analytics in Higher Education." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 8, 2013. http://papers.ssrn.com/abstract=2156058.

    [5] Ibid.

    [6] Duhigg, Charles. "How Companies Learn Your Secrets." The New York Times, February 16, 2012. http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html.

    [7] Ibid.

    [8] Lijaya, A, M Pranav, P B Sarath Babu, and V R Nithin. "Predicting Movie Success Based on IMDB Data." International Journal of Data Mining Techniques and Applications 3 (June 2014): 365-68.

    [9] Johnson, Jeffrey Alan. "Ethics of Data Mining and Predictive Analytics in Higher Education." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 8, 2013. http://papers.ssrn.com/abstract=2156058.

    [10] Sangvinatsos, Antonios A. "Explanatory and Predictive Analysis of Corporate Bond Indices Returns." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, June 1, 2005. http://papers.ssrn.com/abstract=891641.

    [11] Barocas, Solon, and Andrew D. Selbst. "Big Data's Disparate Impact." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, February 13, 2015. http://papers.ssrn.com/abstract=2477899.

    [12] Joh, supra note 1.

    [13] US Environmental Protection Agency. "How We Use Data in the Mid-Atlantic Region." US EPA. Accessed November 6, 2015. http://archive.epa.gov/reg3esd1/data/web/html/.

    [14] See here for details of blackroom.

    [15] Joh, supra note 1, at pg 48.

    [16] Perry, Walter L., Brian McInnis, Carter C. Price, Susan Smith and John S. Hollywood. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Santa Monica, CA: RAND Corporation, 2013. http://www.rand.org/pubs/research_reports/RR233. Also available in print form.

    [17] Ibid, at pg 2.

    [18] Chan, Sewell. "Why Did Crime Fall in New York City?" City Room. Accessed November 6, 2015. http://cityroom.blogs.nytimes.com/2007/08/13/why-did-crime-fall-in-new-york-city/.

    [19] Bureau of Justice Assistance. "COMPSTAT: ITS ORIGINS, EVOLUTION, AND FUTURE IN LAW ENFORCEMENT AGENCIES," 2013. http://www.policeforum.org/assets/docs/Free_Online_Documents/Compstat/compstat%20-%20its%20origins%20evolution%20and%20future%20in%20law%20enforcement%20agencies%202013.pdf.

    [20] 1996 internal NYPD article "Managing for Results: Building a Police Organization that Dramatically Reduces Crime, Disorder, and Fear."

    [21] Bratton, William. "Crime by the Numbers." The New York Times, February 17, 2010. http://www.nytimes.com/2010/02/17/opinion/17bratton.html.

    [22] RAND CORP, supra note 16.

    [23] RAND CORP, supra note 16, at pg 19.

    [24] Joh, supra note 1, at pg 44.

    [25] RAND CORP, supra note 16, pg 38.

    [26] Ibid.

    [27] RAND CORP, supra note 16, at pg 39.

    [28] Ibid.

    [29] RAND CORP, supra note 16, at pg 41.

    [30] Data-Smart City Solutions. "Dr. George Mohler: Mathematician and Crime Fighter." Data-Smart City Solutions, May 8, 2013. http://datasmart.ash.harvard.edu/news/article/dr.-george-mohler-mathematician-and-crime-fighter-166.

    [31] RAND CORP, supra note 16, at pg 44.

    [32] Joh, supra note 1, at pg 45.

    [33] Ouellette, Danielle. "Dispatch - A Hot Spots Experiment: Sacramento Police Department," June 2012. http://cops.usdoj.gov/html/dispatch/06-2012/hot-spots-and-sacramento-pd.asp.

    [34] Pitney Bowes Business Insight. "The Safer Derbyshire Partnership." Derbyshire, 2013. http://www.mapinfo.com/wp-content/uploads/2013/05/safer-derbyshire-casestudy.pdf.

    [35] Ibid.

    [36] Daniel B Neill, Wilpen L. Gorr. "Detecting and Preventing Emerging Epidemics of Crime," 2007.

    [37] RAND CORP, supra note 16, at pg 33.

    [38] Joh, supra note 1, at pg 46.

    [39] Paul, Jeffery S, and Thomas M. Joiner. "Integration of Centralized Intelligence with Geographic Information Systems: A Countywide Initiative." Geography and Public Safety 3, no. 1 (October 2011): 5-7.

    [40] Mohler, supra note 30.

    [41] Ibid.

    [42] Moses, B., Lyria, & Chan, J. (2014). Using Big Data for Legal and Law Enforcement
    Decisions: Testing the New Tools (SSRN Scholarly Paper No. ID 2513564). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2513564

    [43] Gorner, Jeremy. "Chicago Police Use Heat List as Strategy to Prevent Violence." Chicago Tribune. August 21, 2013. http://articles.chicagotribune.com/2013-08-21/news/ct-met-heat-list-20130821_1_chicago-police-commander-andrew-papachristos-heat-list.

    [44] Stroud, Matt. "The Minority Report: Chicago's New Police Computer Predicts Crimes, but Is It Racist?" The Verge. Accessed November 13, 2015. http://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist.

    [45] Moser, Whet. "The Small Social Networks at the Heart of Chicago Violence." Chicago Magazine, December 9, 2013. http://www.chicagomag.com/city-life/December-2013/The-Small-Social-Networks-at-the-Heart-of-Chicago-Violence/.

    [46] Lester, Aaron. "Police Clicking into Crimes Using New Software." Boston Globe, March 18, 2013. https://www.bostonglobe.com/business/2013/03/17/police-intelligence-one-click-away/DzzDbrwdiNkjNMA1159ybM/story.html.

    [47] Stanley, Jay. "Chicago Police 'Heat List' Renews Old Fears About Government Flagging and Tagging." American Civil Liberties Union, February 25, 2014. https://www.aclu.org/blog/chicago-police-heat-list-renews-old-fears-about-government-flagging-and-tagging.

    [48] Rieke, Aaron, David Robinson, and Harlan Yu. "Civil Rights, Big Data, and Our Algorithmic Future," September 2014. https://bigdata.fairness.io/wp-content/uploads/2015/04/2015-04-20-Civil-Rights-Big-Data-and-Our-Algorithmic-Future-v1.2.pdf.

    [49] Edmond, Deepu Sebastian. "Jhakhand's Digital Leap." Indian Express, September 15, 2013. http://www.jhpolice.gov.in/news/jhakhands-digital-leap-indian-express-15092013-18219-1379316969.

    [50] Jharkhand Police. "Jharkhand Police IT Vision 2020 - Effective Shared Open E-Governance." 2012. http://jhpolice.gov.in/vision2020. See slide 2

    [51] Edmond, supra note 49.

    [52] Edmond, supra note 49.

    [53] Kumar, Raj. "Enter, the Future of Policing - Cops to Team up with IIM Analysts to Predict & Prevent Incidents." The Telegraph. August 28, 2012. http://www.telegraphindia.com/1120828/jsp/jharkhand/story_15905662.jsp#.VkXwxvnhDWK.

    [54] Ibid.

    [55] Ibid.

    [56] Ibid.

    [57] See supra note 49.

    [58] See here for Jharkhand Police crime dashboard.

    [59] Lavanya Gupta, and Selva Priya. "Predicting Crime Rates for Predictive Policing." Gandhian Young Technological Innovation Award, December 29, 2014. http://gyti.techpedia.in/project-detail/predicting-crime-rates-for-predictive-policing/3545.

    [60] Gupta, Lavanya. "Minority Report: Minority Report." Accessed November 13, 2015. http://cmuws2014.blogspot.in/2015/01/minority-report.html.

    [61] See supra note 59.

    [62] See here for details about 44th All India Police Science Congress.

    [63] India, Press Trust of. "Police Science Congress in Gujarat to Have DRDO Exhibition." Business Standard India, March 10, 2015. http://www.business-standard.com/article/pti-stories/police-science-congress-in-gujarat-to-have-drdo-exhibition-115031001310_1.html.

    [64] National Crime Records Bureau. "About Crime and Criminal Tracking Network & Systems - CCTNS." Accessed November 13, 2015. http://ncrb.gov.in/cctns.htm.

    [65] Ibid. (See index page)

    [66] U.S. Const. amend. IV, available here

    [67] Katz v. United States, 389 U.S. 347 (1967), see here

    [68] See supra note 1, at pg 60.

    [69] See supra note 1, at pg 60.

    [70] Villasenor, John. "What You Need to Know about the Third-Party Doctrine." The Atlantic, December 30, 2013. http://www.theatlantic.com/technology/archive/2013/12/what-you-need-to-know-about-the-third-party-doctrine/282721/.

    [71] Smith v Maryland, 442 U.S. 735 (1979), see here

    [72] United States v Jones, 565 U.S. ___ (2012), see here

    [73] Newell, Bryce Clayton. "Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, October 16, 2013. http://papers.ssrn.com/abstract=2341182, at pg 24.

    [74] See supra note 72.

    [75] Dahyabhai Chhaganbhai Thakker vs State Of Gujarat, 1964 AIR 1563

    [76] See supra note 16.

    [77] See supra note 66.

    [78] Brinegar v. United States, 338 U.S. 160 (1949), see here

    [79] Terry v. Ohio, 392 U.S. 1 (1968), see here

    [80] Ferguson, Andrew Guthrie. "Big Data and Predictive Reasonable Suspicion." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, April 4, 2014. http://papers.ssrn.com/abstract=2394683, at pg 287. See also supra note 79.

    [81] See supra note 80.

    [82] See supra note 80.

    [83] See supra note 80.

    [84] See supra note 80, at pg 289.

    [85] Illinois v. Gates, 462 U.S. 213 (1983). See here

    [86] See Alabama v. White, 496 U.S. 325 (1990). See here

    [87] See supra note 80, at pg 291.

    [88] See supra note 80, at pg 293.

    [89] See supra note 80, at pg 308.

    [90] Ibid.

    [91] Ibid.

    [92] Larissa Cespedes-Yaffar, Shayona Dhanak, and Amy Stephenson. "U.S. v. Mendenhall, U.S. v. Sokolow, and the Drug Courier Profile Evidence Controversy." Accessed July 6, 2015. http://courses2.cit.cornell.edu/sociallaw/student_projects/drugcourier.html.

    [93] Ibid.

    [94] United States v. Sokolow, 490 U.S. 1 (1989), see here

    [95] See supra note 80, at pg 295.

    [96] See supra note 80, at pg 297.

    [97] See supra note 80, at pg 308.

    [98] See supra note 80, at pg 310.

    [99] See supra note 11.

    [100] See supra note 11.

    [101] See supra note 80, at pg 303.

    [102] See supra note 80, at pg 300.

    [103] Illinois v. Wardlow, 528 U.S. 119 (2000), see here

    [104] Ibid.

    [105] See supra note 80, at pg 301.

    [106] Ibid.

    [107] See supra note 1, at pg 42.

    [108] See supra note 80, at pg 303.

    [109] See supra note 80, at pg 303.

    [110] Ibid.

    [111] Ibid.

    [112] Ibid.

    [113] See supra note 80, at pg 312.

    [114] See supra note 80, at pg 317.

    [115] See supra note 80, at pg 319.

    [116] See supra note 80, at pg 321.

    [117] Section 165 Indian Criminal Procedure Code, see here

    [118] Gulab Chand Upadhyaya vs State Of U.P, 2002 CriLJ 2907

    [119] Section 41 Indian Criminal Procedure Code

    [120] See supra note 79

    [121] State of Punjab v. Balbir Singh. (1994) 3 SCC 299

    [122] Ibid.

    [123] Section 41 and 42 in The Narcotic Drugs and Psychotropic Substances Act 1985, see here

    [124] Partap Singh (Dr) v. Director of Enforcement, Foreign Exchange Regulation Act. (1985) 3 SCC 72 : 1985 SCC (Cri) 312 : 1985 SCC (Tax) 352 : AIR 1985 SC 989

    [125] Ibid, at SCC pg 77-78.

    [126] See supra note 121, at pg 313.

    [127] Carlson, Mr David. "Exclusionary Rule." LII / Legal Information Institute, June 10, 2009. https://www.law.cornell.edu/wex/exclusionary_rule.

    [128] Ibid.

    [129] Mapp v Ohio, 367 U.S. 643 (1961), see here

    [130] Ibid.

    [131] Busby, John C. "Fruit of the Poisonous Tree." LII / Legal Information Institute, September 21, 2009. https://www.law.cornell.edu/wex/fruit_of_the_poisonous_tree.

    [132] Silverthorne Lumber Co., Inc. v. United States, 251 U.S. 385 (1920), see here.

    [133] Beck v. Ohio, 379 U.S. 89 (1964), see here.

    [134] State of Maharashtra v. Natwarlal Damodardas Soni, (1980) 4 SCC 669, at 673.

    [135] Ibid.

    [136] Radhakishan v. State of U.P. [AIR 1963 SC 822 : 1963 Supp 1 SCR 408, 411, 412 : (1963) 1 Cri LJ 809]

    [137] Ibid, at SCR pg 411-12.

    [138] Shyam Lal Sharma v. State of M.P. (1972) 1 SCC 764 : 1974 SCC (Cri) 470 : AIR 1972 SC 886

    [139] See supra note 135, at page 674.

    [140] See supra note 119, at para. 10.

    [141] See supra note 121, at pg 309.

    [142] State of Punjab v. Wassan Singh, (1981) 2 SCC 1 : 1981 SCC (Cri) 292

    [143] See supra note 121, at pg 309.

    [144] Sunder Singh v. State of U.P, AIR 1956 SC 411 : 1956 Cri LJ 801

    [145] See supra note 121, at pg 309.

    [146] Matajog Dobey v.H.C. Bhari, AIR 1956 SC 44 : (1955) 2 SCR 925 : 1956 Cri LJ 140

    [147] See supra note 121, at pg 309.

    [148] R v. Sang, (1979) 2 All ER 1222, 1230-31

    [149] See supra note 121, at pg 309.

    [150] Ibid.

    [151] Ibid.

    [152] Harlow, Carol, and Richard Rawlings. Law and Administration. 3rd ed. Law in Context. Cambridge University Press, 2009.

    [153] R.M. Malkani v. State of Maharashtra, (1973) 1 SCC 471

    [154] Kuruma, Son of Kanju v. R., (1955) AC 197

    [155] See supra note 154, at 477.

    [156] Indian Const. Art 15, see here

    [157] Ibid.

    [158] See supra note 11.

    Response by the Centre for Internet and Society to the Draft Proposal to Transition the Stewardship of the Internet Assigned Numbers Authority (IANA) Functions from the U.S. Commerce Department’s National Telecommunications and Information Administration

    by Pranesh Prakash last modified Nov 29, 2015 06:35 AM
    This proposal was made to the Global Multistakeholder Community on August 9, 2015. The proposal was drafted by Pranesh Prakash and Jyoti Panday, with research assistance from Padmini Baruah and Vidushi Marda, and inputs from Sunil Abraham.

    For more than a year now, the customers and operational communities performing key internet functions related to domain names, numbers and protocols have been negotiating the transfer of IANA stewardship. India has dual interests in the ICANN IANA Transition negotiations: safeguarding independence, security and stability of the DNS for development, and promoting an effective transition agreement that internationalizes the IANA Functions Operator (IFO). Last month the IANA Stewardship Transition Coordination Group (ICG) set in motion a public review of its combined assessment of the proposals submitted by the names, numbers and protocols communities. In parallel to the transition of the NTIA oversight, the community has also been developing mechanisms to strengthen the accountability of ICANN and has devised two workstreams that consider both long term and short term issues. This is our response to the consolidated ICG proposal which considers the proposals for the transition of the NTIA oversight over the IFO.

    Click to download the submission.

    The Humpty-Dumpty Censorship of Television in India

    by Bhairav Acharya last modified Nov 29, 2015 08:37 AM
    The Modi government’s attack on Sathiyam TV is another manifestation of the Indian state’s paranoia of the medium of film and television, and consequently, the irrational controlling impulse of the law.

    The article, originally published in The Wire on September 8, 2015, was also mirrored on the website Free Speech/Privacy/Technology.


    It is tempting to think of the Ministry of Information and Broadcasting’s (MIB) attack on Sathiyam TV solely as another authoritarian exhibition of Prime Minister Narendra Modi’s government’s intolerance of criticism and dissent. It certainly is. But it is also another manifestation of the Indian state’s paranoia of the medium of film and television, and consequently, the irrational controlling impulse of the law.

    Sathiyam TV’s transgressions

    Sathiyam’s transgressions began more than a year ago, on May 9, 2014, when it broadcast a preacher saying of an unnamed person: “Oh Lord! Remove this satanic person from the world!” The preacher also allegedly claimed this “dreadful person” was threatening Christianity. This, the MIB reticently claims, “appeared to be targeting a political leader”, referring presumably to Prime Minister Modi, to “potentially give rise to a communally sensitive situation and incite the public to violent tendencies.”

    The MIB was also offended by a “senior journalist” who, on the same day, participated in a non-religious news discussion to allegedly claim Modi “engineered crowds at his rallies” and used “his oratorical skills to make people believe his false statements”. According to the MIB, this was defamatory and “appeared to malign and slander the Prime Minister which was repugnant to (his) esteemed office”.

    For these two incidents, Sathiyam was served a show-cause notice on 16 December 2014, to which it responded the next day, denying the MIB’s claims. Sathiyam was heard in person by a committee of bureaucrats on 6 February 2015. On 12 May 2015, the MIB handed Sathiyam an official “Warning” which appears to be unsupported by law. Sathiyam moved the Delhi High Court to challenge this.

    As Sathiyam sought judicial protection, the MIB issued the channel a second warning on August 26, 2015, citing three more objectionable news broadcasts: a child being subjected to cruelty by a traditional healer in Assam; a gun murder inside a government hospital in Madhya Pradesh; and a self-immolating man rushing the dais at a BJP rally in Telangana. All three news items were carried by other news channels and websites.

    Governing communications

    Most news providers use multiple media to transmit their content and suffer from complex and confusing regulation. Cable television is one such medium, so is the Internet; both media swiftly evolve to follow technological change. As the law struggles to keep up, governmental anxiety at the inability to perfectly control this vast field of speech and expression frequently expresses itself through acts of overreach and censorship.

    In the newly-liberalised media landscape of the early 1990s, cable television sprang up in a legal vacuum. Doordarshan, the sole broadcaster, flourished in the Centre’s constitutionally-sanctioned monopoly of broadcasting which was only broken by the Supreme Court in 1995. The same year, Parliament enacted the Cable Television Networks (Regulation) Act, 1995 (“Cable TV Act”) to create a licence regime to control cable television channels. The Cable TV Act is supplemented by the Cable Television Network Rules, 1994 (“Cable Rules”).

    The state’s disquiet with communications technology is a recurring motif in modern Indian history. When the first telegraph line was laid in India, the colonial state was quick to recognize its potential for transmitting subversive speech and responded with strict controls. The fourth iteration of the telegraph law represents the colonial government’s perfection of the architecture of control. This law is the Indian Telegraph Act, 1885, which continues to dominate communications governance in India today including, following a directive in 2004, broadcasting.

    Vague and arbitrary law

    The Cable TV Act requires cable news channels such as Sathiyam to obey a list of restrictions on content that is contained in the Cable Rules (“Programme Code”). Failure to conform to the Programme Code can result in seizure of equipment and imprisonment; but, more importantly, creates the momentum necessary to invoke the broad powers of censorship to ban a programme, channel, or even the cable operator. But the Programme Code is littered with vague phrases and undefined terms that can mean anything the government wants them to mean.

    By its first warning of May 12, 2015, the MIB claimed Sathiyam violated four rules in the Programme Code. These include rule 6(1)(c) which bans visuals or words “which promote communal attitudes”; rule 6(1)(d) which bans “deliberate, false and suggestive innuendos and half-truths”; rule 6(1)(e) which bans anything “which promotes anti-national attitudes”; and, rule 6(1)(i) which bans anything that “criticises, maligns or slanders any…person or…groups, segments of social, public and moral life of the country” (sic).

    The rest of the Programme Code is no less imprecise. It proscribes content that “offends against good taste” and “reflects a slandering, ironical and snobbish attitude” against communities. On the face of it, several provisions of the Programme Code travel beyond the permissible restrictions on free speech listed in Article 19(2) of the Constitution, calling their validity into question. The fiasco of implementing the vague provisions of the erstwhile section 66A of the Information Technology Act, 2000 is a recent reminder of the dangers presented by poorly-drafted censorship law – which is why it was struck down by the Supreme Court for infringing the right to free speech. The Programme Code is an older creation; it has simply evaded scrutiny for two decades.

    The arbitrariness of the Programme Code is amplified manifold by the authorities responsible for interpreting and implementing it. An Inter-Ministerial Committee (IMC) of bureaucrats, supposedly a recommendatory body, interprets the Programme Code before the MIB takes action against channels. This is an executive power of censorship that must survive legal and constitutional scrutiny, but has never been subjected to it. Curiously, the courts have shied away from a proper analysis of the Programme Code and the IMC.

    Judicial challenges

    In 2011, a single judge of the Delhi High Court in the Star India case (2011) was asked to examine the legitimacy of the IMC as well as four separate clauses of the Programme Code including rule 6(1)(i), which has been invoked against Sathiyam. But the judge neatly sidestepped the issues. This feat of judicial adroitness was made possible by the crass indecency of the content in question, which could be reasonably restricted. Since the show clearly attracted at least one ground of legitimate censorship, the judge saw no cause to examine the other provisions of the Programme Code or even the composition of the IMC.

    This judicial restraint has proved detrimental. In May 2013, another single judge of the Delhi High Court, who was asked by Comedy Central to adjudge the validity of the IMC’s decision-making process, relied on Star India (2011) to uphold the MIB’s action against the channel. The channel’s appeal to the Supreme Court is currently pending. If the Supreme Court decides to examine the validity of the IMC, the Delhi High Court may put aside Sathiyam’s petition to wait for legal clarity.

    As it happens, in the Shreya Singhal case (2015) that struck down section 66A of the IT Act, the Supreme Court has an excellent precedent to follow to demand clarity and precision from the Programme Code, perhaps even strike it down, as well as due process from the MIB. On the accusation of defaming the Prime Minister, probably the only clearly stated objection by the MIB, the Supreme Court’s case law is clear: public servants cannot, for non-personal acts, claim defamation.

    Censorship by blunt force

    Beyond the IMC’s advisories and warnings, the Cable TV Act contains two broad powers of censorship. The first empowerment in section 19 enables a government official to ban any programme or channel if it fails to comply with the Programme Code or, “if it is likely to promote, on grounds of religion, race, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, linguistic or regional groups or castes or communities or which is likely to disturb the public tranquility.”

    The second empowerment is much wider. Section 20 of the Cable TV Act permits the Central Government to ban an entire cable television operator, as opposed to a single channel or programmes within channels, if it “thinks it necessary or expedient so to do in public interest”. No reasons need be given and no grounds need be considered. Such a blunt use of force creates an overwhelming power of censorship. It is not a coincidence that section 20 resembles some provisions of nineteenth-century telegraph laws, which were designed to enable the colonial state to control the flow of information to its native subjects.

    A manual for television bans

    Film and television have always attracted political attention and state censorship. In 1970, Justice Hidayatullah of the Supreme Court explained why: “It has been almost universally recognised that the treatment of motion pictures must be different from that of other forms of art and expression. This arises from the instant appeal of the motion picture… The motion picture is able to stir up emotions more deeply than any other product of art.”

    Within this historical narrative of censorship, television regulation is relatively new. Past governments have also been quick to threaten censorship for attacking an incumbent Prime Minister. There seems to be a pan-governmental consensus that senior political leaders ought to be beyond reproach, irrespective of their words and deeds.

    But on what grounds could the state justify these bans? Lord Atkin’s celebrated war-time dissent in Liversidge (1941) offers an unlikely answer:

    “‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’”

    The Short-lived Adventure of India’s Encryption Policy

    by Bhairav Acharya last modified Nov 29, 2015 09:03 AM
    Written for the Berkeley Information Privacy Law Association (BIPLA).

    During his recent visit to Silicon Valley, Indian Prime Minister Narendra Modi said his government was “giving the highest importance to data privacy and security, intellectual property rights and cyber security”. But a proposed national encryption policy circulated in September 2015 would have achieved the opposite effect.

    The policy was comically short-lived. After its poorly-drafted provisions invited ridicule, it was swiftly withdrawn. But the government has promised to return with a fresh attempt to regulate encryption soon. The incident highlights the worrying assault on communications privacy and free speech in India, a concern compounded by the enormous scale of the telecommunications and Internet market.

    Even with only around 26 percent of its population online, India already has the world’s second-largest Internet user base, having recently overtaken the United States. The number of Internet users in India is set to grow exponentially, spurred by ambitious governmental schemes to build a ‘Digital India’ and a country-wide fiber-optic backbone. There will be a corresponding increase in the use of the Internet for communicating and conducting commerce.

    Encryption on the Internet

    Encryption protects the security of Internet users against invasions of privacy, theft of data, and other attacks. An algorithmic cipher and a secret key encode ordinary data (plaintext) into an unintelligible form (ciphertext), which can only be decrypted with the key. The ciphertext can be intercepted but will remain unintelligible without the key. The key is secret.
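    The plaintext/key/ciphertext relationship can be sketched with a toy cipher. The XOR cipher below is purely illustrative (real systems use vetted ciphers such as AES), but it shows that the same shared key both encrypts and decrypts, and that the ciphertext is unreadable without it:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte of the data with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                      # the shared secret
plaintext = b"meet me at noon"
ciphertext = xor_cipher(plaintext, key)   # unintelligible without the key

assert ciphertext != plaintext
assert xor_cipher(ciphertext, key) == plaintext  # the same key decrypts
```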

    There are several methods of encryption. SSL/TLS, a family of encryption protocols, is commonly used by major websites. But while some companies encrypt sensitive data, such as passwords and financial information, during its transit through the Internet, most data at rest on servers is largely unencrypted. For instance, email providers regularly store plaintext messages on their servers. As a result, governments simply demand and receive backdoor access to information directly from the companies that provide these services. However, governments have long insisted on blanket backdoor access to all communications data, both encrypted and unencrypted, and whether at rest or in transit.

    On the other hand, proper end-to-end encryption – full encryption from the sender to recipient, where the service provider simply passes on the ciphertext without storing it, and deletes the metadata – will defeat backdoors and protect privacy, but may not be profitable. End-to-end encryption alarms the surveillance establishment, which is why British Prime Minister David Cameron wants to ban it, and many in the US government want Silicon Valley companies to stop using it.

    Communications privacy

    Instead of relying on a company to secure communications, the surest way to achieve end-to-end encryption is for the sender to encrypt the message before it leaves her computer. Since only the sender and intended recipient have the key, even if the data is intercepted in transit or obtained through a backdoor, only the ciphertext will be visible.

    For almost all of human history, encryption relied on a single shared key; that is, both the sender and recipient used a pre-determined key. But, like all secrets, the more people who know the key, the less secure it becomes. From the 1970s onwards, revolutionary advances in cryptography enabled the generation of a pair of dissimilar keys, one public and one private, which are uniquely and mathematically linked. This is asymmetric or public key cryptography, where the private key remains an exclusive secret. It offers the strongest protection for communications privacy because it returns autonomy to the individual and is immune to backdoors.
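    A toy version of RSA, the best-known public key system, shows how the two keys are mathematically linked. The tiny primes below are the classic textbook example and are purely illustrative; real RSA keys are built from primes hundreds of digits long.

```python
# Toy RSA: the public key (e, n) encrypts; only the private exponent d,
# derived from the secret primes p and q, can decrypt.
p, q = 61, 53
n = p * q                          # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent: e * d ≡ 1 (mod phi)

message = 42                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone may encrypt with the public key
recovered = pow(ciphertext, d, n)  # only the private key recovers it
assert recovered == message
```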

    For those using public key encryption, Edward Snowden’s revelation that the NSA had cracked several encryption protocols including SSL/TLS was worrying. Brute-force decryption (the use of supercomputers to mathematically attack keys) questions the integrity of public key encryption. But, since the difficulty of code-breaking is directly proportional to key size, notionally, generating longer keys will thwart the NSA, for now.
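    The arithmetic behind that proportionality is stark: each additional bit doubles the key space, so brute-force cost grows exponentially with key length. A minimal sketch, assuming a hypothetical attacker testing a trillion keys per second (the rate is an arbitrary assumption, chosen only to show the scale of the gap):

```python
GUESSES_PER_SECOND = 10**12        # hypothetical brute-force rate

def years_to_search(bits: int) -> float:
    """Worst-case time to try every key of the given length, in years."""
    return 2**bits / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"40-bit key:  {years_to_search(40):.1e} years")   # about a second
print(f"128-bit key: {years_to_search(128):.1e} years")  # ~1e19 years
```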

    The crypto-wars in India

    Where does India’s withdrawn encryption policy lie in this landscape of encryption and surveillance? It is difficult to say. Because it was so badly drafted, understanding the policy was a challenge. It could have been a ham-handed response to commercial end-to-end encryption, which many major providers such as Apple and WhatsApp are adopting following consumer demand. But curiously, this did not appear to be the case, because the government later exempted WhatsApp and other “mass use encryption products”.

    The Indian establishment has a history of battling commercial encryption. From 2008, it fought BlackBerry for backdoor access to its encrypted communications and came close to banning the service; the standoff dissipated only once the company lost its market share. There have been similar attempts to force Voice over Internet Protocol providers, including Skype and Google, to fall in line. And there is a new thrust underway to regulate over-the-top content providers, including US companies.

    The policy could represent a new phase in India’s crypto-wars. The government, emboldened by the sheer scale of the country’s market, might press an unyielding demand for communications backdoors. The policy made no bones of this desire: it sought to bind communications companies by mandatory contracts, regulate key-size and algorithms, compel surrender of encryption products including “working copies” of software (the key generation mechanism), and more.

    The motives of regulation

    The policy’s deeply intrusive provisions manifest a long-standing effort of the Indian state to dominate communications technology unimpeded by privacy concerns. From wiretaps to Internet metadata, intrusive surveillance is not judicially warranted, does not require the demonstration of probable cause, suffers no external oversight, and is secret. These shortcomings are enabling the creation of a sophisticated surveillance state that sits ill with India’s constitutional values.

    Those values are being steadily besieged. India’s Supreme Court is entertaining a surge of clamorous litigation to check an increasingly intrusive state. Only a few months ago, the Attorney-General – the government’s foremost lawyer – argued in court that Indians did not have a right to privacy, relying on 1950s case law which permitted invasive surveillance. Encryption which can inexpensively lock the state out of private communications alarms the Indian government, which is why it has skirmished with commercially-available encryption in the past.

    On the other hand, the conflict over encryption is fueled by irregular laws. Telecoms licensing regulations restrict Internet Service Providers to 40-bit symmetric keys, a primitively low standard; higher encryption requires permission and presumably surrender of the shared key to the government. Securities trading on the Internet requires 128-bit SSL/TLS encryption while the country’s central bank is pushing for end-to-end encryption for mobile banking. Seen in this light, the policy could simply be an attempt to rationalize an uneven field.

    Encryption and freedom

    Perhaps the government was trying to restrict the use of public key encryption and Internet anonymization services, such as Tor or I2P, by individuals. India’s telecoms minister stated: “The purport of this encryption policy relates only to those who encrypt.” This was not particularly illuminating. If the government wants to pre-empt terrorism – a legitimate duty – this approach is flawed, since regardless of the law’s command no terrorist will plausibly disclose her key to the government. Besides, since the few users of Internet anonymizers in India are in any case targeted for special monitoring, it would be more productive for the surveillance establishment to maintain the status quo.

    This leaves harmless encrypters – businesses, journalists, whistle-blowers, and innocent privacy enthusiasts. For this group, impediments to encryption interfere with their ability to communicate freely. There is a direct link between encryption and the freedom of speech and expression, a fact acknowledged by Special Rapporteur David Kaye of the UN Human Rights Council, of which India is a member. Kaye notes: “Encryption and anonymity are especially useful for the development and sharing of opinions, which often occur through online correspondence such as e-mail, text messaging, and other online interactions.”

    This is because encryption affords privacy which promotes free speech, a relationship reiterated by the previous UN Special Rapporteur, Frank La Rue. On the other hand, surveillance has a “chilling effect” on speech. In 1962, Justice Subba Rao’s famous dissent in the Indian Supreme Court presciently connected privacy and free speech:

    The act of surveillance is certainly a restriction on the [freedom of speech]. It cannot be suggested that the said freedom…will sustain only the mechanics of speech and expression. An illustration will make our point clear. A visitor, whether a wife, son or friend, is allowed to be received by a prisoner in the presence of a guard. The prisoner can speak with the visitor; but, can it be suggested that he is fully enjoying the said freedom? It is impossible for him to express his real and intimate thoughts to the visitor as fully as he would like. To extend the analogy to the present case is to treat the man under surveillance as a prisoner within the confines of our country and the authorities enforcing surveillance as guards. So understood, it must be held that the petitioner’s freedom under [the right to free speech under the Indian] Constitution is also infringed.

    Kharak Singh v. State of Uttar Pradesh (1964) 1 SCR 332, pr. 30.

    Perhaps the policy expressed the government’s discomfort at individual encrypters escaping surveillance, like free agents evading the state’s control. How should the law respond to this problem? Daniel Solove says the security of the state need not compromise individual privacy. On the other hand, as Ronald Dworkin influentially maintained, the freedoms of the individual precede the interests of the state.

    Security and trade interests

    However, even when assessed from the perspective of India’s security imperatives, the policy would have had harmful consequences. It required users of encryption, including businesses and consumers, to store plaintext versions of their communications for ninety days to surrender to the government upon demand. This outrageously ill-conceived provision would have created real ‘honeypots’ (originally, honeypots are decoy servers to lure hackers) of unencrypted data, ripe for theft. Note that India does not have a data breach law.

    The policy’s demand for encryption companies to register their products and give working copies of their software and encryption mechanisms to the Indian government would have flown in the face of trade secrecy and intellectual property protection. The policy’s hurried withdrawal was a public relations exercise on the eve of Prime Minister Modi’s visit to Silicon Valley. It was successful. Modi encountered no criticism of his government’s visceral opposition to privacy, even though the policy would have severely disrupted the business practices of US communications providers operating in India.

    Encryption invites a convergence of state interests between India and the US as well: both countries want to control it. Last month’s joint statement from the US-India Strategic and Commercial Dialogue pledges “further cooperation on internet and cyber issues”. This innocuous statement masks a robust information-gathering and -sharing regime. There is no guarantee against the sharing of any encryption mechanisms or intercepted communications by India.

    The government has promised to return with a reworked proposal. It would be in India’s interest for this to be preceded by a broad-based national discussion on encryption and its links to free speech, privacy, security, and commerce.


    Click to read the post published on Free Speech / Privacy / Technology website.

    How India Regulates Encryption

    by Pranesh Prakash & Japreet Grewal — last modified Jul 23, 2016 01:24 PM
    Contributors: Geetha Hariharan

    Governments across the globe have been arguing for the need to regulate the use of encryption for law enforcement and national security purposes. Various means of regulation, such as backdoors, weak encryption standards and key escrows, have been widely employed, leaving the information of online users vulnerable not only to uncontrolled access by governments but also to cyber-criminals. The Indian regulatory space has not been untouched by this practice and has its own laws and policies to control encryption. The regulatory requirements in relation to the use of encryption are fragmented across legislations such as the Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act) and several sector-specific regulations. The regulatory framework is designed to either limit encryption or gain access to the means of decryption or decrypted information.

    Limiting encryption

    The IT Act does not prescribe the level or type of encryption to be used by online users. Under Section 84A, it grants the Government the authority to prescribe modes and methods of encryption. The Government has not issued any rules in exercise of these powers so far, but had released a draft encryption policy on September 21, 2015. Under the draft policy, only those encryption algorithms and key sizes notified by the Government were permitted to be used. The draft policy was withdrawn following widespread criticism of several of its requirements, notably the retention of unencrypted user information for 90 days and the mandatory registration of all encryption products offered in the country.

    The Internet Service Providers License Agreement (ISP License), entered into between the Department of Telecommunications (DoT) and an Internet Service Provider (ISP) to provide internet services (i.e. internet access and internet telephony services), permits the use of encryption up to a 40 bit key length in symmetric algorithms, or its equivalent in others.[1] The restriction applies not only to the ISPs but also to individuals, groups and organisations that use encryption. In the event an individual, group or organisation decides to deploy encryption higher than 40 bits, prior permission from the DoT must be obtained and the decryption key must be deposited with the DoT. There are, however, no parameters laid down for use of the decryption key by the Government. Several issues arise in relation to the enforcement of these license conditions.

    1. While this requirement is applicable to all individuals, groups and organisations using encryption, it is difficult to enforce because the ISP License binds only the DoT and the ISP and cannot be enforced against third parties.
    2. Further, a 40-bit symmetric key length is considered an extremely weak standard[2] and is inadequate for the protection of data stored or communicated online. Various sector-specific regulations already in place in India prescribe encryption of more than 40 bits.
      • The Reserve Bank of India has issued guidelines for Internet banking[3] in which it prescribes 128-bit as the minimum level of encryption and acknowledges that constant advances in computer hardware and cryptanalysis may induce the use of larger key lengths. The Securities and Exchange Board of India also prescribes[4] 64-bit/128-bit encryption for standard network security, and the use of Secure Sockets Layer (SSL) security, preferably with 128-bit encryption, for securities trading over a mobile phone or a wireless application platform. Further, under Rule 19(2) of the Information Technology (Certifying Authorities) Rules, 2000 (CA Rules), the Government has prescribed security guidelines for the management and implementation of information technology security by certifying authorities. Under these guidelines, the Government has suggested the use of suitable security software, or even encryption software, to protect sensitive information and the devices used to transmit or store it, such as routers, switches, network devices and computers (also called information assets). The guidelines acknowledge the need to use internationally proven encryption techniques to encrypt stored passwords, such as the PKCS#1 RSA Encryption Standard (512, 1024, 2048 bit), the PKCS#5 Password Based Encryption Standard or the PKCS#7 Cryptographic Message Syntax Standard, as mentioned under Rule 6 of the CA Rules. These encryption algorithms are far stronger and more secure than a 40-bit key standard.
      • The ISP License also contains a clause providing that the use of any hardware or software that may render the network security vulnerable would be considered a violation of the license conditions.[5] Network security may be compromised by using a weak security measure such as the 40-bit encryption (or its equivalent) prescribed by the DoT, yet the liability would be imputed to the ISP. As a result, an ISP that is merely complying with the license conditions by employing no more than 40-bit encryption may be held liable under what appear to be contradictory license conditions.
      • It is noteworthy that the restriction on key size under the ISP License has not been carried over into the Unified Service License Agreement (UL Agreement) formulated by the DoT. The UL Agreement does not prescribe a specific level of encryption for the provision of services. Clause 37.5 of the UL Agreement, however, makes it clear that the use of encryption will be governed by the provisions of the IT Act. As noted earlier, the Government has not specified any limit on the level or type of encryption under the IT Act; the draft encryption policy it released was withdrawn following widespread criticism of its mandate.
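    To put the 40-bit ceiling in perspective, a rough back-of-the-envelope calculation shows how quickly a 40-bit key space can be exhausted compared to the 128-bit minimum the RBI prescribes. This is only an illustrative sketch: the assumed attack rate of one billion keys per second is a hypothetical figure, not a benchmark from any of the sources cited here.

```python
# Illustrative sketch: worst-case exhaustive key-search times.
# ASSUMPTION: an attacker testing 1e9 keys/second (hypothetical rate).

SECONDS_PER_YEAR = 365 * 24 * 3600

def brute_force_years(key_bits: int, keys_per_second: float = 1e9) -> float:
    """Years needed to try every key of the given bit length."""
    return (2 ** key_bits) / keys_per_second / SECONDS_PER_YEAR

# 2**40 keys at 1e9 keys/s take about 1100 seconds (~18 minutes),
# while 2**128 keys would take on the order of 10**22 years.
for bits in (40, 64, 128):
    print(f"{bits:3d}-bit key space: {brute_force_years(bits):.3e} years")
```

    Every additional bit doubles the search space, which is why the sector-specific mandates above insist on 128-bit or larger keys.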

     

    The Telecom Licenses (the ISP License, the UL Agreement, and the Unified Access Service License) prohibit the use of bulk encryption by service providers, yet the providers remain responsible for maintaining the privacy of communication and preventing unauthorized interception.
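    The CA Rules' suggestion, noted above, to protect stored passwords using PKCS#5 password-based encryption can be illustrated with PBKDF2, the PKCS#5 v2.0 key-derivation function available in Python's standard library. This is a hedged sketch: the 16-byte salt and 600,000-iteration count are illustrative parameter choices, not values prescribed by the Rules.

```python
import hashlib
import os

# Sketch of PKCS#5 v2.0 (PBKDF2) password-based key derivation.
# ASSUMPTIONS: the 16-byte random salt and 600,000 iterations are
# illustrative parameters, not mandated by the CA Rules.

def derive_key(password: str, salt: bytes = None, iterations: int = 600_000):
    """Derive a 32-byte key from a password; returns (salt, key)."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                              salt, iterations)
    return salt, key

salt, key = derive_key("s3cret-passphrase")
print(len(key))  # 32-byte derived key; store (salt, key), never the password
```

    Because the derivation is deliberately slow and salted, an attacker who obtains the stored values cannot cheaply test candidate passwords, which is the point of the "internationally proven encryption techniques" the CA Rules refer to.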

    Gaining access to means of decryption or decrypted information

    Besides restrictions on the level of encryption, the ISP License and the UL Agreement make it mandatory for service providers, including ISPs, to provide the DoT with all details of the technology employed for operations, and to furnish documentary details such as relevant literature, drawings, installation materials and tools, and testing instruments relating to the system intended to be used for operations, as and when required by the DoT.[6] While these license conditions do not expressly state that access to the means of decryption must be given to the government, the language is broad enough to cover such access as well. Further, ISPs are required to take prior approval of the DoT for the installation of any equipment, or the execution of any project, in areas that are sensitive from a security point of view. ISPs are in fact subject to, and required to facilitate, continuous monitoring by the DoT. These obligations ensure that the Government has complete access to, and control over, the infrastructure for providing internet services, which includes any installation or equipment used for encryption and decryption.

    The Government has also been granted the power to gain access to the means of decryption, or simply to decrypted information, under Section 69 of the IT Act and the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 (Decryption Rules).

    1. A decryption order usually entails a direction to a decryption key holder to disclose a decryption key, allow access to or facilitate conversion of encrypted information and must contain reasons for such direction. In fact, Rule 8 of the Decryption Rules makes it mandatory for the authority to consider other alternatives to acquire the necessary information before issuing a decryption order.
    2. The Secretary in the Ministry of Home Affairs, or the Secretary in charge of the Home Department in a state or union territory, is authorised to issue an order of decryption in the interest of the sovereignty or integrity of India, the defence of India, the security of the state, friendly relations with foreign states, or public order, or for preventing incitement to the commission of any cognizable offence relating to the above, or for the investigation of any offence. It is useful to note that this provision was amended in 2009 to expand the grounds on which a direction for decryption can be passed; post-2009, the Government can issue a decryption order for the investigation of any offence. In the absence of any specific process laid down for the collection of digital evidence, do we follow the procedure under the criminal law, or is it necessary to draw a distinction between the investigation process in the digital and the physical environments and examine whether adequate safeguards exist to check the abuse of the investigatory powers of the police?
    3. Orders for decryption must be examined by a review committee constituted under Rule 419A of the Indian Telegraph Rules, 1951 to ensure compliance with the provisions of the IT Act. The review committee is required to convene at least once every two months for this purpose. However, in a response to an RTI dated April 21, 2015 filed by our organisation, the Department of Electronics and Information Technology informed us that since its constitution, the review committee has met only once, in January 2013.

    Conclusion

    While studying a regulatory framework for encryption, it is necessary to identify the lens through which encryption is viewed, i.e., whether encryption is considered a means of information security or a threat to national security. As noted earlier, the encryption mandates for banking systems and certifying authorities in India contradict those under the telecom licenses and the Decryption Rules. It would help to analyse whether the Government's prevailing scepticism is well founded when weighed against the need for strong encryption. It would be useful to survey statistics on cyber incidents where strong encryption was employed, and to look at instances that show whether strong encryption has made it difficult for law enforcement agencies to prevent or resolve crimes. It would also help to record cyber incidents that have resulted from vulnerabilities, such as backdoors or key escrows, deliberately introduced by law. These statistics would clear the air about the role of encryption in securing cyberspace and facilitate appropriate regulation.

    [1] Clause 2.2 (vii) of the ISP License

    [2] Schneier, Bruce (1996). Applied Cryptography (Second ed.). John Wiley & Sons

    [3] Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds- Implementation of recommendations, 2011

    [4] Report on Internet Based Trading by the SEBI Committee on Internet based Trading and Services, 2000; It is useful to note that subsequently SEBI had acknowledged that the level of encryption would be governed by DoT policy in a SEBI circular no CIR/MRD/DP/25/2010 dated August 27, 2010 on Securities Trading using Wireless Technology

    [5] Clause 34.25 of the ISP License

    [6] Clauses 22 and 23 of Part IV of the ISP License

    Concept Note: Network Neutrality in South Asia

    by Prasad Krishna last modified Dec 01, 2015 02:34 AM

    Network Neutrality South Asia Concept Note _ORF CIS.pdf — PDF document, 238 kB (244150 bytes)

    The Case of Whatsapp Group Admins

    by Japreet Grewal — last modified Dec 08, 2015 10:25 AM
    Contributors: Geetha Hariharan

    Censorship laws in India have now roped in group administrators of chat groups on instant messaging platforms such as Whatsapp (group admin(s)) for allegedly objectionable content that was posted by other users of these chat groups. Several incidents[1] were reported this year where group admins were arrested in different parts of the country for allowing content that was allegedly objectionable under law. A few reports mentioned that these arrests were made under Section 153A[2] read with Section 34[3] of the Indian Penal Code (IPC) and Section 67[4] of the Information Technology Act (IT Act).

    Targeting a group admin for content posted by other members of a chat group has raised concerns about how this liability is imputed. Should a group admin be considered an intermediary under Section 2(w) of the IT Act? If so, would a group admin be protected from such liability?

    Group admin as an intermediary

    Whatsapp is an instant messaging platform that can be used for mass communication by opting to create a chat group. A chat group is a feature on Whatsapp that allows joint participation of Whatsapp users; a single chat group can have up to 100 users. Every chat group has one or more group admins, who control participation in the group by deleting or adding people.[5] We must examine whether, by choosing to create a chat group on Whatsapp, a group admin can become liable for content posted by other members of the chat group.

    Section 34 of the IPC provides that when a number of persons engage in a criminal act with a common intention, each person is liable as if he alone did the act. Common intention implies a pre-arranged plan and acting in concert pursuant to that plan. It is interesting to note that group admins have been arrested under Section 153A on the ground that a group admin and a member posting actionable content on a chat group have a common intention to post such content. But would this hold true when, for instance, a group admin creates a chat group for posting lawful content (say, for matchmaking purposes) and a member of the chat group posts content that is actionable under law (say, a video abusing Dalit women)? Common intention can be established by direct evidence or inferred from conduct, surrounding circumstances or any incriminating facts.[6]

    We need to understand whether common intention can be established in case of a user merely acting as a group admin. For this purpose it is necessary to see how a group admin contributes to a chat group and whether he acts as an intermediary.

    We know that the parameters for determining an intermediary differ across jurisdictions, and most global organisations have categorised intermediaries based on their role or technical functions.[7] Section 2(w) of the Information Technology Act, 2000 (IT Act) defines an intermediary as any person who, on behalf of another person, receives, stores or transmits messages or provides any service with respect to that message, and includes telecom service providers, network providers, internet service providers, web-hosting service providers, search engines, online payment sites, online auction sites, online marketplaces and cyber cafés. Does a group admin receive, store or transmit messages on behalf of group participants, provide any service with respect to their messages, or fall into any category mentioned in the definition? Whatsapp does not allow a group admin to receive or store messages on behalf of another participant in a chat group; every group member independently controls his posts on the group. However, a group admin helps in transmitting messages of another participant to the group by allowing the participant to be a part of the group, thus effectively providing a service in respect of those messages. A group admin, therefore, should be considered an intermediary, although his contribution to the chat group is limited to allowing participation; this is discussed in further detail in the section below.

    According to a 2010 report[8] by the Organisation for Economic Co-operation and Development (OECD), an internet intermediary brings together or facilitates transactions between third parties on the Internet. It gives access to, hosts, transmits and indexes content, products and services originated by third parties on the Internet, or provides Internet-based services to third parties. A Whatsapp chat group allows people who are not on a user's contact list to interact with that user if they are on the group admin's contact list. In facilitating this interaction, a group admin may be considered an intermediary under the OECD definition.

    Liability as an intermediary

    Section 79(1) of the IT Act protects an intermediary from liability under any law in force (for instance, liability under Section 153A pursuant to the rule laid down in Section 34 of the IPC) if the intermediary fulfils certain conditions laid down therein. An intermediary is required to carry out certain due diligence obligations laid down in Rule 3 of the Information Technology (Intermediaries Guidelines) Rules, 2011 (Rules). These obligations include monitoring content that infringes intellectual property, threatens national security or public order, or is obscene or defamatory or violates any law in force (Rule 3(2)).[9] An intermediary is liable for publishing or hosting such user-generated content; however, as mentioned earlier, this liability is conditional. Section 79 of the IT Act states that an intermediary would be liable only if it initiates the transmission, selects the receiver of the transmission, or selects or modifies the information contained in the transmission that falls under any category mentioned in Rule 3(2) of the Rules. While a group admin has the ability to facilitate the sharing of information and select the receivers of such information, he has no direct editorial control over the information shared: group admins can only remove members, not remove or modify the content posted by members of the chat group. An intermediary is also liable if it fails to comply with the due diligence obligations laid down under Rules 3(2) and 3(3) of the Rules; however, since a group admin lacks the authority to initiate transmission himself or to control content, he cannot comply with these obligations. Therefore, a group admin would be protected from any liability arising out of third-party or user-generated content on his group pursuant to Section 79 of the IT Act.

    It is, however, relevant to consider whether the ability of a group admin to remove participants amounts to an indirect form of editorial control.

    Other pertinent observations

    Several reports[10] have discussed how holding a group admin liable makes the process convenient, as it is difficult to locate all the users of a particular group. This reasoning may not be correct: the Whatsapp policy[11] makes it mandatory for a prospective user to provide his mobile number in order to use the platform, and no additional information is collected from group admins that would justify targeting them. Investigation agencies can access the mobile numbers of Whatsapp users and obtain further information from telecom companies.

    It is also interesting to note that the group admins were arrested after a user, or someone known to a user, filed a complaint with the police about content being objectionable or hurtful. Earlier this year, the apex court ruled in Shreya Singhal v. Union of India[12] that an intermediary needs a court order or a government notification before taking down information. With actions taken against group admins on mere complaints filed by anyone, it is clear that law enforcement officials have been overriding the mandate of the court.

    Conclusion

     

    According to a study conducted by the global research consultancy TNS Global, around 38% of internet users in India use instant messaging applications such as Snapchat and Whatsapp on a daily basis, with Whatsapp being the most widely used application. These figures indicate the scale of impact that arrests of group admins may have on our daily communication.

    It is noteworthy that categorising a group admin as an intermediary would effectively make the Rules applicable to every Whatsapp user who creates a group; this would be difficult to enforce and would blur the distinction between users and intermediaries.

    The critical question, however, is whether a chat group should be considered part of the bundle of services that Whatsapp offers to its users, rather than an independent platform that makes a group admin a separate entity. Also, would it be apt to compare a Whatsapp group chat with a conference call on Skype, or with sharing a Google document with edit rights, to understand the domain into which censorship laws are penetrating today?

     

    Valuable contribution by Pranesh Prakash and Geetha Hariharan


    [1] http://www.nagpurtoday.in/whatsapp-admin-held-for-hurting-religious-sentiment/06250951; http://www.catchnews.com/raipur-news/whatsapp-group-admin-arrested-for-spreading-obscene-video-of-mahatma-gandhi-1440835156.html; http://www.financialexpress.com/article/india-news/whatsapp-group-admin-along-with-3-members-arrested-for-objectionable-content/147887/

    [2] Section 153A. “Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony.— (1) Whoever— (a) by words, either spoken or written, or by signs or by visible representations or otherwise, promotes or attempts to promote, on grounds of religion, race, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different reli­gious, racial, language or regional groups or castes or communi­ties…” or 2) Whoever commits an offence specified in sub-section (1) in any place of worship or in any assembly engaged in the performance of religious wor­ship or religious ceremonies, shall be punished with imprisonment which may extend to five years and shall also be liable to fine.

    [3] Section 34. Acts done by several persons in furtherance of common intention – When a criminal act is done by several persons in furtherance of common intention of all, each of such persons is liable for that act in the same manner as if it were done by him alone.

    [4] Section 67 Publishing of information which is obscene in electronic form. -Whoever publishes or transmits or causes to be published in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment of either description for a term which may extend to five years and with fine which may extend to one lakh rupees and in the event of a second or subsequent conviction with imprisonment of either description for a term which may extend to ten years and also with fine which may extend to two lakh rupees."

    [5] https://www.whatsapp.com/faq/en/general/21073373

    [6] Pandurang v. State of Hyderabad AIR 1955 SC 216

    [7] https://www.eff.org/files/2015/07/08/manila_principles_background_paper.pdf; http://unesdoc.unesco.org/images/0023/002311/231162e.pdf

    [8] http://www.oecd.org/internet/ieconomy/44949023.pdf

    [9] Rule 3(2) (b) of the Rules

    [10] http://www.thehindu.com/news/national/other-states/if-you-are-a-whatsapp-group-admin-better-be-careful/article7531350.ece; http://www.newindianexpress.com/states/tamil_nadu/Social-Media-Administrator-You-Could-Land-in-Trouble/2015/10/10/article3071815.ece; http://www.medianama.com/2015/10/223-whatsapp-group-admin-arrest/; http://www.thenewsminute.com/article/whatsapp-group-admin-you-are-intermediary-and-here%E2%80%99s-what-you-need-know-35031

    [11] https://www.whatsapp.com/legal/

    [12] http://supremecourtofindia.nic.in/FileServer/2015-03-24_1427183283.pdf

    DNA Research

    by Vanya Rakesh last modified Jul 21, 2016 11:02 AM
    In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. In 2012, a revised Bill was released and a group of experts was constituted to finalize it. In 2014, another version was released, the approval of which is pending before Parliament. This legislation will allow the Government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Here is a collection of our research on privacy and security concerns related to the Bill.

     

    The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues related to the DNA Bill:

    1. Validity and legality of collection, usage and storage of DNA samples and information derived from the same.
    2. Monitoring projects and policies around Human DNA Profiling.
    3. Raising public awareness around issues concerning biometrics.

    In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. In 2012, a revised Bill was released and a group of experts was constituted to finalize it. In 2014, another version was released, the approval of which is pending before Parliament.

    The Bill seeks to establish DNA databases at the state and regional levels as well as a national-level database. The databases would store DNA profiles of suspects, offenders, missing persons and deceased persons, and could be used by courts, law enforcement agencies (national and international) and other authorized persons for criminal and civil purposes. The Bill will also regulate DNA laboratories collecting DNA samples. Lack of adequate consent, the broad powers of the Board, and the deletion of innocent persons' profiles are just a few of the concerns voiced about the Bill.

    DNA Profiling Bill - Infographic
    Download the infographic. Credit: Scott Mason and CIS team.

     

    1. DNA Bill

    The Human DNA Profiling Bill is legislation that will allow the Government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Human rights groups, individuals and NGOs have raised many concerns about the infringement of privacy and the power the government would acquire with such information. The Bill proposes to profile people through their fingerprints and retinal scans, allowing the government to create distinct, unique profiles for individuals. Concerns raised include the loss of privacy caused by such profiling and the manner in which it is conducted. Unless strictly controlled, monitored and protected, such a database of citizens' fingerprints and retinal scans could lead to serious security risks and privacy invasions. The following articles elaborate upon these matters.

       

      2. Comparative Analysis with Other Jurisdictions

      Human DNA profiling is not proposed only in India; this system of identification has been proposed and implemented in many nations. Each of these systems differs from the others depending on the nation's and society's needs. The risks and criticisms that DNA profiling faces may be the same everywhere, but the solutions to these issues vary. The following articles look into the systems in place in different countries and compare them with the system proposed in India, to give a better understanding of the risks and implications of implementing such a system.

       

      Privacy Policy Research

      by Vanya Rakesh last modified Jan 03, 2016 09:40 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since the year 2010 with the following objectives:
      1. Raising public awareness and dialogue around privacy;
      2. Undertaking in-depth research of domestic and international policy pertaining to privacy;
      3. Driving comprehensive privacy legislation in India through research.

      India does not have comprehensive legislation covering issues of privacy or establishing the right to privacy. In 2010, an "Approach Paper on Privacy" was published; in 2011, the Department of Personnel and Training released a draft Right to Privacy Bill; in 2012, the Planning Commission constituted a group of experts which published the Report of the Group of Experts on Privacy; in 2013, CIS drafted the citizens' Privacy Protection Bill; and in 2014, the Right to Privacy Bill was leaked. Currently, the Government is in the process of drafting and finalizing the Bill.

      Draft Right to Privacy

      Privacy Research -

      1. Approach Paper on Privacy, 2010 -

      The following article contains the reply drafted by CIS in response to the Paper on Privacy in 2010. The Paper on Privacy was a document drafted by a group of officers created to develop a framework for a privacy legislation that would balance the need for privacy protection, security, sectoral interests, and respond to the domain legislation on the subject.

      2. Report on Privacy, 2012 -

      The Report on Privacy, 2012 was drafted and published by a group of experts under the Planning Commission pertaining to the current legislation with respect to privacy. The following articles contain the responses and criticisms to the report and the current legislation.

      3. Privacy Protection Bill, 2013 -

      The Privacy Protection Bill, 2013 was draft legislation that aimed to formulate the rules and law governing privacy protection. The following articles refer to this legislation, including a citizens' draft of the Bill.

      4. Right to Privacy Act, 2014 (Leaked Bill) -

      The Right to Privacy Act, 2014 is a bill still under proposal; a leaked draft is linked below.

      • Leaked Privacy Bill: 2014 vs. 2011 http://bit.ly/QV0Y0w

      Sectoral Privacy Research

      by Vanya Rakesh last modified Jan 03, 2016 09:46 AM
      The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues.
      1. Research on the issue of privacy in different sectors in India.
      2. Monitoring projects, practices, and policies around those sectors.
      3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

      The right to privacy has evolved in India over many decades, and the question of whether it is a fundamental right has been debated many times in courts of law. With the advent of information technology and the digitisation of services, the issue of privacy holds even more relevance in sectors like banking, healthcare, telecommunications and ICT. The right to privacy is also addressed in relation to sexual minorities, whistle-blowers, government services, and more.

      Sectors -

      1. Consumer Privacy and other sectors -

      Consumer privacy laws and regulations seek to protect any individual from loss of privacy due to failures or limitations of corporate customer privacy measures. The following articles deal with the current consumer privacy laws in place in India and around the world. Also, privacy concerns have been considered along with other sectors like Copyright law, data protection, etc.

      • Consumer Privacy - How to Enforce an Effective Protective Regime? http://bit.ly/1a99P2z

      • Privacy and Information Technology Act: Do we have the Safeguards for Electronic Privacy? http://bit.ly/10VJp1P

      • Limits to Privacy http://bit.ly/19mPG6I

      • Copyright Enforcement and Privacy in India http://bit.ly/18fi9fM

      • Privacy in India: Country Report http://bit.ly/14pnNwl

      • Transparency and Privacy http://bit.ly/1a9dMnC

      • The Report of the Group of Experts on Privacy (Contributed by CIS) http://bit.ly/VqzKtr

      • The (In) Visible Subject: Power, Privacy and Social Networking http://bit.ly/15koqol

      • Privacy and the Indian Copyright Act, 1957 as Amended in 2010 http://bit.ly/1euwX0r

      • Should Ratan Tata be afforded the Right to Privacy? http://bit.ly/LRlXin

      • Comments on Information Technology (Guidelines for Cyber Café) Rules, 2011 http://bit.ly/15kojJn

      • Broadcasting Standards Authority Censures TV9 over Privacy Violations! http://bit.ly/16L4izl

      • Is Data Protection Enough? http://bit.ly/1bvaWx2

      • Privacy, speech at stake in cyberspace http://cis-india.org/news/privacy-speech-at-stake-in-cyberspace-1

      • Q&A to the Report of the Group of Experts on Privacy http://bit.ly/TPhzQQ

      • Privacy worries cloud Facebook's WhatsApp Deal http://cis-india.org/internet-governance/blog/economic-times-march-14-2014-sunil-abraham-privacy-worries-cloud-facebook-whatsapp-deal

      • GNI Assessment Finds ICT Companies Protect User Privacy and Freedom of Expression http://bit.ly/1mjbpmL

      • A Stolen Perspective http://bit.ly/1bWHyzv

      • Is Data Protection enough? http://cis-india.org/internet-governance/blog/privacy/is-data-protection-enough

      • I don't want my fingerprints taken http://bit.ly/aYdMia

      • Keeping it Private http://bit.ly/15wjTVc

      • Personal Data, Public Profile http://bit.ly/15vlFk4

      • Why your Facebook Stalker is Not the Real Problem http://bit.ly/1bI2MSc

      • The Private Eye http://bit.ly/173ypSI

      • How Facebook is Blatantly Abusing our Trust http://bit.ly/OBXGXk

      • Open Secrets http://bit.ly/1b5uvK0

      • Big Brother is Watching You http://bit.ly/1cGpg0K

      2. Banking/Finance -

      Privacy in the banking and finance industry is crucial, as one person's records and funds must not be accessible to another without due authorisation. The following articles deal with the current system governing privacy in the financial and banking industry.

      • Privacy and Banking: Do Indian Banking Standards Provide Enough Privacy Protection? http://bit.ly/18fhsTM

      • Finance and Privacy http://bit.ly/15aUPh6

      • Making the Powerful Accountable http://bit.ly/1nvzSpC

      3. Telecommunications -

      The telecommunications industry is the backbone of current ICTs, and it has its own rules and regulations. These rules are the focal point of the following articles, which include both criticism and acclaim.

      § Privacy and Telecommunications: Do We Have the Safeguards? http://bit.ly/10VJp1P

      § Privacy and Media Law http://bit.ly/18fgDfF

      § IP Addresses and Expeditious Disclosure of Identity in India http://bit.ly/16dBy4N

      § Telecommunications and Internet Privacy http://bit.ly/16dEcaF

      § Encryption Standards and Practices http://bit.ly/KT9BTy

      § Security: Privacy, Transparency and Technology http://cis-india.org/internet-governance/blog/security-privacy-transparency-and-technology

      4. Sexual Minorities -

      While the internet is a global forum of self-expression and acceptance for most of us, the same does not hold true for sexual minorities. For those who do not conform to the identities society deems typical, the internet is a place of secrecy; when they reveal themselves, or are revealed by others, they often face hostility, which makes privacy especially valuable to them. The following article looks into their situation.

      § Privacy and Sexual Minorities http://bit.ly/19mQUyZ

      5. Health -

      The privacy between a doctor and a patient is seen as incredibly important, and the same should hold in any situation where a person reveals more than they ordinarily would, as with CT scans and other diagnoses. The following articles look into the present state of privacy in places like hospitals and diagnostic centres.

      § Health and Privacy http://bit.ly/16L1AJX

      § Privacy Concerns in Whole Body Imaging: A Few Questions http://bit.ly/1jmvH1z

      6. e-Governance -

      The main focus of governments in ICTs is their use for governance. A multiplicity of laws and legislation has been passed by various countries, including India, in an effort to govern the universal space that is the internet. Surveillance is a major part of that governance and control. The articles listed below deal with the ethical issues and drawbacks of the current legal scenario involving ICTs.

      § E-Governance and Privacy http://bit.ly/18fiReX

      § Privacy and Governmental Databases http://bit.ly/18fmSy8

      § Killing Internet Softly with its Rules http://bit.ly/1b5I7Z2

      § Cyber Crime & Privacy http://bit.ly/17VTluv

      § Understanding the Right to Information http://bit.ly/1hojKr7

      § Privacy Perspectives on the 2012-2013 Goa Beach Shack Policy http://bit.ly/ThAovQ

      § Identifying Aspects of Privacy in Islamic Law http://cis-india.org/internet-governance/blog/identifying-aspects-of-privacy-in-islamic-law

      § What Does Facebook's Transparency Report Tell Us About the Indian Government's Record on Free Expression & Privacy? http://cis-india.org/internet-governance/blog/what-does-facebook-transparency-report-tell-us-about-indian-government-record-on-free-expression-and-privacy

      § Search and Seizure and the Right to Privacy in the Digital Age: A Comparison of US and India http://cis-india.org/internet-governance/blog/search-and-seizure-and-right-to-privacy-in-digital-age

      § Internet Privacy in India http://cis-india.org/telecom/knowledge-repository-on-internet-access/internet-privacy-in-india

      § Internet-driven Developments - Structural Changes and Tipping Points http://bit.ly/10s8HVH

      § Data Retention in India http://bit.ly/XR791u

      § 2012: Privacy Highlights in India http://bit.ly/1kWe3n7

      § Big Dog is Watching You! The Sci-fi Future of Animal and Insect Drones http://bit.ly/1kWee1W

      7. Whistle-blowers -

      Whistle-blowers are always in a difficult situation when they must reveal the misdeeds of corporations and governments, given the blowback that is possible if their identity is revealed to the public. As the cases of Edward Snowden and many others show, a whistle-blower's identity must be kept strictly private to shield them from the consequences of the information they have revealed. This is the main focus of the article below.

      § The Privacy Rights of Whistle-blowers http://bit.ly/18GWmM3

      8. Cloud and Open Source -

      Cloud computing and open source software have grown rapidly over the past few decades. Cloud computing is when an individual or company uses offsite hardware, provided and owned by someone else, on a pay-per-usage basis; its advantages are low initial costs and easy access. Open source software, on the other hand, is software that, despite proprietary elements and ongoing innovation, is available to the public at no charge. Such software is based on open standards and has the obvious advantage of being compatible with many different set-ups, in addition to being free. The following article highlights these computing solutions.

      § Privacy, Free/Open Source, and the Cloud http://bit.ly/1cTmGoI

      9. e-Commerce -

      One of the fastest growing applications of the internet is e-Commerce, which includes many facets of commerce such as online trading and stock exchanges. In these cases, just as in the financial and banking industries, privacy is very important to protect one's investments and capital. The following article's main focus is the world of e-Commerce and its current privacy scenario.

      § Consumer Privacy in e-Commerce http://bit.ly/1dCtgTs

      Security Research

      by Vanya Rakesh last modified Jan 03, 2016 09:55 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives.
      1. Research on the issue of privacy in different sectors in India.
      2. Monitoring projects, practices, and policies around those sectors.
      3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

      State surveillance in India has been carried out by Government agencies for many years. Recent projects include NATGRID, CMS, NETRA, etc., which aim to overhaul the overall security and intelligence infrastructure in the country. The purpose of such initiatives has been to maintain national security and ensure interconnectivity and interoperability between departments and agencies. Concerns regarding the structure, regulatory frameworks (or lack thereof), and technologies used in these programmes and projects have attracted criticism.

      Surveillance/Security Research -

      1. Central Monitoring System -

      The Central Monitoring System or CMS is a clandestine mass electronic surveillance data mining program installed by the Center for Development of Telematics (C-DOT), a part of the Indian government. It gives law enforcement agencies centralized access to India's telecommunications network and the ability to listen in on and record mobile, landline, satellite, Voice over Internet Protocol (VoIP) calls along with private e-mails, SMS, MMS. It also gives them the ability to geo-locate individuals via cell phones in real time.

      • The Central Monitoring System: Some Questions to be Raised in Parliament http://bit.ly/1fln2vu

      2. Surveillance Industry: Global and Domestic -

      The surveillance industry is a multi-billion dollar economic sector that tracks individuals and their actions, such as e-mails and texts. Justified by terrorism and governments' attempts to fight it, a network has been created that leaves no one with their privacy: everything an individual does in the digital world is subject to surveillance. This includes passive snooping, where an individual's phone calls, text messages and e-mails are monitored, and a more active kind, where cameras, sensors and other devices track an individual's movements and actions. This information allows governments to bypass individual privacy in a manner widely considered unethical, and the data collected is itself vulnerable to cyber-attacks that pose serious risks to privacy and to the individuals themselves. The following articles look into the ethics, risks, vulnerabilities and trade-offs of having a mass surveillance industry in place.

      • Surveillance Technologies http://bit.ly/14pxg74
      • New Standard Operating Procedures for Lawful Interception and Monitoring http://bit.ly/1mRRIo4

      3. Judgements By the Indian Courts -

      The surveillance industry in India has been brought before the court in different cases. The following articles look into the cause of action in these cases along with their impact on India and its citizens.

      4. International Privacy Laws -

      Due to the universality of the internet, many questions of accountability arise and jurisdiction becomes a problem. Therefore certain treaties, agreements and other international legal literature was created to answer these questions. The articles listed below look into the international legal framework which governs the internet.

      5. Indian Surveillance Framework -

      The Indian government's mass surveillance systems are configured a little differently from those of countries such as the USA and the UK, because of vast differences in both existing and required infrastructure. In many ways, the surveillance framework in India is considered worse than that of other countries, owing to the present form of its legal system. The articles below explore the system and its functioning, including the various methods through which we are spied on, along with its ethics and vulnerabilities.

      • A Comparison of Indian Legislation to Draft International Principles on Surveillance of Communications http://bit.ly/U6T3xy
      • Surveillance and the Indian Constitution - Part 2: Gobind and the Compelling State Interest Test http://bit.ly/1dH3meL
      • Surveillance and the Indian Constitution - Part 3: The Public/Private Distinction and the Supreme Court's Wrong Turn http://bit.ly/1kBosnw
      • Mastering the Art of Keeping Indians Under Surveillance http://cis-india.org/internet-governance/blog/the-wire-may-30-2015-bhairav-acharya-mastering-the-art-of-keeping-indians-under-surveillance

      UID Research

      by Vanya Rakesh last modified Jan 03, 2016 09:59 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives.
      1. Researching the vision and implementation of the UID Scheme - both from a technical and regulatory perspective.
      2. Understanding the validity and legality of collection, usage and storage of Biometric information for this scheme.
      3. Raising public awareness around issues concerning privacy, data security and the objectives of the UID Scheme.

      The UID scheme seeks to provide all residents of India an identity number based on their biometrics that can be used to authenticate individuals for the purpose of Government benefits and services. A 2015 Supreme Court ruling has clarified that the UID can only be used in the PDS and LPG Schemes.

      Concerns with the scheme include the broad consent taken at the time of enrolment, the lack of clarity as to what happens with transactional metadata, the centralized storage of biometric information in the CIDR, the seeding of the Aadhaar number into service providers' databases, and the possibility of function creep. There is also no legislation in place to address these privacy and security concerns.

      UID Research -

      1. Ramifications of the Aadhaar and UID Schemes -

      The UID and Aadhaar systems have been bombarded with criticism and plagued with issues ranging from privacy concerns to security risks. The following articles deal with the many problems and drawbacks of these systems.

      § UID and NPR: Towards Common Ground http://cis-india.org/internet-governance/blog/uid-npr-towards-common-ground

      § Public Statement to Final Draft of UID Bill http://bit.ly/1aGf1NN

      § UID Project in India - Some Possible Ramifications http://cis-india.org/internet-governance/blog/uid-in-india

      § Aadhaar Number vs the Social Security Number http://cis-india.org/internet-governance/blog/aadhaar-vs-social-security-number

      § Feedback to the NIA Bill http://cis-india.org/internet-governance/blog/cis-feedback-to-nia-bill

      § Unique ID System: Pros and Cons http://bit.ly/1jmxbZS

      § Submitted seven open letters to the Parliamentary Finance Committee on the UID covering the following aspects: SCOSTA Standards (http://bit.ly/1hq5Rqd), Centralized Database (http://bit.ly/1hsHJDg), Biometrics (http://bit.ly/196drke), UID Budget (http://bit.ly/1e4c2Op), Operational Design (http://bit.ly/JXR61S), UID and Transactions (http://bit.ly/1gY6B8r), and Deduplication (http://bit.ly/1c9TkSg)

      § Comments on Finance Committee Statements to Open Letters on Unique Identity: The Parliamentary Finance Committee responded to the open letters sent by CIS through an email on 12 October 2011. CIS has commented on the points raised by the Committee: http://bit.ly/1kz4H0F

      § Unique Identification Scheme (UID) & National Population Register (NPR), and Governance http://cis-india.org/internet-governance/blog/uid-and-npr-a-background-note

      § Financial Inclusion and the UID http://cis-india.org/internet-governance/privacy_uidfinancialinclusion

      § The Aadhaar Case http://cis-india.org/internet-governance/blog/the-aadhaar-case

      § Do we need the Aadhaar scheme? http://bit.ly/1850wAz

      § 4 Popular Myths about UID http://bit.ly/1bWFoQg

      § Does the UID Reflect India? http://cis-india.org/internet-governance/blog/privacy/uid-reflects-india

      § Would it be a unique identity crisis? http://cis-india.org/news/unique-identity-crisis

      § UID: Nothing to Hide, Nothing to Fear? http://cis-india.org/internet-governance/blog/privacy/uid-nothing-to-hide-fear

      2. Right to Privacy and UID -

      The UID system has been hit by many privacy concerns from NGOs, private individuals and others. Sharing one's information, especially fingerprints and retinal scans, with a government-controlled system that has not been vetted as secure irks most people. These issues are dealt with in the following articles.

      § India Fears of Privacy Loss Pursue Ambitious ID Project http://cis-india.org/news/india-fears-of-privacy-loss

      § Analysing the Right to Privacy and Dignity with Respect to the UID http://cis-india.org/internet-governance/blog/privacy/privacy-uiddevaprasad

      § Supreme Court order is a good start, but is seeding necessary? http://cis-india.org/internet-governance/blog/supreme-court-order-is-a-good-start-but-is-seeding-necessary

      § Right to Privacy in Peril http://cis-india.org/internet-governance/blog/right-to-privacy-in-peril

      3. Data Flow in the UID -

      The articles below deal with the manner in which data is moved around and handled in the UID system in India.

      § UIDAI Practices and the Information Technology Act, Section 43A and Subsequent Rules http://cis-india.org/internet-governance/blog/uid-practices-and-it-act-sec-43-a-and-subsequent-rules

      § Data flow in the Unique Identification Scheme of India http://cis-india.org/internet-governance/blog/data-flow-in-unique-identification-scheme-of-india

      CIS's Position on Net Neutrality

      by Sunil Abraham last modified Dec 09, 2015 01:06 PM
      Contributors: pranesh
      As researchers committed to the principle of pluralism, we rarely produce institutional positions. This is also because we tend to update our positions based on research outputs. But the lack of clarity around our position on network neutrality has led some stakeholders to believe that we are advocating for forbearance. Nothing could be further from the truth. Please see below for the current articulation of our common institutional position.

       

      1. Net Neutrality violations can potentially have multiple categories of harms — competition harms, free speech harms, privacy harms, innovation and ‘generativity’ harms, harms to consumer choice and user freedoms, and diversity harms thanks to unjust discrimination and gatekeeping by Internet service providers.

      2. Net Neutrality violations (including those forms of zero-rating that violate net neutrality) can also have different kinds of benefits — enabling the right to freedom of expression and the freedom of association, especially when access to communication and publishing technologies is increased; increased competition [by enabling product differentiation, which can potentially allow small ISPs to compete against market incumbents]; increased access [usually to a subset of the Internet] by those without any access because they cannot afford it; increased access [usually to a subset of the Internet] by those who don't see any value in the Internet; and reduced payments by those who already have access to the Internet, especially if their usage is dominated by certain services and destinations.

      3. Given the magnitude and variety of potential harms, complete forbearance from all regulation is not an option for regulators nor is self-regulation sufficient to address all the harms emerging from Net Neutrality violations, since incumbent telecom companies cannot be trusted to effectively self-regulate. Therefore, CIS calls for the immediate formulation of Net Neutrality regulation by the telecom regulator [TRAI] and the notification thereof by the government [Department of Telecom of the Ministry of Information and Communication Technology]. CIS also calls for the eventual enactment of statutory law on Net Neutrality.  All such policy must be developed in a transparent fashion after proper consultation with all relevant stakeholders, and after giving citizens an opportunity to comment on draft regulations.

      4. Even though some of these harms may be large, CIS believes that a government cannot apply the precautionary principle in the case of Net Neutrality violations. Banning technical innovations and business model innovations is not an appropriate policy option. The regulation must toe a careful line to solve the optimization problem: refraining from over-regulation of ISPs and harming innovation at the carrier level (and benefits of net neutrality violations mentioned above) while preventing ISPs from harming innovation and user choice.  ISPs must be regulated to limit harms from unjust discrimination towards consumers as well as to limit harms from unjust discrimination towards the services they carry on their networks.

      5. Based on regulatory theory, we believe that a regulatory framework that is technologically neutral, that factors in differences in technological context, as well as market realities and existing regulation, and which is able to respond to new evidence is what is ideal.

        This means that we need a framework that has some bright-line rules, but which allows for flexibility in determining the scope of exceptions and in the application of the rules.  Candidate principles to be embodied in the regulation include transparency, non-exclusivity, and limiting unjust discrimination.

      6. The harms emerging from walled gardens can be mitigated in a number of ways. On zero-rating, the form of regulation must depend on the specific model and the potential harms that result from that model. Zero-rating can be: paid for by the end consumer, or subsidized by ISPs, content providers, government, or a combination of these; deal-based, criteria-based, or government-imposed; ISP-imposed, or offered by the ISP and chosen by consumers; transparent and understood by consumers vs. non-transparent; based on content-type or agnostic to content-type; service-specific, service-class/protocol-specific, or service-agnostic; available on one ISP or on all ISPs.  Zero-rating by a small ISP with 2% penetration will not have the same harms as zero-rating by the largest incumbent ISP.  For service-agnostic / content-type-agnostic zero-rating, which Mozilla terms ‘equal rating’, CIS advocates for no regulation.

      7. CIS believes that Net Neutrality regulation for mobile and fixed-line access must be different recognizing the fundamental differences in technologies.

      8. On specialized services, CIS believes that there should be logical separation, and that all details of such specialized services and their impact on the Internet must be made transparent to consumers (both individual and institutional), the general public, and the regulator.  Further, such services should be available to the user only upon request, not without their active choice, and subject to the requirement that the service either cannot reasonably be provided with the ‘best efforts’ delivery guarantee available over the Internet, and hence requires discriminatory treatment, or that the discriminatory treatment does not unduly harm the provision of the rest of the Internet to other customers.

      9. On incentives for telecom operators, CIS believes that the government should consider different models such as waiving contribution to the Universal Service Obligation Fund for prepaid consumers, and freeing up additional spectrum for telecom use without royalty using a shared spectrum paradigm, as well as freeing up more spectrum for use without a licence.

      10. On reasonable network management CIS still does not have a common institutional position.

      Smart Cities in India: An Overview

      by Vanya Rakesh last modified Jan 11, 2016 01:30 AM
      The Government of India is in the process of developing 100 smart cities in India which it sees as the key to the country's economic and social growth. This blog post gives an overview of the Smart Cities project currently underway in India. The smart cities mission in India is at a nascent stage and an evolving area for research. The Centre for Internet and Society will continue work in this area.

      Overview of the 100 Smart Cities Mission

      The Government of India announced its flagship programme, the 100 Smart Cities Mission, in 2014, and the Mission was launched in June 2015 to achieve urban transformation, drive economic growth and improve people's quality of life by enabling local area development and harnessing technology. Initially, the Mission aims to cover 100 cities across the country (shortlisted on the basis of a Smart Cities Proposal prepared by each city) and its duration will be five years (FY 2015-16 to FY 2019-20). The Mission may be continued thereafter in the light of an evaluation to be done by the Ministry of Urban Development (MoUD) and the incorporation of its learnings into the Mission. The Mission focuses on area-based development in the form of redevelopment of existing spaces, or the development of new (greenfield) areas, to accommodate the growing urban population and ensure comprehensive planning to improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged. [1] On 27th August 2015 the Centre unveiled the 98 smart cities across India selected for this project. Across the selected cities, a population of 13 crore (35% of the urban population) will be included in the development plans. [2] The vision is to preserve India's traditional architecture, culture and ethnicity while implementing modern technology to make cities livable, use resources in a sustainable manner and create an inclusive environment. [3]

      The promises of the Smart City mission include reduction of carbon footprint, adequate water and electricity supply, proper sanitation, including solid waste management, efficient urban mobility and public transport, affordable housing, robust IT connectivity and digitalization, good governance, citizen participation, security of citizens, health and education.

      Questions unanswered

      • Why and How was the Smart Cities project conceptualized in India? What was the need for such a project in India?
      • What was the role of the public/citizens at the ideation and conceptualization stage of the project?
      • Which actors from the Government, private industry and civil society are involved in this mission? Though the smart cities mission has been initiated by the Government of India under the Ministry of Urban Development, there is no clarity about the involvement of the Ministry's associated offices and departments.

      How are the Smart Cities being selected?

      The 100 cities were to be selected through a Smart Cities Challenge [4] involving two stages. Stage I of the challenge involved intra-state city selection on objective criteria to identify cities to compete in Stage II. In August 2015, the Ministry of Urban Development, Government of India announced the 100 'shortlisted cities' [5], evaluated on parameters such as service levels, financial and institutional capacity, and past track record. The shortlisted cities are now competing for selection in the second stage of the challenge, which is an all-India competition. For this crucial stage, each of the potential 100 smart cities is required to prepare a Smart City Proposal (SCP) stating the model chosen (retrofitting, redevelopment, greenfield development or a mix), along with a pan-city dimension with smart solutions. The proposal must also include suggestions collected through consultations with city residents and other stakeholders, along with a financing proposal for the smart city plan, including the revenue model to attract private participation. The country saw wide participation from citizens voicing their aspirations and concerns regarding the smart city. 15th December 2015 was declared the deadline for submission of the SCP, which must be in consonance with evaluation criteria drawn up by the MoUD on the basis of professional advice. [6] On this basis, 20 cities will be selected for the first year. According to the latest reports, the Centre plans to fund only 10 cities in the first phase if the proposals sent by the states do not meet the expected quality standards or the states are unable to submit complete area-development plans by the deadline, i.e. 15th December 2015. [7]

      Questions unanswered

      • Who would be undertaking the task of evaluating and selecting the cities for this project?
      • What are the criteria for selection of a city to qualify in the first 20 (or 10, depending on the Central Government) for the first phase of implementation?

      How are the smart cities going to be Funded?

      The Smart City Mission will be operated as a Centrally Sponsored Scheme (CSS), and the Central Government proposes to give financial support to the Mission to the extent of Rs. 48,000 crore over five years, i.e. on average Rs. 100 crore per city per year. [8] Additional resources will have to be mobilized by the States/ULBs from external and internal sources. According to the scheme, once the list of shortlisted smart cities is finalized, Rs. 2 crore will be disbursed to each city for proposal preparation. [9]

      According to estimates of the Central Government, around Rs. 4 lakh crore of funds will be infused mainly through private investments and loans from multilateral institutions, among other sources, accounting for 80% of the total spending on the mission. [10] For this purpose, the Government will approach the World Bank and the Asian Development Bank (ADB) for loans of £500 million and £1 billion for 2015-20. If the ADB approves the loan, it will be the bank's highest funding to India's urban sector so far. [11] Foreign Direct Investment regulations have also been relaxed to draw foreign capital into the Smart City Mission. [12]

      Questions unanswered

      • The Government's notes on financing of the project mention PPPs for private funding and the leveraging of resources from internal and external sources. There is a lack of clarity on the external resources the Government has approached or will approach, and on the various PPP agreements the Government is entering, or planning to enter, into for private investment in the smart cities.

      How is the scheme being implemented?

      Under this scheme, each city is required to establish a Special Purpose Vehicle (SPV) having flexibility regarding planning, implementation, management and operations. The body will be headed by a full-time CEO, with nominees of the Central Government, State Government and ULB on its Board. The SPV will be a limited company incorporated under the Companies Act, 2013 at the city level, in which the State/UT and the Urban Local Body (ULB) will be the promoters, having equity shareholding in the ratio 50:50. The private sector or financial institutions could be considered for taking an equity stake in the SPV, provided the 50:50 shareholding pattern of the State/UT and the ULB is maintained and the State/UT and the ULB together have majority shareholding and control of the SPV. Funds provided by the Government of India under the Smart Cities Mission to the SPV will be in the form of a tied grant and kept in a separate Grant Fund. [13]

      For the purpose of implementation and monitoring of the projects, the MoUD has also established an Apex Committee and National Mission Directorate for National Level Monitoring[14], a State Level High Powered Steering Committee (HPSC) for State Level Monitoring[15] and a Smart City Advisory Forum at the City Level [16].

      Also, several consulting firms[17] have been assigned to the 100 cities to help them prepare action plans.[18] Some of them include CRISIL, KPMG, McKinsey, etc. [19]

      Questions unanswered

      • What policies and regulations have been put in place to account for the smart cities, apart from policies looking at issues of security, privacy, etc.?
      • What international/national standards will be adopted in the development of the smart cities? Though the Bureau of Indian Standards is in the process of formulating standardized guidelines for smart cities in India [20], there is a lack of clarity on the adoption of these national standards, as well as on the role of international standards such as those formulated by ISO.

      What is the role of Foreign Governments and bodies in the Smart cities mission?

      Ever since the government's ambitious project was announced and cities were shortlisted, many countries across the globe have shown keen interest in helping specific shortlisted cities become smart cities and are willing to invest financially. Countries like Sweden, Malaysia, the UAE and the USA have agreed to partner with India for the mission. [21] For example, the UK has partnered with the Government to develop three Indian cities: Pune, Amravati and Indore. [22] Israel's start-up city Tel Aviv has also entered into an agreement to help with urban transformation in the Indian cities of Pune, Nagpur and Nashik, to foster innovation and share its technical know-how. [23] France has expressed interest in Nagpur and Puducherry, while the United States is interested in Ajmer, Vizag and Allahabad. Spain's Barcelona Regional Agency has expressed interest in exchanging technology with Delhi. Apart from foreign governments, many organizations and multilateral agencies are also keen to partner with the Indian government and have offered financial assistance by way of loans. These include the UK government-owned Department for International Development, the German government's KfW development bank, the Japan International Cooperation Agency, the US Trade and Development Agency, the United Nations Industrial Development Organization and the United Nations Human Settlements Programme. [24]

      Questions unanswered

      • Do these governments or organizations have influence on any other component of the smart cities?
      • How much are the foreign governments and multilateral bodies spending on the respective cities?
      • What kind of technical know-how is being shared with the Indian government and cities?

      What is the way ahead?

      On the basis of the SCP, the MoUD will evaluate the proposals, assess their credibility and select 20 smart cities out of the shortlisted ones for execution of the plan in the first phase. Each selected city will set up an SPV and receive funding from the Government.

      Questions unanswered

      • Will the deadline of submission of the Smart Cities Proposal be pushed back?
      • After the SCP is submitted on the basis of consultation with the citizens and public, will they be further involved in the implementation of the project and what will be their role?
      • How will the MoUD and other associated organizations and actors address the implementation realities of the project, such as land displacement and the rehabilitation of slum dwellers?
      • How are ICT based systems going to be utilized to make the cities and the infrastructure "smart"?
      • How is the MoUD going to respond to the concerns and criticism emerging from various sections of the society, as being reflected in the news items?
      • How will the smart cities impact and integrate the existing laws, regulations and policies? Does the Government intend to use the existing legislations in entirety, or update and amend the laws for implementation of the Smart Cities Mission?


      [1] Smart Cities, Mission Statement and Guidelines, Ministry of Urban Development, Government of India, June 2015, Available at : http://smartcities.gov.in/writereaddata/SmartCityGuidelines.pdf

      [2] http://articles.economictimes.indiatimes.com/2015-08-27/news/65929187_1_jammu-and-kashmir-12-cities-urban-development-venkaiah-naidu

      [3] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

      [4] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

      [5] Full list : http://www.scribd.com/doc/276467963/Smart-Cities-Full-List

      [6] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

      [7] http://www.ibtimes.co.in/modi-govt-select-only-10-cities-under-smart-city-project-this-year-report-658888

      [8] http://smartcities.gov.in/writereaddata/Financing%20of%20Smart%20Cities.pdf

      [9] Smart Cities presentation by MoUD : http://smartcities.gov.in/writereaddata/Presentation%20on%20Smart%20Cities%20Mission.pdf

      [10] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/

      [11] http://indianexpress.com/article/india/india-others/funding-for-smart-cities-key-to-coffer-lies-outside-india/#sthash.5lnW9Jsq.dpuf

      [12] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

      [13] http://smartcities.gov.in/writereaddata/SPVs.pdf

      [14] http://smartcities.gov.in/writereaddata/National%20Level%20Monitoring.pdf

      [15] http://smartcities.gov.in/writereaddata/State%20Level%20Monitoring.pdf

      [16] http://smartcities.gov.in/writereaddata/City%20Level%20Monitoring.pdf

      [17] http://smartcities.gov.in/writereaddata/List_of_Consulting_Firms.pdf

      [18] http://pib.nic.in/newsite/PrintRelease.aspx?relid=128457

      [20] http://www.business-standard.com/article/economy-policy/in-a-first-bis-to-come-up-with-standards-for-smart-cities-115060400931_1.html

      [21] http://accommodationtimes.com/foreign-countries-have-keen-interest-in-development-of-smart-cities/

      [22] http://articles.economictimes.indiatimes.com/2015-11-20/news/68440402_1_uk-trade-three-smart-cities-british-deputy-high-commissioner

      [23] http://www.jpost.com/Business-and-Innovation/Tech/Tel-Aviv-to-help-India-build-smart-cities-435161?utm_campaign=shareaholic&utm_medium=twitter&utm_source=socialnetwork

      [24] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/#sthash.nCMxEKkc.dpuf

      ISO/IEC/ JTC 1/SC 27 Working Groups Meeting, Jaipur

      by Vanya Rakesh last modified Dec 21, 2015 02:38 AM
      I attended this event held from October 26 to 30, 2015 in Jaipur.

      The Bureau of Indian Standards (BIS), in collaboration with the Data Security Council of India (DSCI), hosted the global standards meeting – the ISO/IEC JTC 1/SC 27 Working Groups Meeting – in Jaipur, Rajasthan at Hotel Marriott from the 26th to the 30th of October, 2015, followed by a half-day conference on Friday, 30th October on the importance of standards in the domain. The event witnessed experts from across the globe deliberating on forging international standards on privacy, security and risk management in IoT, cloud computing and many other contemporary technologies, along with updating existing standards. Under SC 27, five working groups held parallel meetings on their respective projects and study periods. The five Working Groups are as follows:

      1. WG 1: Information Security Management Systems;
      2. WG 2: Cryptography and Security Mechanisms;
      3. WG 3: Security Evaluation, Testing and Specification;
      4. WG 4: Security Controls and Services; and
      5. WG 5: Identity Management and Privacy Technologies.

      This key set of Working Groups (WGs) met in India for the first time. Professionals discussed and debated the development of international standards under each working group to address issues of security, identity management and privacy.

      CIS had the opportunity to attend meetings under Working Group 5. This group further had parallel meetings on several topics namely:

      • Privacy-enhancing data de-identification techniques (ISO/IEC NWIP 20889): Data de-identification techniques are important when it comes to PII, enabling the benefits of data processing to be exploited while maintaining compliance with regulatory requirements and the relevant ISO/IEC 29100 privacy principles. The selection, design, use and assessment of these techniques need to be performed appropriately in order to effectively address the risks of re-identification in a given context. There is thus a need to classify known de-identification techniques using standardized terminology, and to describe their characteristics, including the underlying technologies, the applicability of each technique to reducing the risk of re-identification, and the usability of the de-identified data. This is the main goal of this International Standard. Meetings were conducted to resolve comments sent by organisations across the world, review draft documents and agree on next steps.
      • A study period on a Privacy Engineering framework: This session deliberated upon contributions and terms of reference, and discussed the scope of the emerging field of privacy engineering. The session also reviewed important terms to be included in the standard and identified possible improvements to existing privacy impact assessment and management standards. It was identified that the goal of this standard is to integrate privacy into systems as part of the systems engineering process. Another concern raised was that the framework must be consistent with the privacy framework under ISO 29100 and the HL7 privacy and security standards.
      • A study period on user-friendly online privacy notices and consent: The basic purpose of this New Work Item Proposal is to assess the viability of producing a guideline for PII controllers on providing easy-to-understand notices and consent procedures to PII principals within WG 5. At the meeting, a brief overview of the contributions received was given, along with an assessment of the liaison with ISO/IEC JTC 1/SC 35 and other entities. This International Standard gives guidelines for the content and structure of online privacy notices, as well as documents asking for consent to collect and process personally identifiable information (PII) from PII principals online, and is applicable to all situations where a PII controller or any other entity processing PII informs PII principals in any online context.
      • Some of the other sessions under Working Group 5 were on Privacy Impact Assessment (ISO/IEC 29134), standardization in the area of biometrics and biometric information protection, the Code of Practice for the protection of personally identifiable information, etc.
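      Two of the best-known families of de-identification techniques that such a classification covers are pseudonymisation (replacing a direct identifier with a derived token) and generalisation (coarsening a value into a range). The sketch below is purely illustrative and is not drawn from the draft ISO/IEC 20889 text; the field names, the salt and the bucket size are all invented for the example.

```python
import hashlib

SALT = "per-dataset-secret"  # hypothetical secret, kept separate from the released data


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted hash (pseudonymisation)."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:12]


def generalise_age(age: int, bucket: int = 10) -> str:
    """Coarsen an exact age into a range (generalisation)."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"


record = {"email": "user@example.com", "age": 34, "diagnosis": "flu"}
released = {
    "pid": pseudonymise(record["email"]),   # stable token, not the email itself
    "age_band": generalise_age(record["age"]),  # "30-39", not 34
    "diagnosis": record["diagnosis"],
}
print(released["age_band"])  # prints "30-39"
```

      Note that, as the re-identification literature stresses, neither technique alone guarantees anonymity: the released quasi-identifiers can still be linked to other datasets, which is precisely the risk the standard's assessment criteria are meant to address.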

      ISO/IEC JTC 1/SC 27 is a joint technical committee of the international standards bodies ISO and IEC on information technology security techniques, which conducts regular meetings across the world. JTC 1 has over 2,600 published standards developed under the broad umbrella of the committee and its 20 subcommittees. Draft International Standards adopted by the joint technical committees are circulated to the national bodies for voting. Publication as an International Standard requires approval by at least 75% of the national bodies casting a vote. In India, the Bureau of Indian Standards (BIS) is the National Standards Body. Standards are formulated keeping in view national priorities, industrial development, technical needs, export promotion, health, safety, etc., and are harmonized with ISO/IEC standards (wherever they exist) to the extent possible, in order to facilitate the adoption of ISO/IEC standards by all segments of industry and business. BIS has been actively participating in the technical committee work of ISO/IEC: it is currently a participating member in 417 ISO and 74 IEC technical committees/subcommittees, and an observer member in 248 ISO and 79 IEC technical committees/subcommittees. BIS also holds secretarial responsibility for 2 technical committees and 6 subcommittees of ISO.

      The last meeting was held in May 2015 in Malaysia, followed by this meeting in October 2015 in Jaipur. 51 countries, India among them, play an active role as 'Participating Members', while a few countries participate as observing members. The participating countries also have the right to vote in all official ballots related to standards. The representatives of each country work on the preparation and development of the International Standards and provide feedback to their national organizations.

      There was an additional study group meeting on IoT to discuss comments on the previous drafts, suggest changes, review responses and identify gaps in SC 27's standards.

      On October 30, 2015, BIS and DSCI hosted a half-day international conference on cyber security and privacy standards, comprising keynotes and panel discussions, and bringing together national and international experts to share experience and exchange views on cyber security techniques, the protection of data and privacy in international standards, and their growing importance in society. The conference looked at various themes such as the role of standards in smart cities and responding to the challenges of investigating cyber crimes through standards. It was emphasised that in an increasingly digital world there is universal agreement on the need for cyber security: because the infrastructure is globally connected, cyber threats are also distributed and are not restricted by geographical boundaries. Hence the need for technical and policy solutions, along with standards, was highlighted for the future protection of a digital world which is now deeply embedded in life, business and government. Standards will help in setting up crucial infrastructure for data security and building associated infrastructure on these lines.

      The importance of standards was also highlighted in the context of smart cities, where the need for standards was discussed by experts. Harmonization of regulations with standards must be looked at, primarily by creating standards which regulators can refer to. Broadly, the challenges faced by smart cities are data security, privacy and the digital resilience of infrastructure; it was suggested that these areas should be addressed first when developing standards for smart cities. The ISO/IEC also has a Working Group and a Strategic Group focusing on smart cities. The risks of digitisation, networks, identity management, etc. must be examined in creating the standards.

      The next meeting has been scheduled for April 2016 in Tampa (USA).

      This meeting was a good opportunity to interact with experts from various parts of the world and to understand the working of ISO meetings, which are held two or three times every year. The Centre for Internet and Society will continue this work and become involved in the standard-setting process at future Working Group meetings.

      RTI PDF

      by Prasad Krishna last modified Dec 22, 2015 02:54 AM

      RTI.pdf — PDF document, 412 kB (422252 bytes)

      RTI response regarding the UIDAI

      by Vanya Rakesh last modified Dec 22, 2015 02:57 AM
      This is a response to the RTI filed regarding UIDAI

      The Supreme Court of India, by an order dated 11th August 2015, directed the Government to publicize widely in electronic and print media, including radio and television networks, that obtaining an Aadhaar card is not mandatory for citizens to avail of the welfare schemes of the Government (until the matter is resolved). CIS filed an RTI to obtain information about the steps taken by the Government in this regard, the initiatives taken, and details of the expenditure incurred to publicize and inform the public that Aadhaar is not mandatory to avail of the Government's welfare schemes.

      Response: The UIDAI informed that an advisory was issued by its headquarters to all regional offices to comply with the order, along with several advertisement campaigns. The total cost incurred so far by the UIDAI for this purpose is Rs. 317.30 lakh.


      Download the Response

      Benefits and Harms of "Big Data"

      by Scott Mason — last modified Dec 30, 2015 02:48 AM
      Today the quantity of data being generated is expanding at an exponential rate. From smartphones and televisions, trains and airplanes, sensor-equipped buildings and even the infrastructures of our cities, data now streams constantly from almost every sector and function of daily life.

      Introduction

      In 2011 it was estimated that the quantity of data produced globally would surpass 1.8 zettabytes[1]. By 2013 that figure had grown to 4 zettabytes[2], and with the nascent development of the so-called 'Internet of Things' gathering pace, these trends are likely to continue. This expansion in the volume, velocity, and variety of data available[3], together with the development of innovative forms of statistical analytics, is generally referred to as "Big Data", though there is no single agreed-upon definition of the term. Although still in its initial stages, Big Data promises to provide new insights and solutions across a wide range of sectors, many of which would have been unimaginable even 10 years ago.

      Despite enormous optimism about the scope and variety of Big Data's potential applications, however, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits[4]. Most notably, these have included concerns about the threats to privacy associated with the generation, collection and use of large quantities of data[5]. However, concerns have also been raised regarding, for example, the lack of transparency around the design of the algorithms used to process the data, over-reliance on Big Data analytics as opposed to traditional forms of analysis, and the creation of new digital divides, to name just a few.

      The existing literature on Big Data is vast; however, many of the benefits and harms identified by researchers relate to sector-specific applications of Big Data analytics, such as predictive policing or targeted marketing. Whilst these examples can be useful in demonstrating the diversity of Big Data's possible applications, it can nevertheless be difficult to gain an overall perspective of the broader impacts of Big Data as a whole. As such, this article will seek to disaggregate the potential benefits and harms of Big Data, organising them into several broad categories which are reflective of the existing scholarly literature.

      What are the potential benefits of Big Data?

      From politicians to business leaders, recent years have seen Big Data confidently proclaimed as a potential solution to a diverse range of problems, from world hunger and disease to government budget deficits and corruption. But if we look beyond the hyperbole and headlines, what do we really know about the advantages of Big Data? Given the current buzz surrounding it, the existing literature on Big Data is perhaps unsurprisingly vast, providing innumerable examples of its potential applications, from agriculture to policing. However, rather than try (and fail) to list the many possible applications of Big Data analytics across all sectors and industries, for the purposes of this article we have instead attempted to distil the various advantages of Big Data discussed within the literature into five broad categories: Decision-Making, Efficiency & Productivity, Research & Development, Personalisation and Transparency, each of which will be discussed separately below.

      Decision-Making

      Whilst data analytics has always been used to improve the quality and efficiency of decision-making processes, the advent of Big Data means that the areas of our lives in which data-driven decision-making plays a role are expanding dramatically, as businesses and governments become better able to exploit new data flows. Furthermore, the real-time and predictive nature of the decision-making made possible by Big Data is increasingly allowing these decisions to be automated. As a result, Big Data is providing governments and businesses with unprecedented opportunities to create new insights and solutions, becoming more responsive to new opportunities and better able to act quickly - and in some cases preemptively - to deal with emerging threats.

      This ability of Big Data to speed up and improve decision-making processes can be applied across all sectors, from transport to healthcare, and is often cited within the literature as one of the key advantages of Big Data. Joh, for example, highlights the increased use of data-driven predictive analysis by police forces to help them forecast the times and geographical locations at which crimes are most likely to occur. This allows forces to redistribute their officers and resources according to anticipated need, and in certain cities it has been highly effective in reducing crime rates[6]. Raghupathi, meanwhile, cites the case of healthcare, where predictive modelling driven by Big Data is being used to proactively identify patients who could benefit from preventative care or lifestyle changes[7].

      One area in particular where the decision-making capabilities of Big Data are having a significant impact is the field of risk management[8]. For instance, Big Data can allow companies to map their entire data landscape to help detect sensitive information, such as 16-digit numbers - potentially credit card data - which are not being stored according to regulatory requirements, and intervene accordingly. Similarly, detailed analysis of data held about suppliers and customers can help companies to identify those in financial trouble, allowing them to act quickly to minimize their exposure to any potential default[9].
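      A minimal sketch of the kind of scan described above might pair a pattern match for 16-digit sequences with the Luhn checksum that payment card numbers satisfy, so that plausible card numbers can be flagged while other 16-digit strings are ignored. The function names and the sample record here are invented for illustration; real data-discovery tools are considerably more sophisticated.

```python
import re


def luhn_valid(digits: str) -> bool:
    """Luhn checksum, which valid payment card numbers satisfy."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def find_possible_cards(text: str) -> list[str]:
    """Return 16-digit sequences (allowing space/hyphen separators) that pass Luhn."""
    candidates = re.findall(r"\b(?:\d[ -]?){16}\b", text)
    hits = []
    for c in candidates:
        digits = re.sub(r"[ -]", "", c)
        if len(digits) == 16 and luhn_valid(digits):
            hits.append(digits)
    return hits


record = "order ref 1234, card 4539 1488 0343 6467, phone 9876543210123456"
print(find_possible_cards(record))  # only the Luhn-valid sequence is flagged
```

      Here the 16-digit phone-like number fails the checksum and is discarded, illustrating how a simple statistical filter reduces false positives when mapping a data landscape.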

      Efficiency and Productivity

      In an era when many governments and businesses are facing enormous pressures on their budgets, the desire to reduce waste and inefficiency has never been greater. By providing the information and analysis needed for organisations to better manage and coordinate their operations, Big Data can help to alleviate such problems, leading to the better utilization of scarce resources and a more productive workforce[10].

      Within the literature, such efficiency savings are most commonly discussed in relation to reductions in energy consumption[11]. For example, a report published by Cisco notes how the city of Oslo has managed to reduce the energy consumption of street lighting by 62 percent through the use of smart solutions driven by Big Data[12]. Increasingly, however, statistical models generated by Big Data analytics are also being utilized to identify potential efficiencies in sourcing, scheduling and routing in a wide range of sectors, from agriculture to transport. For example, Newell observes how many local governments are generating large databases of scanned license plates through the use of automated license plate recognition (ALPR) systems, which government agencies can then use to help improve local traffic management and ease congestion[13].

      Commonly, these efficiency savings are only made possible by the often counter-intuitive insights generated by Big Data models. For example, whilst a human analyst planning a truck route would tend to avoid 'drive-bys' - bypassing one stop to reach a third before doubling back - Big Data insights can sometimes show such routes to be more efficient. Efficiency savings of this kind would in all likelihood have gone unrecognised by a human analyst not trained to look for such patterns[14].

      Research, Development, and Innovation

      Perhaps one of the most intriguing benefits of Big Data is its potential use in the research and development of new products and services. As is highlighted throughout the literature, Big Data can help businesses to gain an understanding of how others perceive their products or identify customer demand and adapt their marketing or indeed the design of their products accordingly[15]. Analysis of social media data, for instance, can provide valuable insights into customers' sentiments towards existing products as well as discover demands for new products and services, allowing businesses to respond more quickly to changes in customer behaviour[16].

      In addition to market research, Big Data can also be used during the design and development stage of new products, for example by helping to test thousands of variations of computer-aided designs in an expedient and cost-effective manner. In doing so, businesses and designers are better able to assess how minor changes to a product's design may affect its cost and performance, thereby improving the cost-effectiveness of the production process and increasing profitability.

      Personalisation

      For many consumers, perhaps the most familiar application of Big Data is its ability to help tailor products and services to individual preferences. This phenomenon is most immediately noticeable in online services such as Netflix, where data about users' activities and preferences is collated and analysed to provide a personalised service, for example by suggesting films or television shows the user may enjoy based upon their previous viewing history[17]. By enabling companies to generate in-depth profiles of their customers, Big Data allows businesses to move past the 'one size fits all' approach to product and service design and instead quickly and cost-effectively adapt their services to better meet customer demand.
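      The core idea behind such viewing-history personalisation can be sketched in a few lines: score each unseen title by how much the viewing histories of other users who watched it overlap with one's own. This toy co-occurrence recommender is an illustration of the general principle only; the user IDs and titles are invented, and production systems such as Netflix's use far richer models.

```python
# Toy preference-based recommendation: suggest titles watched by users
# whose viewing histories overlap with the target user's.

histories = {
    "u1": {"Film A", "Film B", "Film C"},
    "u2": {"Film B", "Film C", "Film D"},
    "u3": {"Film A", "Film E"},
}


def recommend(user: str) -> list[str]:
    """Rank unseen titles by how many similar users watched them."""
    seen = histories[user]
    scores: dict[str, int] = {}
    for other, other_seen in histories.items():
        if other == user:
            continue
        overlap = len(seen & other_seen)  # crude similarity: shared titles
        for title in other_seen - seen:   # only titles the user has not seen
            scores[title] = scores.get(title, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)


print(recommend("u1"))  # "Film D" ranks above "Film E" for u1
```

      For "u1", "Film D" is backed by two shared titles with "u2" while "Film E" shares only one with "u3", so the stronger co-occurrence wins: the same weighting-by-similarity logic, at vastly larger scale, underlies the profiling the paragraph describes.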

      In addition to service personalisation, similar profiling techniques are increasingly being utilized in sectors such as healthcare. Here, data about a patient's medical history, lifestyle, and even their gene expression patterns is collated, generating a detailed medical profile which can then be used to tailor treatments to their specific needs[18]. Targeted care of this sort can not only help to reduce costs, for example by helping to avoid over-prescription, but may also help to improve the effectiveness of treatments and so, ultimately, their outcomes.

      Transparency

      If 'knowledge is power', then - so say Big Data enthusiasts - advances in data analytics and the quantity of data available can give consumers and citizens the knowledge to hold governments and businesses to account, as well as to make more informed choices about the products and services they use. Nevertheless, data (even lots of it) does not necessarily equal knowledge. In order for citizens and consumers to be able to fully utilize the vast quantities of data available to them, they must first have some way to make sense of it. For some, Big Data analytics provides just such a solution, allowing users to easily search, compare and analyze available data, thereby helping to challenge existing information asymmetries and make business and government more transparent[19].

      In the private sector, Big Data enthusiasts have claimed that Big Data holds the potential to ensure complete transparency of supply chains, enabling concerned consumers to trace the source of their products, for example to ensure that they have been sourced ethically[20]. Furthermore, Big Data is now making accessible information which was previously unavailable to average consumers, challenging companies whose business models rely on the maintenance of information asymmetries. The real-estate industry, for example, relies heavily upon its ability to acquire and control proprietary information, such as transaction data, as a competitive asset. In recent years, however, many online services have allowed consumers to effectively bypass agents, by providing alternative sources of real-estate data and enabling prospective buyers and sellers to communicate directly with each other[21]. By providing consumers with access to large quantities of actionable data, Big Data can thus help to eliminate established information asymmetries, allowing them to make better and more informed decisions about the products they buy and the services they enlist.

      This potential to harness the power of Big Data to improve transparency and accountability can also be seen in the public sector, with many scholars suggesting that greater access to government data could help to stem corruption and make politics more accountable. This view was recently endorsed by the UN, which highlighted the potential uses of Big Data to improve policymaking and accountability in a report published by the Independent Expert Advisory Group on the "Data Revolution for Sustainable Development". In the report, the experts emphasize the potential of what they term the 'data revolution' to help achieve sustainable development goals, for example by helping civil society groups and individuals to 'develop data literacy and help communities and individuals to generate and use data, to ensure accountability and make better decisions for themselves'[22].

      What are the potential harms of Big Data?

      Whilst it is often easy to be seduced by the utopian visions of Big Data evangelists, in order to ensure that Big Data can deliver the far-reaching benefits its proponents promise, it is vital that we are also sensitive to its potential harms. Within the existing literature, discussions about the potential harms of Big Data are, perhaps understandably, dominated by concerns about privacy. Yet as Big Data has begun to play an increasingly central role in our daily lives, a broad range of new threats has begun to emerge, including issues related to security and scientific epistemology, as well as problems of marginalisation, discrimination and transparency, each of which will be discussed separately below.

      Privacy

      By far the biggest concern raised by researchers in relation to Big Data is its risk to privacy. Given that by its very nature Big Data requires extensive and unprecedented access to large quantities of data, it is hardly surprising that many of the benefits outlined above in one way or another exist in tension with considerations of privacy. Although many scholars have called for a broader debate on the effects of Big Data on ethical best practice[23], a comprehensive exploration of the complex debates surrounding the ethical implications of Big Data goes far beyond the scope of this article. Instead, we will simply attempt to highlight some of the major areas of concern expressed in the literature, including Big Data's effects on established principles of privacy and its implications for the suitability of existing regulatory frameworks governing privacy and data protection.

      1. Re-identification

      Traditionally, many Big Data enthusiasts have pointed to de-identification - the process of anonymising data by removing personally identifiable information (PII) - as a way of justifying the mass collection and use of personal data. By claiming that such measures are sufficient to ensure the privacy of users, data brokers, companies and governments have sought to deflect concerns about the privacy implications of Big Data, and to suggest that it can be compliant with existing regulatory and legal frameworks on data protection.

      However, many scholars remain concerned about the limits of anonymisation. As Tene and Polonetsky observe, 'Once data - such as a clickstream or a cookie number - are linked to an identified individual, they become difficult to disentangle'[24]. They cite the example of University of Texas researchers Narayanan and Shmatikov, who were able to successfully re-identify anonymised Netflix user data by cross-referencing it with data stored in a publicly accessible online database. As Narayanan and Shmatikov themselves explained, 'once any piece of data has been linked to a person's real identity, any association between this data and a virtual identity breaks anonymity of the latter'[25]. The quantity and variety of datasets which Big Data analytics has made associable with individuals is therefore expanding the scope of the types of data that can be considered PII, as well as undermining claims that de-identification alone is sufficient to ensure users' privacy.
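      The mechanics of such a linkage attack can be shown with a deliberately tiny toy example: an "anonymised" dataset still carries quasi-identifiers (here, ZIP code, birth year and gender) that can be joined against a public register to recover names. All records below are fictitious; real attacks like Narayanan and Shmatikov's used far subtler signals, such as overlapping movie-rating patterns, rather than an exact join.

```python
# Toy linkage attack: join a "de-identified" dataset to a public one
# on shared quasi-identifiers (ZIP code, birth year, gender).

anonymised_ratings = [
    {"zip": "560001", "birth_year": 1984, "gender": "F", "film": "Film A"},
    {"zip": "110002", "birth_year": 1990, "gender": "M", "film": "Film B"},
]

public_register = [
    {"name": "Asha", "zip": "560001", "birth_year": 1984, "gender": "F"},
    {"name": "Ravi", "zip": "400001", "birth_year": 1975, "gender": "M"},
]


def reidentify(anon_rows, public_rows):
    """Match rows whose quasi-identifier triples coincide in both datasets."""
    matches = []
    for a in anon_rows:
        for p in public_rows:
            if (a["zip"], a["birth_year"], a["gender"]) == \
               (p["zip"], p["birth_year"], p["gender"]):
                matches.append((p["name"], a["film"]))
    return matches


print(reidentify(anonymised_ratings, public_register))
# The unique quasi-identifier combination re-links "Asha" to her viewing record.
```

      The point of the sketch is that removing the name alone changed nothing: as long as the remaining attributes single out one person in some auxiliary dataset, the anonymity of the released record is broken.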

      2. Privacy Frameworks Obsolete?

      In recent decades, privacy and data protection frameworks based upon a number of so-called 'privacy principles' have formed the basis of most attempts to encourage greater consideration of privacy issues online[26]. For many, however, the emergence of Big Data has raised questions about the extent to which these 'principles of privacy' are workable in an era of ubiquitous data collection.

      Collection Limitation and Data Minimization: Big Data by its very nature requires the collection and processing of very large and very diverse data sets. Unlike other forms of scientific research and analysis, which use sampling techniques to identify and target the types of data most useful to the research questions, Big Data instead seeks to gather as much data as possible in order to achieve full resolution of the phenomenon being studied - a task made much easier in recent years by the proliferation of internet-enabled devices and the growth of the Internet of Things. This goal of attaining comprehensive coverage exists in tension, however, with the key privacy principles of collection limitation and data minimization, which seek to limit both the quantity and variety of data collected about an individual to the absolute minimum[27].

      Purpose Limitation: Since the utility of a given dataset is often not easily identifiable at the time of collection, datasets are increasingly being processed several times for a variety of different purposes. Such practices have significant implications for the principle of purpose limitation, which aims to ensure that organizations are open about their reasons for collecting data, and that they use and process the data for no other purpose than those initially specified [28].

      Notice and Consent: The principles of notice and consent have formed the cornerstones of attempts to protect privacy for decades. Nevertheless, in an era of ubiquitous data collection, the notion that an individual must provide explicit consent for the collection and processing of their data seems increasingly antiquated - a relic of an age when it was possible to keep track of one's personal data relationships and transactions. Today, as data streams become more complex, some have begun to question the suitability of consent as a mechanism to protect privacy. In particular, commentators have noted that, given the complexity of data flows in the digital ecosystem, most individuals are not well placed to make truly informed decisions about the management of their data[29]. In one study, researchers demonstrated that by creating a perception of control, users could be made more likely to share their personal information, regardless of whether they had actually gained any control [30]. As such, for many, the garnering of consent is increasingly becoming a symbolic box-ticking exercise which achieves little more than irritating and inconveniencing customers, whilst imposing a burden on companies and a hindrance to growth and innovation [31].

      Access and Correction: The principle of 'access and correction' refers to the right of individuals to obtain personal information being held about them, as well as the right to erase, rectify, complete or otherwise amend that data. Aside from the well documented problems with privacy self-management, for many the real-time nature of data generation and analysis in an era of Big Data poses a number of structural challenges to this principle. As Yu comments, 'a good amount of data is not pre-processed in a similar fashion as traditional data warehouses. This creates a number of potential compliance problems such as difficulty erasing, retrieving or correcting data. A typical big data system is not built for interactivity, but for batch processing. This also makes the application of changes on a (presumably) static data set difficult'[32].

      Opt In-Out: The notion that the provision of data should be a matter of personal choice on the part of the individual, and that the individual can, if they so choose, 'opt out' of data collection (for example by ceasing to use a particular service), is an important component of privacy and data protection frameworks. The proliferation of internet-enabled devices, their integration into the built environment and the real-time nature of data collection and analysis, however, are beginning to undermine this concept. For many critics of Big Data, the ubiquity of data collection points, as well as the compulsory provision of data as a prerequisite for access to many key online services, is making opting out of data collection not only impractical but in some cases impossible [33].

      3. "Chilling Effects"

      For many scholars, the normalization of large-scale data collection is steadily producing a widespread perception of ubiquitous surveillance amongst users. Drawing upon Foucault's analysis of Jeremy Bentham's panopticon and the disciplinary effects of surveillance, they argue that this perception of permanent visibility can cause users to sub-consciously 'discipline' and self-regulate their own behavior, fearful of being targeted or identified as 'abnormal' [34]. As a result, the pervasive nature of Big Data risks generating a 'chilling effect' on user behavior and free speech.

      Although the notion of "chilling effects" is quite prevalent throughout the academic literature on surveillance and security, the difficulty of quantifying the perception and effects of surveillance on online behavior means that there have been only a limited number of empirical studies of this phenomenon, and none directly related to the chilling effects of Big Data. One study conducted by researchers at MIT, however, sought to assess the impact of Edward Snowden's revelations about NSA surveillance programs on Google search trends. Nearly 6,000 participants were asked to individually rate certain keywords for their perceived degree of privacy sensitivity along multiple dimensions. Using Google's own publicly available search data, the researchers then analyzed search patterns for these terms before and after the Snowden revelations. In doing so they were able to demonstrate a reduction of around 2.2% in searches for those terms deemed most sensitive in nature. According to the researchers, the results 'suggest that there is a chilling effect on search behaviour from government surveillance on the Internet'[35]. Although this study focused on the effects of government surveillance, for many privacy advocates the growing pervasiveness of Big Data risks generating similar results [36].
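The kind of before/after comparison such studies rely on can be illustrated with a toy calculation. The figures below are invented for illustration and do not come from the MIT study; the idea is a simple difference-in-differences, comparing the change in "sensitive" searches against the general trend in "neutral" searches:

```python
# Toy before/after comparison of search volumes. All figures invented.

before = {"sensitive": 10000, "neutral": 20000}
after  = {"sensitive":  9780, "neutral": 20100}

def pct_change(b, a):
    """Percentage change from b to a."""
    return 100.0 * (a - b) / b

sensitive_change = pct_change(before["sensitive"], after["sensitive"])  # -2.2%
neutral_change = pct_change(before["neutral"], after["neutral"])        # +0.5%

# Difference-in-differences: the drop in sensitive searches over and
# above whatever happened to neutral searches in the same period.
chilling = sensitive_change - neutral_change

print(f"sensitive: {sensitive_change:+.1f}%, neutral: {neutral_change:+.1f}%")
print(f"estimated chilling effect: {chilling:+.1f} percentage points")
```

The neutral terms act as a control: without them, a general seasonal decline in searching could be mistaken for a chilling effect.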

      4. Dignitary Harms of Predictive Decision-Making

      In addition to its potentially chilling effects on free speech, the automated nature of Big Data analytics also possesses the potential to inflict so-called 'dignitary harms' on individuals, by revealing insights about them that they would have preferred to keep private [37].

      In an infamous example, following a shopping trip to the retail chain Target, a young girl began to receive mail at her father's house advertising products for babies, including diapers, clothing, and cribs. In response, her father complained to the management of the company, incensed by what he perceived to be the company's attempts to "encourage" pregnancy in teens. A few days later, however, the father was forced to contact the store again to apologize, after his daughter confessed to him that she was indeed pregnant. It was later revealed that Target regularly analyzed the sale of key products such as supplements or unscented lotions in order to generate "pregnancy prediction" scores, which could be used to assess the likelihood that a customer was pregnant and to target them with relevant offers[38]. Such cases, though anecdotal, illustrate how Big Data, if not deployed sensitively, can lead to potentially embarrassing information about users being made public.

      Security

      In relation to cybersecurity Big Data can be viewed to a certain extent as a double-edged sword. On the one hand, the unique capabilities of Big Data analytics can provide organizations with new and innovative methods of enhancing their cybersecurity systems. On the other however, the sheer quantity and diversity of data emanating from a variety of sources creates its own security risks.

      5. "Honey-Pot"

      The larger the quantities of confidential information stored by companies on their databases the more attractive those databases may appear to potential hackers.

      6. Data Redundancy and Dispersion

      Inherent to Big Data systems is the duplication of data to many locations in order to optimize query processing. Data is dispersed across a wide range of data repositories in different servers, in different parts of the world. As a result it may be difficult for organizations to accurately locate and secure all items of personal information.

      Epistemological and Methodological Implications

      In 2008 Chris Anderson infamously proclaimed the 'end of theory'. Writing for Wired Magazine, Anderson predicted that the coming age of Big Data would create a 'deluge of data' so large that the scientific methods of hypothesis, sampling and testing would be rendered 'obsolete' [39]. 'There is now a better way' Anderson insisted, 'Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot'[40].

      In spite of these bold claims, however, many theorists remain skeptical of Big Data's methodological benefits and have expressed concern about its potential implications for conventional scientific epistemologies. For them, the increased prominence of Big Data analytics in science does not signal a paradigmatic transition to a more enlightened data-driven age, but a hollowing out of the scientific method and an abandonment of causal knowledge in favor of shallow correlative analysis[41].

      7. Obfuscation

      Although Big Data analytics can be utilized to study almost any phenomenon where enough data exists, many theorists have warned that simply because Big Data analytics can be used does not necessarily mean that they should be used[42]. Bigger is not always better, and indeed the sheer quantity of data made available to users may in fact act to obscure certain insights. Whereas traditional scientific methods use sampling techniques to identify the most important and relevant data, Big Data by contrast encourages the collection and use of as much data as possible, in an attempt to attain full resolution of the phenomenon being studied. However, not all data is equally useful, and simply inputting as much data as possible into an algorithm is unlikely to produce accurate results and may instead obscure key insights.

      Indeed, whilst the promise of automation is central to a large part of Big Data's appeal, researchers observe that most Big Data analysis still requires an element of human judgement to filter out the 'good' data from the 'bad', and to decide what aspects of the data are relevant to the research objectives. As Boyd and Crawford observe, 'in the case of social media data, there is a "data cleaning" process: making decisions about what attributes and variables will be counted, and which will be ignored. This process is inherently subjective'[43].

      Google's Flu Trends project provides an illustrative example of how Big Data's tendency to maximise data inputs can produce misleading results. Designed to track flu outbreaks based upon data collected from Google searches, the project was initially proclaimed a great success. Gradually, however, it became apparent that the results being produced were not reflective of the reality on the ground. It was later discovered that the algorithms used by the project to interpret search terms were insufficiently accurate to filter out anomalies, such as searches related to the 2009 H1N1 flu pandemic. As such, despite the great promise of Big Data, scholars insist it remains critical to be mindful of its limitations, to remain selective about the types of data included in the analysis, and to exercise caution and intuition whenever interpreting its results [44].

      8. "Apophenia"

      In complete contrast to the problem of obfuscation, Boyd and Crawford observe how Big Data may also lead to the practice of 'apophenia', a phenomenon whereby analysts interpret patterns where none exist, 'simply because enormous quantities of data can offer connections that radiate in all directions'[45]. David Leinweber, for example, demonstrated that data mining techniques could show strong but ultimately spurious correlations between changes in the S&P 500 stock index and butter production in Bangladesh [46]. Such spurious correlations between disparate and unconnected phenomena are a common feature of Big Data analytics and risk leading to unfounded conclusions being drawn from the data.
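How easily such spurious correlations arise can be demonstrated directly: search a large enough collection of unrelated random series and a strong 'correlation' with almost any target will turn up. The following sketch uses invented random-walk data, not Leinweber's actual series:

```python
# Demonstrating apophenia: among many unrelated random series, at least
# one will correlate strongly with a target series by chance alone.
import math
import random

random.seed(7)  # fixed seed so the run is reproducible

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def random_walk(steps):
    """A random walk, standing in for some real-world time series."""
    walk, pos = [], 0.0
    for _ in range(steps):
        pos += random.gauss(0, 1)
        walk.append(pos)
    return walk

target = random_walk(50)  # stands in for, say, a stock index
# Mine 1,000 independent, unrelated random series for the best match.
best = max(abs(pearson(target, random_walk(50))) for _ in range(1000))
print(f"strongest 'correlation' found among pure noise: {best:.2f}")
```

Every series here is independent noise, yet the best match is typically a large correlation coefficient; with enough candidate variables, "significant" relationships are guaranteed to appear.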

      Although Leinweber's primary focus of analysis was the use of data-mining technologies, his observations are equally applicable to Big Data. Indeed, the tendency amongst Big Data analysts to marginalise the types of domain-specific expertise capable of differentiating between relevant and irrelevant correlations, in favour of algorithmic automation, can in many ways be seen to exacerbate the problems Leinweber identified.

      9. From Causation to Correlation

      Closely related to the problem of apophenia is the concern that Big Data's emphasis on correlative analysis risks leading to an abandonment of the pursuit of causal knowledge in favour of shallow descriptive accounts of scientific phenomena[47].

      For many Big Data enthusiasts, 'correlation is enough', producing inherently meaningful results interpretable by anyone without the need for pre-existing theory or hypothesis. Proponents of Big Data claim that such an approach allows them to produce objective knowledge, cleansed of any philosophical or ideological commitment. For others, however, by neglecting the knowledge of domain experts Big Data risks generating a shallow type of analysis, since it fails to adequately embed observations within a pre-existing body of knowledge.

      This commitment to an empiricist epistemology and methodological monism is particularly problematic in the context of studies of human behaviour, where actions cannot be calculated and anticipated using quantifiable data alone. In such instances, a certain degree of qualitative analysis of social, historical and cultural variables may be required in order to make the data meaningful by embedding it within a broader body of knowledge. The abstract and intangible nature of these variables requires a great deal of expert knowledge and interpretive skill to comprehend. It is therefore vital that the knowledge of domain specific experts is properly utilized to help 'evaluate the inputs, guide the process, and evaluate the end products within the context of value and validity'[48].

      As such, although Big Data can provide unrivalled accounts of "what" people do, it fundamentally fails to deliver robust explanations of "why" people do it. This problem is especially critical in the case of public policy-making since without any indication of the motivations of individuals, policy-makers can have no basis upon which to intervene to incentivise more positive outcomes.

      Digital Divides and Marginalisation

      Today data is a highly valuable commodity. The market for data has grown steadily in recent years, with the business models of many online services now built around the strategy of harvesting data from users[49]. As with the commodification of anything, however, inequalities can easily emerge between the haves and have-nots. Whilst the quantity of data currently generated on a daily basis is many times greater than at any other point in human history, the vast majority of this data is owned and tightly controlled by a very small number of technology companies and data brokers. Although in some instances limited access may be granted to university researchers or to those willing and able to pay a fee, in many cases data remains jealously guarded by data brokers, who view it as an important competitive asset. As a result these data brokers and companies risk becoming the gatekeepers of the Big Data revolution, adjudicating not only over who can benefit from Big Data, but also in what context and under what terms. For many, such inconsistencies and inequalities in access to data raise serious doubts about just how widely distributed the benefits of Big Data will be. Others go further, claiming that far from helping to alleviate inequalities, the advent of Big Data risks exacerbating already significant digital divides as well as creating new ones [50].

      10. Anti-Competitive Practices

      As a result of the reluctance of large companies to share their data, there increasingly exists a divide in access between small start-up companies and their larger, more established competitors. New entrants to the marketplace may thus be at a competitive disadvantage, unable to harness the analytical power of the vast quantities of data available to large companies by virtue of their privileged market position. Since the performance of many online services is today intimately connected with the collation and use of users' data, some researchers have suggested that this inequity in access to data could lead to a reduction in competition in the online marketplace, and ultimately therefore to less innovation and choice for consumers[51].

      As a result, researchers including Nathan Newman of New York University have called for a reassessment and reorientation of anti-trust investigations and regulatory approaches more generally, 'to focus on how control of personal data by corporations can entrench monopoly power and harm consumer welfare in an economy shaped increasingly by the power of "big data"'[52]. Similarly, a report produced by the European Data Protection Supervisor concluded that, 'The scope for abuse of market dominance and harm to the consumer through refusal of access to personal information and opaque or misleading privacy policies may justify a new concept of consumer harm for competition enforcement in digital economy' [53].

      11. Research

      From a research perspective barriers to access to data caused by proprietary control of datasets are problematic, since certain types of research could become restricted to those privileged enough to be granted access to data. Meanwhile those denied access are left not only incapable of conducting similar research projects, but also unable to test, verify or reproduce the findings of those who do. The existence of such gatekeepers may also lead to reluctance on the part of researchers to undertake research critical of the companies, upon whom they rely for access, leading to a chilling effect on the types of research conducted[54].

      12. Inequality

      Whilst bold claims are regularly made about the potential of Big Data to deliver economic development and generate new innovations, some critics remain concerned about how equally the benefits of Big Data will be distributed and the effects this could have on already established digital divides [55].

      Firstly, whilst the power of Big Data is already being utilized effectively by most economically developed nations, the same cannot necessarily be said for many developing countries. A combination of lower levels of connectivity, poor information infrastructure, underinvestment in information technologies and a lack of skills and trained personnel makes it far more difficult for the developing world to fully reap the rewards of Big Data. As a consequence, the Big Data revolution risks deepening global economic inequality as developing countries find themselves unable to compete with data-rich nations whose governments can more easily exploit the vast quantities of information generated by their technically literate and connected citizens.

      Likewise, to the extent that Big Data analytics plays a greater role in public policy-making, the capacity of individuals to generate large quantities of data could potentially affect the extent to which they can provide inputs into the policy-making process. In a country such as India, for example, where there exist high levels of inequality in access to information and communication technologies and the internet, there remain large discrepancies in the quantities of data produced by individuals. As a result there is a risk that those who lack access to the means of producing data will be disenfranchised, as policy-making processes become configured to accommodate the needs and interests of a privileged minority [56].

      Discrimination

      13. Injudicious or Discriminatory Outcomes

      Big Data presents the opportunity for governments, businesses and individuals to make better, more informed decisions at a much faster pace. Whilst this can evidently provide innumerable opportunities to increase efficiency and mitigate risk, by removing human intervention and oversight from the decision-making process, Big Data analysts run the risk of becoming blind to unfair or injudicious results generated by skewed or discriminatory programming of the algorithms.

      There currently exist a large number of automated decision-making algorithms in operation across a broad range of sectors, including most notably perhaps those used to assess an individual's suitability for insurance or credit. In either of these cases, faults in the programming or discriminatory assessment criteria can have potentially damaging implications for the individual, who may as a result be unable to obtain credit or insurance. This concern with the potentially discriminatory aspects of Big Data is prevalent throughout the literature, and real-life examples have been identified by researchers in a large number of major sectors in which Big Data is currently being used[57].

      Yu, for instance, cites the case of the insurance company Progressive, which required its customers to install 'Snapshot' - a small monitoring device - into their cars in order to receive its best rates. The device tracked and reported customers' driving habits, and offered discounts to those drivers who drove infrequently, braked smoothly, and avoided driving at night - behaviors that correlate with a lower risk of future accidents. Although this form of price differentiation provided incentives for customers to drive more carefully, it also had the unintended consequence of unfairly penalizing late-night shift workers. As Yu observes, 'for late night shift-workers, who are disproportionately poorer and from minority groups, this differential pricing provides no benefit at all. It categorizes them as similar to late-night party-goers, forcing them to carry more of the cost of the intoxicated and other irresponsible driving that happens disproportionately at night'[58].

      In another example, it is noted how Big Data is increasingly being used to evaluate applicants for entry-level service jobs. One method of evaluating applicants is by the length of their commute - the rationale being that employees with shorter commutes are statistically more likely to remain in the job longer. However, since most service jobs are typically located in town centers and poorer neighborhoods tend to lie on the outskirts of town, such criteria can have the effect of unfairly disadvantaging those living in economically deprived areas. Such metrics of evaluation can therefore also unintentionally reinforce existing social inequalities by making it more difficult for economically disadvantaged communities to work their way out of poverty[59].
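How a facially neutral rule like a commute-length cutoff can produce disparate impact can be made concrete with a toy screening example. The applicants, neighbourhoods, and cutoff below are all invented; the check at the end is a simplified version of the US EEOC's "four-fifths rule", which flags disparate impact when one group's selection rate falls below 80% of the most-favoured group's rate:

```python
# Toy illustration of disparate impact from a neutral screening rule.
# All applicants and figures are invented.

applicants = [
    {"neighbourhood": "central", "commute_min": 15},
    {"neighbourhood": "central", "commute_min": 20},
    {"neighbourhood": "central", "commute_min": 25},
    {"neighbourhood": "central", "commute_min": 55},
    {"neighbourhood": "outskirts", "commute_min": 50},
    {"neighbourhood": "outskirts", "commute_min": 60},
    {"neighbourhood": "outskirts", "commute_min": 70},
    {"neighbourhood": "outskirts", "commute_min": 30},
]

CUTOFF = 40  # "hire" only applicants with commutes at or under 40 minutes

def selection_rate(group):
    """Fraction of a group's applicants passing the commute screen."""
    rows = [a for a in applicants if a["neighbourhood"] == group]
    hired = [a for a in rows if a["commute_min"] <= CUTOFF]
    return len(hired) / len(rows)

central = selection_rate("central")      # 3 of 4 pass -> 0.75
outskirts = selection_rate("outskirts")  # 1 of 4 pass -> 0.25

# Four-fifths rule: ratio of the lower selection rate to the higher one.
ratio = outskirts / central
print(f"selection rates: central={central:.2f}, outskirts={outskirts:.2f}")
print(f"impact ratio={ratio:.2f} -> {'flagged' if ratio < 0.8 else 'ok'}")
```

The screening rule never mentions neighbourhood, income, or any protected attribute, yet because commute length correlates with where poorer applicants live, the outcome is sharply skewed against one group.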

      14. Lack of Algorithmic Transparency

      If data is indeed the 'oil of the 21st century'[60], then algorithms are very much the engines driving innovation and economic development. For many companies the quality of their algorithms is often a crucial factor in providing them with a market advantage over their competitors. Given their importance, the secrets behind the programming of algorithms are closely guarded by companies, typically classified as trade secrets and protected by intellectual property rights. Whilst companies may claim that such secrecy is necessary to encourage market competition and innovation, many scholars are becoming increasingly concerned about the lack of transparency surrounding the design of these most crucial tools.

      In particular, there is a growing sentiment amongst researchers that there currently exists a chronic lack of accountability and transparency in terms of how Big Data algorithms are programmed and what criteria are used to determine outcomes [61]. As Frank Pasquale observed,

      'hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy. Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy'[62].

      As such, without increased transparency in algorithmic design, instances of Big Data discrimination may go unnoticed, as analysts are unable to access the information necessary to identify them.

      Conclusion

      Today Big Data presents us with as many challenges as it does benefits. Whilst Big Data analytics can offer incredible opportunities to reduce inefficiency, improve decision-making and increase transparency, concerns remain about the effects of these new technologies on issues such as privacy, equality and discrimination. Although the tensions between the competing demands of Big Data advocates and their critics may appear irreconcilable, only by highlighting these points of contestation can we begin to ask the important and difficult questions necessary to reconcile them: how can we reconcile Big Data's need for massive inputs of personal information with core privacy principles such as data minimization and collection limitation? What processes and procedures need to be put in place during the design and implementation of Big Data models and algorithms to provide sufficient transparency and accountability so as to avoid instances of discrimination? What measures can be used to help close digital divides and ensure that the benefits of Big Data are shared equitably? Questions such as these are only just beginning to be addressed; each, however, will require careful consideration and reasoned debate if Big Data is to deliver on its promises and truly fulfil its 'revolutionary' potential.


      [1] Gantz, J., & Reinsel, D. Extracting Value from Chaos, IDC, (2011), available at: http://www.emc.com/collateral/analyst-reports/idc-extracting-value-from-chaos-ar.pdf

      [2] Meeker, M. & Yu, L. Internet Trends, Kleiner Perkins Caulfield Byers, (2013), http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013 .

      [4] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878; Tene, O., & Polonetsky, J. Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [5] Ibid.

      [6] Joh. E, 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 85: 35, (2014) https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

      [7] Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

      [8] Anderson, R., & Roberts, D. 'Big Data: Strategic Risks and Opportunities, Crowe Horwarth Global Risk Consulting Limited, (2012) https://www.crowehorwath.net/uploadedfiles/crowe-horwath-global/tabbed_content/big%20data%20strategic%20risks%20and%20opportunities%20white%20paper_risk13905.pdf

      [9] Ibid.

      [10] Kshetri, N. 'The Emerging role of Big Data in Key development issues: Opportunities, challenges, and concerns'. Big Data & Society (2014) http://bds.sagepub.com/content/1/2/2053951714564227.abstract

      [11] Tene, O., & Polonetsky, J. Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [12] Cisco, 'IoE-Driven Smart Street Lighting Project Allows Oslo to Reduce Costs, Save Energy, Provide Better Service', Cisco, (2014) Available at: http://www.cisco.com/c/dam/m/en_us/ioe/public_sector/pdfs/jurisdictions/Oslo_Jurisdiction_Profile_051214REV.pdf

      [13] Newell, B, C. Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information. University of Washington - the Information School, (2013) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2341182

      [14] Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/

      [15] Tucker, Darren S., & Wellford, Hill B., Big Mistakes Regarding Big Data, Antitrust Source, American Bar Association, (2014). Available at SSRN: http://ssrn.com/abstract=2549044

      [16] Davenport, T., Barth., Bean, R. How is Big Data Different, MITSloan Management Review, Fall (2012), Available at, http://sloanreview.mit.edu/article/how-big-data-is-different/

      [17] Tucker, Darren S., & Wellford, Hill B., Big Mistakes Regarding Big Data, Antitrust Source, American Bar Association, (2014). Available at SSRN: http://ssrn.com/abstract=2549044

      [18] Raghupathi, W., &Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

      [19] Brown, B., Chui, M., Manyika, J. 'Are you Ready for the Era of Big Data?', McKinsey Quarterly, (2011), Available at, http://www.t-systems.com/solutions/download-mckinsey-quarterly-/1148544_1/blobBinary/Study-McKinsey-Big-data.pdf ; Benady, D., 'Radical transparency will be unlocked by technology and big data', Guardian (2014) Available at: http://www.theguardian.com/sustainable-business/radical-transparency-unlocked-technology-big-data

      [20] Ibid.

      [21] Ibid.

      [22] United Nations, A World That Counts: Mobilising the Data Revolution for Sustainable Development, Report prepared at the request of the United Nations Secretary-General by the Independent Expert Advisory Group on a Data Revolution for Sustainable Development, (2014), pg. 18; see also, Hilbert, M. Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

      [23] Greenleaf, G. Abandon All Hope? Foreword for Issue 37(2) of the UNSW Law Journal on 'Communications Surveillance, Big Data, and the Law', (2014) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2490425; Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [24] Tene, O., & Polonetsky, J. Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [25] Narayanan and Shmatikov quoted in Ibid.

      [26] OECD, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, The Organization for Economic Co-Operation and Development, (1999); The European Parliament and the Council of the European Union, EU Data Protection Directive, "Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data," (1995)

      [27] Barocas, S., & Selbst, A. D., Big Data's Disparate Impact, California Law Review, Vol. 104, (2015). Available at SSRN: http://ssrn.com/abstract=2477899

      [28] Article 29 Working Group., Opinion 03/2013 on purpose limitation, Article 29 Data Protection Working Party, (2013) available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf

      [29] Solove, D, J. Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013), Available at: http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [30] Brandimarte, L., Acquisti, A., & Loewenstein, G., Misplaced Confidences: Privacy and the Control Paradox, Ninth Annual Workshop on the Economics of Information Security (WEIS), June 7-8 2010, Harvard University, Cambridge, MA, (2010), available at: https://fpf.org/wp-content/uploads/2010/07/Misplaced-Confidences-acquisti-FPF.pdf

      [31] Solove, D, J., Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013), Available at: http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [32] Yu, W, E., Data., Privacy and Big Data-Compliance Issues and Considerations, ISACA Journal, Vol. 3 2014 (2014), available at: http://www.isaca.org/Journal/archives/2014/Volume-3/Pages/Data-Privacy-and-Big-Data-Compliance-Issues-and-Considerations.aspx

      [33] Ramirez, E., Brill, J., Ohlhausen, M., Wright, J., & McSweeny, T., Data Brokers: A Call for Transparency and Accountability, Federal Trade Commission (2014) https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

      [34] Michel Foucault, Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan, London: Allen Lane, Penguin, (1977)

      [35] Marthews, A., & Tucker, C., Government Surveillance and Internet Search Behavior (2015), available at SSRN: http://ssrn.com/abstract=2412564

      [36] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012)

      [37] Hirsch, D., That's Unfair! Or is it? Big Data, Discrimination and the FTC's Unfairness Authority, Kentucky Law Journal, Vol. 103, available at: http://www.kentuckylawjournal.org/wp-content/uploads/2015/02/103KyLJ345.pdf

      [38] Hill, K., How Target Figured Out A Teen Girl Was Pregnant Before Her Father Didhttp://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

      [39] Anderson, C (2008) "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete", WIRED, June 23 2008, www.wired.com/2008/06/pb-theory/

      [40] Ibid.,

      [41] Kitchen, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [42] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679

      [43] Ibid

      [44] Lazer, D., Kennedy, R., King, G., &Vespignani, A. " The Parable of Google Flu: Traps in Big Data Analysis ." Science 343 (2014): 1203-1205. Copy at http://j.mp/1ii4ETo

      [45] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [46] Leinweber, D. (2007) 'Stupid data miner tricks: overfitting the S&P 500', The Journal of Investing, vol. 16, no. 1, pp. 15-22. http://m.shookrun.com/documents/stupidmining.pdf

      [47] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679

      [48] McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

      [49] De Zwart, M. J., Humphreys, S., & Van Dissel, B. Surveillance, big data and democracy: lessons for Australia from the US and UK. Http://www.unswlawjournal.unsw.edu.au/issue/volume-37-No-2. (2014) Retrieved from https://digital.library.adelaide.edu.au/dspace/handle/2440/90048

      [50] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878; Newman, N., Search, Antitrust and the Economics of the Control of User Data, 31 YALE J. ON REG. 401 (2014)

      [51] Newman, N., The Cost of Lost Privacy: Search, Antitrust and the Economics of the Control of User Data (2013). Available at SSRN: http://ssrn.com/abstract=2265026, Newman, N. ,Search, Antitrust and the Economics of the Control of User Data, 31 YALE J. ON REG. 401 (2014)

      [52] Ibid.,

      [53] European Data Protection Supervisor, Privacy and competitiveness in the age of big data:

      The interplay between data protection, competition law and consumer protection in the Digital Economy, (2014), available at: https://secure.edps.europa.eu/EDPSWEB/webdav/shared/Documents/Consultation/Opinions/2014/14-03-26_competitition_law_big_data_EN.pdf

      [54] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [55] Schradie, J., Big Data Not Big Enough? How the Digital Divide Leaves People Out, MediaShift, 31 July 2013, (2013), available at: http://mediashift.org/2013/07/big-data-not-big-enough-how-digital-divide-leaves-people-out/

      [56] Crawford, K., The Hidden Biases in Big Data, Harvard Business Review, 1 April 2013 (2013), available at: https://hbr.org/2013/04/the-hidden-biases-in-big-data

      [57] Robinson, D., Yu, H., Civil Rights, Big Data, and Our Algorithmic Future, (2014) http://bigdata.fairness.io/introduction/

      [58] Ibid.

      [59] Ibid

      [60] Rotellla, P., Is Data The New Oil? Forbes, 2 April 2012, (2012), available at: http://www.forbes.com/sites/perryrotella/2012/04/02/is-data-the-new-oil/

      [61] Barocas, S., &Selbst, A, D., Big Data's Disparate Impact,California Law Review, Vol. 104, (2015). Available at SSRN: http://ssrn.com/abstract=2477899; Kshetri. N, 'The Emerging role of Big Data in Key development issues: Opportunities, challenges, and concerns'. Big Data & Society(2014) http://bds.sagepub.com/content/1/2/2053951714564227.abstract

      [62] Pasquale, F., The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press , (2015)

      Eight Key Privacy Events in India in the Year 2015

      by Amber Sinha — last modified Jan 03, 2016 05:43 AM
      As the year draws to a close, we are enumerating some of the key privacy related events in India that transpired in 2015. Much like the last few years, this year, too, was an eventful one in the context of privacy.

While we did not witness, as one had hoped, any progress in the passage of a privacy law, the year saw significant developments with respect to the ongoing Aadhaar case. The statement by the Attorney General, India's foremost law officer, that there is a lack of clarity over whether the right to privacy is a fundamental right, and the fact that the matter remains unresolved, was a huge setback to the jurisprudence on privacy. [1] However, the court has recognised purpose limitation as applicable to the Aadhaar scheme, limiting the sharing of any information collected during the enrollment of residents in UID. A draft Encryption Policy was released and almost immediately withdrawn in the face of severe public backlash, and an updated Human DNA Profiling Bill was made available for comments. Prime Minister Narendra Modi's much publicised project "Digital India" was in the news throughout the year, and it attracted its fair share of criticism in light of the lack of privacy safeguards it offered. Internationally, a lawsuit brought by Maximilian Schrems, an Austrian privacy activist, dealt a body blow to the fifteen-year-old Safe Harbour Framework in place for data transfers between the EU and the USA. Below, we look at what were, in our view, the eight most important privacy events in India in 2015.

      1. August 11, 2015 order on Aadhaar not being compulsory

In 2012, a writ petition was filed by retired judge K S Puttaswamy challenging the government's policy of attempting to enroll all residents of India in the UID project and linking the Aadhaar card with various government services. A number of other petitions filed against the Aadhaar scheme have also been tagged with this petition, and the court has been hearing them together. On August 11, 2015, the Supreme Court reiterated its position from earlier orders of September 23, 2013 and March 24, 2014, stating that the Aadhaar card shall not be made compulsory for any government services. [2] Building on its earlier position, the court passed the following orders:

a) The government must give wide publicity in the media that it was not mandatory for a resident to obtain an Aadhaar card;

b) The production of an Aadhaar card would not be a condition for obtaining any benefits otherwise due to a citizen;

c) The Aadhaar card would not be used for any purpose other than the PDS Scheme, for distribution of foodgrains and cooking fuel such as kerosene, and for the LPG distribution scheme; and

d) The information about an individual obtained by the UIDAI while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a court for the purpose of criminal investigation.[3]

Despite this being the fifth court order by the Supreme Court[4] stating that the Aadhaar card cannot be a mandatory requirement for access to government services or subsidies, repeated violations continue. One widely reported violation is the continued requirement of an Aadhaar number to set up a Digital Locker account, which led activist Sudhir Yadav to file a petition in the Supreme Court.[5]

      2. No Right to Privacy - Attorney General to SC

The Attorney General, Mukul Rohatgi, argued before the Supreme Court in the Aadhaar case that the Constitution of India does not provide for a fundamental right to privacy.[6] He referred to the body of case law in the Supreme Court dealing with this issue and made a reference to the 1954 case, MP Sharma v. Satish Chandra,[7] stating that there was "clear divergence of opinion" on the right to privacy and terming it "a classic case of unclear position of law." He also referred to the discussion on this matter in the Constituent Assembly Debates and pointed to the fact that the framers of the Constitution did not intend for this to be a fundamental right. He said the matter needed to be referred to a nine-judge Constitution bench.[8] This raises serious questions over the jurisprudence developed by the Supreme Court on the right to privacy over the last five decades. The matter is currently pending resolution by a larger bench, which needs to be constituted by the Chief Justice of India.

      3. Shreya Singhal judgment and Section 69A, IT Act

In the much celebrated judgment Shreya Singhal v. Union of India, in March 2015, the Supreme Court struck down Section 66A of the Information Technology Act, 2000 as unconstitutional and laid down guidelines for online takedowns under the Internet intermediary rules. Significantly, however, the court also upheld Section 69A and the blocking rules under this provision, holding it to be a narrowly drawn provision with adequate safeguards. The rules prescribe a procedure for blocking which involves receipt of a blocking request, examination of the request by a committee, and a review committee which performs oversight functions. However, commentators have pointed to the opacity of the process set out in the rules under this provision. While the rules mandate that a hearing be given to the originator of the content, this safeguard is widely disregarded. The judgment did not discuss Section 69 of the Information Technology Act, 2000, which deals with the decryption of electronic communication; however, the Department of Electronics and Information Technology brought up this issue subsequently, through a draft Encryption Policy, discussed below.

      4. Circulation and recall of Draft Encryption Policy

In September 2015, the Department of Electronics and Information Technology (DeitY) released for public comment a draft National Encryption Policy. The draft received an immediate and severe backlash from commentators, and was withdrawn on September 22, 2015. [9] The government blamed a junior official for the poor drafting of the document and noted that it had been released without review by the Telecom Minister, Ravi Shankar Prasad, and other senior officials.[10] The main areas of contention were a requirement that individuals store plain-text versions of all encrypted communication for a period of 90 days, to be made available to law enforcement agencies on demand; the government's right to prescribe key strength, algorithms and ciphers; and the restriction of encryption to government-notified encryption products and vendors registered with the government.[11] The purport of the above was to limit the ways in which citizens could encrypt electronic communication, and to allow adequate access to law enforcement agencies. The requirement to keep all encrypted information in plain-text format for a period of 90 days garnered particular criticism, as it would allow for the creation of a 'honeypot' of unencrypted data, which could attract theft and attacks.[12] The withdrawal of the draft policy is not the final chapter in this story, as the Telecom Minister has promised that the Department will come back with a revised policy. [13] This attempt to put restrictions on the use of encryption technologies is not only in line with a host of surveillance initiatives that have mushroomed in India in the last few years,[14] but also finds resonance with a global trend which has seen various governments and law enforcement organisations argue against encryption. [15]

      5. Privacy concerns raised about Digital India

The Digital India initiative includes over thirty Mission Mode Projects in various stages of implementation. [16] All of these projects entail the collection of vast quantities of personally identifiable information about citizens. However, most of these initiatives do not have clearly laid down privacy policies.[17] There is also a lack of properly articulated access control mechanisms, and doubts over important issues such as data ownership, since most projects involve public-private partnerships in which private organisations collect, process and retain large amounts of data. [18] Ahead of Prime Minister Modi's visit to the US, over 100 prominent US-based academics released a statement raising concerns about the "lack of safeguards about privacy of information, and thus its potential for abuse" in the Digital India project. [19] It has been pointed out that the initiatives could enable a "cradle-to-grave digital identity that is unique, lifelong, and authenticable, and it plans to widely use the already mired in controversy Aadhaar program as the identification system." [20]

      6. Issues with Human DNA Profiling Bill, 2015

The Human DNA Profiling Bill, 2015 envisions the creation of national and regional DNA databases comprising DNA profiles of the categories of persons specified in the Bill.[21] The categories include offenders, suspects, missing persons, unknown deceased persons, volunteers and such other categories as may be specified by the DNA Profiling Board, which has oversight over these banks. The Bill grants wide discretionary powers to the Board to introduce new DNA indices and make DNA profiles available for new purposes it may deem fit. [22] These powers, and the lack of proper safeguards surrounding issues like consent, retention and collection, pose serious privacy risks if the Bill becomes law. Significantly, there is no element of purpose limitation in the proposed law, which would allow DNA samples to be re-used for unspecified purposes.[23]

      7. Impact of the Schrems ruling on India

In Schrems v. Data Protection Commissioner, the Court of Justice of the European Union (CJEU) annulled Commission Decision 2000/520, under which US data protection rules were deemed sufficient to satisfy EU privacy rules enabling transfers of personal data from the EU to the US, otherwise known as the 'Safe Harbour' framework. The court ruled that the broad formulations of derogations on grounds of national security, public interest and law enforcement in place in the US go beyond the test of proportionality and necessity under the data protection rules.[24] This judgment could also have implications for the data processing industry in India. For a few years now, a framework similar to the Safe Harbour has been under discussion for the transfer of data between India and the EU. The lack of a privacy legislation has been among the significant hurdles in arriving at a framework.[25] In the absence of a Safe Harbour framework, companies in India rely on alternative mechanisms such as Binding Corporate Rules (BCR) or Model Contractual Clauses. These contracts impose obligations on data exporters and importers to ensure that an 'adequate level of data protection' is provided. The Schrems judgment makes it clear that an 'adequate level of data protection' entails a regime that is 'essentially equivalent' to that envisioned under Directive 95/46.[26] What this means is that any new framework of protection between the EU and other countries like the US or India will necessarily have to meet this test of essential equivalence. The PRISM programme in the US and the host of surveillance programmes initiated by the government in India in the last few years could pose problems in satisfying this test, as they do not conform to the proportionality and necessity principles.

      8. The definition of "unfair trade practices" in the Consumer Protection Bill, 2015

The Consumer Protection Bill, 2015, tabled in Parliament towards the end of the monsoon session,[27] has introduced an expansive definition of the term "unfair trade practices." The definition as per the Bill includes the disclosure "to any other person any personal information given in confidence by the consumer."[28] This clause excludes from the scope of unfair trade practices disclosures made under provisions of any law in force or in the public interest. This provision could have a significant impact on personal data protection law in India. Currently, the only rules governing data protection are the Reasonable Security Practices and Procedures and Sensitive Personal Data or Information Rules, 2011,[29] prescribed under Section 43A of the Information Technology Act, 2000. Under these rules, sensitive personal data or information is protected in that its disclosure requires prior permission from the data subject. [30] For other kinds of personal information not categorized as sensitive personal data or information, the only recourse available to data subjects is to claim breach of the terms of the privacy policy, which constitutes a lawful contract. [31] The Consumer Protection Bill, 2015, if enacted as law, could significantly expand the scope of protection available to data subjects. First, unlike the Section 43A rules, the provisions of the Bill would be applicable to physical as well as electronic collection of personal information. Second, disclosure to a third party of personal information other than sensitive personal data or information could also be subject to a similar 'prior permission' criterion under the Bill, if it can be shown that the information was shared by the consumer in confidence.

What we see above are events largely built around a few trends that we have been witnessing in the context of privacy, in India in particular and across the world in general. First, the lack of privacy safeguards in initiatives like the Aadhaar project and Digital India is symptomatic of policies that are not comprehensive in their scope, and consequently fail to address key concerns. Dr Usha Ramanathan has called these "powerpoint based policies", implemented on the basis of proposals which are superficial in their scope and do not give due regard to their impact on a host of issues. [32] Second, the privacy concerns posed by the draft Encryption Policy and the Human DNA Profiling Bill point to a motive of surveillance that is in line with other projects introduced with the intent to protect and preserve national security. [33] Third, the incidents that championed the cause of privacy, like the Schrems judgment, have largely been initiated by activists and civil society actors, and have typically entailed the involvement of the judiciary, often the sole recourse of actors campaigning for the protection of civil rights. It must be noted that jurisprudence on the right to privacy in India has not moved beyond the guidelines set forth by the Supreme Court in PUCL v. Union of India.[34] However, new mass surveillance programmes and the massive collection of personal data by both public and private parties through various schemes mandate a re-look at the standards laid down twenty years ago. The privacy issue pending resolution by a larger bench in the Aadhaar case affords an opportunity to revisit those principles in light of how surveillance has changed in the last two decades, and to strengthen privacy and data protection.


      [1] Right to Privacy not a fundamental right, cannot be invoked to scrap Aadhar: Centre tells Supreme Court, available at http://articles.economictimes.indiatimes.com/2015-07-23/news/64773078_1_fundamental-right-attorney-general-mukul-rohatgi-privacy

      [4] Five SC Orders Later, Aadhaar Requirement Continues to Haunt Many, available at http://thewire.in/2015/09/19/five-sc-orders-later-aadhaar-requirement-continues-to-haunt-many-11065/

      [5] Digital Locker scheme challenged in Supreme Court, available at http://www.moneylife.in/article/digital-locker-scheme-challenged-in-supreme-court/42607.html

      [6] Privacy not a fundamental right, argues Mukul Rohatgi for Govt as Govt affidavit says otherwise, available at http://www.legallyindia.com/Constitutional-law/privacy-not-a-fundamental-right-argues-mukul-rohatgi-for-govt-as-govt-affidavit-says-otherwise

      [7] 1954 SCR 1077.

      [8] Supra Note 1.

      [10] Encryption policy poorly worded by officer: Telecom Minister Ravi Shankar Prasad, available at http://economictimes.indiatimes.com/articleshow/49068406.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

      [11] Updated: India's draft encryption policy puts user privacy in danger, available at http://www.medianama.com/2015/09/223-india-draft-encryption-policy/

      [12] Bhairav Acharya, The short-lived adventure of India's encryption policy, available at http://notacoda.net/2015/10/10/the-short-lived-adventure-of-indias-encryption-policy/

      [13] Supra Note 9.

      [14] Maria Xynou, Big democracy, big surveillance: India's surveillance state, available at https://www.opendemocracy.net/opensecurity/maria-xynou/big-democracy-big-surveillance-indias-surveillance-state

      [15] China passes controversial anti-terrorism law to access encrypted user accounts, available at http://www.theverge.com/2015/12/27/10670346/china-passes-law-to-access-encrypted-communications ; Police renew call against encryption technology that can help hide terrorists, available at http://www.washingtontimes.com/news/2015/nov/16/paris-terror-attacks-renew-encryption-technology-s/?page=all .

      [18] Indira Jaising, Digital India Schemes Must Be Preceded by a Data Protection and Privacy Law, available at http://thewire.in/2015/07/04/digital-india-schemes-must-be-preceded-by-a-data-protection-and-privacy-law-5471/

      [19] US academics raise privacy concerns over 'Digital India' campaign, available at http://yourstory.com/2015/08/us-digital-india-campaign/

      [20] Lisa Hayes, Digital India's Impact on Privacy: Aadhaar numbers, biometrics, and more, available at https://cdt.org/blog/digital-indias-impact-on-privacy-aadhaar-numbers-biometrics-and-more/

      [22] Comments on India's Human DNA Profiling Bill (June 2015 version), available at http://www.genewatch.org/uploads/f03c6d66a9b354535738483c1c3d49e4/IndiaDNABill_FGPI_15.pdf

      [23] Elonnai Hickok, Vanya Rakesh and Vipul Kharbanda, CIS Comments and Recommendations to the Human DNA Profiling Bill, June 2015, available at http://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-human-dna-profiling-bill-2015

      [25] Jyoti Pandey, Contestations of Data, ECJ Safe Harbor Ruling and Lessons for India, available at http://cis-india.org/internet-governance/blog/contestations-of-data-ecj-safe-harbor-ruling-and-lessons-for-india

      [26] Simon Cox, Case Watch: Making Sense of the Schrems Ruling on Data Transfer, available at https://www.opensocietyfoundations.org/voices/case-watch-making-sense-schrems-ruling-data-transfer

      [28] Section 2(41) (I) of the Consumer Protection Bill, 2015.

      [30] Rule 6 of Reasonable security practices and procedures and sensitive personal data or information Rules, 2011

      [31] Rule 4 of Reasonable security practices and procedures and sensitive personal data or information Rules, 2011

      [33] Supra Note 11.

[34] Chaitanya Ramachandran, PUCL v. Union of India Revisited: Why India's Surveillance Law Must Be Redesigned for the Digital Age, available at http://nujslawreview.org/wp-content/uploads/2015/10/Chaitanya-Ramachandran.pdf

      Free Basics: Negating net parity

      by Sunil Abraham last modified Jan 03, 2016 05:58 AM
Researchers funded by Facebook were apparently told by 92 per cent of the Indians they surveyed, respondents from large cities with Internet connections and college degrees, that the Internet "is a human right and that Free Basics can help bring Internet to all of India." What a strange way to frame the question, given that the Internet is not a human right in most jurisdictions.

      The article was published in the Deccan Herald on January 3, 2016.


Free Basics is a gratis service offered by Facebook in partnership with telcos in 37 countries. It is a mobile app that features fewer than 100 of the billion-odd websites currently available on the WWW, which in turn is only a subset of the Internet. Free Basics violates Net Neutrality because it introduces an unnecessary gatekeeper who gets to decide "who is in" and "who is out". Services like Free Basics could permanently alienate the poor from the full choice of the Internet because they create price discrimination hurdles that discourage those who want to leave the walled garden.

      Inika Charles and Arhant Madhyala, two interns at Centre for Internet and Society (CIS), surveyed 1/100th of the Facebook sample, that is, 30 persons with the very same question at a café near our office in Bengaluru. Seventy per cent agreed with Facebook that the Internet was a human right but only 26 per cent thought Free Basics would achieve universal connectivity. My real point here is that numbers don’t matter. At least not in the typical way they do. Facebook dismissed Amba Kak’s independent, unfunded, qualitative research in Delhi, in their second public rebuttal, saying the sample size was only 20.

That was truly ironic. The whole point of her research was the importance of small numbers. Kak says, "For some, it was the idea of an 'emergency' which made all-access plans valuable." A respondent stated: "But maybe once or twice a month, I need some information which only Google can give me... like the other day my sister needed to know results to her entrance exams." If you consider that too mundane, take a moment to picture yourself stranded in the recent Chennai flood. The statistical rarity of a Black Swan does not reduce its importance. A more neutral network is usually a more resilient network. When we do have our next national disaster, do we want to be one of the few countries on the planet that, thanks to flawed regulation, have ended up with a splinternet?

      Telecom Regulatory Authority of India (Trai) chairman R S Sharma rightly expressed some scepticism around numbers when he said “the consultation paper is not an opinion poll.” He elaborated: “The issue here is some sites are being offered to one person free of cost while another is paying for it. Is this a good thing and can operators have such powers?” Had he instead asked “Is this the best option?” my answer would be “no”. Given the way he has formulated the question, our answer is a lawyerly “it depends”. The CIS believes that differential pricing should be prohibited. However, it can be allowed under certain exceptional standards when it is done in a manner that can be justified by the regulator against four axes of sometimes orthogonal policy objectives. They are increased access, enhanced competition, increased user choice and contribution to openness. For example, a permanent ban on Free Basics makes sense in the Netherlands but regulation may be sufficient for India.

      Gatekeeping powers

On the second and more important part of the Trai chairman's question, on the gatekeeping powers of operators, our answer is a simple "no". But then, do we have any evidence that gatekeeping powers have been abused to the detriment of consumer and public interest? No. What do we do when we cannot, like Russell's chicken, use induction to explain our future? Prof Simon Wren-Lewis says, "If Bertrand Russell's chicken had been an economist ...(it would have)... asked a crucial additional question: Why is the farmer doing this? What is in it for him?" There were five serious problems with Free Basics that Facebook has at least partially fixed, thanks mostly to criticism from consumers in India and Brazil. One, exclusivity with the access provider; two, exclusivity with a set of web services; three, lack of transparency regarding retention of personal information; four, misrepresentation through the name of the service, Internet.org; and five, lack of support for encrypted traffic. But how do we know these problems will stay fixed? Emerging markets guru Jan Chipchase tweeted asking, "Do you trust Facebook? Today? Tomorrow? When its share price is under pressure and it wants to wring more $$$ from the platform?"

Zero. Facebook pays telecom operators zero. The operators pay Facebook zero. The consumers pay zero. Why do we need to regulate philanthropy? Because these freebies are not purely the fruit of private capital. They are only possible thanks to an artificial state-supported oligopoly dependent on public resources like spectrum and wires (over and under public property). Therefore, these oligopolies must serve the public interest and also ensure that users are treated in a non-discriminatory fashion.

Also, the provision of a free service should not allow powerful corporations to escape regulation: in jurisdictions like Brazil it is clear that Facebook has to comply with consumer protection law even if users are not paying for the service. Given that big data is the new oil, Facebook could pay the access provider in advertisements, in manipulation of public discourse, or by tweaking software defaults such as autoplay for videos, which could increase the bills of paying consumers quite dramatically.

India needs a Net Neutrality regime that allows for business models and technological innovation as long as they don't discriminate between users and competitors. The Trai should begin regulation based on principles, as it has rightly done with the pre-emptive temporary ban. But there is a need to bring "numbers we can trust" to the regulatory debate. We as citizens need to establish a peer-to-peer Internet monitoring infrastructure across mobile and fixed lines in India that we can use to crowdsource data.

      (The writer is Executive Director, Centre for Internet and Society, Bengaluru. He says CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world)

      Ground Zero Summit

      by Amber Sinha — last modified Jan 03, 2016 06:06 AM
The Ground Zero Summit, which claims to be the largest collaborative platform in Asia for cyber-security, was held in New Delhi from 5 to 8 November. The conference was organised by the Indian Infosec Consortium (IIC), a not-for-profit organisation backed by the Government of India. Cyber security experts, hackers, senior officials from the government and defence establishments, senior professionals from the industry, and policymakers attended the event.

      Keynote Address

      The Union Home Minister, Mr. Rajnath Singh, inaugurated the conference. Mr. Singh described the barriers that governments face in ensuring cyber-security. Calling cyberspace the fifth dimension of security, in addition to land, air, water and space, Mr. Singh emphasised the need to curb cyber-crime in India, which grew by 70% in 2014 over 2013. He highlighted the fact that changes in location, jurisdiction and language made cybercrime particularly difficult to address. Continuing in the same vein, Mr. Singh also mentioned cyber-terrorism as one of the big dangers in the time to come. With a number of government initiatives like Digital India, Smart Cities and Make in India leveraging technology, the Home Minister said that the success of these projects would depend on having robust cyber-security systems in place.

      The Home Minister outlined some initiatives that the Government of India is planning in order to address concerns around cyber security, such as plans to finalise a new national cyber security policy. Significantly, he referred to a committee headed by Dr. Gulshan Rai, the National Cyber Security Coordinator, mandated to suggest a roadmap for effectively tackling cybercrime in India. This committee has recommended the setting up of an Indian Cyber Crime Coordination Centre (I-4C). This centre is meant to engage in capacity building with key stakeholders to enable them to address cyber crimes, and to work with law enforcement agencies. Earlier reports about the recommendation suggest that the I-4C will likely be placed under the National Crime Records Bureau and will align with state police departments through the Crime and Criminal Tracking Network and Systems (CCTNS). The I-4C is supposed to comprise high-quality technical and R&D experts engaged in developing cyber-investigation tools.

      Other keynote speakers included Alok Joshi, Chairman, NTRO; Dr Gulshan Rai, National Cyber Security Coordinator; Dr. Arvind Gupta, Head of IT Cell, BJP and Air Marshal S B Dep, Chief of the Western Air Command.

      Technical Speakers

      There were a number of technical speakers who presented on an array of subjects. The first session was by Jiten Jain, a cyber-security analyst, who spoke on cyber espionage conducted by actors in Pakistan to target defence personnel in India. Jiten Jain talked about how the Indian Infosec Consortium had discovered these attacks in 2014. Most of these websites and mobile apps posed as defence news sources and carried malware and viruses. An investigation conducted by IIC revealed the domains to be registered in Pakistan. In another session, Shesh Sarangdhar, the CEO of Seclabs, an application security company, spoke about the Darknet and ways to break anonymity on it. Sarangdhar noted that anonymity on the Darknet depends on every element of the communication maintaining a specific state. He discussed techniques like using audio files, cross-domain attacks on Tor, and Sybil attacks as methods of deanonymisation. Dr. Triveni Singh, Assistant Superintendent of Police, Special Task Force, UP Police, made a presentation on trends in cyber crime. Dr. Singh emphasised the degree of uncertainty with regard to the purpose of a computer intrusion. He discussed real-life case studies such as data theft, credit card fraud and share-trading fraud from the perspective of law enforcement agencies.

      Anirudh Anand, CTO of Infosec Labs, discussed how web applications are heavily reliant on filters or escaping methods. His talk focused on XSS (cross-site scripting) and bypassing regular expression filters. He also announced the release of XSS Labs, an XSS test bed for security professionals and developers that includes filter-evasion techniques like b-services, weak cryptographic design and cross-site request forgery. Jan Siedl, an authority on SCADA, presented on Tor tricks which may be used by bots, shells and other tools to make better use of the Tor network and I2P. His presentation dealt with using obfuscated bridges, Hidden-Service-based HTTP, multiple C&C addresses and the use of OTP. Aneesha, an intern with the Kerala Police, spoke about elliptic curve cryptography and its features, such as low processing overheads. As this requires messages to be encoded as points on an elliptic curve, efficient encoding and decoding techniques need to be developed. Aneesha spoke about an algorithm called Generator-Inverse for encoding and decoding a message using a Single Sign-On mechanism. Other subjects presented included vulnerabilities that remain despite using TLS/SSL, deception technology and the cyber kill-chain, credit card frauds, post-quantum cryptosystems and popular Android malware.
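      The fragility of regular-expression XSS filters is easy to demonstrate. The sketch below is a hypothetical illustration, not material from the talk: a naive filter that strips `<script>` tags can be defeated both by a simple case change and by nesting tags so that the filter's own removal step reassembles the payload.

```python
import re

def naive_filter(html: str) -> str:
    # Hypothetical filter: strip literal, lowercase <script>...</script> blocks.
    return re.sub(r"<script>.*?</script>", "", html)

# The filter catches the obvious payload...
print(naive_filter("<script>alert(1)</script>"))  # -> ""

# ...but a case change slips straight through (the regex is case-sensitive):
print(naive_filter("<ScRiPt>alert(1)</ScRiPt>"))  # -> "<ScRiPt>alert(1)</ScRiPt>"

# ...and nested tags make the removal itself reassemble the payload:
print(naive_filter("<scr<script></script>ipt>alert(1)</scr<script></script>ipt>"))
# -> "<script>alert(1)</script>"
```

      This is why escaping output for its context, rather than blacklisting input patterns, is the generally recommended defence.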

      Panels

      There were also two panels organised at the conference. Samir Saran, Vice President of the Observer Research Foundation, moderated the first panel, on Cyber Arms Control. The panel included participants like Lt. General A K Sahni from the South Western Air Command; Lt. General A S Lamba, retired Vice Chief of the Indian Army; Alok Vijayant, Director of Cyber Security Operations at NTRO; and Captain Raghuraman from Reliance Industries. The panel debated the virtues of cyber arms control treaties. It acknowledged that there was a need to frame rules and create a governance mechanism for wars in cyberspace. However, such treaties would be effective only if governments were the primary actors with the capability for building cyber-warfare know-how and tools. The reality is that most kinds of cyber weapons involve non-state actors from the hacker community, and in light of this, cyber arms control treaties would lose most of their effectiveness.

      The second panel was on the ‘Make in India’ initiative. Dinesh Bareja, the CEO of Open Security Alliance and Pyramid Cyber Security, was the moderator for this panel, which also included Nandakumar Saravade, CEO of the Data Security Council of India; Sachin Burman, Director of NCIIPC; Dr. B J Srinath, Director General of ICERT; and Amit Sharma, Joint Director of DRDO. The focus of this session was on ‘Make in India’ opportunities in the domain of cyber security. The panelists discussed the role the government and industry could play in creating an ecosystem that supports entrepreneurs in skill development. Among the approaches discussed were involving actors in knowledge-sharing and mentoring chapters, which could be backed by organisations like NASSCOM, and bringing together industry and government experts at events like the Ground Zero Summit to provide knowledge and training on cyber-security issues.

      Exhibitions

      The conference was accompanied by an exhibition showcasing indigenous cyber-security products. The exhibitors included Smokescreen Technologies, Sempersol Consultancy, Ninja Hackon, Octogence Technologies, Secfence, Amity, Cisco Academy, Robotics Embedded Education Services Pvt. Ltd., Defence Research and Development Organisation (DRDO), Skin Angel, Aksit, Alqimi, Seclabs and Systems, Forensic Guru, Esecforte Technologies, Gade Autonomous Systems, National Critical Information Infrastructure Protection Centre (NCIIPC), Indian Infosec Consortium (IIC), INNEFU, Event Social, National Internet Exchange of India (NIXI) and Robotic Zone.

      The conference also witnessed events such as Drone Wars, in which selected participants had to navigate a drone, a Hacker Fashion Show, and the official launch of the Ground Zero music album.

      Understanding the Freedom of Expression Online and Offline

      by Prasad Krishna last modified Jan 03, 2016 10:24 AM

      PROVISIONAL PROGRAMME AGENDA_.pdf — PDF document, 542 kB (555783 bytes)

      ICFI Workshop

      by Prasad Krishna last modified Jan 03, 2016 10:33 AM

      ICFI Workshop note 10thDec2015.pdf — PDF document, 664 kB (680175 bytes)

      Facebook Free Basics: Gatekeeping Powers Extend to Manipulating Public Discourse

      by Vidushi Marda last modified Jan 09, 2016 01:43 PM
      15 million people have come online through Free Basics, Facebook's zero-rated walled garden, in the past year. "If we accept that everyone deserves access to the internet, then we must surely support free basic internet services. Who could possibly be against this?" asks Facebook founder Mark Zuckerberg, in a recent op-ed defending Free Basics.

      The article was published in Catchnews on January 6, 2016.


      This rhetorical question, however, has elicited a plethora of answers. The network neutrality debate has accelerated over the past few weeks with the Telecom Regulatory Authority of India (TRAI) releasing a consultation paper on differential pricing.

      While notifications to "Save Free Basics in India" prompt you on Facebook, an enormous backlash against this zero-rated service has erupted in India.

      Free Basics

      The policy objectives that must guide regulating net neutrality are consumer choice, competition, access and openness. Facebook claims that Free Basics is a transition to the full internet and digital equality. However, by acting as a gatekeeper, Facebook gives itself the distinct advantage of deciding what services people can access for free by virtue of them being "basic", thereby violating net neutrality.

      Amidst this debate, it's important to think of the impact Facebook can have in manipulating public discourse. In the past, Facebook has used its powerful News Feed algorithm to significantly shape our consumption of information online.

      In July 2014, Facebook researchers revealed that for a week in January 2012, it had altered the news feeds of 689,003 randomly selected Facebook users to control how many positive and negative posts they saw. This was done without their consent as part of a study to test how social media could be used to spread emotions online.

      Their research showed that emotions were in fact easily manipulated. Users tended to write posts that were aligned with the mood of their timeline.

      Another worrying indication of Facebook's ability to alter discourse came during the ALS Ice Bucket Challenge in July and August 2014. Users' News Feeds were flooded with videos of individuals pouring a bucket of ice over their heads to raise awareness for a charitable cause, and the campaign did not spread entirely on its own merits.

      The challenge was also Facebook's method of boosting its native video feature, which was launched at around the same time. Meanwhile, the News Feed was mostly devoid of any news surrounding the riots in Ferguson, Missouri, which happened to be a trending topic on Twitter at the same time.

      Each day, the news feed algorithm has to choose roughly 300 posts out of a possible 1,500 for each user, and this involves much more than a random selection. The posts you view when you log into Facebook are carefully curated, keeping thousands of factors in mind. Each like and comment is a signal to the algorithm about your preferences and interests.

      The amount of time you spend on each post is logged and then used to determine which posts you are most likely to stop and read. Facebook even takes into account text that is typed but never posted, and makes algorithmic decisions based on it.

      It also differentiates between likes: if you like a post before reading it, the news feed assumes that your interest is much fainter than if you like a post after spending 10 minutes reading it.
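      To make the mechanics concrete, here is a toy ranking sketch in Python. It is purely illustrative: the signal names and weights are invented, not Facebook's actual algorithm. Each candidate post is scored from a handful of the signals described above, and only the top-scoring posts make the feed.

```python
# Toy illustration only: invented signals and weights, not Facebook's algorithm.
def score(post: dict) -> float:
    return (
        2.0 * post.get("liked_after_reading", 0)     # stronger interest signal
        + 0.5 * post.get("liked_before_reading", 0)  # fainter signal
        + 1.0 * post.get("comments", 0)
        + 0.01 * post.get("seconds_viewed", 0)       # dwell time
    )

def rank_feed(candidates: list, limit: int = 300) -> list:
    # From ~1,500 candidates, keep only the ~300 highest-scoring posts.
    return sorted(candidates, key=score, reverse=True)[:limit]

posts = [
    {"id": "a", "liked_before_reading": 1},
    {"id": "b", "liked_after_reading": 1, "seconds_viewed": 600},
]
print([p["id"] for p in rank_feed(posts)])  # -> ['b', 'a']
```

      The point of the sketch is that whoever chooses the weights chooses what you see: a small tweak to any coefficient silently reorders every user's feed.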

      Facebook believes that this is in the best interest of the user, and that these factors help users see what they will most likely want to engage with. However, it leaves us at the mercy of a gatekeeper who shapes the diversity of the information we consume, more often than not without explicit consent. Transparency is key.


      (Vidushi Marda is a programme officer at the Centre for Internet and Society)

      Human Rights in the Age of Digital Technology: A Conference to Discuss the Evolution of Privacy and Surveillance

      by Amber Sinha — last modified Jan 11, 2016 02:12 AM
      The Centre for Internet and Society organised a conference in roundtable format called ‘Human Rights in the Age of Digital Technology: A Conference to Discuss the Evolution of Privacy and Surveillance’. The conference was held at the India Habitat Centre on October 30, 2015. It was designed to be a forum for discussion, knowledge exchange and agenda building, to draw a shared road map for the coming months.

      In India, the right to privacy has been interpreted to mean an individual's right to be left alone. In an age of massive use of information and communications technology, it has become imperative to have this right protected. The Supreme Court has held in a number of its decisions that the right to privacy is implicit in the fundamental right to life and personal liberty under Article 21 of the Indian Constitution, though Part III does not explicitly mention it. The Supreme Court has identified the right to privacy most often in the context of state surveillance, and has introduced the standards of compelling state interest, targeted surveillance and oversight mechanisms, which have been incorporated in the form of rules under the Indian Telegraph Act, 1885. Of late, privacy concerns have gained importance in India as national programmes like the UID scheme, DNA profiling and the National Encryption Policy have attracted criticism for their impact on the right to privacy. To add to the growing concerns, the Attorney General, Mukul Rohatgi, has argued in the ongoing Aadhaar case that the judicial position on whether the right to privacy is a fundamental right is unclear, and has questioned the entire body of jurisprudence on the right to privacy built over the last few decades.

      Participation

      The roundtable saw participation from various civil society organisations, such as the Centre for Communication Governance and The Internet Democracy Project, as well as individual researchers like Dr. Usha Ramanathan and Colonel Mathew.

      Introductions

      Vipul Kharbanda, Consultant, CIS, made the introductions and laid down the agenda for the day. Vipul presented a brief overview of the kind of work CIS is engaged in around privacy and surveillance, in areas including, among others, the Human DNA Profiling Bill, 2014, the Aadhaar project, the Privacy Bill and surveillance laws in India. It was also highlighted that CIS was working in the field of Big Data, in light of the growing voices wanting to use Big Data in Smart Cities projects and elsewhere; one of the questions was whether the nine Privacy Principles would still be valid in a Big Data and IoT paradigm.

      The Aadhaar Case

      Dr. Usha Ramanathan began by calling the Aadhaar project an identification project as opposed to an identity project. She brought up various aspects of the project, ranging from the myth of voluntariness and the strong and often misleading marketing that has driven the project, to the lack of mandate to collect biometric data and the problems with the technology itself. She highlighted the inconsistencies, irrationalities and lack of process that have characterised the Aadhaar project since its inception. A common theme she identified in how the project has been run was the ad-hoc nature of many important decisions taken on a national scale, including the migration from existing systems to the Aadhaar framework. She particularly highlighted that, for civil society actors trying to make sense of the project, an acute problem was the lack of credible information available. In that respect, she termed it a ‘PowerPoint-driven project’, with a focus on information collection but little information available about the project itself. Another issue Dr. Ramanathan brought up was the lack of concern most people had exhibited in sharing their biometric information without being aware of what it would be used for; this was in some ways symptomatic of the way we have begun to interact with technology, willingly giving information about ourselves with little thought. Dr. Ramanathan’s presentation detailed the response to the project from various quarters in the form of petitions in different high courts in India, how the cases were received by the courts, and the contradictory responses from the government at various stages. She also sought to place the Aadhaar case in the context of various debates and issues, like its conflict with the National Population Register, exclusion, questions around ownership of the data collected, national security implications, and the impact on privacy and surveillance. Aside from the above issues, Dr. Ramanathan also posited that the flat idea of identity envisaged by projects like Aadhaar is problematic in that it adversely impacts how people can live, act and define themselves. In summation, she termed the behaviour of the government irresponsible for the manner in which it has changed its stand on issues to suit the expediency of the moment, and was particularly severe on the Attorney General for raising questions about the existence of a fundamental right to privacy and casually putting in peril jurisprudence on civil liberties that has evolved over decades.

      Colonel Mathew concurred with Dr. Ramanathan that the Aadhaar project was not about identity but about identification. Prasanna developed this further, saying that while identity is a right unto the individual, identification is something done to you by others. Colonel Mathew then presented a brief history of the Aadhaar case and how the significant developments of the last few years have played out in the courts. One of the important questions Colonel Mathew addressed was the claim of uniqueness made by the UID project. He pointed to research conducted by Hans Varghese Mathew, which analysed the data on biometric collection and processing released by the UIDAI and demonstrated a clear probability of duplication in 1 out of every 97 enrolments. He also questioned the oft-repeated claim that UID would give identification to those without it and allow them to access welfare schemes; in this context, he pointed to the failures of the introducer system and the fact that only 0.03% of those registered have been enrolled through it. Colonel Mathew also questioned the change in stance by the ruling party, the BJP, which had earlier declared that the UID project should be scrapped as a threat to national security. According to him, the prime movers of the scheme were corporate interests outside the country interested in the data to be collected. This, he claimed, created very serious risks to national security. Prasanna added that while, on the face of it, some of the claims of threats to national security may sound alarmist, if one were to critically study the manner in which the data had been collected for this project, the concerns appeared justified.

      The Draft Encryption Policy

      Amber Sinha, Policy Officer at CIS, made a presentation on the brief appearance of the Draft Encryption Policy, which was released in September this year and withdrawn by the government within a day. Amber provided an overview of the policy, emphasising the clauses limiting the kinds of encryption algorithms and key sizes individuals and organisations could use, and the ill-advised procedures that needed to be followed. After the presentation, the topic was opened for discussion. The initial part of the discussion focussed on specific clauses that threatened privacy and could serve to enable greater surveillance of the electronic communications of individuals and organisations, most notably the exhaustive list of permitted encryption algorithms and the requirement to keep all encrypted communication in plain-text format for a period of 90 days. We also attempted to locate the draft policy in the context of privacy debates in India as well as the global response to encryption. Amber emphasised that while mandating minimum standards of encryption for communication between government agencies may be an honourable motive, since it concerns matters of national security, extending the policy to private parties and imposing upper thresholds on the kinds of encryption they can use stems from a motive of surveillance. Nayantara, of The Internet Democracy Project, pointed out that there has been a global push-back against encryption by governments in countries like the US, Russia, China, Pakistan, Israel, the UK, Tunisia and Morocco. In India too, the IT Act places limits on encryption. Her point stands further buttressed by the calls against encryption in the aftermath of the terrorist attacks in Paris last month.

      The conference also intended to have a session on the Human DNA Profiling Bill, led by Dr. Menaka Guruswamy. However, due to scheduling issues and paucity of time, the session could not be held.

      Questions Raised

      On Aadhaar, some of the questions raised included the applicability of the rules under Section 43A of the IT Act to the private parties involved in the process. The issue of whether Aadhaar can be a tool against corruption was raised by Vipul. However, Colonel Mathew demonstrated through his research that issues like corruption in the TPDS system and MNREGA, which Aadhaar is supposed to solve, are not effectively addressed by it, and that there are simpler solutions to these problems.

      Ranjit raised questions about the different contexts of privacy, referring to the work of Helen Nissenbaum. He spoke about the history of freely providing biometric information in India, initially for property documents, and how it has gradually come to be used for surveillance. He argued that, due to this tradition, many people in India do not view the sharing of biometric information as infringing on their privacy. Dipesh Jain, a student at Jindal Global Law School, pointed to challenges like how individual privacy is perceived in India, its various contexts, and people resorting to the oft-quoted dictum of ‘why do you want privacy if you have nothing to hide’. In this context, it is pertinent to mention the response of Edward Snowden to this question: “Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say.” Aakash Solanki, researcher

      Vipul and Amber also touched upon the new challenges upon us in a world of Big Data, where traditional ways of ensuring data protection, such as the data minimisation principle and methods like anonymisation, may not work. With advances in computer science and mathematics threatening to re-identify anonymised datasets, growing reliance on secondary uses of data, and the inadequacy of the idea of informed consent, a significant paradigm shift may be required in how we view privacy laws.

      A number of action items going forward were also discussed, with different individuals volunteering to lead research on issues like the UBCC set up by the UIDAI; GSTN, the first national data utility; and the recourse available to an individual whose data is held by parties outside India’s jurisdiction.

      Document Actions