Blog


Paper-thin Safeguards and Mass Surveillance in India

by Chinmayi Arun last modified Jun 20, 2015 10:17 AM
The Indian government's new mass surveillance systems present new threats to the right to privacy. Mass interception of communication, keyword searches and easy access to particular users' data suggest that the state is moving towards unfettered large-scale monitoring of communication. This is particularly ominous given that our privacy safeguards remain inadequate even for targeted surveillance and its more familiar pitfalls.

This need for better safeguards was made apparent when the Gujarat government illegally placed a young woman under surveillance for obviously illegitimate purposes, demonstrating that the current system is prone to egregious misuse. While the lack of proper safeguards is problematic even in the context of targeted surveillance, it threatens the health of our democracy in the context of mass surveillance. The proliferation of mass surveillance means that vast amounts of data are collected easily using information technology, and lie relatively unprotected.

This paper examines the right to privacy and surveillance in India, in an effort to highlight more clearly the problems that are likely to emerge with mass surveillance of communication by the Indian Government. It does this by teasing out Indian privacy rights jurisprudence and the concerns underpinning it, by considering its utility in the context of mass surveillance and then explaining the kind of harm that might result if mass surveillance continues unchecked.

The first part of this paper threads together the evolution of Indian constitutional principles on privacy in the context of communication surveillance as well as search and seizure. It covers discussions of privacy in the context of our fundamental rights by the draftspersons of our constitution, and then moves on to the ways in which the Supreme Court of India has been reading the right to privacy into the constitution.

The second part of this paper discusses the difference between mass surveillance and targeted surveillance, and international human rights principles that attempt to mitigate the ill effects of mass surveillance.

The concluding part of the paper discusses mass surveillance in India, and makes a case for expanding our existing privacy safeguards to protect the right to privacy in a meaningful manner in face of state surveillance.

Download the paper here.

DesiSec: Cybersecurity and Civil Society in India

by Laird Brown — last modified Jun 29, 2015 04:25 PM
As part of its project on mapping cyber security actors in South Asia and South East Asia, the Centre for Internet & Society conducted a series of interviews with cyber security actors. The interviews were compiled and edited into one documentary. The film, produced by Purba Sarkar, edited by Aaron Joseph, and directed by Oxblood Ruffin, features Malavika Jayaram, Nitin Pai, Namita Malhotra, Saikat Datta, Nishant Shah, Lawrence Liang, Anja Kovacs, Sikyong Lobsang Sangay and Ravi Sharada Prasad.

A screen-shot from the DesiSec film showing a man reading messages on his mobile

Originally the idea was to do 24 interviews with an array of international experts: technical, political, policy, legal, and activist. The project was initiated at the University of Toronto, and over time a possibility emerged. Why not shape these interviews into a documentary about cybersecurity and civil society? And why not focus on the world’s largest democracy, India? Whether in India or the rest of the world, there are several issues that are fundamental to life online: privacy, surveillance, anonymity and free speech. DesiSec includes all of these, and it examines the legal frameworks that shape how India deals with these challenges.

From the time it was shot till the final edit there has been only one change in the juridical topography: the dreaded Section 66A of the IT Act has been struck down. Otherwise, all else is intact. DesiSec was produced by Purba Sarkar, shot and edited by Aaron Joseph, and directed by Oxblood Ruffin. It took our team from Bangalore to Delhi and Dharamsala. We had the honour of interviewing Malavika Jayaram, Nitin Pai, Namita Malhotra, Saikat Datta, Nishant Shah, Lawrence Liang, Anja Kovacs, Sikyong Lobsang Sangay and Ravi Sharada Prasad. Everyone brought something special to the discussion and we are grateful for their insights. We are also particularly pleased to include the music of Charanjit Singh for the intro/outro of DesiSec. Mr. Singh is the inventor of acid house music, predating the Wikipedia entry for that category by five years. Someone should correct that.

DesiSec is released under the Creative Commons Attribution 3.0 Unported licence (CC BY 3.0). You can watch it on Vimeo: https://vimeo.com/123722680 or download it legally and free of charge via torrent. Feel free to show, remix, and share it with your friends. And let us know what you think!


Video

IANA Transition Stewardship & ICANN Accountability (II)

by Jyoti Panday last modified Jul 31, 2015 03:47 PM
This paper is the second in a multi-part series, in which we provide an overview of submitted proposals and highlight areas of concern that will need attention moving forward. The series is a work in progress and will be updated as the processes move forward. It is up for public comments and we welcome your feedback.

The discussions and the processes established for the transition plan have moved rapidly, though not fast enough given the complicated legal and technical undertaking involved. The ICG will be considering the submitted proposals and moving forward on consultations and recommendations for pending proposals. ICANN 53 saw a lot of discussion on the implementation of the proposals from the numbers and protocols communities, while the CWG addressed questions related to the second draft of the names community proposal. The Protocol Parameters community (IANA PLAN Working Group) submitted its proposal to the ICG on 6 January 2015, while the Numbering Resources community (CRISP Team) submitted its proposal on 15 January 2015. The Domain Names community (CWG-Stewardship) submitted its second draft to the ICG on 25 June 2015. The ICG had a face-to-face meeting in Buenos Aires, and its proposal to transition the stewardship of the IANA functions is expected to be out for public comment from 31 July to 8 September 2015. In parallel, the CCWG on Enhancing ICANN Accountability offered its first set of proposals for public comment in June 2015 and organised two working sessions at ICANN 53. More recently, the CCWG met in Paris, focusing on the proposed community empowerment mechanisms, emerging concerns and progress on issues so far.

Number and Protocols Proposals

The numbering and the protocol communities have developed and approved their plans for the transition. Both communities are proposing a direct contractual relationship with ICANN, in which they have the ability to end the contract on their terms. The termination clause has seen pushback from ICANN, and teams involved in the negotiations have revealed that ICANN has verbally represented that it will reject any proposed agreement in which ICANN is not deemed the sole source prime contractor for the IANA functions in perpetuity.[1] Separability, i.e., the ability to change to a different IANA functions operator, has emerged as a contentious and important issue in these negotiations.[2] As Milton Mueller points out, ICANN seems to be using these contract negotiations to undo the community process, and ICANN's staff members are viewing themselves, rather than the formal IANA transition process shepherded by the ICG, as the final authority on the transition.[3] The attempts of ICANN staff to influence or veto ideas regarding what solutions will be acceptable to NTIA and the Congress go beyond its mandate to facilitate the transition dialogue. The ARIN meeting[4] and the process of updating the MoU with the IETF, which mandates supplementary SLAs,[5] are examples of ICANN leveraging its status as the incumbent IANA functions operator, with which all three operational communities must negotiate, to ensure that the outcome of the IANA transition process does not threaten its control.

Names Proposal

Recently, the CWG working on recommendations for the names-related functions provided an improved second draft of its earlier complex proposal, which attempts to resolve the internal-external debate with a middle ground: the creation of a Post-Transition IANA (PTI). PTI, a subsidiary/affiliate of the current contract-holder ICANN, will be created and handed the IANA contract and its related technology and staff. ICANN therefore takes on the role of the contracting authority, while PTI as the contracted party will perform the names-related IANA functions. Importantly, under the new proposal the CWG has done away altogether with the requirement of “authorisation” for root zone changes, and the reasons for this decision have not been provided. The proposal also calls for the creation of a Customer Standing Committee (CSC) to continuously monitor the performance of IANA, and the creation of a periodic review process, rooted in the community, with the ability to recommend that ICANN relinquish its role in names-related IANA functions, if necessary. A key area of concern is that the external oversight mechanism, the Multistakeholder Review Team, has been done away with. This is a significant departure from the version placed for public comment in December 2014. It is expected that clarification will be sought from the CWG on how it has factored in inputs from the first round of public comments.

Consensus around the CWG 2nd Draft

There is a growing consensus around the model proposed—the numbers community has commented that it does "not foresee any incompatibility" with the CWG's proposal.[6] On the IANA PLAN list, members of the protocols community have also expressed willingness to accept the new arrangement to keep all the IANA functions together in PTI during the transition, and view this as merely a reorganization.[7] However, acceptance of the proposal is pending clarification of how the PTI will be set up, and of its legal standing and scope.

Structure of PTI

Presently, two corporate forms are being considered for the PTI: a nonprofit public benefit corporation (PBC) or a limited liability company (LLC), with a single member, ICANN, at its outset. Milton Mueller has advocated for the incorporation of PTI as a PBC rather than as an LLC, with its board composed of a mix of insiders and outsiders.[8] He is of the view that the LLC form makes the implementation of PTI much more complex and risky, as the CWG would need to debate mechanisms of control for the PTI as part of the transition process. The choice of structure is important as it will define the limitations and responsibilities that will be placed on the PTI Board, an important and necessary accountability mechanism.

Broadly, views are divided on the selection of the PTI Board members, that is, whether they should be chosen by IANA's customers, by representative groups within ICANN, or solely by the ICANN Board. The degree of autonomy the PTI would have within the existing ICANN structure is also a key open question. The debate on PTI's autonomy is broadly centred around two distinct views: one favours incorporating PTI in a different country, to prevent ICANN from slowly subsuming the organization; the other, endorsed by ICANN, holds that a high degree of autonomy risks creating additional bureaucracy and process for no discernible improvement in actual services.

Functional Separability

Under the CWG-Stewardship draft proposal, ICANN would assume the role currently fulfilled by NTIA (overseeing the IANA functions), while PTI would assume the role currently played by ICANN (the IANA functions operator). A divisive issue here is that the goal of “functional separation” is defeated by PTI being structured as an “affiliate” wholly owned subsidiary, as it will be subject to the management and policies of ICANN. On this view, while ICANN as the contracting party has the right to select future IANA functions operators, the legal and policy justification for this has not been provided. It is expected that discussions around the PTI at ICANN 53 will focus on its composition, legal standing and the applicability of California law.

Richard Hill is of the view that the details of how PTI would be set up are critical for understanding whether or not there is "real" separation between ICANN and PTI, allowing the conclusion of a meaningful contract in the sense of an agreement between two separate entities.[9] This functional separation and autonomy is granted by the combination of a legally binding contract, CSC oversight, periodic review and the possibility of non-renewal of the contract.[10]

Technical and policy roles - ICANN and PTI

The creation of PTI splits the technical and policy functions between ICANN and PTI. The ICANN Board comments on the CWG Proposal also confirm that PTI will have no policy role, nor is it intended to have one in the future, and that while PTI will have control of the budget amounts ceded to it by ICANN, its funding will be provided by ICANN as part of the ICANN budgeting process.[11] The comments from the Indian government on the proposal raise this as an issue of concern, as it negates ICANN's present role as a merely technical coordination body. The concern is that placing ICANN in the role of the perpetual contracting authority for the IANA functions makes ICANN the sole venue for decisions relating to naming policy, as well as the entity with sole control over the PTI under the present wholly owned subsidiary arrangement.[12]

Key areas of work related to the distinction between the PTI's and ICANN's policy and technical functions include addressing how the new PTI Board would be structured, what its role would be, and what the legal construction between it and ICANN would be. The ICANN Board too has sought some important clarifications on its relationship as a parent body, including the areas where the PTI is separate from ICANN and the areas where the CWG sees shared services as being allowable (shared office space, HR, accounting, legal, payroll). It has also sought clarification on the line of reporting, the duties of the PTI Directors, and the alignment of PTI corporate governance with that of ICANN.

The Swedish government has commented that the next steps in this process should be clarification of the process for designing the PTI-IANA contract, a process to establish community consent before entering the contract, and explicit mention of who the contracting parties are and what their legal responsibilities would be in relation to it.[13]

Internal vs External Accountability

The ICANN Board, pushing for an internal model of full control of the IANA functions, is of the view that a more independent PTI could somehow be "captured" and used to thwart the policies developed by ICANN. However, others have pointed out that under the proposed structure PTI has strong ties to the ICANN community that implements the policies developed by ICANN.[14] With no funding and no authority other than as a contractor of ICANN, a PTI acting in a manner contrary to its contract would be held in breach and could be replaced under the proposal.

Even so, as the Indian government has pointed out, from the point of view of institutional architecture and accountability this model is materially worse off than the status quo.[15]

The proposed PTI and ICANN relationship places complete reliance on internal accountability mechanisms within ICANN, which is not prudent institutional design. The Indian government anticipates a situation where, in the event of customer or stakeholder dissatisfaction with ICANN’s role in naming policy development, there would be no mechanism to change the entity which fulfils this role. It feels that the earlier proposal for the creation of a Contract Co, a lightweight entity with the sole purpose of being the repository of contracting authority and awarding contracts, including the IANA Functions Contract, provided a much more effective mechanism for external accountability. While the numbers and protocols communities have proposed a severable contractual relationship with ICANN for the performance of its SLAs, no such mechanism exists with respect to ICANN's role in policy development for names.

Checks and Balances

Under the current proposal the Customer Standing Committee (CSC) has the role of constantly reviewing the technical aspects of the naming function as performed by PTI. This, combined with the proposed periodic IANA Functions Review (IFR), would act as a check on the PTI. The current draft proposal does not, however, specify what the consequence of an unfavourable IANA Functions Review would be.

Some other areas of focus going forward relate to including the IFR in the ICANN bylaws, along the lines of the AOC established in 2009,[16] and to ensuring that the IFR clarifies the scope of separability: the circumstances and procedures for pulling the IANA contract away if it is established that ICANN is not fulfilling its contractual agreements. This will be a key accountability mechanism and a deterrent that checks ICANN's exercise of its influence.

CCWG Accountability

Work Stream 1 (WS1): Responsible for drafting a mechanism for enhancing ICANN accountability, which must be in place before the IANA stewardship transition.

Work Stream 2 (WS2): Addressing long-term accountability topics which may extend beyond the IANA stewardship transition.

The IANA transition was recognized to be dependent on ICANN’s wider accountability, and this has exposed the trust issues between the community and the leadership; the proposal must be viewed in this context. The CCWG Draft Proposal attempts four significant new undertakings:

A. Restating ICANN’s Mission, Commitments, and Core Values, and placing those into the ICANN Bylaws. The CCWG has recommended that some segments of the Affirmation of Commitments (AOC), a contract on operating principles agreed between ICANN and the United States government, be absorbed into the Corporation’s bylaws.

B. Establishing certain bylaws as “Fundamental Bylaws” that cannot be altered by the ICANN Board acting unilaterally, but over which stakeholders have prior approval rights;

C. Creating a formal “membership” structure for ICANN, along with “community empowerment mechanisms”. These community empowerment mechanisms include the powers to (a) remove individual Board members; (b) recall the entire Board; (c) veto or approve changes to the ICANN Bylaws, Mission Statement, Commitments, and Core Values; and (d) veto Board decisions on ICANN’s Strategic Plan and its budget;

D. Enhancing and strengthening ICANN's Independent Review Process (IRP) by creating a standing IRP Panel empowered to review actions taken by the corporation for compliance both with stated procedures and with the Bylaws, and to issue decisions that are binding upon the ICANN Board.

The key questions raised at ICANN 53 on several of these proposals are likely to concern how these empowerment mechanisms affect the “legal nature” of the community.

Membership and Accountability

At the heart of the distrust between the ICANN Board and the community is the question of membership. As a corporation, ICANN is a private sector body that is largely unregulated, has no natural competitors, is cash-rich, and directly or indirectly supports many of its participants and other Internet governance processes. Without effective accountability and transparency mechanisms, the opportunities for distortion, even corruption, are manifold. In such an environment, placing limitations on the Board’s power is critical to building trust. Three key areas of accountability related to the Board are: the absence of mechanisms for recalling individual board directors; the board’s ability to amend the company’s constitution (its bylaws); and the track record of board reconsideration requests.[17]

With no membership, ICANN’s directors represent the end of the line in terms of accountability. While there is a formal mechanism to review board decisions, the review is conducted by a subset of the same people. The CCWG’s proposal to create SOs/ACs as unincorporated “members” with Articles of Association has met with a lot of discussion, especially in the Governmental Advisory Committee (GAC).[18] The GAC has posed several critical questions on this set-up, some of which are listed here:

  1. Can a legal person created and acting on behalf of the GAC become a member of ICANN, even though the GAC does not appoint Board members?
  2. If the GAC does not wish to become a member, how could it still be associated with the exercise of the six community empowerment powers?
  3. It is still unclear what the liability of members of future “community empowered structures” would be.
  4. What are the legal implications for the rights, obligations and liabilities of an informal group like the GAC creating an unincorporated association (UA) and taking decisions as such a UA, from the substantial (like exercising the community powers) to the clerical (appointing its board, deciding on its financing), and are there implications when the members of such a UA are governments?

Any proposal to strengthen ICANN's accountability needs to provide for membership, so that members have the ability to remove directors, can create financial accountability by receiving financial accounts and appointing auditors, and can check the ICANN board’s power to change the bylaws without recourse to a higher authority.

Constitutional Undertaking

David Post and Danielle Kehl have pointed out that the CCWG correctly identifies the task it is undertaking (to ensure that ICANN’s power is adequately and appropriately constrained) as a “constitutional” one.[19] Their interpretation is based on the view that even if ICANN is not a true “sovereign,” it can usefully be viewed as one for the purpose of evaluating the sufficiency of checks on its power. Consequently, the CCWG Draft Proposal, and ICANN’s accountability post-transition, can be understood and analyzed as a constitutional exercise, and the transition proposal should meet constitutional criteria. Further, on this view the CCWG draft reflects the reformulation of ICANN around broadly agreed constitutional criteria that should be addressed. These include:

  1. A clear enumeration of the powers that the corporation can exercise, and a clear demarcation of those that it cannot exercise.
  2. A division of the institution’s powers, to avoid concentrating all powers in one set of hands, and as a means of providing internal checks on its exercise.
  3. Mechanism(s) to enforce the constraints of (1) and (2) in the form of meaningful remedies for violations.

Their comments reflect support for the CCWG's approach and the progress made in designing a durable accountability structure for a post-transition ICANN. However, they have stressed that a number of important omissions and/or clarifications need to be addressed before they can be confident that these mechanisms will, in practice, accomplish their mission. One such suggestion relates to the separability of ICANN’s policy role and PTI's technical role. Given that ICANN’s position in the DNS hierarchy gives it the power to impose its policies, via the web of contracts with and among registries, registrars, and registrants, on all users of the DNS, a constitutional balance for the DNS must preserve and strengthen the separation between DNS policy-making and policy-implementation. Importantly, they have clarified that even if ICANN has the power to choose what policies are in the best interest of the community, it is not free to impose them on the community. ICANN's role is a critical though narrow one: to organize and coordinate the activities of the stakeholder community (which it does through its various Supporting Organizations, Advisory Committees, and Constituencies) and to implement the consensus policies that emerge from that process. Their comments on the CCWG draft call for stating this clarification explicitly and for institutionalizing separability guided by this critical safeguard against ICANN’s abuse of its power over the DNS.

An effective implementation of this limitation will help clarify the role of mechanisms being proposed, such as the PTI, and is critical for creating sustainable mechanisms post-transition. More importantly, clarifying ICANN’s mission would ensure that, post-transition, communities could challenge its decisions on the basis that they do not pertain to the role outlined or to strengthening the stability and security of the DNS. Presently, it is very unclear where ICANN can intervene in terms of policymaking and implementation.

Other Issues

Other issues expected to be raised in the context of ICANN's overall accountability will likely concern the following:

Strengthening financial transparency and oversight

Given the rapid growth of the global domain name industry, one would expect ICANN to be held to the same standard of accountability as laid down in the right-to-information mechanisms of countries such as India. CIS has been raising this issue for a while and has managed to receive the list of ICANN’s current domain name revenues.[20]

By sharing this information, ICANN has shown itself responsive to repeated requests for transparency. However, the shared revenue data covers only the fiscal year ending June 2014, and historical revenue data is still not publicly available. Neither is a detailed list (current and historical) of ICANN’s expenditures publicly available. Accountability mechanisms and discussions must seek to have ICANN provide the necessary information in its regular Quarterly Stakeholder Reports, as well as on its website.

Strengthening transparency

A key area of concern is ICANN's unchecked influence and growing role as an institution in the Internet governance space. Seen in the light of the impending transition, and given ICANN's vocal interest in maintaining the status quo of its role in DNS management, these transparency concerns gain significance. While financial statements (current and historical) are public and community discussions are generally open, given the complexity of the contractual arrangements in place, the existing means of tracking the financial reserves available to ICANN through these processes are not sufficient.

Further, ICANN as a monopoly is presently constrained only by the NTIA review and a few internal mechanisms such as the Documentary Information Disclosure Policy (DIDP),[21] the Ombudsman,[22] Reconsideration and Independent Review,[23] and the Accountability and Transparency Review (ATRT).[24] These mechanisms are facing teething issues and some do not conform to the principles of natural justice. For example, a Reconsideration Request can be filed if one is aggrieved by an action of ICANN’s Board or staff; yet under ICANN’s Bylaws, it is the Board Governance Committee, comprising ICANN Board members, that adjudicates Reconsideration Requests.[25]

Responses to the DIDP requests filed by CIS reveal that the mechanism, in its current form, is not sufficient to provide the transparency necessary for ICANN’s functioning. For instance, in its response to the DIDP request pertaining to Ombudsman Requests,[26] ICANN cited confidentiality as a reason to decline providing information, on the grounds that making Ombudsman Requests public would violate the ICANN Bylaws and undermine the independence and integrity of the Ombudsman. Over December 2014 and January 2015, CIS sent 10 DIDP requests to ICANN with the aim of testing and encouraging discussion of transparency at ICANN. We have received responses to 9 of our requests, and in 7 of those responses ICANN provided very little new information. Moving forward, we would stress improving existing mechanisms, along with introducing new oversight and reporting parameters, to facilitate the transition process.[27]


[1]John Sweeting and others, 'CRISP Process Overview' (ARIN 35, 2015) https://regmedia.co.uk/2015/04/30/crisp_panel.pdf

[2]Andrew Sullivan, [Ianaplan] Update On IANA Transition & Negotiations With ICANN (2015), Email http://www.ietf.org/mail-archive/web/ianaplan/current/msg01680.html

[3]Milton Mueller, ‘ICANN WANTS AN IANA FUNCTIONS MONOPOLY – WILL IT WRECK THE TRANSITION PROCESS TO GET IT?’ (Internet Governance Project, 28 April 2015) http://www.internetgovernance.org/2015/04/28/icann-wants-an-iana-functions-monopoly-and-its-willing-to-wreck-the-transition-process-to-get-it/#comment-40045

[4]Tony Smith, 'Event Wrap: ICANN 52' (APNIC Blog, 20 February 2015) http://blog.apnic.net/2015/02/20/event-wrap-icann-52/

[5]Internet Engineering Task Force, 'IPROC – IETF Protocol Registries Oversight Committee' (2015) https://www.ietf.org/iana/iproc.html

[6]Axel Pawlik, Numbers Community Proposal Contact Points With CWG’S Draft IANA Stewardship Transition Proposal (2015), Email http://forum.icann.org/lists/comments-cwg-stewardship-draft-proposal-22apr15/msg00003.html

[7]Jari Arkko, Re: [Ianaplan] CWG Draft And Its Impact On The IETF (2015), Email http://www.ietf.org/mail-archive/web/ianaplan/current/msg01843.html

[8]Milton Mueller, Comments Of The Internet Governance Project (2015), Email http://forum.icann.org/lists/comments-cwg-stewardship-draft-proposal-22apr15/msg00021.html

[9]Richard Hill, Initial Comments On CWG-Stewardship Draft Proposal (2015), Email http://forum.icann.org/lists/comments-cwg-stewardship-draft-proposal-22apr15/msg00000.html

[10]Brenden Kuerbis, 'Why The Post-Transition IANA Should Be A Nonprofit Public Benefit Corporation' (Internet Governance Project, 18 May 2015) http://www.internetgovernance.org/2015/05/18/why-the-post-transition-iana-should-be-a-nonprofit-public-benefit-corporation/

[11]ICANN Board Comments On 2Nd Draft Proposal Of The Cross Community Working Group To Develop An IANA Stewardship Transition Proposal On Naming Related Functions (20 May 2015) http://forum.icann.org/lists/comments-cwg-stewardship-draft-proposal-22apr15/pdfrIUO5F9nY4.pdf

[12]Comments Of Government Of India On The ‘2nd Draft Proposal Of The Cross Community Working Group To Develop An IANA Stewardship Transition Proposal On Naming Related Functions’ (2015) http://forum.icann.org/lists/comments-cwg-stewardship-draft-proposal-22apr15/pdfJGK6yVohdU.pdf

[13]Anders Hektor, Sweden Comments To CWG-Stewardship (2015), Email http://forum.icann.org/lists/comments-cwg-stewardship-draft-proposal-22apr15/msg00016.html

[14]Brenden Kuerbis, 'Why The Post-Transition IANA Should Be A Nonprofit Public Benefit Corporation' (Internet Governance Project, 18 May 2015) http://www.internetgovernance.org/2015/05/18/why-the-post-transition-iana-should-be-a-nonprofit-public-benefit-corporation/

[15]Comments Of Government Of India On The ‘2nd Draft Proposal Of The Cross Community Working Group To Develop An IANA Stewardship Transition Proposal On Naming Related Functions’ (2015) http://forum.icann.org/lists/comments-cwg-stewardship-draft-proposal-22apr15/pdfJGK6yVohdU.pdf

[16]Kieren McCarthy, 'Internet Kingmakers Drop Ego, Devise Future Of DNS, IP Addys Etc' (The Register, 24 April 2015) http://www.theregister.co.uk/2015/04/24/internet_kingmakers_drop_ego_devise_future_of_the_internet/

[17]Emily Taylor, ICANN: Bridging The Trust Gap (Paper Series No. 9, Global Commission on Internet Governance March 2015) https://regmedia.co.uk/2015/04/02/gcig_paper_no9-iana.pdf

[18]Milton Mueller, 'Power Shift: The CCWG’S ICANN Membership Proposal' (Internet Governance Project, 4 June 2015) http://www.internetgovernance.org/2015/06/04/power-shift-the-ccwgs-icann-membership-proposal/

[19]David Post, Submission Of Comments On CCWG Draft Initial Proposal (2015), Email http://forum.icann.org/lists/comments-ccwg-accountability-draft-proposal-04may15/msg00050.html

[20] Hariharan, 'ICANN reveals hitherto undisclosed details of domain names revenues', 8 December, 2014 See: http://cis-india.org/internet-governance/blog/cis-receives-information-on-icanns-revenues-from-domain-names-fy-2014

[21] ICANN, Documentary Information Disclosure Policy See: https://www.icann.org/resources/pages/didp-2012-02-25-en

[22] ICANN Accountability, Role of the Ombudsman https://www.icann.org/resources/pages/accountability/ombudsman-en

[23] ICANN Reconsideration and independent review, ICANN Bylaws, Article IV, Accountability and Review https://www.icann.org/resources/pages/reconsideration-and-independent-review-icann-bylaws-article-iv-accountability-and-review

[24] ICANN Accountability and Transparency Review Final Recommendations https://www.icann.org/en/system/files/files/final-recommendations-31dec13-en.pdf

[25] ICANN Bylaws Article iv, Section 2 https://www.icann.org/resources/pages/governance/bylaws-en#IV

[26] ICANN Response to DIDP Ombudsman https://www.icann.org/resources/pages/20141228-1-ombudsman-2015-01-28-en

[27] Table of CIS DIDP Requests See: http://cis-india.org/internet-governance/blog/table-of-cis-didp-requests/at_download/file

IANA Transition Stewardship & ICANN Accountability (I)

by Jyoti Panday last modified Jul 31, 2015 02:56 PM
This paper is the first in a multi-part series, in which we provide a background to the IANA transition and updates on the ensuing processes. An attempt to familiarise people with the issues at stake, this paper will be followed by a second piece that provides an overview of submitted proposals and areas of concern that will need attention moving forward. The series is a work in progress and will be updated as the processes move forward. It is up for public comments and we welcome your feedback.

In developing these papers we have been guided by Kieren McCarthy's writings in The Register, Milton Mueller's writings on the Internet Governance Project, Rafik Dammak's emails on the mailing lists, and the constitutional undertaking argument made in the policy paper authored by Danielle Kehl and David Post for the New America Foundation.


Introduction

The 53rd ICANN conference in Buenos Aires was pivotal, as it marked the last general meeting before the IANA transition deadline of 30 September 2015. The multistakeholder process that has been initiated requires the operational communities to develop transition proposals, which are consolidated and reviewed by the IANA Stewardship Transition Coordination Group (ICG). The names, numbers and protocols communities convened at the conference to finalize the components of the transition proposal and to determine the way forward. The Protocol Parameters community (IANA PLAN Working Group) submitted its proposal to the ICG on 6 January 2015, while the Numbering Resources community (CRISP Team) submitted its proposal on 15 January 2015. The Domain Names community (CWG-Stewardship) submitted its second draft to the ICG on 25 June 2015. The ICG had a face-to-face meeting in Buenos Aires, and its proposal to transition the stewardship of the IANA functions is expected to be out for public comment from 31 July to 8 September 2015.

In parallel, the CCWG on Enhancing ICANN Accountability offered its first set of proposals for public comment in June 2015 and organised two working sessions at ICANN 53. More recently, the CCWG met in Paris, focusing on the proposed community empowerment mechanisms, emerging concerns and progress on issues so far. CIS reserves its comments on the CCWG proposal until the second round of comments expected in July.

This working paper explains the IANA transition, its history, and its relevance to the management of the Internet. It provides an update on the processes so far, including the submissions by the Indian government, and highlights areas of concern that need attention going forward.

How is IANA Transition linked to DNS Management?

The IANA transition presents a significant opportunity for stakeholders to influence the management and governance of the global network. The Domain Name System (DNS), which allows users to locate websites by translating domain names into their corresponding Internet Protocol addresses, is critical to the functioning of the Internet. The DNS rests on the effective coordination of three critical functions: the allocation of IP addresses (the numbers function), domain name allocation (the naming function), and protocol parameters standardisation (the protocols function).
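To make that name-to-address translation concrete, here is a minimal Python sketch (standard library only; 'example.org' is merely an illustrative domain, not one discussed in this paper). It asks the system's resolver, which ultimately depends on the globally coordinated DNS hierarchy whose stewardship is at stake in the transition, to map a name to its IP addresses.

    import socket

    def resolve(domain):
        """Return the IP addresses that the DNS currently maps a domain name to."""
        # getaddrinfo consults the system's configured resolver, which in turn
        # relies on the DNS hierarchy coordinated through the IANA functions.
        results = socket.getaddrinfo(domain, None)
        return sorted({info[4][0] for info in results})

    if __name__ == "__main__":
        # 'example.org' is only an illustrative domain name.
        print(resolve("example.org"))

Running the sketch simply prints the addresses returned for the example domain; it adds nothing to the policy argument, but shows the translation step that the three coordinated functions make possible.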

History of the ICANN-IANA Functions contract

Initially, these key functions were performed by individuals and by public and private institutions, which came together either voluntarily or through a series of agreements and contracts brokered by the Department of Commerce’s National Telecommunications and Information Administration (NTIA) and funded by the US government. With the Internet's rapid expansion, and in response to concerns raised about its increasing commercialization as a resource, a need was felt for a formal institution that would take over DNS management. This is how ICANN, a California-based private, non-profit technical coordination body, came to be at the helm of the DNS and related issues. Since then, ICANN has been performing the Internet Assigned Numbers Authority (IANA) functions under a contract with the NTIA, and is commonly referred to as the IANA Functions Operator.

IANA Functions

In February 2000, the NTIA entered into the first stand-alone IANA Functions contract[1] with ICANN as the Operator. While the contractual obligations have evolved over time, they are largely administrative and technical in nature, including:

(1) the coordination of the assignment of technical Internet protocol parameters;

(2) the allocation of Internet numbering resources;

(3) the administration of certain responsibilities associated with Internet DNS root zone management; and

(4) other services related to the management of the .ARPA and .INT top-level domains.

ICANN has been performing the IANA functions under this oversight primarily because NTIA did not want to let go of complete control of DNS management. Another reason was to give NTIA leverage to ensure that ICANN’s commitments, conditional on its incorporation, were being met and that it was sticking to its administrative and technical role.

Root Zone Management—Entities and Functions Involved

NTIA's involvement has been controversial, particularly in reference to the Root Zone Management function, which allows for changes to the highest level of the DNS namespace[2] by updating the databases that represent that namespace. The DNS namespace is defined as the set of names known as top-level domains or TLDs, which may be at the country level (ccTLDs) or generic (gTLDs). This function to maintain the Root was split into two parts,[3] with two separate procurements and two separate contracts: the operational contract for the Primary (“A”) Root Server was awarded to VeriSign, while the IANA Functions Contract was awarded to ICANN.
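To illustrate what delegation at the highest level of the namespace looks like in practice, the short Python sketch below queries the name servers to which the root zone delegates a given TLD. It is a hedged illustration only: it assumes the third-party dnspython package is installed, and the TLDs queried ('org' and 'in') are arbitrary examples, not anything specified in the contracts discussed here.

    # Requires the third-party dnspython package (pip install dnspython), v2.0 or later.
    import dns.resolver

    def tld_name_servers(tld):
        """List the name servers to which the root zone delegates a TLD."""
        answer = dns.resolver.resolve(tld, "NS")
        return sorted(str(record.target) for record in answer)

    if __name__ == "__main__":
        # '.org' (a gTLD) and '.in' (a ccTLD) are arbitrary examples.
        for tld in ("org", "in"):
            print(tld, "->", tld_name_servers(tld))

The records returned are the delegation data published in the root zone, which is exactly the database whose maintenance and change-authorisation process is described in the following paragraphs.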

These contracts created contractual obligations for ICANN as the IANA Root Zone Management Function Operator, in co-operation with VeriSign as the Root Zone Maintainer and NTIA as the Root Zone Administrator, whose authorisation is explicitly required for any requests to be implemented in the root zone. Under the IANA Functions contract, ICANN had responsibility for the technical functions for all three communities.

ICANN also had policy-making functions for the names community, such as developing the rules, procedures and policies under which any changes to the Root Zone File[4] were to be proposed, including the policies for adding new TLDs to the system. Policy making for numbers and protocols rests with the RIRs and the IETF respectively. NTIA's role in root zone management[5] is clerical and judgement-free with regard to content: it authorizes the implementation of requests after verifying that procedures and policies have been followed.

This contract was subject to extension by mutual agreement, and failure to comply with predefined commitments could result in the contract being re-opened to another entity through a Request for Proposal (RFP). In fact, in 2011 NTIA issued an RFP pursuant to ICANN's Conflict of Interest Policy.[6]

Why is this oversight needed?

The role of the Administrator becomes critical for ensuring the security and operation of the Internet, with the Root Zone serving as the directory of critical resources. In December 2014, a report revealed 300 incidents of internal security breaches,[7] some of which were related to the Centralized Zone Data System (CZDS), where the Internet's core root zone files are mirrored, and to the WHOIS portal. In view of the IANA transition, and given ICANN's critical role in maintaining the Internet infrastructure, the question which arises is: if NTIA lets go of its Administrator role, which body should succeed it?

Transition announcement and launch of process

On 14 March 2014, the NTIA announced[8] its intent "to transition key Internet domain name functions to the global multistakeholder community". These key Internet domain name functions refer to the IANA functions. For this purpose, the NTIA asked[9] the Internet Corporation for Assigned Names and Numbers (ICANN) to convene a global multistakeholder process to develop a transition proposal which has broad community support and addresses the following four principles:

  • Support and enhance the multistakeholder model;
  • Maintain the security, stability, and resiliency of the Internet DNS;
  • Meet the needs and expectations of the global customers and partners of the IANA services; and
  • Maintain the openness of the Internet.

The transition process has been split according to the three main communities: naming, numbers and protocols.

Structure of the Transition Processes

ICANN performs both technical functions and policy-making functions. The technical functions are known as the IANA functions, and these are performed by ICANN for all three communities.

I. Naming function: ICANN performs both technical and policy-making functions for the names community. The technical functions are known as the IANA functions, while the policy-making functions relate to its role in deciding, among other issues, whether TLDs such as .xxx or .sucks should be allowed. There are two parallel streams of work focusing on the naming community that are crucial to completing the transition. The first, the Cross-Community Working Group to Develop an IANA Stewardship Transition Proposal on Naming Related Functions (CWG-Stewardship), will enable NTIA to transition out of its role in the DNS. Accountability of the IANA functions is therefore the responsibility of the CWG, and accountability of the policy-making functions is outside its scope. The CWG has submitted its second draft to the ICG.

The second, the Cross-Community Working Group on Accountability (CCWG-Accountability), is identifying necessary reforms to ICANN’s bylaws and processes to enhance the organization’s accountability to the global community post-transition. Accountability of the IANA functions is therefore outside the scope of the CCWG. The CCWG on Enhancing ICANN Accountability offered its first set of proposals for public comment in June 2015.

II. Numbers function: ICANN performs only technical functions for the numbers community. The policy-making functions for numbers are performed by the RIRs. CRISP is focusing on the IANA functions for numbers and submitted its proposal to the ICG earlier this year.

III. Protocols function: ICANN performs only technical functions for the protocols community. The policy-making functions for protocols are performed by the IETF. The IETF-WG is focusing on the IANA functions for protocols and submitted its proposal to the ICG earlier this year.

Role of ICG

After receiving the proposals from all three communities, the ICG must combine them into a consolidated transition proposal and then seek public comment on all aspects of the plan. The ICG’s role is crucial, because it must build a public record for the NTIA on how the three customer group submissions tie together in a manner that ensures NTIA’s criteria[10] are met and institutionalized over the long term. Further, the ICG's final submission to NTIA must include a plan to enhance ICANN’s accountability based on the CCWG-Accountability proposal.

NTIA Leverage

Reprocurement of the IANA contract is essential for ICANN's legitimacy[11] in the DNS ecosystem. The authority to reopen the contract, together with keeping the policy and operational functions separate, meant that NTIA could simply direct VeriSign to follow policy directives issued by an entity replacing ICANN if ICANN were deemed to be non-compliant. This worked as effective leverage to ensure ICANN complied with its commitments, even if it is difficult to determine how this oversight was exercised. This has been perceived as a broad overreach, particularly in the context of the issues of sovereignty associated with ccTLDs and of the influence of gTLDs in shaping markets. However, it is important to bear in mind that the NTIA authorization comes after the operator, ICANN, has validated the request, and does not deal with the substance of the request; it focuses merely on compliance with the outlined procedure.

NTIA's role in the transition process

NTIA, in its Second Quarterly Report to Congress[12] for the period 1 February to 31 March 2015, has outlined some clarifications on the process ahead. It confirmed the flexibility of extending the contract or reducing the time period for renewal, based on community decision. The report also specified that NTIA will consider a proposal only if it has been developed in consultation with the multistakeholder community. The transition proposal should have broad community support and must not seek to replace NTIA's role with a government-led or intergovernmental organization solution. Further, the proposal should maintain the security, stability and resiliency of the DNS and the openness of the Internet, and must meet the needs and expectations of the global customers and partners of the IANA services. NTIA will only review a comprehensive plan that includes all these elements.

Once the communities develop their proposals and the ICG submits a consolidated proposal, NTIA will ensure that the proposal has been adequately “stress tested” to ensure the continued stability and security of the DNS. NTIA also added that any proposed processes or structures that have been tested to see if they work prior to submission will be taken into consideration in its review. The report clarified that NTIA will review and assess the changes made or proposed to enhance ICANN’s accountability before initiating the transition.

Prior to ICANN 53, Lawrence E. Strickling, Assistant Secretary for Communications and Information and NTIA Administrator, posed some questions for consideration[13] by the communities prior to the completion of the transition plan. The issues and questions related to the CCWG-Accountability draft are outlined below:

  1. Proposed new or modified community empowerment tools—how can the CCWG ensure that the creation of new organizations or tools will not interfere with the security and stability of the DNS during and after the transition? Do these new committees and structures create a different set of accountability questions?
  2. Proposed membership model for community empowerment—have other possible models been thoroughly examined, detailed, and documented? Has CCWG designed stress tests of the various models to address how the multistakeholder model is preserved if individual ICANN Supporting Organizations and Advisory Committees opt out?
  3. Has CCWG developed stress tests to address the potential risk of capture and barriers to entry for new participants of the various models? Further, have stress tests been considered to address potential unintended consequences of “operationalizing” groups that to date have been advisory in nature?
  4. Suggestions on improvements to the current Independent Review Panel (IRP) that has been criticized for its lack of accountability—how does the CCWG proposal analyze and remedy existing concerns with the IRP?
  5. In designing a plan for improved accountability, should the CCWG consider what exactly is the role of the ICANN Board within the multistakeholder model? Should the standard for Board action be to confirm that the community has reached consensus, and if so, what accountability mechanisms are needed to ensure the Board operates in accordance with that standard?
  6. The proposal is primarily focused on the accountability of the ICANN Board—has the CCWG considered accountability improvements that would apply to ICANN management and staff or to the various ICANN Supporting Organizations and Advisory Committees?
  7. NTIA has also asked the CCWG to build a public record and thoroughly document how the NTIA criteria have been met and will be maintained in the future.
  8. Has the CCWG identified and addressed issues of implementation so that the community and ICANN can implement the plan as expeditiously as possible once NTIA has reviewed and accepted it?

NTIA has also sought the community’s input on the timing to finalize and implement the transition plan if it were approved. The Buenos Aires meeting became a crucial point in the transition process, as following the meeting NTIA will need to make a determination on extending its current contract with ICANN. Keeping in mind that the community and ICANN will need to implement all work items identified by the ICG and the Working Group on Accountability as prerequisites for the transition before the contract can end, the community’s input is critical.

NTIA's legal standing

On 25 February 2015, the US Senate Committee on Commerce, Science & Transportation held a hearing on 'Preserving the Multi-stakeholder Model of Internet Governance',[14] at which it heard from NTIA head Larry Strickling, Ambassador Gross and Fadi Chehade. The hearing sought to plug any existing legal loopholes and to tighten administrative, technical, financial, public policy and political oversight over the entire process, no matter which entity takes up the NTIA function. The most important takeaway from this Congressional hearing came from Larry Strickling’s testimony,[15] in which he stated that NTIA has no legal or statutory responsibility to manage the DNS.

If the NTIA does not have the legal responsibility to act, and its role was temporary, on what basis is the NTIA driving the current IANA transition process without the requisite legal authority or Congressional mandate? Historically, NTIA oversight, effectively devised as leverage to ensure ICANN fulfilled its commitments, has not been open to discussion. Concerns have also been raised[16] about the lack of engagement with non-US governments, organizations and persons prior to initiating or defining the scope and conditions of the transition. Therefore, any IANA transition plan must take this lack of consultation into account and develop a multistakeholder process as the way forward, even if the NTIA wants to approve the final transition plan.

Need to strengthen Diversity Principle

Following submissions by various stakeholders raising concerns regarding developing-world participation, representation and the lack of multilingualism in the transition process, the Diversity Principle was included by ICANN in the Revised Proposal of 6 June 2014. Given that representatives from developing countries, as well as from stakeholder communities outside the ICANN community, are unable to involve themselves productively in such processes because of the lack of multilingualism or unfamiliarity with ICANN's way of functioning, merely mentioning diversity as a principle is not adequate to ensure broad participation. As CIS has pointed out before,[17] issues have been raised about domination by North American or European entities, which results in undemocratic, unrepresentative and non-transparent decision-making in such processes. Accordingly, all the discussions in the process should be translated in situ into the native languages of participants, so that everyone participating in the process can understand what is going on. Adequate time must be given for the discussion issues to be translated and circulated widely amongst all stakeholders of the world before a decision is taken or a proposal is framed. This concern was raised with the recent CCWG proposal, whose comment period was extended because many communities did not have translated texts or adequate time to participate.

Representation of the global multistakeholder community in ICG

Currently, the Co-ordination Group includes representatives from ALAC, ASO, ccNSO, GNSO, gTLD registries, GAC, ICC/BASIS, IAB, IETF, ISOC, NRO, RSSAC and SSAC. Most of these representatives belong to the ICANN community, and the group is not representative of the global multistakeholder community, including governments. This falls short even of the multistakeholder model which the US government has announced[18] for the transition, and of the spirit of multistakeholder participation of NETmundial. An adequate number of seats on the Committee must be granted to each stakeholder group so that each can coordinate discussions within its own community and ensure wider and more inclusive participation.

ICANN's role in the transition process

Another issue of concern in the pre-transition process has been that ICANN itself has been charged with facilitating the transition. This decision calls into question the legitimacy of the process, given that the suggestions in the proposals envision a more permanent role for ICANN in DNS management. As Kieren McCarthy has pointed out,[19] ICANN has taken several steps to retain the balance of power in managing these functions, steps which have seen considerable pushback from the community. These include an attempt to control the process by announcing two separate processes[20] (one looking into the IANA transition, and a second at its own accountability improvements) while insisting the two were not related. That effort was beaten down[21] after an unprecedented letter from the leaders of every one of ICANN's supporting organizations and advisory committees said the two processes must be connected.

Next, ICANN was accused of stacking the deck[22] by purposefully excluding groups skeptical of ICANN's efforts, and by trying to give ICANN's chairman the right to personally select the members of the group that would decide the final proposal. That was also beaten back. ICANN staff also produced a "scoping document"[23] that pre-empted any discussion of structural separation; once again, community pushback forced a backtrack.[24]

These concerns gain more urgency given recent developments with the community working groups[25] and ICANN's divisive view of its own long-term role in DNS management. ICANN President Fadi Chehade has commented that the CWG is not doing its job,[26] that it is populated with people who do not know anything, and that the "IANA process needs to be left alone as much as possible". Chehade also specified that ICANN had begun the formal process of initiating a direct contract with VeriSign to request and authorise changes to be implemented by VeriSign. While ICANN may see itself as operating without oversight in this relationship with VeriSign, it is imperative that proposals bear this plausible outcome in mind and put forth suggestions to counter it.

The update from the IETF on its ongoing negotiation with ICANN over the proposal[27] related to protocol parameters has also flagged that ICANN is unwilling to agree to any text which would suggest that ICANN relinquish its role in the operation of protocol parameters to a subsequent operator, should circumstances demand this. ICANN has stated that agreeing to such text now could put it in breach of its existing agreement with the NTIA. Finally, ICANN Board member Markus Kummer[28] has stated that if ICANN were not to approve any aspect of the proposal, this would hinder consensus and the transition would not be able to move forward.

ICANN has been designated the convenor role by the US government on the basis of its unique position as the current IANA functions contractor and the global coordinator for the DNS. However, it is this unique position itself which creates a conflict of interest: as the contractor for the IANA functions, ICANN has an interest in the outcome of the process being conducive to ICANN. In other words, there exists a potential for abuse of the process by ICANN, which may tend to steer it towards an outcome favourable to itself.

Therefore, there exists a strong rationale for defining the limits of ICANN's role as convenor. The community has suggested that ICANN should limit its role to merely facilitating discussions, and not extend it to reviewing or commenting on proposals emerging from the process. Additional safeguards need to be put in place to avoid conflicts of interest, or the appearance of such conflicts. ICANN should further not compile comments on drafts to create a revised draft at any stage of the process. Additionally, ICANN staff must not be allowed to be part of any group or committee which facilitates or coordinates the discussion on the IANA transition.

How are the Obama Administration and the US Congress playing this?

Even as the issues of separating ICANN's policy and administrative roles remained unsettled, the NTIA, in the wake of the Snowden revelations, initiated the long-due transition of IANA contract oversight to a global, private, non-governmental multistakeholder institution on March 14, 2014. The announcement immediately raised questions from Congress about whether the transition decision was dictated by technical considerations or by political motives, and whether the Obama Administration had the authority to commence such a transition unilaterally, without prior open stakeholder consultations. Republican lawmakers have raised concerns about the IANA transition plan,[29] worried that it may allow other countries to capture control.

More recently, the Defending Internet Freedom Act[30] has been re-introduced in the US Congress. The bill seeks to have ICANN adopt the recommendations of three internet community groups on the transition of power before the US government relinquishes control of the IANA contract. It also seeks to have ownership of the .gov and .mil top-level domains granted to the US government, and to have ICANN submit itself to the US Freedom of Information Act (FOIA), a legislation similar to the RTI in India, so that its records and other information gain some degree of public access. It has also been asserted by ICANN that neither the NTIA nor the US Congress will approve any transition plan which leaves open the possibility of a non-US IANA Functions Operator in the future.

Funding of the transition

The Obama administration is also fighting a Republican-backed Commerce, Justice, Science, and Related Agencies Appropriations Act (H.R. 2578),[31] which seeks to block NTIA funding for the IANA transition. One provision of the bill restricts the NTIA from using appropriated dollars for the IANA stewardship transition until the end of the fiscal year, September 30, 2015, which is also the end of the base period of the contract currently in force. This peculiar proviso in the omnibus spending bill implies that Congress believes the IANA transition should be delayed for proper deliberation, and not be rushed as ICANN and the NTIA are inclined to do.

The IANA transition cannot take place in violation of US federal law that has defunded it within a stipulated time window. At the Congressional Internet Caucus in January 2015, NTIA head Lawrence Strickling clarified that the NTIA will "not use appropriated funds to terminate the IANA functions..." or "to amend the cooperative agreement with Verisign to eliminate NTIA's role in approving changes to the authoritative root zone file...". This implicitly establishes that the IANA contract will be extended, and Strickling confirmed that there is no hard deadline for the transition.

DOTCOM Act

The Communications and Technology Subcommittee of the House Energy and Commerce Committee amended the DOTCOM Act,[32] a bill which, in earlier drafts, would have halted the IANA functions transition process for up to a year pending US Congressional approval. In its earlier version the bill represented unilateral governmental interference in the multistakeholder process. The new bill reflects a much deeper understanding of, and confidence in, the significant amount of work that the global multistakeholder community has undertaken in planning both for the transition of IANA functions oversight and for the increased accountability of ICANN. The amended DOTCOM Act would call for the NTIA to certify – as part of a proposed GAO report on the transition – that "the required changes to ICANN's by-laws contained in the final report of ICANN's Cross Community Working Group on Enhancing ICANN Accountability and the changes to ICANN's bylaws required by ICANN's IANA have been implemented." The bill enjoys broad bipartisan support,[33] and is being lauded as a prudent and necessary step for ensuring the success of the IANA transition.


[1] IANA Functions Contract <http://www.ntia.doc.gov/files/ntia/publications/sf_26_pg_1-2-final_award_and_sacs.pdf> accessed 15th June 2015

[2] Daniel Karrenberg, The Internet Domain Name System Explained For Nonexperts <http://www.internetsociety.org/sites/default/files/The%20Internet%20Domain%20Name%20System%20Explained%20for%20Non-Experts%20(ENGLISH).pdf> accessed 15 June 2015

[3] David Post and Danielle Kehl, Controlling Internet Infrastructure The “IANA Transition” And Why It Matters For The Future Of The Internet, Part I (1st edn, Open Technology Institute 2015) <https://static.newamerica.org/attachments/2964-controlling-internet-infrastructure/IANA_Paper_No_1_Final.32d31198a3da4e0d859f989306f6d480.pdf> accessed 10 June 2015.

[4] Iana.org, 'IANA — Root Files' (2015) <https://www.iana.org/domains/root/files> accessed 11 June 2015.

[5] 'NTIA's Role In Root Zone Management' (2014). <http://www.ntia.doc.gov/files/ntia/publications/ntias_role_root_zone_management_12162014.pdf> accessed 15 June 2015.

[6] Contract (2011) <http://www.ntia.doc.gov/files/ntia/publications/11102011_solicitation.pdf> accessed 10 June 2015.

[7] Kieren McCarthy, 'Confidential Information Exposed Over 300 Times In ICANN Security Snafu' The Register (2015) <http://www.theregister.co.uk/2015/04/30/confidential_information_exposed_over_300_times_in_icann_security_snafu/> accessed 15 June 2015.

[8] NTIA, ‘NTIA Announces Intent To Transition Key Internet Domain Name Functions’ (2014) <http://www.ntia.doc.gov/press-release/2014/ntia-announces-intent-transition-key-internet-domain-name-functions> accessed 15 June 2015.

[9] NTIA, ‘NTIA Announces Intent To Transition Key Internet Domain Name Functions’ (2014) <http://www.ntia.doc.gov/press-release/2014/ntia-announces-intent-transition-key-internet-domain-name-functions> accessed 15 June 2015.

[10] NTIA, ‘NTIA Announces Intent To Transition Key Internet Domain Name Functions’ (2014) <http://www.ntia.doc.gov/press-release/2014/ntia-announces-intent-transition-key-internet-domain-name-functions> accessed 15 June 2015.

[11] David Post and Danielle Kehl, Controlling Internet Infrastructure The “IANA Transition” And Why It Matters For The Future Of The Internet, Part I (1st edn, Open Technology Institute 2015) <https://static.newamerica.org/attachments/2964-controlling-internet-infrastructure/IANA_Paper_No_1_Final.32d31198a3da4e0d859f989306f6d480.pdf> accessed 10 June 2015.

[12] National Telecommunications and Information Administration, 'Report on the Transition of the Stewardship of the Internet Assigned Numbers Authority (IANA) Functions' (NTIA 2015) <http://www.ntia.doc.gov/files/ntia/publications/ntia_second_quarterly_iana_report_05.07.15.pdf> accessed 10 July 2015.

[13] Lawrence Strickling, 'Stakeholder Proposals To Come Together At ICANN Meeting In Argentina' <http://www.ntia.doc.gov/blog/2015/stakeholder-proposals-come-together-icann-meeting-argentina> accessed 19 June 2015.

[14] Philip Corwin, 'NTIA Says Cromnibus Bars IANA Transition During Current Contract Term' <http://www.circleid.com/posts/20150127_ntia_cromnibus_bars_iana_transition_during_current_contract_term/> accessed 10 June 2015.

[15] Sophia Bekele, '"No Legal Basis For IANA Transition": A Post-Mortem Analysis Of Senate Committee Hearing' <http://www.circleid.com/posts/20150309_no_legal_basis_for_iana_transition_post_mortem_senate_hearing/> accessed 9 June 2015.

[16] Comments On The IANA Transition And ICANN Accountability Just Net Coalition (2015) <http://forum.icann.org/lists/comments-ccwg-accountability-draft-proposal-04may15/pdfnOquQlhsmM.pdf> accessed 12 June 2015.

[17] The Centre for Internet and Society, 'IANA Transition: Suggestions For Process Design' (2014) <http://cis-india.org/internet-governance/blog/iana-transition-suggestions-for-process-design> accessed 9 June 2015.

[18] The Centre for Internet and Society, 'IANA Transition: Suggestions For Process Design' (2014) <http://cis-india.org/internet-governance/blog/iana-transition-suggestions-for-process-design> accessed 9 June 2015.

[19] Kieren McCarthy, 'Let It Go, Let It Go: How Global DNS Could Survive In The Frozen Lands Outside US Control Public Comments On Revised IANA Transition Plan' The Register (2015) <http://www.theregister.co.uk/2015/05/26/iana_icann_latest/> accessed 15 June 2015.

[20] Icann.org, 'Resources - ICANN' (2014) <https://www.icann.org/resources/pages/process-next-steps-2014-08-14-en> accessed 13 June 2015.

[21] <https://www.icann.org/en/system/files/correspondence/crocker-chehade-to-soac-et-al-18sep14-en.pdf> accessed 10 June 2015.

[22] Richard Forno, '[Infowarrior] - Internet Power Grab: The Duplicity Of ICANN' (Mail-archive.com, 2015) <https://www.mail-archive.com/[email protected]/msg12578.html> accessed 10 June 2015.

[23] ICANN, 'Scoping Document' (2014) <https://www.icann.org/en/system/files/files/iana-transition-scoping-08apr14-en.pdf> accessed 9 June 2015.

[24] Milton Mueller, 'ICANN: Anything That Doesn't Give IANA to Me Is Out of Scope' (Internetgovernance.org, 2014) <http://www.internetgovernance.org/2014/04/16/icann-anything-that-doesnt-give-iana-to-me-is-out-of-scope/> accessed 12 June 2015.

[25] Andrew Sullivan, '[Ianaplan] Update On IANA Transition & Negotiations With ICANN' (Ietf.org, 2015) <http://www.ietf.org/mail-archive/web/ianaplan/current/msg01680.html> accessed 14 June 2015.

[26] DNA Member Breakfast With Fadi Chehadé (2015-02-11) (The Domain Name Association 2015).

[27] Andrew Sullivan, '[Ianaplan] Update On IANA Transition & Negotiations With ICANN' (Ietf.org, 2015) <http://www.ietf.org/mail-archive/web/ianaplan/current/msg01680.html> accessed 14 June 2015.

[28] Mobile.twitter.com, 'Twitter' (2015) <https://mobile.twitter.com/arunmsukumar/status/603952197186035712> accessed 12 June 2015.

[29] Alina Selyukh, 'U.S. Plan To Cede Internet Domain Control On Track: ICANN Head' Reuters (2015) <http://www.reuters.com/article/2015/06/02/us-usa-internet-icann-idUSKBN0OI2IJ20150602> accessed 15 June 2015.

[30] 114th Congress, 'H.R.2251 - Defending Internet Freedom Act Of 2015' (2015).

[31] John Eggerton, 'House Bill Blocks Internet Naming Oversight Handoff: White House Opposes Legislation' Broadcasting & Cable (2015) <http://www.broadcastingcable.com/news/washington/house-bill-blocks-internet-naming-oversight-handoff/141393> accessed 9 June 2015.

[32] Communications And Technology Subcommittee Vote On The DOTCOM Act (2015).

[33] Timothy Wilt, 'DOTCOM Act Breezes Through Committee' Digital Liberty (2015) <http://www.digitalliberty.net/dotcom-act-breezes-committee-a319> accessed 22 June 2015.

The generation of e-Emergency

by Sunil Abraham — last modified Jun 29, 2015 04:40 PM
The next generation of censorship technology is expected to be ‘real-time content manipulation’ through ISPs and Internet companies.

The article was published in Livemint on June 22, 2015.


Censorship during the Emergency in the 1970s was done by clamping down on the media: intimidating editors and journalists, and installing a human censor with a red pencil at every news agency. In the age of both multicast and broadcast media, thought and speech control is more expensive and complicated, but still possible. What governments across the world have realized is that traditional web censorship methods such as filtering and blocking are not effective, because of circumvention technologies and the Streisand effect (a phenomenon in which an attempt to hide or censor information proves counter-productive). New methods of manipulating the networked public sphere have evolved accordingly. India, despite claims to the contrary, still does not have the budget and technological wherewithal to successfully pull off some of the censorship and surveillance techniques described below, but thanks to Moore's law and the global lack of export controls on such technologies, this might change in the future.

First, mass technology-enabled surveillance, resulting in self-censorship and self-policing. The coordinated monitoring of the Occupy protests in the US by the Department of Homeland Security, Federal Bureau of Investigation (FBI) counter-terrorism units, police departments and the private sector showcased the bleeding edge of surveillance technologies. Stingrays, or IMSI catchers, are fake mobile towers that were used to monitor calls, Internet traffic and SMSes. Footage from helicopters, drones, high-resolution on-ground cameras and the existing CCTV network was matched with images available on social media using facial recognition technology. This intelligence was combined with data from the global-scale Internet surveillance that we know about thanks to National Security Agency (NSA) whistle-blower Edward Snowden, and with "open source intelligence" gleaned by monitoring public social media activity; it was then used by police during visits to intimidate activists and scare them off the protests.

Second, mass technological gaming. Again according to documents released by Snowden, the British spy agency GCHQ (Government Communications Headquarters) has developed tools to seed false information online, cast fake votes in web polls, inflate visitor counts on sites, automatically discover content on video-hosting platforms and send takedown notices, permanently disable accounts on targeted computers, find private photographs on Facebook, monitor Skype activity in real time and harvest Skype contacts, prevent access to certain websites using peer-to-peer distributed denial-of-service attacks, spoof any email address and amplify propaganda on social media. According to The Intercept, a secret unit of GCHQ called the Joint Threat Research Intelligence Group (JTRIG) combined technology with psychology and other social sciences to "not only understand, but shape and control how online activism and discourse unfolds". JTRIG used fake victim blog posts, false-flag operations and honey traps to discredit and manipulate activists.

Third, mass human manipulation. The exact size of the Kremlin troll army is unknown, but in an interview with Radio Liberty, St. Petersburg blogger Marat Burkhard (who spent two months working for the Internet Research Agency) said, "there are about 40 rooms with about 20 people sitting in each, and each person has their assignments." The room he worked in had each employee produce 135 comments on social media in every 12-hour shift, for a monthly remuneration of 45,000 rubles. According to Burkhard, in order to bring a "feeling of authenticity", his department was divided into teams of three: one would be a villain troll representing the voice of dissent, while the other two would be the picture troll and the link troll. The picture troll would use images to counter the villain troll's point of view by appealing to emotion, while the link troll would use arguments and references to appeal to reason. In a day, the "troika" would cover 35 forums.

The next generation of censorship technology is expected to be "real-time content manipulation" through ISPs and Internet companies. We have already seen word filters where blacklisted words or phrases are automatically expunged. Last week, Bengaluru-based activist Thejesh GN detected that Airtel was injecting JavaScript into every web page downloaded over a 3G connection. Airtel claims that it is injecting code developed by the Israeli firm Flash Networks to monitor data usage, but the very same method can be used to make subtle, personalized changes to web content. In China, according to a paper by Tao Zhu et al. titled The Velocity of Censorship: High-Fidelity Detection of Microblog Post Deletions, "Weibo also sometimes makes it appear to a user that their post was successfully posted, but other users are not able to see the post. The poster receives no warning message in this case."
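Returning to the Airtel example above, one crude do-it-yourself check for carrier-injected scripts is to fetch the same page over plain HTTP and over HTTPS and compare the script hosts each version references; a middlebox can rewrite the plaintext response but normally cannot alter the TLS-protected one. The sketch below is a minimal illustration of that idea, not Thejesh GN's actual method; the URL is a placeholder and the comparison is deliberately simplistic.

```python
# Minimal sketch (assumptions: "example.com" is a placeholder URL; the page is
# reachable over both HTTP and HTTPS and serves broadly the same markup).
import re
from urllib.parse import urlparse

import requests

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)', re.IGNORECASE)

def script_hosts(url: str) -> set[str]:
    """Return the set of hosts that the page at `url` loads scripts from."""
    html = requests.get(url, timeout=10).text
    hosts = set()
    for src in SCRIPT_SRC.findall(html):
        host = urlparse(src).netloc
        if host:                      # ignore relative paths served by the site itself
            hosts.add(host)
    return hosts

if __name__ == "__main__":
    plain = script_hosts("http://example.com/")
    secure = script_hosts("https://example.com/")
    for host in sorted(plain - secure):
        print("script host present only over HTTP (possible injection):", host)
```

Any host that appears only in the HTTP copy is worth a closer look, though false positives are possible, since some sites legitimately serve different markup over HTTP and HTTPS.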

More than two decades ago, John Gilmore, of Electronic Frontier Foundation, famously said, “the Net interprets censorship as damage and routes around it.” That was when the topology of the Internet was highly decentralized and there were hundreds of ISPs that competed with each other to provide access. Given the information diet of the average netizen today, the Internet is, for all practical purposes, highly centralized and therefore governments find it easier and easier to control.

Anti-Spam Laws in Different Jurisdictions: A Comparative Analysis

by Rakshanda Deka — last modified Jul 02, 2015 04:21 PM
This paper is divided into three sections. The first section puts forth a comparative table of the spam laws of five countries - the United States of America, Australia, Canada, Singapore and the United Kingdom - based on eight parameters: the jurisdiction of the legislation, the definition of 'spam', the understanding of consent, labelling requirements, the types of senders covered, the entities empowered to sue, the exceptions made and the penalties prescribed. The second section is a brief background on the problem of spam and attempts to establish the context in which the paper is written. The third section is a critical analysis of the laws covered in the first section. In an effort to spot the various loopholes in these laws and suggest effective alternatives, it points out the distinctions between the various legislations and briefly discusses their respective advantages and disadvantages.

Note: This analysis is part of a larger attempt at formulating a model anti-spam law for India by analyzing existing spam laws across the world.


The five legislations compared, in the order in which their provisions appear under each parameter below, are the CAN-SPAM Act, 2003 (United States); the Spam Act, 2003 (Australia); the Spam Control Act, 2007 (Singapore); Canada's Anti-Spam Legislation (CASL), 2014; and the Privacy and Electronic Communications (EC Directive) Regulations, 2003 (United Kingdom).

Jurisdiction

United States (CAN-SPAM Act, 2003): National jurisdiction. The defendant must be either an inhabitant of the United States or have a physical place of business in the US.[1]

Australia (Spam Act, 2003): National jurisdiction. The message must have an "Australian link", i.e.: (a) the message originates in Australia; or (b) the individual or organisation who sent the message, or authorised the sending of the message, is (i) an individual who is physically present in Australia when the message is sent, or (ii) an organisation whose central management and control is in Australia when the message is sent; or (c) the computer, server or device that is used to access the message is located in Australia; or (d) the relevant electronic account-holder is (i) an individual who is physically present in Australia when the message is accessed, or (ii) an organisation that carries on business or activities in Australia when the message is accessed; or (e) if the message cannot be delivered because the relevant electronic address does not exist, then, assuming that the electronic address existed, it is reasonably likely that the message would have been accessed using a computer, server or device located in Australia.[2]

Singapore (Spam Control Act, 2007): National jurisdiction. The message must have a "Singapore link". An electronic message has a Singapore link in the following circumstances: (a) the message originates in Singapore; (b) the sender of the message is (i) an individual who is physically present in Singapore when the message is sent, or (ii) an entity whose central management and control is in Singapore when the message is sent; (c) the computer, mobile telephone, server or device that is used to access the message is located in Singapore; (d) the recipient of the message is (i) an individual who is physically present in Singapore when the message is accessed, or (ii) an entity that carries on business or activities in Singapore when the message is accessed; or (e) if the message cannot be delivered because the relevant electronic address has ceased to exist (assuming that the electronic address existed), it is reasonably likely that the message would have been accessed using a computer, mobile telephone, server or device located in Singapore.[3]

Canada (CASL, 2014): Extends to cases where the mail originates in a foreign state but is accessed in Canada. Section 6 of CASL prohibits the sending of unsolicited CEMs.[4] As per section 12 of CASL, a person contravenes section 6 only if a computer system located in Canada is used to send or access the electronic message. CASL therefore applies to CEMs sent from, or accessed in, Canada.[5] So, if a CEM is sent to Canadians from another jurisdiction, CASL will apply. Notably, there is an exception where the person sending the message "reasonably believes" that the message will be accessed in one of a list of prescribed jurisdictions with anti-spam laws thought to be 'substantially similar' to CASL, and the message complies with the laws of that jurisdiction.

United Kingdom (EC Directive Regulations, 2003): European Union jurisdiction. These regulations can be enforced against a person or a company anywhere in the European Union who violates them.

Definition Of Spam

United States: "unsolicited, commercial, electronic mail",[6] where a commercial electronic mail message is "any electronic mail message the primary purpose of which is the commercial advertisement or promotion of a commercial product or service".[7]

Australia: "unsolicited commercial electronic messages", where an electronic message means a message sent "using an internet carriage service or any other listed carriage service; and to an electronic address in connection with: an e-mail account; or an instant messaging account; or a telephone account; or a similar account".[8]

Singapore: "unsolicited commercial electronic message sent in bulk", where a CEM is unsolicited if the recipient did not (i) request to receive the message, or (ii) consent to the receipt of the message;[9] and CEMs shall be deemed to be sent in bulk if a person sends, causes to be sent or authorizes the sending of (a) more than 100 messages containing the same subject matter during a 24-hour period, (b) more than 1,000 messages containing the same subject matter during a 30-day period, or (c) more than 10,000 messages containing the same subject matter during a one-year period.

Canada: "unsolicited, commercial, electronic message",[10] where an "electronic message" means a message sent by any means of telecommunication, including a text, sound, voice or image message.[11]

United Kingdom: The rules apply to all unsolicited direct marketing communications by automated calling systems,[12] fax,[13] calls[14] or e-mail,[15] where "direct marketing" is defined as "the communication (by whatever means) of any advertising or marketing material which is directed to particular individuals".[16] The UK used its discretion to include voice-to-voice telephone calls as well.

Consent Requirement

Models: United States - opt-out; Australia - opt-in; Singapore - opt-out; Canada - opt-in; United Kingdom - opt-in.

United States: CEMs are unlawful unless the message provides (i) clear and conspicuous identification that the message is an advertisement or solicitation; (ii) clear and conspicuous notice of the opportunity under paragraph (3) to decline to receive further commercial electronic mail messages from the sender; and (iii) a valid physical postal address of the sender.[17]

Australia: Section 16 prohibits the sending of unsolicited commercial electronic messages. However, where a recipient has consented to the sending of the message, the prohibition does not apply.[18] Consent means (a) express consent, or (b) consent that can reasonably be inferred from (i) the conduct, and (ii) the business and other relationships, of the individual or organisation concerned.[19]

Singapore: CEMs are unlawful unless the message contains (a) an electronic mail address, an Internet location address, a telephone number, a facsimile number or a postal address that the recipient may use to submit an unsubscribe request, and (b) a statement that the above information may be used to send an unsubscribe request. Where the unsolicited CEM is received by text or multimedia message sent to a mobile telephone number, the CEM must include a mobile telephone number to which the recipient may send an unsubscribe request.[20]

Canada: Under CASL, it is prohibited to send, or cause or permit to be sent, a commercial electronic message to an electronic address unless (a) the person to whom the message is sent has consented to receiving it, whether the consent is express or implied; and (b) the message (i) sets out prescribed information that identifies the person who sent the message and the person, if different, on whose behalf it is sent, (ii) sets out information enabling the person to whom the message is sent to readily contact one of the persons referred to in paragraph (i), and (iii) sets out an unsubscribe mechanism in accordance with subsection 11(1) of CASL.[21]

United Kingdom: Under Section 19, a person shall neither transmit, nor instigate the transmission of, communications comprising recorded matter for direct marketing purposes by means of an automated calling system, except where the called line is that of a subscriber who has previously notified the caller that for the time being he consents to such communications being sent by, or at the instigation of, the caller on that line. Under Section 20, a person shall neither transmit, nor instigate the transmission of, unsolicited communications for direct marketing purposes by means of a facsimile machine where the called line is that of an individual or a company, except where the individual subscriber has previously notified the caller that he consents for the time being to such communications being sent by, or at the instigation of, the caller. Under Section 21, a person shall neither use, nor instigate the use of, a public electronic communications service for the purposes of making unsolicited calls for direct marketing purposes where the called line is that of a subscriber who has previously notified the caller that such calls should not for the time being be made on that line. Under Section 22, a person shall neither transmit, nor instigate the transmission of, unsolicited communications for the purposes of direct marketing by means of electronic mail unless the recipient of the electronic mail has previously notified the sender that he consents for the time being to such communications being sent by, or at the instigation of, the sender.

Labelling Requirements

United States: Warning labels are mandatory on e-mails containing sexually oriented material. No person may send to a protected computer any commercial electronic mail message that includes sexually oriented material and (a) fail to include in the subject heading of the message the marks or notices prescribed by law; or (b) fail to provide that the matter in the message that is initially viewable to the recipient, when the message is opened by any recipient and absent any further actions by the recipient, includes only (i) material to which the recipient has consented, (ii) the identifier information required to be included in pursuance of Section 5(5), and (iii) instructions on how to access, or a mechanism to access, the sexually oriented material.[22]

Australia: Not applicable.

Singapore: A true e-mail title and clear identification of advertisements with an "ADV" label are required. Every unsolicited CEM must contain (a) where there is a subject field, a title which is not false or misleading as to the content of the message; (b) the letters "<ADV>" with a space before the title in the subject field or, if there is no subject field, in the words first appearing in the message, to clearly identify that the message is an advertisement; (c) header information that is not false or misleading; and (d) an accurate and functional e-mail address or telephone number by which the sender can be readily contacted.[23]

Canada: Not applicable.

United Kingdom: Not applicable.

Other Banned/Restricted Activities

United States: Illegal access - prohibition against predatory and abusive commercial e-mail. "Whoever, in or affecting interstate or foreign commerce, knowingly (1) accesses a protected computer without authorization, and intentionally initiates the transmission of multiple CEMs from or through such computer, (2) uses a protected computer to relay or retransmit multiple CEMs, with the intent to deceive or mislead recipients, or any Internet access service, as to the origin of such messages, (3) materially falsifies header information in multiple commercial electronic mail messages and intentionally initiates the transmission of such messages, (4) registers, using information that materially falsifies the identity of the actual registrant, for five or more electronic mail accounts or online user accounts or two or more domain names, and intentionally initiates the transmission of multiple commercial electronic mail messages from any combination of such accounts or domain names, or (5) falsely represents oneself to be the registrant or the legitimate successor in interest to the registrant of 5 or more Internet Protocol addresses, and intentionally initiates the transmission of multiple commercial electronic mail messages from such addresses, or conspires to do so, shall be punished as provided for in the Act."[24]

Australia: Supply of address-harvesting software and harvested-address lists. "A person must not supply or offer to supply: (a) address-harvesting software; or (b) a right to use address-harvesting software; or (c) a harvested-address list; or (d) a right to use a harvested-address list; to another person if: (e) the supplier is (i) an individual who is physically present in Australia at the time of the supply or offer, or (ii) a body corporate or partnership that carries on business or activities in Australia at the time of the supply or offer; or (f) the customer is (i) an individual who is physically present in Australia at the time of the supply or offer, or (ii) a body corporate or partnership that carries on business or activities in Australia at the time of the supply or offer."

Singapore: Dictionary attacks and address-harvesting software. "No person shall send, cause to be sent, or authorize the sending of, an electronic message to electronic addresses generated or obtained through the use of (a) a dictionary attack; or (b) address harvesting software."[25] A "dictionary attack" means the method by which the electronic address of a recipient is obtained using an automated means that generates possible electronic addresses by combining names, letters, numbers, punctuation marks or symbols into numerous permutations,[26] and "address harvesting software" means software that is specifically designed or marketed for use for (a) searching the Internet for electronic addresses, and (b) collecting, compiling, capturing or otherwise harvesting those electronic addresses.[27]

Canada: Altering transmission data - "It is prohibited, in the course of a commercial activity, to alter or cause to be altered the transmission data in an electronic message so that the message is delivered to a destination other than or in addition to that specified by the sender, unless (a) the alteration is made with the express consent of the sender or the person to whom the message is sent, and the person altering or causing to be altered the data complies with subsection 11(4) of CASL; or (b) the alteration is made in accordance with a court order."[28] Installation of computer programs - "A person must not, in the course of a commercial activity, install or cause to be installed a computer program on any other person's computer system or, having so installed or caused to be installed a computer program, cause an electronic message to be sent from that computer system, unless (a) the person has obtained the express consent of the owner or an authorized user of the computer system and complies with subsection 11(5) of the CASL; or (b) the person is acting in accordance with a court order. (2) A person contravenes subsection (1) only if the computer system is located in Canada at the relevant time or if the person either is in Canada at the relevant time or is acting under the direction of a person who is in Canada at the time when they give the directions."[29]

United Kingdom: Electronic mail for direct marketing purposes where the identity or address of the sender is concealed. A person shall neither transmit, nor instigate the transmission of, a communication for the purposes of direct marketing by means of electronic mail (a) where the identity of the person on whose behalf the communication has been sent has been disguised or concealed, or (b) where a valid address to which the recipient of the communication may send a request that such communications cease has not been provided.

Types of Senders Covered

United States: Spammers and beneficiaries. The term "sender", when used with respect to a commercial electronic mail message, means a person who initiates such a message and whose product, service, or Internet web site is advertised or promoted by the message.[30]

Australia: Spammers and beneficiaries. A person must not send, or cause to be sent, a commercial electronic message that (a) has an Australian link, and (b) is not a designated commercial electronic message.[31]

Singapore: Spammers, beneficiaries, and providers of support services. "Sender" means a person who sends a message, causes the message to be sent, or authorizes the sending of the message.[32] Further, persons aiding or abetting the offences under Section 9 or 11 are also punishable under the Act.[33]

Canada: Spammers and beneficiaries. Under Section 6, it is prohibited to send, or cause or permit to be sent, a CEM to an electronic address. Under Section 7, it is prohibited, in the course of a commercial activity, to alter or cause to be altered the transmission data in a CEM. Under Section 8, a person must not, in the course of a commercial activity, install or cause to be installed a computer program on any other person's computer system or, having so installed or caused to be installed a computer program, cause an electronic message to be sent from that computer system.

United Kingdom: Spammers and beneficiaries. The texts of Sections 19, 20, 21 and 22 all prohibit the transmission, as well as the instigation of the transmission, of communications for direct marketing purposes without the consent of the recipient.

Who Can Sue

United States: The FTC[34], Attorneys General[35], ISPs and IAPs[36] and, most recently, even companies/private entities[37].

Australia: The Australian Communications and Media Authority (ACMA).[38]

Singapore: Any injured party, including individual users.[39]

Canada: Any injured party, including individual users.[40]

United Kingdom: Any person who suffers damage by reason of any contravention of any of the requirements of these Regulations.[41]

Exceptions

United States: Transactional or relationship messages.[42] The term "transactional or relationship message" means an electronic mail message the primary purpose of which is (i) to facilitate, complete, or confirm a commercial transaction; (ii) to provide warranty information, product recall information, etc. with respect to a commercial product or service used or purchased by the recipient; (iii) to provide notifications concerning a change in the terms or features of, a change in the recipient's standing or status with respect to, or information with respect to, a subscription, membership, account, loan, or comparable ongoing commercial relationship involving the ongoing purchase or use by the recipient of products or services offered by the sender; (iv) to provide information directly related to an employment relationship or related benefit plan in which the recipient is currently involved, participating, or enrolled; or (v) to deliver goods or services, including product updates or upgrades, that the recipient is entitled to receive under the terms of a transaction that the recipient has previously agreed to enter into with the sender.

Australia: Designated Commercial Electronic Messages (DCEMs). A DCEM is a message containing purely factual information, any related comments of a non-commercial nature and some limited commercial information as to the identity of the sender company or individual.[43] A message is a DCEM if (a) the sending of the message is authorised by a government body, a registered political party, a religious organisation, or a charity or charitable institution; (b) the message relates to goods or services; and (c) the body is the supplier, or prospective supplier, of the goods or services concerned.[44] Messages from educational institutions are also covered: an electronic message is a DCEM if (a) the sending of the message is authorised by an educational institution; (b) the relevant electronic account-holder, or a member or former member of the account-holder's household, is or has been enrolled as a student in that institution; (c) the message relates to goods or services; and (d) the institution is the supplier, or prospective supplier, of the goods or services concerned.

Singapore: Electronic messages authorised by the Government.[45] The Act does not apply to any electronic message where the sending of the message is authorised by the Government or a statutory body on the occurrence of any public emergency, in the public interest or in the interests of public security or national defence.[46] A certificate signed by the Minister shall be conclusive evidence of the existence of a public emergency and of the other matters stated above.[47]

Canada:

  • Family and personal relationships, where a "family relationship" is a relationship between two people related through marriage, a common law partnership, or any legal parent-child relationship, who have had direct, voluntary two-way communications; and a "personal relationship" means a relationship between two people who have had direct, voluntary two-way communications where it would be reasonable to conclude that the relationship is personal.[48]

  • Mails sent to an individual who practises a particular commercial activity, where the mail contains solely an inquiry or application related to that activity.[49]

  • A mail which provides a quote or estimate for the supply of a product, goods, a service, etc., if requested by the recipient; facilitates, completes or confirms a commercial transaction that the recipient previously agreed to enter into with the sender; provides warranty information, product recall information, etc. about a product, goods or a service that the recipient uses, has used or has purchased; provides notification of factual information about the ongoing use or ongoing purchase by the recipient of a product, goods or a service offered under a subscription, membership, account, loan or similar relationship by the sender; provides information directly related to an employment relationship or related benefit plan in which the recipient is currently involved, participating or enrolled; or delivers a product, goods or a service, including updates or upgrades, that the recipient is entitled to receive under the terms of a transaction previously entered into with the sender.[50]

  • Telecommunications service providers, merely because the service provider provides a telecommunications service that enables the transmission of the message.[51]

  • CEMs which are two-way voice communications between individuals, or which are sent by means of a facsimile or a voice recording to a telephone account.[52]

United Kingdom: A person may send or instigate the sending of electronic mail for the purposes of direct marketing where (a) that person has obtained the contact details of the recipient of that electronic mail in the course of the sale, or negotiations for the sale, of a product or service to that recipient; (b) the direct marketing is in respect of that person's similar products and services only; and (c) the recipient has been given a simple means of refusing (free of charge, except for the costs of transmission of the refusal) the use of his contact details for the purposes of such direct marketing, at the time that the details were initially collected and, where he did not initially refuse the use of the details, at the time of each subsequent communication.[53]

Penalties

United States: Civil and criminal. Statutory damages are calculated by multiplying the number of violations by up to $250; the total amount of damages may not exceed $2,000,000.[54] Imprisonment of up to 5 years is possible.[55] The offender may also forfeit (i) any property, real or personal, constituting or traceable to gross proceeds obtained from the offence, and (ii) any equipment, software or other technology used or intended to be used to commit or facilitate the commission of the offence.[56]

Australia: Civil only. Penalties are expressed in penalty units and depend on whether the contravener is a body corporate or a person, whether there is a prior record, the number of contraventions, and whether the provision contravened is subsection 16(1), (6) or (9) (the higher figure below) or any other provision (the lower figure):

For a body corporate without a prior record: up to 2 contraventions, not more than 100 or 50 penalty units; more than 2 contraventions, not more than 2,000 or 1,000 penalty units.

For a body corporate with a prior record: up to 2 contraventions, not more than 500 or 250 penalty units; more than 2 contraventions, not more than 10,000 or 5,000 penalty units.

For a person without a prior record: up to 2 contraventions, not more than 20 or 10 penalty units; more than 2 contraventions, not more than 400 or 200 penalty units.

For a person with a prior record: up to 2 contraventions, not more than 100 or 50 penalty units; more than 2 contraventions, not more than 2,000 or 1,000 penalty units.[57]

Singapore: Civil only. Remedies include (i) an injunction; (ii) damages, calculated in terms of the loss suffered as a direct or indirect result of the contravention of the Act; (iii) statutory damages not exceeding $25 for each CEM and not exceeding $1 million in the aggregate, unless the plaintiff proves that his actual loss from such CEMs exceeds $1 million;[58] and (iv) costs of litigation to the plaintiff.[59]

Canada: Civil only. An administrative monetary penalty, the purpose of which is to promote compliance with the Act and not to punish.[60] The maximum penalty for a violation is $1,000,000 in the case of an individual and $10,000,000 in the case of any other person.[61]

United Kingdom: Civil on private action; criminal for non-compliance with the Information Commissioner's notice. A person who suffers damage by reason of any contravention of any of the requirements of these Regulations by any other person is entitled to bring proceedings for compensation from that other person for that damage.[62] The enforcement authority for these regulations is Britain's Information Commissioner, who oversees both the Act and the Regulations, investigates complaints and makes findings in the form of various types of notices.[63] Failure to comply with any notice issued by the Information Commissioner is a criminal offence, punishable with a fine of up to £5,000 in England and Wales and £10,000 in Scotland.[64]

THE PROBLEM OF SPAM - WHY IT PERSISTS

As per a study conducted by Kaspersky Lab in 2014, 66.34% of all messages exchanged over the internet were spam.[65] Over the 2000s, several countries recognized the threats posed by spam and enacted specific legislation to tackle it. The laws considered in this paper are the CAN-SPAM Act, 2003 of the United States; Canada's Anti-Spam Legislation, 2014; the Spam Act, 2003 of Australia; Singapore's Spam Control Act, 2007; and the Privacy and Electronic Communications (EC Directive) Regulations, 2003 of the United Kingdom. As will be analyzed in the course of this paper, none of these laws has yet evolved into a comprehensive mechanism for combating spam. Nevertheless, since the enactment of these laws, spam has declined as a percentage of total email traffic; the absolute quantity of spam, however, has increased owing to the exponential growth of email traffic worldwide.[66]

Who Benefits from Spam?

1. Commercial establishments - Spamming is one of the most cost-effective means of promoting products and services to a large number of potential customers. Spam messages are not necessarily duplicitous and often contain legitimate information to which a fraction of recipients respond positively. As per a recent study, for spam to be profitable, only 1 in 25,000 recipients needs to open the email, get enticed, and make a gray-market purchase.[67]

2. Non-commercial establishments benefitting from advertisements - Many seemingly non-profit messages benefit from revenue generated through advertisements when recipients visit their site. Advertisers pay these sites either per click or per impression.

3. Spammers - The costs incurred by spammers largely comprise the cost of harvesting e-mail addresses and phone numbers and the cost of paying botnet operators. These costs are negligible compared to the revenue spammers earn as a share of the profits made by the merchants on whose behalf the messages are sent.[68]

Thus, spamming proves to be an activity that involves minimal investment and often yields some response from prospective clients.
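To make these economics concrete, the toy calculation below plugs the 1-in-25,000 conversion figure cited above into a hypothetical campaign; the message volume, per-message cost and per-sale commission are invented placeholders, not figures from this paper or the cited study.

```python
# Back-of-the-envelope spam economics. Only the 1-in-25,000 conversion rate comes
# from the study cited in the text; every other number is a hypothetical placeholder.
MESSAGES_SENT = 10_000_000          # hypothetical campaign size
COST_PER_MESSAGE = 0.00003          # hypothetical botnet/harvesting cost per mail (USD)
CONVERSION_RATE = 1 / 25_000        # figure cited in the text
COMMISSION_PER_SALE = 20.0          # hypothetical spammer commission per purchase (USD)

cost = MESSAGES_SENT * COST_PER_MESSAGE
sales = MESSAGES_SENT * CONVERSION_RATE
revenue = sales * COMMISSION_PER_SALE

print(f"cost: ${cost:,.2f}, sales: {sales:,.0f}, revenue: ${revenue:,.2f}")
print(f"profit: ${revenue - cost:,.2f}")
```

Even under these rough assumptions, the outlay is a few hundred dollars against several thousand in commissions, which illustrates why the activity persists despite very low response rates.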

The impact of spam is clearly widespread. Presently, India lacks a specific anti-spam legislation. In view of the swelling growth of spam across the globe and the increasing number of Indian internet users, it is of utmost urgency that specific legislation be formulated to tackle the issue.

OBSERVATIONS AND ANALYSIS

1. Definition of Spam

a. 'Spam' must be defined in a technologically neutral manner

The legislations analyzed in this paper each deal with either one or a cluster of modes of communication through which spam may be sent. However, it is essential that 'spam' be defined in a technologically neutral manner. Most commercial spam is aimed at promoting products and services to a large number of prospective customers. Making only spam e-mails illegal, as the CAN-SPAM Act does, therefore fails to address the issue wholly, since companies retain the option of sending unsolicited messages through other communication channels. It becomes a matter of merely switching modes of communication, without any actual deterrence to spamming. A narrow understanding of spam, limited to one or a few modes of communication, is thus problematic; a model law warrants a broader definition that discourages unsolicited messages sent via any network.

b. Non-commercial spam must also be addressed

The five legislations examined in this paper address only the issue of unsolicited 'commercial' mails/messages. For instance, under the CAN-SPAM Act, a commercial mail means "any electronic mail message the primary purpose of which is the commercial advertisement or promotion of a commercial product or service". Singapore's Spam Control Act defines a commercial message in a similar, though more elaborate, fashion. CASL, while limiting the scope of the law to commercial mail, additionally prescribes that such communication need not have a profit motive. Australia's Spam Act defines a commercial message as a message that has the purpose of offering, advertising or promoting goods or services, or the supplier or prospective supplier of goods or services. Under the EC Directive Regulations, the term used is 'marketing communication'; in essence, however, it covers only commercial communications.[69] These definitions suffer from an obvious exclusion error: it is known from experience that not all unsolicited messages are sent in pursuance of commercial interests. Unsolicited mails and messages often contain explicit sexual content, or promote political and religious agendas and are sent by party volunteers.

Thus, it would be more consonant with the larger aim of curbing spam to broaden the scope of these legislations to address both commercial and non-commercial messages.

c. Bulk requirement and its quantification

The Singaporean law makes 'sent in bulk' a mandatory requirement for spam. However, deciding what quantity of a particular message qualifies it as bulk is difficult. If an objective threshold is set, say 100 messages in 24 hours, then anything short of that, even 99 messages, goes unaddressed simply because it does not meet the statutory requirement of being in bulk. This enables spammers to misuse the law by falling marginally short of the threshold and continuing to spam. The issue here is comparable to the one faced in setting an age bar for criminal culpability: whatever number is arrived at is likely to be arbitrary, and consequently the subject of criticism. A possible way to tackle this would be to strengthen unsubscribe mechanisms, by virtue of which individuals are able, at the very least, to stop receiving unsolicited mails. Determining a workable threshold for State action, and its feasibility, merits a much more detailed study. A minimal sketch of how such a statutory threshold operates in practice is given below.
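The sketch below, referenced above, shows how the Singaporean thresholds summarised in the comparative table (100 messages in 24 hours, 1,000 in 30 days, 10,000 in a year, all with the same subject matter) might be checked against a send log; the log format and helper function are hypothetical, and a regulator would of course need access to such data in the first place.

```python
# Minimal sketch of the "sent in bulk" thresholds summarised from Singapore's
# Spam Control Act in the table above. The send-log format is hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLDS = [
    (timedelta(hours=24), 100),
    (timedelta(days=30), 1_000),
    (timedelta(days=365), 10_000),
]

def sent_in_bulk(send_log: list[tuple[datetime, str]], now: datetime) -> set[str]:
    """Return the subjects whose recent send counts cross any statutory threshold."""
    flagged = set()
    for window, limit in THRESHOLDS:
        counts = defaultdict(int)
        for sent_at, subject in send_log:
            if now - sent_at <= window:
                counts[subject] += 1
        flagged |= {subject for subject, n in counts.items() if n > limit}
    return flagged

# Hypothetical usage: 150 copies of one subject inside 24 hours trips the first limit.
now = datetime(2015, 7, 1)
log = [(now - timedelta(minutes=i), "Cheap meds") for i in range(150)]
print(sent_in_bulk(log, now))   # {'Cheap meds'}
```

The example also makes the evasion problem visible: a sender who keeps each window's count just under its limit is never flagged, which is precisely the arbitrariness the paragraph above objects to.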

2. Consent Requirement

The three consent models compared below are the opt-out model, the opt-in model and the double opt-in model.

Countries following the model

Opt-out model: United States of America and Singapore.

Opt-in model: Canada, Australia and the United Kingdom.

Double opt-in model: None at present.

When messages may be sent

Opt-out model: At all times, until the recipient voluntarily opts out/unsubscribes.

Opt-in model: Only after the recipient voluntarily opts in/subscribes to receive messages by submitting his/her contact details to be part of a particular mailing list.

Double opt-in model: Only after the recipient responds in the affirmative to a confirmation mail sent by the sender on receiving an opt-in request from the recipient.

Specific requirements

Opt-out model: 1. The mail/message must bear a clear identifier of its content, e.g. marked 'ADVT' for advertisements; 2. an 'unsubscribe' option must be provided in the message, which may be used by the recipient to express his/her disinterest in the message; and 3. the message must conspicuously bear a valid physical postal address.

Opt-in model: N/A.

Double opt-in model: N/A.

Advantages

Opt-out model: Promotes commercial speech rights - since the default position presumes the right to market, average collection rates are considerably higher, as more emails can be sent to more people.

Opt-in model: 1. Reduction in unsolicited messages - commercial messages are not sent until the recipient voluntarily consents to receiving them by submitting his/her contact information. 2. Availability of an unsubscribe option - even after a recipient voluntarily opts in, he/she retains the right to withdraw by unsubscribing.

Double opt-in model: 1. Ensures that people enter their information correctly, which means a cleaner list and lower bounce rates. 2. Reduces the probability of spam complaints, because subscribers have had to take the extra step of confirming their consent.

Disadvantages

Opt-out model: 1. It merely places the burden of reducing spam on the recipients. 2. The functionality of the 'unsubscribe' link is itself questionable; very often these links are fraudulent, in which case the recipient is further harmed before any opting out can even take place. 3. In the absence of strict regulatory oversight, there is no incentive for senders to honour unsubscribe requests.

Opt-in model: 1. Consent may be obtained in fact but not in spirit, through inconspicuous pre-ticked check boxes. 2. E-mail addresses may be added to a list by spambots, so the person 'opted in' may not actually be the person opting in. 3. Errors may be made when entering emails; a typo may result in someone submitting an address that is not theirs. 4. Legitimate addresses may be added by someone who does not own the address.

Double opt-in model: 1. Genuine subscribers may not clearly understand the confirmation process and may fail to click the verification link. 2. Confirmation emails may get stuck in spam filters.

The comparison above highlights that both the opt-out model and the opt-in model leave loopholes. The opt-in model has been advocated as the better of the two, since it prohibits the sending of messages unless the recipient consents to receiving them. However, as pointed out above, under this model consent may be given by entities other than the owner of the contact details. In such a situation, a double opt-in model may be a viable option to contemplate, as it is the only model in which it can be ensured that only the addressee is able to successfully opt in.[70]

Presently, the double opt-in model has not been adopted by any of the countries discussed in this paper. Nonetheless, it appears to have the potential to aid the fight against spam more effectively than the existing models. Its real efficacy, however, can be proven only through practical implementation.
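
As a rough illustration of how a double opt-in flow differs from a simple opt-in, the following sketch only adds an address to a mailing list after a confirmation token sent to that address is returned. The function names, data stores and confirmation URL are hypothetical and not drawn from any of the statutes discussed.

import secrets

pending = {}         # confirmation token -> e-mail address awaiting confirmation
subscribers = set()  # confirmed mailing list

def send_mail(to, body):
    # Placeholder for an actual mail transport.
    print(f"to={to}: {body}")

def request_subscription(email):
    # Opt-in step: record the request and e-mail a confirmation link.
    # Nothing is added to the mailing list yet.
    token = secrets.token_urlsafe(16)
    pending[token] = email
    send_mail(email, f"Confirm at https://example.org/confirm?token={token}")

def confirm_subscription(token):
    # Double opt-in step: only the holder of the mailbox receives the token,
    # so a confirmed address was necessarily added by its actual owner.
    email = pending.pop(token, None)
    if email is not None:
        subscribers.add(email)
    return email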

3. Exceptions

a. Family and Personal Relationships

Under the CASL, an exception is made for 'personal relationships' and 'family relationships'. However, these terms are defined quite narrowly. For instance, a family relationship is defined as 'a relationship between two people related through marriage, a common law partnership, or any legal parent-child relationship and those individuals have had direct, voluntary, two-way communication'.[71] This implies that if an individual sends a message offering to sell something to a member of his extended family, say a cousin, without first obtaining consent, the message would qualify as spam under the CASL. This would be especially problematic in the Indian context, where comparatively larger family structures prevail.

No such exceptions are made in the anti-spam legislations of the other four countries. Quite obviously, these exceptions are of crucial significance and must be provided in any anti-spam legislation; however, it is important that they be defined in a manner such that their actual purpose, namely the exclusion of familial and personal relationships from regulations aimed at spammers, is effectively achieved and the law does not become a source of unnecessary litigation.

b. Transactional Messages

The term 'transactional messages' is used only under the CAN-SPAM Act of the USA. It covers messages sent when the recipient stands in an existing transactional relationship with the sender and the mail contains information specific to the recipient. It also includes employment relationships. In the CASL, a similar exception is made under Section 6(6); the section is worded almost identically to the CAN-SPAM provision, though the term 'transactional messages' is not used. Under the UK laws, messages for the purpose of direct marketing may be sent where the contact information of the recipient was obtained in the course of the sale, or negotiations for the sale, of a product or service to that recipient, thus implying an existing transactional relationship. One added proviso under the UK law is that the recipient must be clearly and distinctly given the opportunity to object, free of charge and in an easy manner, to the use of the e-mail address both when it is collected and on the occasion of each subsequent message, where the customer has not initially refused such use.[72]

An exception for transactional messages is essential to ensure freedom of commercial speech rights even while effectively tackling spam. In the formulation of a model law, a combination of the American and the English laws may be workable.

c. Governmental Messages

The Spam Act, 2003 of Australia makes an exemption for 'designated commercial electronic messages (DCEMs)'. This exemption is intended to avoid any unintended restriction on communication between the government and the community.[73] In order to be a DCEM, a message must:

1. Be authorized by the government;

2. Contain purely factual information and any related comments of a non-commercial nature; and

3. Contain some information as to the identity of the sender company/individual.

DCEMs need not always be sent by government bodies and may also be sent by third parties authorized by the government.[74] Such messages are exempt from the consent requirement as well as the unsubscribe option requirement but must comply with the identifier requirement. However, where government bodies are operating in a competitive environment, the provisions of the act would apply normally to them.[75]

Similarly, Singapore's Spam Control Act does not apply to any electronic message where the sending of the message is authorized by the Government or a statutory body on the occurrence of any public emergency, in public interest or in the interests of public security or national defence.

These exemptions are essential in order to enable the free communication of important information between the government and citizens. The Singaporean wording of the exception is rather broad and would give the government considerable room to misuse the law. Such a wording might be more effective if supplemented with the Australian proviso, under which government bodies operating in a competitive environment are excluded from the exemption.

4. Penalties

a. Penalties must be higher than the benefit from spamming

If the prescribed penalty is too low, such that the loss suffered from paying it is lower than the net benefit from spamming, the spammer is not sufficiently deterred. Four out of the five countries analyzed in this paper prescribe only civil penalties in the form of fines for spamming. Facebook spammers, for instance, were recently found to be making $200 million a year.[76] The Australian law, as noted above, caps the penalty at $1 million; such a penalty would constitute a small fraction of the profit from spamming and would not deter a spammer.

b. High penalty does not imply effective deterrence where probability of prosecution is low.

The CAN-SPAM Act prescribes the harshest penalties, including both civil and criminal penalties. However, it has been rather ineffective in reducing spam, because the Act is more about how to spam legally than anything else; its message, in effect, is 'you can spam, but do not use false headers'.[77] As a consequence, unintentional spam from ignorant commercial establishments has declined. However, owing to easy compliance standards, the 'real' spammers still go largely undetected.[78] Thus, even moderate penalties may serve as good deterrents where the probability of prosecution is high.
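
The deterrence point can be stated as a simple expected-value comparison: spamming is deterred only when the penalty, discounted by the probability of actually being prosecuted, exceeds the gain. The sketch below uses the $200 million profit and $1 million cap mentioned above; the prosecution probabilities are purely illustrative assumptions.

def deters(benefit, penalty, p_prosecution):
    # Deterrence requires the expected cost of punishment to exceed the gain.
    return penalty * p_prosecution > benefit

# $200m annual profit vs. a $1m penalty cap, with an assumed 5% chance
# of successful prosecution: no deterrence.
print(deters(benefit=200_000_000, penalty=1_000_000, p_prosecution=0.05))    # False
# Raising the penalty tenfold still does not help while enforcement is weak.
print(deters(benefit=200_000_000, penalty=10_000_000, p_prosecution=0.05))   # False
# A penalty proportionate to the gain, enforced with high probability, does.
print(deters(benefit=200_000_000, penalty=400_000_000, p_prosecution=0.8))   # True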

c. Effective enforcement is the key to effective deterrence.

The cornerstone of an effective spam law is effective enforcement. Penalties must be enforced in such a manner that the cost of punishment is always higher than the benefit from spamming and the probability of conviction is high. In order to implement legislative measures effectively, governments should also undertake information campaigns on spam issues targeting users, business communities, private sector groups and other stakeholders, since one primary reason spam persists is the response it receives from certain recipients. Such supplementary activities would also facilitate the preservation of commercial rights, as excessive penalties could inhibit regular commercial activity.

CONCLUSION

The observations made in this paper are crucial to the formulation of a model anti-spam law for India. The most important part of any anti-spam legislation would be the definition of 'spam', which, as established above, must be technologically neutral in order to address as much unsolicited communication as possible. On the question of consent, this paper proposes a double opt-in model. It has been contemplated and recommended by academic and policy researchers as a possibly more effective consent model for spam laws; however, it has not been codified in any legal regime to date. It could be a rather groundbreaking approach for India to adopt, as it is clearly the only model in which 'opting in' is realized in fact and in spirit. Further, exceptions are necessary in order to prevent abuse of the law, while making certain that such exceptions do not suffer from inclusion or exclusion errors; a combination of the exceptions under the Australian and the American laws seems ideal at this stage of research. In terms of penalties, this paper observed that merely prescribing harsh penalties is not sufficient to deter spammers effectively; efficient modes of enforcement have to be formulated to ensure actual deterrence. Lastly, while a well-drafted national anti-spam legislation is clearly the need of the hour for India, additional steps have to be taken towards sensitizing citizens to the fact that spam is a real and costly threat to the country's communications infrastructure, and that combating it has to begin at the individual level.


[1] CAN-SPAM Act, § 7706(f) (7).

[2] Spam Act, 2003, § 7

[3] Spam Control Act, 2007, § 7(2)

[4] Canada's Anti-Spam Legislation, 2014, § 6.

[5] Canada's Anti-Spam Legislation, 2014, § 12.

[6] 15 U.S.C. § 7701 (2003).

[7] CAN-SPAM Act, Section 3 (2)(A)

[8] Spam Act, 2003, § 6

[9] Spam Control Act, 2007, § 5(1)

[10] Canada's Anti-Spam Legislation, 2014, § 6

[11] Canada's Anti-Spam Legislation, 2014, § 1(1)

[12] Regulation 19, EC Directives, 2003

[13] Regulation 20, EC Directives, 2003

[14] Regulation 21, EC Directives, 2003

[15] Regulation 22, EC Directives, 2003

[16] Section 11, Data Protection Act, 1998

[17] CAN-SPAM Act, Section 5(5)

[18] Spam Act, 2003, § 16(2)

[19] Spam Act, 2003, Schedule 2 (2)

[20] Spam Control Act, 2007 Section 11, Schedule 2(2)

[21] Canada's Anti-Spam Legislation, 2014, Section 6

[22] CAN-SPAM Act, 2003, Section 5(d)

[23] Spam Control Act, 2007, Schedule 2, 3(1), Section 11

[24] Chapter 47 of Title 18, U.S.C., § 1037, inserted through an amendment by the CAN-SPAM Act, § 4(a)(1); § 5(A)(1).

[25] Spam Control Act, 2007, § 9

[26] Spam Control Act, 2007, § 2

[27] Spam Control Act, 2007, § 2

[28] Canada's Anti-Spam Legislation, 2014, § 7

[29] Canada's Anti-Spam Legislation, 2014, § 8

[30] CAN-SPAM Act, 2003, § 3(16)(A)

[31] Spam Act, 2003, Section 16(1), Section 8

[32] Spam Control Act, 2007, § 2

[33] Spam Control Act, 2007, § 12

[34] CAN-SPAM Act, 2003, § 7(a)(c)(d)

[35] CAN-SPAM Act, 2003, § 7(f)

[36] CAN-SPAM Act, 2003, § 7(g)

[37] MySpace, Inc. v. The Globe.com, Inc., 2007 WL 1686966 (C.D. Cal., Feb. 27, 2007)

[38] Spam Act, 2003, § 26(1)

[39] Spam Control Act, 2007, § 13

[40] Canada's Anti-Spam Legislation, § 47

[41] Regulation 30(1), EC Directives, 2003

[42] CAN-SPAM Act, 2003, § 3(2)(B)

[43] Spam Act, 2003, Schedule 1, § 2

[44] Spam Act, 2003, Schedule 1, § 3

[45] Spam Control Act, 2007, § 7(3)

[46] Spam Control Act, 2007, First Schedule Clause (1)

[47] Spam Control Act, 2007, First Schedule Clause (2)

[48] Canada's Anti-Spam Legislation, § 6(5a)

[49] Canada's Anti-Spam Legislation, § 6(5b)

[50] Canada's Anti-Spam Legislation, § 6(6)

[51] Canada's Anti-Spam Legislation, § 7

[52] Canada's Anti-Spam Legislation, § 8

[53] Regulation 22(3), EC Directives, 2003

[54] CAN-SPAM Act, § 7 (f)(3)(A).

[55] CAN-SPAM Act, § 4 (b)

[56] CAN-SPAM Act, § 4 (c)

[57] Spam Act, 2003, Sections 24, 25

[58] Spam Control Act, 2007, § 14

[59] Spam Control Act, 2007, § 15

[60] Canada's Anti-Spam Legislation, 2014, § 20(2)

[61] Canada's Anti-Spam Legislation, 2014, § 20(4)

[62] Regulation 30(1), EC Directive, 2003

[63] Regulations 31-32, EC Directive, 2003

[64] Section 47 and 60, Data Protection Act, 1998

[65] Spam and Phishing Statistics Report Q1-2014, Kaspersky Lab

http://usa.kaspersky.com/internet-security-center/threats/spam-statistics-report-q1-2014#.VVQxNndqN5I (last accessed 29th May, 2015)

[66] Snow and Jayakar, Krishna, Can We Can Spam? A Comparison of National Spam Regulations, August 15, 2013. TPRC 41: The 41st Research Conference on Communication, Information and Internet Policy.

[67] Justin Rao and David Reiley, The Economics of Spam, Vol. 26, No. 3 The Journal of Economic Perspectives (2012), p. 104.

[68] Supra n. 66; p. 7

[69] Refer Table in Section 1.

[70] Dr. Ralph F. Wilson, Spam, Spam Bots, and Double Opt-in E-mail Lists, April 21, 2010; available at http://webmarketingtoday.com/articles/wilson-double-optin/ (last accessed 29th May 2015).

[71] Section 2(a), Electronic Commerce Protection Regulations, http://fightspam.gc.ca/eic/site/030.nsf/eng/00273.html (last accessed 29th May 2015)

[72] Evangelos Moustakas, C. Ranganathan and Penny Duquenoy, Combating Spam Through Legislation: A Comparative Analysis Of US And European Approaches, available at http://ceas.cc/2005/papers/146.pdf

[73] Spam Act 2003- A Practical Guide for Government, Australian Communications Authority, available at- http://www.acma.gov.au/webwr/consumer_info/spam/spam_act_pracguide_govt.pdf (last accessed 29th May 2015)

[74] Ibid

[75] Id

[76] Charles Arthur, Facebook spammers make $200m just posting links, researchers say, The Guardian, 28th August 2013, http://www.theguardian.com/technology/2013/aug/28/facebook-spam-202-million-italian-research (last accessed 29th May, 2015)

[77] Evangelos Moustakas, C. Ranganathan and Penny Duquenoy, Combating Spam Through Legislation: A Comparative Analysis Of US And European Approaches, available at http://ceas.cc/2005/papers/146.pdf

[78] Carolyn Duffy Marsan, CAN-SPAM: What went wrong?, 6th October 2008, available at

http://www.networkworld.com/article/2276180/security/can-spam--what-went-wrong-.html (last accessed 29th May, 2015)

Regulatory Perspectives on Net Neutrality

by Pranesh Prakash last modified Jul 18, 2015 02:46 AM
In this paper Pranesh Prakash gives an overview of why India needs to put in place net neutrality regulations, and the form that those regulations must take to avoid over-regulation.

With assistance by Vidushi Marda (Programme Officer, Centre for Internet and Society) and Tarun Krishnakumar (Research Volunteer, Centre for Internet and Society). I would like to specially thank Vishal Misra, Steve Song, Rudolf van der Berg, Helani Galpaya, A.B. Beliappa, Amba Kak, and Sunil Abraham for extended discussions, helpful suggestions and criticisms.  However, this paper is not representative of their views, which are varied.


Today, we no longer live in a world of "roti, kapda, makaan", but in the world of "roti, kapda, makaan aur broadband".[1] This is recognized by the National Telecom Policy, clause IV.1.2 of which states the need to "recognise telecom, including broadband connectivity as a basic necessity like education and health and work towards 'Right to Broadband'."[2] According to the IAMAI, as of October 2014, India had 278 million internet users.[3] Of these, the majority access the Internet through their mobile phones, and the WEF estimates that only 3 in 100 have broadband on their mobiles.[4] Thus, the bulk of our population is without broadband. Telecom regulation and net neutrality have a very important role to play in enabling this vision of the Internet as a basic human need that we should aim to fulfil.

1. Why should we regulate the telecom sector?

All ICT regulation should be aimed at achieving five goals: achieving universal, affordable access;[5] ensuring and sustaining effective competition in an efficient market and avoiding market failures; protecting against consumer harms; ensuring maximum utility of the network by ensuring interconnection; and addressing state needs (taxation, security, etc.). Generally, these goals go hand in hand; however, some tensions may arise. For instance, universal access may not be provided by the market because the costs of doing so in certain rural or remote areas may outweigh the immediate monetary benefits private corporations could receive in terms of profits from those customers. In such cases, to further the goal of universal access, schemes such as universal service obligation funds are put in place, while ensuring that such schemes do not impact competition, or impact it only minimally.

It is clear that effective regulation of the ICT sector is required to maximise societal benefit: the sector is otherwise inherently prone to monopolies, owing to the ability of dominant players to abuse network effects to their advantage. For instance, in the absence of regulation, a dominant player would charge far less for intra-network calls than inter-network calls, making customers shift to the dominant network. This kind of harm to competition should be regulated by the ICT regulator. However, it is equally true that over-regulation is as undesirable as under-regulation, since over-regulation harms innovation - whether in the form of innovative technologies or innovative business models. The huge spurt of growth globally in the telecom sector since the 1980s has resulted not merely from advancements in technology, but in large part from the de-monopolisation and deregulation of the telecom sector.[6] Similarly, the Internet has largely flourished under very limited technology-specific regulation. For instance, while interconnection between different telecom networks is heavily regulated in the domestic telecom sector, interconnection between the different autonomous systems (ASes) that make up the Internet is completely unregulated, thereby allowing for non-transparent pricing and opaque transactions. Given this context, we must ensure we do not over-regulate, lest we kill innovation.

2. Why should we regulate Net Neutrality? And whom should we regulate?

We wouldn't need to regulate Net Neutrality if ISPs were not "gatekeepers" for last-mile access. "Gatekeeping" occurs when a single company establishes itself as an exclusive route to reach a large number of people and businesses or, in network terms, nodes. It is not possible for Internet services to reach the customers of the telecom network without passing through the telecom network. The situation is very different in the middle-mile and for backhaul. Even though anti-competitive terms may exist in the middle-mile, especially given the opacity of terms in "transit agreements", a packet is usually able to travel through multiple routes if one route is too expensive (even if that is not the shortest network path, and is thus inefficient in a way). However, this multiplicity of routes is not possible in the last mile.

This leaves last mile telecom operators (ISPs) in a position to unfairly discriminate between different Internet services or destinations or applications, while harming consumer choice. This is why we believe that promoting the five goals mentioned above would require regulation of last-mile telecom operators to prevent unjust discrimination against end-users and content providers.

Thus, net neutrality is the principle that we should regulate gatekeepers to ensure they do not use their power to unjustly discriminate between similarly situated persons, content or traffic.

3. How should we regulate Net Neutrality?

3.1. What concerns does Net Neutrality raise? What harms does it entail?

Discriminatory practices at the level of access to the Internet raise the following set of concerns:

1. Harm to freedom of speech and expression, freedom of association, freedom of assembly, and privacy.

2. Harm to effective competition

a. This includes competition amongst ISPs as well as competition amongst content providers.

b. Under-regulation here may cause harm to innovation at the content provider level, including through erecting barriers to entry.

c. Over-regulation here may cause harm to innovation in terms of ISP business models.

3. Harm to consumers

a. Under-regulation here may harm consumer choice and the right to freedom of speech, expression, and communication.

b. Over-regulation on this ground may cause harm to innovation at the level of networking technologies and be detrimental to consumers in the long run.

4. Harm to "openness" and interconnectedness of the Internet, including diversity (of access, of content, etc.)

a. Exceptions for specialized services should be limited, to preserve the openness and interconnectedness of the Internet and of the World Wide Web.

It might help to think about Net Neutrality as primarily being about two overlapping sets of regulatory issues: preferential treatment of particular Internet-based services (in essence: content- or source-/destination-based discrimination, i.e., discrimination on basis of 'whose traffic it is'), or discriminatory treatment of applications or protocols (which would include examples like throttling of BitTorrent traffic, high overage fees upon breaching Internet data caps on mobile phones, etc., i.e., discrimination on the basis of 'what kind of traffic it is').

Situations where negative or positive discrimination happens on the basis of particular content or addresses should be regulated through the use of competition principles, while negative or positive discrimination at the level of a specific class of content, protocols, associated ports, and other such sender-/receiver-agnostic features should be regulated through regulation of network management techniques. The former deals with instances where the question "in whose favour is there discrimination?" may be asked, while the latter deals with the question "in favour of what is there discrimination?"

In order to do this, a regulator like TRAI can use both hard regulation - price ceilings, data cap floors, transparency mandates, preventing specific anti-competitive practices, etc. - as well as soft regulation - incentives and disincentives.

3.1.1 Net Neutrality and human rights

Any discussion on the need for net neutrality implicates the human rights of a number of different stakeholders. Users, subscribers, telecom operators and ISPs all possess distinct and overlapping rights that are to be weighed against each other before the scope, nature and form of regulatory intervention are finalised. The freedom of speech, the right to privacy and the right to carry on trade raise some of the most pertinent questions in this regard.

For example, to properly consider issues surrounding the practice of paid content-specific zero-rating from a human rights point of view, one must seek to balance the rights of content providers to widely disseminate their 'speech' to the largest audiences against the rights of consumers to have access to a diverse variety of different, conflicting and contrasting ideas.

This commitment to a veritable marketplace or free market of ideas has formed the touchstone of freedom of speech law in jurisdictions across the world, and has found mention in pronouncements of the Indian Supreme Court. Particular reference may be made to the dissent of Mathew, J. in Bennett Coleman v. Union of India[7] and to the majority in Sakal Papers v. Union of India,[8] which rejected the approach.

Further, the practice of deep packet inspection, which is sometimes used in the process of network management, raises privacy concerns, as it seeks to go beyond the "public" information in the header of an IP packet, which is necessary for routing, to analyse non-public information.[9]

3.2 What conditions and factors may change these concerns and the regulatory model we should adopt?

While the principles relating to Net Neutrality remain the same in all countries (i.e., trying to prevent gatekeepers from unjustly exploiting their position), the severity of the problem varies depending on competition in the market, on the technologies, and on many other factors. One way to measure the fair or stable allocation of the surplus created by a network - or a network-of-networks like the Internet - is by treating it as a convex cooperative game and calculating that game's Shapley value:[10] in the case of the Internet, this would be a game involving content ISPs, transit ISPs, and eyeball (i.e., last-mile) ISPs. The Shapley value changes depending on the number of competitors in the market: thus, the fair/stable allocation when there is vibrant competition in the market is different from the fair/stable allocation in a market without such competition. That goes to show that a desirable approach when an ISP tries to unjustly enrich itself by charging other network participants may well be to increase competition, rather than directly regulating the last-mile ISP. Further, it shows that in a market with vibrant last-mile competition, the capacity of the last-mile ISP to unjustly discriminate is far diminished.
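
To make the Shapley-value point concrete, the following sketch computes Shapley values by averaging marginal contributions over all orderings of the players, using a deliberately simplified, hypothetical characteristic function in which a surplus of 100 units is created only when a content ISP, a transit ISP and at least one eyeball (last-mile) ISP cooperate. Adding a second, substitutable eyeball ISP shrinks each eyeball ISP's fair share, which is the sense in which last-mile competition reduces what a last-mile ISP can justly extract.

from itertools import permutations

def shapley_values(players, value):
    # Average each player's marginal contribution over all orderings.
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            totals[p] += value(coalition) - before
    return {p: round(totals[p] / len(orderings), 2) for p in players}

# Hypothetical characteristic function: surplus exists only when content,
# transit and at least one eyeball (last-mile) ISP are all in the coalition.
def v(coalition):
    has_eyeball = any(p.startswith("eyeball") for p in coalition)
    return 100.0 if {"content", "transit"} <= coalition and has_eyeball else 0.0

# Monopoly last mile: each of the three players gets about 33.3 units.
print(shapley_values(["content", "transit", "eyeball1"], v))
# Competitive last mile: each eyeball ISP's share falls to about 8.3,
# while content and transit rise to about 41.7 each.
print(shapley_values(["content", "transit", "eyeball1", "eyeball2"], v))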

In countries which are remote and have little international bandwidth, the need to conserve that bandwidth is high. ISPs can do so either by increasing the price of Internet connections for all, or by imposing usage restrictions (such as throttling) on heavy users or on bandwidth-hogging protocols. If the amount of international bandwidth is higher, the need and desire on the part of ISPs to indulge in such usage restrictions decreases. Thus, the need to regulate is far higher in the latter case than in the former.

The above paragraphs show that both the need for regulation and also the form that the regulation should take depend on a variety of conditions that aren't immediately apparent.

Thus, the framework that the regulator sets out to tackle issues relating to Net Neutrality is most important, whereas the specific rules may need to change depending on changes in conditions. These conditions include:

● last-mile market

○ switching costs between equivalent service providers

○ availability of an open-access last-mile

○ availability of a "public option" neutral ISP

○ increase or decrease in the competition, both in wired and mobile ISPs.

● interconnection market

○ availability of well-functioning peering exchanges

○ availability of low-cost transit

● technology and available bandwidth

○ spectrum efficiency

○ total amount of international bandwidth and local network bandwidth

● conflicting interests of ISPs

○ do the ISPs have business interests other than providing Internet connectivity (telephony, entertainment, etc.)?

3.3 How should we deal with anti-competitive practices?

Anti-competitive practices in the telecom sector can take many forms: abuse of dominance, exclusion of access to specific services, customer lock-in, predatory pricing, tying of services, cross-subsidization, etc., are a few of them. In some cases the anti-competitive practice targets other telecom providers, while in others it targets content providers. In both cases, it is important to ensure that telecom subscribers have a competitive choice between effectively substitutable telecom providers and an ability to seamlessly switch between providers.

3.3.1 Lowering Switching Costs

TRAI has tackled many of these issues head on, especially in the mobile telephony space, while competitive market pressures have helped too:

Contractual or transactional lock-in. The easiest way to prevent shifting from one network to another is by contractually mandating a lock-in period, or by requiring special, non-interoperable equipment to connect to one's network. In India, this is not practised in the telecom sector, with the exception of competing technologies like CDMA and GSM. Non-contractual lock-ins, for instance by offering discounts for purchasing longer-term packages, are not inherently anti-competitive unless they result in predatory pricing or constitute an abuse of market dominance. In India, switching from one mobile provider to another, though enabled only some 15 years into the telecom revolution, is in most cases now almost as easy as buying a new SIM card.[11] TRAI may consider proactive regulation against contractual lock-in.

Number of competitors. Even if switching from one network to another is easy, it is not useful unless there are other equivalent options to switch to. In the telecom market, coverage is a very important factor in judging equivalence. Given that last-mile connectivity is extremely expensive to provide, the coverage of different networks is very different, and this is even more true when one considers wired connectivity, which is difficult to lay in densely-populated urban and semi-urban areas and unprofitable in sparsely-populated areas. The best way to increase the number of competitors is to make it easier for competitors to exist. Some ways of doing this would be enabling spectrum-sharing, lowering right-of-way rents, allowing post-auction spectrum trading, and promoting open-access last-mile fibre carriers, thereby encouraging competition on the basis of price and service rather than exclusive access to infrastructure.

Interconnection and mandatory carriage. The biggest advantage a dominant telecom player has is exclusive access to its customer base. Since no telco in the telecom market wants to be cut off from the customers of another telco, networks do not outright ban calls to and from other networks. However, dominant players can charge other networks high prices, thereby discriminating against smaller networks. In the early 2000s, Airtel-to-Airtel calls were much cheaper than Airtel-to-Spice calls. However, things have changed significantly since then. TRAI has, since the 2000s, heavily regulated interconnection and imposed price controls on interconnection ("termination") charges.[12] Thus, now, inter-network calls are generally priced similarly to intra-network calls. And if you want cheaper Airtel-to-Airtel calls, you can buy a special (unbundled) pack that enables an Airtel customer to take advantage of the fact that her friends are also on the same network, and benefits Airtel, since it does not have to pay termination charges in such cases. Recently, TRAI has even made interconnection rates zero in three cases - landline-to-landline, landline-to-cellular, and cellular-to-landline - in a bid to decrease landline call rates and incentivise landline use, while allowing a very low per-call interconnection charge of 14 paise for cellular-to-cellular calls.[13]

○ With regard to Net Neutrality, we must have a rule that no termination charges or carriage charges may be levied by any ISP upon any Internet service. No Internet service may be discriminated against with regard to carriage conditions or speeds or any other quality-of-service metric. In essence, all negative discrimination should be prohibited. This means that Airtel cannot forcibly charge WhatsApp or any other OTT (which essentially forms a different "layer") money for the "privilege" of being able to reach Airtel customers, nor may Airtel slow down WhatsApp traffic and thus try to force WhatsApp to pay. Telecom providers have a duty to carry any legitimate traffic ("common carriage"), not a privilege. It is important to note that consumer-facing TSPs get paid by other interconnecting Internet networks in the form of transit charges (or the TSP's costs are defrayed through peering). There shouldn't be any separate charge on the basis of content (a different layer from the carriage) rather than network (the same layer as the carriage). This principle is especially important for startups, which are often at the receiving end of such discriminatory practices.

Number Portability. One other factor that prevents users from shifting from one network to another is the fact that they have to change an important aspect of their identity: their phone number (this doesn't apply to Internet over DSL, cable, etc.). At least in the mobile space, TRAI has for several years tried to mandate seamless mobile number portability. The same is being tried by the European Commission in the EU.[14] While intra-circle mobile number portability exists in India - and TRAI is pushing for inter-circle mobile number portability as well[15] - this is nowhere near as seamless as it should be.

Multi-SIM phones. The Indian market is filled with phones that can accommodate multiple SIM cards, enabling customers to shift seamlessly between multiple networks. This is true not just of India, but of most developing countries with extremely price-sensitive customers. Theoretically, switching costs would approach zero if, in a market with full coverage by n telecom players, every subscriber had a phone with n SIM slots and low-cost SIM cards were available.

The situation in the telecom sector with respect to the above provides a stark contrast to the situation in the USA, and to the situation in the DTH market. In the USA, phones get sold at discounts with multi-month or multi-year contracts, and contractual lock-ins are a large problem. Keeping each of the above factors in mind, the Indian mobile telecom space is far more competitive than the US mobile telecom space.

Further, in the Indian DTH market, given that there is transactional lock-in (set-top boxes aren't interoperable in practice, though they are mandated to be so by law[16]), there are fewer choices in the market; moreover, the equivalent of multi-SIM phones doesn't exist with respect to set-top boxes. While there are must-carry rules with respect to carriage, they can be of three types: 1) must mandatorily provide access to particular channels[17] (a positive obligation, usually for government channels); 2) prevented from not providing particular channels (a negative obligation, to prevent anti-competitive behaviour and political censorship); and 3) must mandatorily offer access to at least a set number of channels (a positive obligation for ensuring market diversity).[18] Currently, only (1) is in force, despite attempts by TRAI to ensure (3) as well.[19]

If switching costs are low and network practices are reported transparently, in a standard manner, and are well publicised, the "gatekeeper effect" - which, as we saw earlier, is the reason why we wish to introduce Net Neutrality regulation - is significantly weakened. This consequently means, as explained above in section 3.2, that despite the same Net Neutrality principles applying in all markets and countries, the precise form that Net Neutrality regulations take in a telecom market with low switching costs would be different from the form that such regulations would take in a market with high switching costs.

3.3.2 Anti-competitive Practices

Some potential anti-competitive practices, which are closely linked, are cross-subsidization, tying (anti-competitive bundling) of multiple services, and vertical price squeezes. All three are particular concerns now, with the increased diversification of traditional telecom companies and the entry into telecom (as with DTH) of companies that create content. Hence, if Airtel cross-subsidizes the Hike chat application that it recently acquired,[20] or if Reliance Infocomm requires customers to buy a subscription to an offering from Reliance Big Entertainment, or if Reliance Infocomm meters traffic from a Reliance Big Entertainment offering differently from traffic from Saavn, all of those would be violative of the principle of non-discrimination by gatekeepers. The same analysis can be applied to all unpaid and non-commercial deals, including schemes such as Internet.org and Wikipedia Zero, which will be covered later in the section on zero-rating.

While we have general rules such as sections 3 and 4 of the Competition Act, we do not currently have specific rules prohibiting these or other anti-competitive practices, and we need Net Neutrality regulations that clearly prohibit such anti-competitive practices so that the telecom regulator can take action for non-compliance. We cannot leave these specific policy prescriptions unstated, even if they are provided for in section 3 of the Competition Act. These are particular concerns in the telecom sector, and the telecom regulator or arbitrator should have the power to deal with them directly, instead of each case going to the Competition Commission of India. This should not affect the jurisdiction of the CCI to investigate and adjudicate such matters, but should ensure both that TRAI has suo motu powers and that the mechanism to complain is simple (unlike the current scenario, where some individual complainants may fall through the cracks between TRAI and TDSAT).

3.3.3 Zero-rating

Since a large part of the net neutrality debate in India involves zero-rating practices, we deal with them at some length. Zero-rating is the practice of not counting (hence "zero-rating") certain traffic towards a subscriber's regular Internet usage. The zero-rated traffic could be zero-priced or fixed-price; capped or uncapped; subscriber-paid, Internet service-paid, paid for by both, or unpaid; content- or source/destination-based, or agnostic to content or source/destination; automatically provided by the ISP or chosen by the customer. The motivations for zero-rating may also be varied, as we shall see below. Further, depending on the circumstances, zero-rating could be competitive or anti-competitive. All forms of zero-rating result in some form of discrimination, but not all zero-rating is harmful, nor does all zero-rating need to be prohibited.

While, as explained in the section on interconnection and carriage above, negative discrimination at the network level should be prohibited, that leaves open the question of positive discrimination. It follows from section 3.1 that the right frame of analysis for this question is harm to competition, since the main harm from zero-rating, as we shall see below, is discrimination between different content providers, not discrimination at the level of protocols, etc.

Whether one should allow for any form of positive discrimination at the network level or not depends on whether positive discrimination of (X) has an automatic and unfair negative impact on all (~X). That, in turn, depends on whether (~X) is being subject to unfair competition. As Wikipedia notes, "unfair competition means that the gains of some participants are conditional on the losses of others, when the gains are made in ways which are illegitimate or unjust." Thus, positive discrimination that has a negative impact on effective competition shall not be permitted, since in such cases it is equivalent to negative discrimination ("zero-sum game"). Positive discrimination that does not have a negative impact on effective competition may be permitted, especially since it results in increased access and increases consumer benefit, as long as the harm to openness and diversity is minimized.

While considering this, one should keep in mind the fact that startups were, 10-15 years ago, at a huge disadvantage with regard to wholesale data purchase. The marketplaces for data centres and for content delivery networks (which speed up delivery of content by being located closer, in network terms, to multiple last-mile ISPs) were nowhere near as mature as they are today, and the prices were high. There was a much higher barrier to startup entry than there is today, due to the prices and due to larger companies being able to rely on economies of scale to get cheaper rates. Was that unfair? No. There is no evidence of anti-competitive practices, nor of startups complaining about such practices. Therefore, that was fair competition, despite specific input costs that were arguably needed (though not essential) for startups to compete being priced far beyond their capacity to pay.

Today the marketplace is very different, with a variety of offerings. CDNs such as Cloudflare, which were once the preserve of rich companies, even have free offerings, thus substantially lowering barriers for startups that want faster access to customers across the globe.

Is a CDN an essential cost for a startup? No. But in an environment where speed matters and customers use or don't use a service depending on speed, and where the startup's larger competitors are all using CDNs, a startup more or less has to. Thankfully, given the cheap access to CDNs these days, that cost is not too high for a startup to bear. If the CDN market were not competitive enough, would a hypothetical global regulator have been justified in outright banning the use of CDNs to 'level' the playing field? No, because the hypothetical global regulator instead had the option to regulate (and would have been justified in regulating) the market to ensure greater competition.

A regulator should not prohibit an act that does not negatively impact access, competition, consumer benefit, or openness (including diversity), since that would be over-regulation and would harm innovation.

3.3.3.1 Motivations for Zero-Rating

3.3.3.1.1 Corporate Social Responsibility / Incentivizing Customers to Move Up Value Chain

There exist multiple instances of zero-priced zero-rating of specific Internet content in which there is no commercial transaction between the OTT involved and the telecom carrier. We know that there is no commercial transaction either through written policy (Wikipedia Zero) or through public statements (Internet.org, a bouquet of sites). In such cases, the telecom provider would either be providing such services out of a sense of public interest, given the social value of those services, or out of self-interest, to showcase the value of particular Internet services and thereby encourage subscribers to move towards paid Internet access.

The apprehended risk is that such a scheme would create a "walled garden", where users would be exposed only to those services which are free, since the search and discovery costs of the non-free Internet (i.e., any site outside the "walled garden") would be rather high. This risk, while real, is rather slim: the telecom provider's economic interest lies in converting those customers who have the ability to pay for "Internet packs" but currently do not find a compelling reason to do so into paying customers, and that interest, whether grounded in public interest or in the provider's self-interest, works against maintaining a walled garden.

In such non-commercial zero-priced zero-rating, a telecom provider makes money if and only if subscribers start paying for sites outside of the walled garden. If subscribers are happy in the walled garden, the telecom provider starts losing money, and hence has a strong motivation to stop the scheme. If, on the other hand, enough subscribers become paying customers to offset the cost of providing the zero-priced zero-rated service(s) and make it profitable, that shows that despite the availability of zero-priced options a number of customers will opt for paid access to the open Internet and the open Web, and the overall harms of such zero-priced zero-rating would be minimal. Hence, telecom providers have an incentive to keep the costs of Internet data packs low, thus encouraging customers who otherwise wouldn't pay for the Internet to become paying customers.

There is a potential for consumer harm when users seek to access a site outside of the walled garden and find, to their dismay, that they have been charged for the Internet at a hefty rate and their prepaid balance has greatly decreased. This is an issue that TRAI is currently apprised of, and a suitable solution would need to be found to protect consumers against such harm.

All in all, given that the commercial interests of the telecom providers align with the healthy practice of non-discrimination, this form of limited positive discrimination is not harmful in the long run, particularly because it is not indefinitely sustainable for a large number of sites. Hence, it may not be useful to ban this form of zero-priced zero-rating of services as long as it isn't exclusive or otherwise anti-competitive (a vertical price squeeze, for instance), harm to consumers is prevented, and harm to openness/diversity is minimized.

3.3.3.1.2 Passing on ISP Savings / Incentivizing Customers to Lower ISP's Cost

Suppose, for instance, an OTT uses a CDN located, in network-distance terms, near an eyeball ISP. In this case, the ISP probably has to pay less than it would have had the same data been hosted in a data centre located further away, given that it would incur fewer interconnection-related charges.

Hence the monetary costs of providing access to different Web destinations are not equal for the ISP. This cost can be varied either by the OTT (by it locating the data closer to the ISP - through a CDN, by co-locating where the ISP is also present, or by connecting to an Internet Exchange Point which the ISP is also connected to - or by it directly "peering" with the ISP) or by the ISP (by engaging in "transparent proxying" in which case the ISP creates caches at the ISP level of specific content (usually by caching non-encrypted data the ISP's customers request) and serves the cached content when a user requests a site, rather than serving the actual site). None of the practices so far mentioned are discriminatory from the customer's perspective with regard either to price or to prioritization, though all of them enable faster speeds to specific content. Hence none of the above-mentioned practices are considered even by the most ardent Net Neutrality advocates to be violations of that principle. [21] However, if an ISP zero-rates the content to either pass on its savings to the customer[22] or to incentivize the customer to access services that cost the ISP less in terms of interconnection costs, that creates a form of price discrimination for the customer, despite it benefiting the consumer.
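
As a minimal sketch of the ISP-level "transparent proxying" described above (assuming unencrypted HTTP, so that the cache can be keyed on the requested URL), repeated requests for the same URL are served from the ISP's cache and never traverse its upstream transit links, which is what lowers the ISP's interconnection costs. The function and variable names are hypothetical.

import urllib.request

cache = {}  # URL -> response body held at the ISP's edge

def fetch_via_transparent_proxy(url):
    # Serve a cached copy when one exists; otherwise fetch from the origin,
    # cache the body, and serve it. Only the first request for a URL uses
    # the ISP's upstream (transit) links.
    if url in cache:
        return cache[url]
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    cache[url] = body
    return body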

The essential economic problem is that the cost to the ISP is variable, but the cost to the customer is fixed. Importantly, this problem is exacerbated in India, where web hosting prices are high, transit prices are high, peering levels are low, and Internet Exchange Points (IXPs) are not functioning well.[23] These conditions create network inefficiencies, with content hosted further away from Indian networks in network-distance terms, and thus harm consumers as well as local ISPs. In order to set this right, zero-rating of this sort may be permitted, as it acts as an incentive towards fixing the market fundamentals. However, once the market fundamentals are fixed, such zero-rating may be prohibited.

This example shows that the desirability or otherwise of discriminatory practices depends fully on the conditions present in the market, including in terms of interconnection costs.

3.3.3.1.3 Unbundling Internet into Services ("Special Packs")

Since at least early 2014, mobile operators have been marketing special zero-rating "packs". These packs, if purchased by the customer, allow capped, or in some instances uncapped, zero-rating of a service such as WhatsApp or Facebook, meaning that traffic to/from that service is not counted against the customer's regular Internet usage.

For a rational customer, purchasing such a pack only makes sense in one of two circumstances:

● The person has Internet connectivity on her Internet-capable phone, but has not purchased an "Internet data pack" since she doesn't find the Internet valuable. Instead, she has heard about "WhatsApp", has friends who are on it, and wishes to use that to reduce her SMS costs (and thereby eat into the carriage provider's ability to charge separately for SMSes). She chooses to buy a WhatsApp pack for around ₹25 a month instead of paying ₹95 for an all-inclusive Internet data pack.

● The person has Internet connectivity on her Internet-capable phone, and has purchased an "Internet data pack". However, that data pack is capped and she has to decide between using WhatsApp and surfing web sites. She is on multiple WhatsApp groups and her WhatsApp traffic eats up 65% of her data cap. She thus has to choose between the two, since she doesn't want to buy two Internet data packs (each costing around ₹95 for a month). She chooses to buy a WhatsApp pack for ₹25 a month, paying a cumulative total of ₹120 instead of the ₹190 she would have had to pay had she bought two Internet data packs. In this situation, "unbundling" is happening, and this benefits the consumer, even though such unbundling harms the openness and integrity of the Internet.

If users do not find value in the "special" data packs and there is no market demand for such products, they will cease to be offered. Thus, assuming a telco's decision to offer such packs is purely customer-demand driven - and not due to deals it has struck with service providers - if Orkut is popular, telcos would be interested in offering Orkut packs, and if Facebook is popular, they would be interested in offering a Facebook pack. Thus, clearly, there is nothing anti-competitive about such customer-paid zero-rating packs, and they clearly enhance consumer benefit. Would this increase the popularity of Orkut or Facebook? Potentially, yes. But to prohibit this would be like prohibiting a supermarket from selectively (and non-collusively) offering discounts on popular products. Would that make already popular products even more popular? Potentially, yes. But that would not be seen as a harm to competition; it would be seen as fair competition. Such packs do, however, contravene the "openness" of the Internet (i.e., the integral interconnected diversity that an open network like the Internet embodies) as an independent regulatory goal. The Internet, being a single gateway to a mind-boggling variety of services, allows for a diverse "long tail", which would lose out if the Internet were seen solely as a gateway to popular apps, sites, and content. However, given that this is a choice exercised freely by the consumer, such packs should not be prohibited, as that would be a case of over-regulation.

The one exception to the above analysis of competition, needless to say, is if these special packs aren't purely customer-demand driven but are the product of special deals between an OTT and the telco. In that case, we need to ensure the arrangement isn't anti-competitive, by following the prescriptions of the next section.

3.3.3.1.4 Earning Additional Revenues from Content Providers

With offerings like Airtel Zero, we have a situation where OTT companies are offering to pay for wholesale data access used by their customers, and make accessing their specific site or app free for the customer. From the customer's perspective, this is similar to a toll-free number or a pre-paid envelope or free-to-air TV channel being offered on a particular network.

However, from the network perspective, these are very different. Even if a customer-company pays Airtel for the toll-free number, that number is accessible and toll-free across all networks since the call terminates on Airtel networks and Airtel pays the connecting network back the termination charge from the fee they are paid by the customer-company. This cannot happen in case of the Internet, since the "call" terminates outside of the reach of the ISP being paid for zero-rating by the OTT company; hence unless specific measures are taken, zero-rating has to be network-specific.

The comparison to free-to-air channels is also instructive, since in 2010 TRAI recommended that consumers should have the choice of accessing free-to-air channels à la carte, without being tied to a bouquet.[24] This would, in essence, allow a subscriber to purchase a set-top box and watch free-to-air channels without paying a regular subscription fee.[25] However, similar to toll-free numbers, these free-to-air channels are free-to-air on all MSOs' set-top boxes, unlike the proposed Airtel Zero scheme, under which access to a site like Flipkart would be free for customers on Airtel's network alone.

Hence, these comparisons, while useful in helping think through the regulatory and competition issues, should not be used as instructive exact analogies, since they aren't fully comparable situations.

3.3.3.1.5 Market Options for OTT-Paid Zero-Rating

As noted above, a competitive marketplace already exists for wholesale data purchase at the level of "content ISPs" (including CDNs), which sell wholesale data to content providers (OTTs). This market is at present completely unregulated. The deals that exist are treated as commercial secrets. It is almost certain that large OTTs get better rates than small startups due to economies of scale.

However, at the eyeball ISP level, it is a single-sided market, with ISPs competing to gain customers in the form of end-users. With a scheme like "Airtel Zero", this would get converted into a double-sided market, with a gatekeeper - without whom neither side can reach the other - in the middle, creating a two-sided toll. This situation is ripe for market abuse: it allows the gatekeeper to hinder access to those OTTs that don't pay the requisite toll or to provide preferential access to those who do, apart from providing the ISP the opportunity to "double-dip".

One way to fix this is to prevent ISPs from establishing a double-sided market. The other way would be to create a highly-regulated market where the gatekeeping powers of the ISP are diminished, and the ISP's ability to leverage its exclusive access to its customers is curtailed. A comparison may be drawn here to the rules that are often set by standard-setting bodies where patents are involved: given that these patents are essential inputs, access to them must be allowed through fair, reasonable, and non-discriminatory licences. Access to the Internet and to common carriers like telecom networks, being even more important (since alternatives exist to particular standards, but not to the Internet itself), must be placed on an even higher pedestal and thus subjected to even stricter regulation to ensure fair competition.

A marketplace of this sort would impose some regulatory burdens on TRAI and place burdens on innovations by the ISPs, but a regulated marketplace harms ISP innovation less than not allowing a market at all.

At a minimum, such a marketplace must ensure non-exclusivity, non-discrimination, and transparency. Thus, at a minimum, a telecom provider cannot discriminate between any OTTs who want similar access to zero-rating. Further, a telecom provider cannot prevent any OTT from zero-rating with any other telecom provider. To ensure that telecom providers are actually following this stipulation, transparency is needed, as a minimum.

Transparency can take one of two forms: transparency to the regulator alone, and transparency to the public. Transparency to the regulator alone would enable OTTs and ISPs to keep the terms of their commercial transactions secret from their competitors, but enable the regulator, upon request, to ensure that this doesn't lead to anti-competitive practices. This model would increase the burden on the regulator, but would be more palatable to OTTs and ISPs, and more comparable to the wholesale data market, where the terms of such agreements are strictly-guarded commercial secrets. On the other hand, requiring transparency to the public would reduce the burden on the regulator, albeit at the cost of secrecy of commercial terms, and is far preferable.

Beyond transparency, a regulation could take the form of insisting on standard rates and terms for all OTT players, with differential usage tiers if need be, to ensure that access is truly non-discriminatory. This is how the market is structured on the retail side.

Since there are transaction costs in individually approaching each telecom provider for such zero-rating, the market would greatly benefit from a single marketplace where OTTs can come and enter into agreements with multiple telecom providers.

Even in this model, telecom networks would be charging not only on the basis of the number of customers they have, but also on the basis of their exclusive routing to those customers. Further, even under the standard-rates-based single-market model, a particular zero-rated site may be accessible for free from one network but not across all networks, unlike the situation with a toll-free number, where no such distinction exists.

To resolve this, the regulator may propose that if an OTT wishes to engage in paid zero-rating, it will need to do so across all networks, since if it doesn't there is risk of providing an unfair advantage to one network over another and increasing the gatekeeper effect rather than decreasing it.

However, all forms of competitive Internet service-paid zero-priced zero-rating, even when they don't harm competition, innovation amongst content providers, or consumers, will necessarily harm the openness and diversity of the Internet. For instance, while richer companies with a strong presence in India may pay to zero-rate traffic for their Indian customers, decentralized technologies such as XMPP and WebRTC, having no central company behind them, would not, leading to customers preferring proprietary networks and solutions over such open technologies, which in turn, thanks to the network effect, leads to a vicious cycle. These harms to openness and diversity have to be weighed against the benefit in terms of increased access when deciding whether to allow competitive OTT-paid zero-priced zero-rating, as such competition doesn't take place on a truly level playing field. Further, it must be kept in mind that there are forms of zero-priced zero-rating that decrease the harm to openness/diversity, or remove that harm altogether: the availability of these other options must be acknowledged by the regulator when considering the benefit to access from competitive OTT-paid zero-priced zero-rating.

3.3.3.1.6 Other options for zero-rating

There are other models of zero-priced zero-rating that either minimize the harm to the openness and diversity of the Internet or remove it altogether. One such model is that of ensuring free Internet access for every person. This can take the form of:[26]

● A mandatorily "leaky" 'walled garden':

○ The first-degree of all hyperlinks from the zero-rated OTT service are also free.

○ The zero-rated OTT service provider has to mandatorily provide free access to the whole of the World Wide Web to all its customers during specified hours.

○ The zero-rated OTT service provider has to mandatorily provide free access to the whole of the World Wide Web to all its customers, based on the amount of usage of the OTT service.[27]

● Zero-rating of all Web traffic

○ In exchange for viewing of advertisements

○ In exchange for using a particular Web browser

○ At low speeds on 3G, or on 2G.

3.3.3.2. What kinds of zero-rating are good

The majority of the forms of zero-rating covered in this section are content- or source/destination-based zero-rating. Only some of the options covered in the "other options for zero-rating" section are content-agnostic zero-rating models. Content-agnostic zero-rating models are not harmful, while content-based zero-rating models always harm, to varying degrees, the openness of the Internet and the diversity of OTTs, and to varying degrees increase access to Internet-based services. Accordingly, here is a hierarchy of desirability of zero-priced zero-rating, from most desirable to most harmful (a short illustrative sketch of this ordering follows the list):

1. Content- & source/destination-agnostic zero-priced zero-rating.[28]

2. Content- & source/destination-based non-zero-priced zero-rating, without any commercial deals, chosen freely & paid for by users.[29]

3. Content- & source/destination-based zero-priced zero-rating, without any commercial deals, with full transparency.[30]

4. Content- & source/destination-based zero-priced zero-rating, on the basis of a commercial deal with partial zero-priced access to all content, with non-discriminatory access to the same deal by all, with full transparency.[31]

5. Content- & source/destination-based zero-priced zero-rating, on the basis of a non-commercial deal, without any benefits, monetary or otherwise, flowing directly or indirectly from the provider of the zero-rated content to the ISP, with full transparency.[32]

6. Content- & source/destination-based zero-priced zero-rating, across all telecom networks, with standard pricing, non-discriminatory access, and full transparency.

7. Content- & source/destination-based zero-priced zero-rating, with standard pricing, non-discriminatory access, and full transparency.

8. Content- & source/destination-based zero-priced zero-rating, with non-discriminatory access, and full transparency.

9. Content- & source/destination-based zero-priced zero-rating, with non-discriminatory access, and transparency to the regulator.

10. Content- & source/destination-based zero-priced zero-rating, without any regulatory framework in place.
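The following is a minimal, purely illustrative sketch (in Python) of the ordering above; it is not part of the recommendations themselves. It simply encodes the ten tiers as an ordered list so that two hypothetical zero-rating proposals can be compared by their position in the hierarchy. The tier labels are paraphrases of the list, and the function names (tier_of, more_desirable) are invented for illustration only.

# Illustrative sketch: the zero-rating desirability hierarchy as an ordered list.
# Tier 1 is the most desirable (least harmful); Tier 10 is the most harmful.

ZERO_RATING_TIERS = [
    "content- & source/destination-agnostic zero-priced zero-rating",
    "content-based, non-zero-priced, no commercial deal, chosen & paid for by users",
    "content-based, zero-priced, no commercial deal, full transparency",
    "content-based, zero-priced, commercial deal with partial zero-priced access "
    "to all content, non-discriminatory access, full transparency",
    "content-based, zero-priced, non-commercial deal (no benefit to the ISP), full transparency",
    "content-based, zero-priced, across all networks, standard pricing, non-discriminatory, transparent",
    "content-based, zero-priced, standard pricing, non-discriminatory, transparent",
    "content-based, zero-priced, non-discriminatory, transparent",
    "content-based, zero-priced, non-discriminatory, transparency to the regulator only",
    "content-based, zero-priced, no regulatory framework in place",
]

def tier_of(tier_number: int) -> str:
    """Return the hierarchy label for a 1-based tier number (1 = most desirable)."""
    return f"Tier {tier_number}: {ZERO_RATING_TIERS[tier_number - 1]}"

def more_desirable(tier_a: int, tier_b: int) -> int:
    """Return whichever tier number the hierarchy treats as less harmful (i.e., the lower one)."""
    return min(tier_a, tier_b)

if __name__ == "__main__":
    print(tier_of(1))
    print("Preferred of tiers 6 and 9:", tier_of(more_desirable(6, 9)))

Used this way, the sketch only captures the relative ordering argued for above; it does not attempt to classify a real plan from its commercial terms.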

3.3.4 Cartels and Oligopoly

While cartels and oligopolies may have an impact on Net Neutrality, they are not problems that any set of anti-discrimination rules imposed on gatekeepers can fix. Further, cartels and oligopolies don't directly enhance the ability of gatekeepers to unjustly discriminate if there are firm rules against negative discrimination and price ceilings and floors on data caps are in place for data plans. Given this, TRAI should recommend that the Competition Commission of India investigate and take up this issue.

3.4 Reasonable Network Management Principles

Reasonable network management has to be allowed to enable ISPs to manage performance and costs on their networks. However, ISPs may not indulge in acts that are harmful to consumers in the name of reasonable network management. Below is a set of guidelines for when discrimination against classes of traffic in the name of network management is justified (an illustrative sketch of this test, expressed as a simple checklist, follows the list).

● Discrimination between classes of traffic for the sake of network management should only be permissible if:

○ there is an intelligible differentia between the classes which are to be treated differently, and

○ there is a rational nexus between the differential treatment and the aim of such differentiation, and

○ the aim sought to be furthered is legitimate, and is related to the security, stability, or efficient functioning of the network, or is a technical limitation outside the control of the ISP[33], and

○ the network management practice is the least harmful manner in which to achieve the aim.

● Provision of specialized services (i.e., "fast lanes") is permitted if and only if it is shown that

○ The service is available to the user only upon request, and not without their active choice, and

○ The service cannot reasonably be provided with the "best efforts" delivery guarantee that is available over the Internet, and hence requires discriminatory treatment, or

○ The discriminatory treatment does not unduly harm the provision of the rest of the Internet to other customers.
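As indicated above, here is a minimal, purely illustrative sketch of how the four cumulative limbs of the discrimination test might be applied as a checklist, for instance by a regulator assessing a complaint. The field and function names are hypothetical and are not drawn from any law or TRAI regulation.

# Illustrative sketch: the four-part test for discriminatory network management.
# All four limbs are cumulative ("and"), so every one must hold for the practice
# to be justified under the guidelines above.

from dataclasses import dataclass

@dataclass
class NetworkManagementPractice:
    intelligible_differentia: bool   # is there an intelligible differentia between the classes treated differently?
    rational_nexus: bool             # does the differential treatment have a rational nexus with its stated aim?
    legitimate_aim: bool             # is the aim related to security, stability, efficiency, or an external technical limit?
    least_harmful_means: bool        # is this the least harmful way of achieving that aim?

def is_justified(practice: NetworkManagementPractice) -> bool:
    """Return True only if every limb of the test is satisfied."""
    return (practice.intelligible_differentia
            and practice.rational_nexus
            and practice.legitimate_aim
            and practice.least_harmful_means)

if __name__ == "__main__":
    # Hypothetical example: throttling all video during peak hours to relieve congestion,
    # where less intrusive options (e.g., user-chosen tiers) were available.
    peak_hour_video_throttling = NetworkManagementPractice(
        intelligible_differentia=True,
        rational_nexus=True,
        legitimate_aim=True,
        least_harmful_means=False,
    )
    print("Justified under the test:", is_justified(peak_hour_video_throttling))  # False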

These principles are applicable only at the level of ISPs, and not to access gateways for institutions that may in some cases be run by ISPs (such as a university network, free municipal WiFi, or a workplace network), which are not to be regulated as common carriers.

These principles may be applied on a case-by-case basis by a regulator, either suo motu or upon complaint by customers.


[1] Report of the Special Rapporteur on the Promotion and Protection of the right to freedom of opinion and expression, (19 May 2011), http://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf.

[2] Available at http://www.trai.gov.in/WriteReadData/userfiles/file/NTP%202012.pdf.

[3] IAMAI, India to Cross 300 million internet users by Dec 14, (19 November, 2014), http://www.iamai.in/PRelease_detail.aspx?nid=3498&NMonth=11&NYear=2014.

[4] World Economic Forum, The Global Information Technology Report 2015, http://www3.weforum.org/docs/WEF_Global_IT_Report_2015.pdf.

[5] http://www.ictregulationtoolkit.org/4.1#s4.1.1

[6] See R.U.S. Prasad, The Impact of Policy and Regulatory Decisions on Telecom Growth in India (July 2008), http://web.stanford.edu/group/siepr/cgi-bin/siepr/?q=system/files/shared/pubs/papers/pdf/SCID361.pdf.

[7] 1973 AIR 106

[8] 1962 AIR 305

[9] "When ISPs go beyond their traditional use of IP headers to route packets, privacy risks begin to emerge." Alissa Cooper, How deep must DPI be to incur privacy risk? http://www.alissacooper.com/2010/01/25/how-deep-must-dpi-be-to-incur-privacy-risk/

[10] Richard T.B. Ma & Vishal Misra, The Public Option: A Non-Regulatory Alternative to Network Neutrality, http://dna-pubs.cs.columbia.edu/citation/paperfile/200/netneutrality.pdf

[11] Mobile number portability was launched in India on January 20, 2011 in the Haryana circle. See http://indiatoday.intoday.in/story/pm-launches-nationwide-mobile-number-portability/1/127176.html. Accessed on April 24, 2015.

[12] For a comprehensive list of all TRAI interconnection regulations & subsequent amendments, see http://www.trai.gov.in/Content/Regulation/0_1_REGULATIONS.aspx.

[13] See Telecommunication Interconnection Usage Charges (Eleventh Amendment) Regulations, 2015 (1 of 2015), available at http://www.trai.gov.in/Content/Regulation/0_1_REGULATIONS.aspx.

[14] Article 30 of the Universal Service Directive, Directive 2002/22/EC.

[15] See Telecommunication Mobile Number Portability (Sixth Amendment) Regulations, 2015 (3 of 2015), available at http://www.trai.gov.in/Content/Regulation/0_1_REGULATIONS.aspx.

[16] The Telecommunication (Broadcasting and Cable) Services (Seventh) (The Direct to Home Services) Tariff Order, 2015 (2 of 2015).

[17] Section 8, Cable Television Networks Act, 1995.

[18] TRAI writes new rules for Cable TV, Channels, Consumers, REAL TIME NEWS, (August 11, 2014), http://rtn.asia/rtn/233/1220_trai-writes-new-rules-cable-tv-channels-consumers.

[19] An initial requirement for all multi system operators to have a minimum capacity of 500 channels was revoked by the TDSAT in 2012. For more details, see http://www.televisionpost.com/cable/msos-not-required-to-have-500-channel-headends-tdsat/.

[20] Aparna Ghosh, Bharti SoftBank Invests $14 million in Hike, LIVE MINT, (April 2, 2014), http://www.livemint.com/Companies/nI38YwQL2eBgE6j93lRChM/Bharti-SoftBank-invests-14-million-in-mobile-messaging-app.html.

[21] Mike Masnick, Can We Kill This Ridiculous Shill-Spread Myth That CDNs Violate Net Neutrality? They Don't, https://www.techdirt.com/articles/20140812/04314528184/can-we-kill-this-ridiculous-shill-spread-myth-that-cdns-violate-net-neutrality-they-dont.shtml.

[22] Mathew Carley, What is Hayai's stance on "Net Neutrality"?, https://www.hayai.in/faq/hayais-stance-net-neutrality?c=mgc20150419

[23] Helani Galpaya & Shazna Zuhyle, South Asian Broadband Service Quality: Diagnosing the Bottlenecks, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1979928

[24] DTH players told to offer pay channels on la carte basis, HINDU BUSINESS LINE (July 22, 2010), http://www.thehindubusinessline.com/todays-paper/dth-players-told-to-offer-pay-channels-on-la-carte-basis/article999298.ece.

[25] The Telecommunication (Broadcasting and Cable) Services (Fourth) (Addressable Systems) Tariff Order, 2010.

[26] These suggestions were provided by Helani Galpaya and Sunil Abraham, based in some cases on existing practices.

[28] Example: free Internet access at low speeds, with data caps.

[29] Example: special "packs" for specific services like WhatsApp.

[30] Example: zero-rating of all locally-peered settlement-free traffic.

[31] Example: "leaky" walled gardens, such as the Jana Loyalty Program that provide limited access to all of the Web alongside access to the zero-rated content.

[32] Example: Wikipedia Zero.

[33] A carrier-grade NAT (CGNAT) would be an instance of a technology that poses such network limitations.

CIS Cybersecurity Series (Part 22) - Anonymous

by Purba Sarkar last modified Jul 13, 2015 01:40 PM
CIS interviews a Tibetan security researcher and information activist, as part of the Cybersecurity Series. He prefers to remain anonymous.

"I don't know technology but I am aware of the information people share with me. So yes, they can track you down through your mobile phone. The last time I was in Nepal, I met a westerner. We went to this restaurant and she asked me to take the battery out of the phone. That was the first time I had heard of this and so when I asked why she said that it is possible that people had followed us and it has happened to other Tibetans in Nepal..."

Centre for Internet and Society presents its twenty-second installment of the CIS Cybersecurity Series.

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

Freedom of Expression in a Digital Age

by Geetha Hariharan and Jyoti Panday — last modified Jul 15, 2015 02:42 PM
The Centre for Internet & Society, the Observer Research Foundation, the Internet Policy Observatory, the Centre for Global Communication Studies and the Annenberg School for Communication, University of Pennsylvania organized this conference on April 21, 2015 in New Delhi.

This report was edited by Elonnai Hickok


Effective research, policy formulation, and the development of regulatory frameworks in South Asia

Inside this Report

BACKGROUND TO THE CONFERENCE

THE ORGANIZERS

CONFERENCE PROGRAMME

WELCOME ADDRESS

SESSION 1: LEARNINGS FROM THE PAST

Vibodh Parthasarathi, Associate Professor, Centre for Culture, Media and Governance (CCMG), Jamia Millia Islamia University

Smarika Kumar, Alternative Law Forum

Bhairav Acharya, Advocate, Supreme Court and Delhi High Court & Consultant, CIS

Ambikesh Mahapatra, Professor of Chemistry, Jadavpur University

Questions & Comments

SESSION 2: CURRENT REALITIES

Cherian George, Associate Professor, Hong Kong Baptist University

Zakir Khan, Article 19, Bangladesh

Chinmayi Arun, Research Director, Centre for Communication Governance (CCG), National Law University (Delhi)

Raman Jit Singh Chima, Asia Consultant, Access Now

Questions & Comments

SESSION 3: LOOKING AHEAD

Sutirtho Patranobis, Assistant Editor, Hindustan Times

Karuna Nundy, Advocate, Supreme Court of India

Geeta Seshu, The Hoot

Pranesh Prakash, Policy Director, Centre for Internet & Society

Questions & Comments

Conclusion

Background to the Conference

As the Internet expands and provides greater access and enables critical rights such as freedom of expression and privacy, it also places censorship and surveillance capabilities in the hands of states and corporations. It is therefore crucial that there exist strong protections for the right to freedom of expression that balance state powers and citizen rights. While the Internet has thrown up its own set of challenges such as extremist/hate speech, the verbal online abuse of women, and the use of the Internet to spread rumours of violence, the regulation of content is a question that is far from settled and needs urgent attention. These are compounded by contextual challenges. What role can and should the law play? When is it justified for the government to intervene? What can be expected from intermediaries, such as social networks and Internet Service Providers (ISPs)? And what can users do to protect the right to free speech - their own and that of others?

Balancing freedom of expression with other rights is further complicated by the challenges of fast paced and changing technologies and the need for adaptable and evolving regulatory frameworks. By highlighting these challenges and questioning the application of existing frameworks we aim to contribute to further promoting and strengthening the right to freedom of expression across South Asia.

The Organizers

Centre for Internet & Society

Established in 2008, the Centre for Internet and Society (CIS) is a non-profit research organization that works on policy issues relating to freedom of expression, privacy, accessibility for persons with disabilities, access to knowledge and intellectual property rights, and openness (including open standards and open government data). CIS also engages in scholarly research on the budding disciplines of digital natives and digital humanities. CIS has offices in Bangalore and New Delhi.

Observer Research Foundation

ORF, established in 1990, is India's premier independent public policy think tank and is engaged in developing and discussing policy alternatives on a wide range of issues of national and international significance. The fundamental objective of ORF is to influence the formulation of policies for building a strong and prosperous India in a globalised world. It hosts India's largest annual cyber conference - CyFy: the India Conference on Cyber Security and Internet Governance.

The Annenberg School for Communication, The Centre for Global Communication Studies & the Internet Policy Observatory (U. Penn.)

The Annenberg School for Communication (ASC) at the University of Pennsylvania produces research that advances the understanding of public and private communications. The Center for Global Communication Studies (CGCS) is a focused academic center at ASC and a leader in international education and training in comparative media law and policy. It affords students, academics, lawyers, regulators, civil society representatives and others the opportunity to evaluate and discuss international communications issues. The Internet Policy Observatory (IPO) was started by CGCS to research the dynamic technological and political contexts in which Internet governance debates take place. The IPO serves as a platform for informing relevant communities of activists, academics, and policy makers, and for displaying collected data and analysis.

Conference Programme

'Freedom of Expression in a Digital Age' Effective Research, Policy Formation & the Development of Regulatory Frameworks in South Asia
April 21st, 2015 - 11 a.m. to 6 p.m.

at

The Observer Research Foundation

20, Rouse Avenue Institutional Area

New Delhi - 110 002, INDIA

About the Conference

The conference will be a discussion highlighting the challenges in promoting and strengthening online freedom of expression, and evaluating the application of existing regulatory frameworks in South Asia.

Agenda

Learnings from the Past (11:00 - 1:00)

● Overview of online FoEx policy and regulatory models across South Asia

● Definitions of FoEx across South Asia

● Impact of technology and markets on FoEx across South Asia

● Challenges to FoEx online across South Asia

● Effective research techniques and online FoEx

Current Realities (2:00 - 4:00)

● Enabling FoEx in South Asia

● Ways in which FoEx is, or may be, curtailed online

● Balancing FoEx and other digital rights

● The impact of jurisdiction, multi-national platforms, and domestic regulation on FoEx online

● Role and responsibility of intermediaries in regulating online speech across South Asia

Looking Ahead (4:15 - 6:00)

● Challenges associated with formulating regulation for online FoEx

● Ways forward to bridge existing gaps between policy formation and policy implementation with respect to FoEx online

● Exploring emerging regulatory questions for FoEx online

● Impacting and influencing the development and implementation of Internet regulation through research

● Exploration of the future role and interplay of technology and policy in enabling FoEx online

(Breaks: 1:00 - 2:00 and 4:00 - 4:15)

Ms. Mahima Kaul, Head (Cyber & Media Initiative), Observer Research Foundation (ORF), introduced the conference and its context and format, as well as the organisers. In three sessions, the Conference aimed to explore historical lessons, current realities and future strategies with regard to freedom of expression on the Internet in India and South Asia.

Mr. Manoj Joshi, Distinguished Fellow, ORF, provided the welcome address. Mr. Joshi highlighted the complexities and distinctions between print and electronic media, drawing on examples from history. He stated that freedom of expression is most often conceived as a positive right in the context of print media, as restrictions to the right are strictly within the bounds of the Constitution. For instance, during the riots in Punjab in the 1980s, when hate speech was prevalent, constitutionally protected restrictions were placed on the print media. When efforts were made to crack down on journalists with the introduction of the Defamation Bill in the 1980s, journalists were lucky that the Bill also included proprietors as those liable for defamation. This created solidarity between journalists and proprietors of newspapers to fight the Bill, and it was shelved.

Freedom of expression is necessary in a democratic society, Mr. Joshi stated, but it is necessary that this freedom be balanced with other rights such as privacy of individuals and the protection against hate speech. In the absence of such balance, speech becomes one-sided, leaving no recourse to those affected by violative speech.

In the digital age, however, things become complex, Mr. Joshi said. The freedom available for speech is enhanced, but so is the scope for misuse of that freedom. The digital space has been used to foment riots, commit cybercrime, and so on. In India, the restrictions placed on freedom of speech online have become draconian; Section 66A and the arrests made under it are an example of this. It is, therefore, important to consider the kind of restrictions that should be placed on free speech online. There is also the question of self-regulation by online content-creators, but this is rendered complex by the fact that no one owns the Internet. This conference, Mr. Joshi said, would help develop an understanding of what works and what frameworks we will need going forward.

Session 1: Learnings from the Past

Mr. Pranesh Prakash, Policy Director, Centre for Internet & Society (CIS), introduced the speakers for the first session. Mr. Vibodh Parthasarathi, Associate Professor, Centre for Culture, Media and Governance, Jamia Millia Islamia University, would first share his views and experience regarding the various ways of curtailing freedom of expression by the State, markets and civil society. Ms. Smarika Kumar of the Alternative Law Forum (ALF) would then expand on structural violations of freedom of expression. Mr. Bhairav Acharya, Advocate with the Delhi Bar and Consultant for CIS, would throw light on the development of free speech jurisprudence and policy in India from the colonial era, while Prof. Ambikesh Mahapatra, Professor of Chemistry, Jadavpur University, was to speak about his arrest and the charges against him under Section 66A of the Information Technology Act, 2000 (am. 2008), providing insight into the way Section 66A was misused by the police and the West Bengal government.

Vibodh Parthasarathi, Associate Professor, Centre for Culture, Media and Governance (CCMG), Jamia Millia Islamia University

Mr. Parthasarathi began his talk with an anecdote, narrating an incident when he received a call from a print journalist, who said "TV people can get away with anything, but we can't, and we need to do something about it." The notion of news institutions getting away with non-kosher actions is not new - and has been a perception since the 19th century. He stressed that there have always been tensions between Freedom of Expression, access, and other rights. Curtailment happens not just by the state, but by private parties as well - market and civil society. Indeed, a large number of non-state actors are involved in curtailing FoE. Subsequently a tension between individual FoE and commercial speech freedom is emerging. This is not a new phenomenon. Jurisprudence relating to free speech makes a distinction between the persons in whom the right inheres: individuals on the one hand (including journalists and bloggers), and proprietors and commercial entities on the other.

In India, freedom of speech cases - from 1947 - relate primarily to the rights of proprietors. These cases form the legal and constitutional basis for issues of access, transmission and distribution, but are not necessarily favourable to the rights of individual journalists or newsreaders. At the individual level, the freedom to receive information is equally important, and needs to be explored further. For entities, it is crucial to consider the impact of curtailment of speech (or threats of curtailment) on entities of different sizes and kinds.

Mr. Parthasarathi further explained that online, freedom of expression depends on similar structural conditions and stressed that scholarship must study these as well. For example, intermediaries in the TV industry and online intermediaries will soon come together to provide services, but scholarship does not link them yet. The law is similarly disjointed. For instance, 'broadcasting' falls in the Union List under Schedule VII of the Constitution, and is centrally regulated. However, distribution is geographically bounded, and States regulate distribution. In order to have a cohesive broadcast regulation, he raised the point that the placement of 'broadcasting' in the Union List may need to be re-thought.

According to Mr. Parthasarathi, the underlying conceptual basis - for the interlinked scholarship and regulation of intermediaries (online and broadcast), of commercial speech and individual access to information, and of censorship (State and private, direct and structural) - lies in Article 19(1)(a). He noted that there is a need to rethink the nature of this freedom. For whom do we protect freedom of speech? For individuals alone, or also for all private entities? From what are we protecting this freedom? For Mr. Parthasarathi, freedom of speech needs to be protected from the State, the market, civil society and those with entrenched political interests. Additionally, Mr. Parthasarathi raised the question of whether, in the online context, the freedom of the enterprise becomes antithetical to universal access.

Mr. Parthasarathi also highlighted that it is important to remember that freedom of expression is not an end in itself; it is a facilitator - the 'road' - to achieve crucial goals such as diversity of speech. But if diversity is what freedom of expression should enable, it is important to ask whether the institutional exercise of freedom has led to enhanced diversity of speech. Do media freedom and media diversity go together? For Mr. Parthasarathi, media freedom and media diversity do not always go together. The most vivid example of this is the broadcast environment in India following the deregulation of broadcast media beginning in the mid-1990s - much of which was done through executive orders on an ad hoc basis.

This led to infrastructural censorship, in addition to the ex-post curtailment of content. Increasingly, the conditions under which content is produced are mediated: which entities are eligible to obtain licenses, what type of capital is encouraged or discouraged, how market dominance is measured, and how interests accumulate across content and carriage, or across various carriage platforms. Mediating the conditions of producing speech, or infra censorship, is primarily operationalised through regulatory silences, as illustrated by the absence of any coherent or systematic anti-competitive measures.

Indian courts are champions in protecting the freedom of expression of 'outlets' - of proprietors and entities. But this has not led to diversity of speech and media. Perhaps there is a need to rethink and reformulate ideas of freedom. He pointed out that it is not enough merely to look at ex post curtailment of speech (i.e., the traditional idea of censorship). Instead, the conditions in which speech is made and censored need to be explored; only then can our understanding expand. Mr. Parthasarathi ended his talk by stressing that a proactive understanding of freedom of expression can highlight architectural curtailment of speech through the grant of licenses, competition and antitrust laws, media ownership and concentration across carriage and content, etc. This is essential in a digital age, where intermediaries play a crucial, growing role in facilitating freedom of speech.

Smarika Kumar, Alternative Law Forum
Beginning where Mr. Parthasarathi left off, the focus of Ms. Kumar's presentation was the curtailment of speech and the conditions under which speech is produced. At the outset, she sought from the audience a sense of the persons for whom freedom of speech is protected: for government-controlled media, the markets and commercial entities, or for civil society and citizens? Ms. Kumar aimed to derive ideas and conceptual bases to understand freedom of speech in the digital space by studying judicial interpretations of Article 19(1)(a) and its limitations. Towards this end, she highlighted some Indian cases that clarify the above issues.

Ms. Kumar began with Sakal Papers v. Union of India [AIR 1962 SC 305]. In Sakal Papers, the issue concerned the State's regulation of speech by regulation of the number of permitted pages in a newspaper. This regulation was challenged as being in violation of Article 19(1)(a) of the Constitution. The rationale for such regulation, the State argued, was that newsprint, being imported, was a scarce commodity, and therefore needed to be equitably distributed amongst different newspapers - big or small. Further, the State defended the regulation citing its necessity for ensuring equal diversity and freedom of expression amongst all newspapers. The petitioners in the case argued that such a regulation would negatively impact the newspapers' right to circulation by reducing the space for advertisements, and thus forcing the newspaper to increase selling prices. Readers of the newspaper additionally argued that such increase in prices would affect their right to access newspapers by making them less affordable, and hence such regulation was against the readers' interests. Ultimately, the Supreme Court struck down the regulation. The Constitution Bench noted that if the number of pages of a newspaper were to be limited and regulated, the space available for advertisements would reduce. Were advertisements to reduce, the cost of newspapers would increase, affecting affordability and access to information for the citizens. Ultimately, newspaper circulation would suffer; i.e., the State's regulation affected the newspapers' right of circulation which would amount to a violation of freedom of expression as the right extends to the matter of speech as well as the ability to circulate such speech.

Apart from the number of pages, the Indian government has sought to regulate newsprint in the past. In Bennett Coleman and Co. & Ors. v. Union of India [AIR 1973 SC 106], a Constitution Bench of the Supreme Court considered whether regulation of the number of pages permitted in a newspaper constituted an unreasonable restriction on freedom of expression. Towards this, the Government of India had set forth a Newsprint Policy in 1972, under the terms of which the number of pages of all papers was to be limited to ten; where there were small newspapers that did not reach the ten-page limit, a 20% increase was permitted; and finally, new newspapers could not be started by common ownership units. The Newsprint Order aimed to regulate a scarce resource (newsprint), while the Newsprint Policy sought to promote small newspapers, encourage equal diversity among newspapers and prevent monopolies. The Supreme Court upheld the Newsprint Order, stating that newsprint was indeed a scarce resource, and that the import and distribution of newsprint was a matter of government policy. The Court would not interfere unless there was evidence of mala fides. However, the Court struck down the Newsprint Policy for reasons similar to Sakal Papers: the rights afforded to newspapers under Article 19(1)(a) - including circulation - could not be abridged for reasons of protecting against monopolies.

In his dissenting opinion, Justice Mathew stated that in conceiving freedom of expression, it is important to also consider the hearer (the reader). For Justice Mathew, Meiklejohn's view that "what is essential is not that everyone shall speak, but that everything worth saying shall be said" cannot be given effect if, because of concentration of media ownership, media are not available to most speakers. In such a situation, "the hearers [cannot] be reached effectively". The imperative, however, is to maximise diversity of speech. For this, we need to balance the rights of citizens against those of the press; i.e., the rights of the reader against those of the speaker.

Ms. Kumar pointed out that this was the first case to consider the right of readers to access a diversity of speech. Justice Mathew distinguished between curtailment of speech by the state and by the market, a distinction that is crucial in the digital age, where information is predominantly accessible through and because of intermediaries. Ms. Kumar further stressed that especially in an age where 'walled gardens' are a real possibility (in the absence of net neutrality regulation, for instance), Justice Mathew's insistence on the rights of readers and listeners to a diversity of speech is extremely important.

Ms. Kumar went on to explain that though judges in the Supreme Court recognised the rights of readers/listeners (us, the citizens) for the purposes of news and print media, a similar right is denied to us in the case of TV. In Secretary, Ministry of Information & Broadcasting v. Cricket Association of Bengal [AIR 1995 SC 1236], the issue surrounded private operators' right to use airwaves to broadcast. The Supreme Court considered whether government agencies and Doordarshan, the government broadcaster, "have a monopoly of creating terrestrial signals and of telecasting them or refusing to telecast them", and whether Doordarshan could claim to be the single host broadcaster for all events, including those produced or organised by the company or by anybody else in the country or abroad. The Supreme Court held that the TV viewer has a right to a diversity of views and information under Article 19(1)(a), and also that the viewer must be protected against the market. The Court reasoned that "airwaves being public property, it is the duty of the state to see that airwaves are so utilised as to advance the free speech right of the citizens, which is served by ensuring plurality and diversity of views, opinions and ideas".

If every citizen were afforded the right to use airwaves at his own choosing, "powerful economic, commercial and political interests" would dominate the media. Therefore, instead of affirming a distinct right of listeners, the Court conflated the interests of government-controlled media with those of the listeners, on the ground that government media fall under public and parliamentary scrutiny. According to Ms. Kumar this is a regressive position that formulates State interest as citizen interest. Ms. Kumar argued that in order to ensure freedom of speech there is a need to frame citizens' interests as distinct from those of the market and the government.

Bhairav Acharya, Advocate, Supreme Court and Delhi High Court & Consultant, CIS
Mr. Acharya's presentation focused on the divergence between the jurisprudence and policy surrounding freedom of expression in India. According to him, the policies of successive governments in India - from the colonial period and thereafter - have developed at odds with case-law relating to freedom of expression. Indeed, it is possible to discern from the government's actions over the last two centuries a relatively consistent narrative of governance which seeks to bend the individual's right to speech to its will. The defining characteristics of this narrative - the government's free speech policy - emerge from a study of executive and legislative decisions chiefly in relation to the press, that continue to shape policy regarding the freedom of expression on the Internet. Thus, there has been consistent tension between the individual and the community, as well as the role of the government in enforcing the expectations of the community when thwarted by law.

Today, free speech scholarship (including digital speech) fails to take into account this consistent divergence between jurisprudence and policy. Mr. Acharya pointed out that we think of digital speech issues as new, whereas there is an immense amount of insight to gain by studying the history of free speech and policy in India.

Towards this, Mr. Acharya highlighted that to understand the dichotomy between modern and native law and free speech policy, it is useful to go back to the early colonial period in India, when Governor-General Warren Hastings established a system of courts in Bengal's hinterland to begin the long process of displacing traditional law and creating a modern legal system. J. Duncan M. Derrett notes that the colonial expropriation of Indian law was marked by a significant tension caused by the repeatedly stated objective of preserving some fields of native law, creating a dichotomous legal structure. These efforts were assisted by orientalist jurists such as Henry Thomas Colebrooke, whose interpretation of the dharmasastras heralded a new stage in the evolution of Hindu law. By the mid-nineteenth century, this dual system came under strain in the face of increasing colonial pressure to rationalise the legal system to ensure more effective governance, and native protest at the perceived insensitivity of the colonial government to local customs.

Mr. Acharya explained that Indian policy research displays a similar myopia with regard to social censorship (i.e., social custom as a limit on free speech). Law and society scholars have long studied the phenomenon of social censorship, but policy research dismisses it as a purely academic pursuit. The truth, however, is that free speech has been regulated by a dual policy of law and social custom in India since colonial times. Elijah Impey, the first Chief Justice of the Supreme Court at Fort William in Bengal, required officers to respect local customs, and this extended to free speech as well. But as colonial courts did not interpret Hindu law correctly, interpretations of freedom of speech suffered as well. Mr. Acharya noted that the restrictions on freedom of speech introduced by the British continue to affect individuals in India today. Prior to British amendments, India had drawn laws from multiple sources - indeed, customs and laws were tailored for communities and contexts, and not all were blessed with the consistency and precedent so familiar to the common law. Since the British were unable to make sense of India's laws and customs, they codified the principles of English customary law.

The Indian Penal Code (IPC) saw the codification of English criminal law (the public offences of riot, affray, unlawful assembly, etc., and private offences such as criminal intimidation). Macaulay's initial drafts of the IPC did not contain sedition or offences of hurting religious sentiments. Sections 124A ("Sedition") and 295A ("Deliberate and malicious acts intended to outrage religious feelings of any class by insulting its religion or religious beliefs") were added to the IPC later, in 1870 and 1927 respectively, and changes were made to the Code of Criminal Procedure as well. Today, these sections are used to restrict and criminalise digital speech.

The Right to Offend:

Mr. Acharya then considered the history of the "right to offend", in light of the controversies surrounding Section 66A of the IT Act. Before the insertion and subsequent strengthening of Section 295A, citizens in India had a right to offend others within the bounds of free speech. He explained that in 1925 a pamphlet, "Rangila Rasool", was published by the Lahore-based Mahashe Rajpal (the name(s) of the author(s) were never revealed). The pamphlet concerned the marriages and sex life of the Prophet Mohammed, and created a public outcry. Though the publisher was acquitted of all charges and the pamphlet was upheld, he was ambushed and stabbed when he walked out of jail. Under pressure from the Muslim community, the British enacted Section 295A, IPC. The government was seeking to placate and be sensitive to public feeling, entrenching the idea that the government may sacrifice free speech in the face of riots and similar unrest. The death of India's "right to offend" begins here, said Mr. Acharya.

A prior restraint regime was created and strengthened in 1835, then in 1838, etc. At this time, the press in India was largely British. Following the growth of Indian press after the 1860s, the British made their first statutory attempt at censorship in 1867: a prior sanction was required for publication, and contravention attracted heavy penalties such as deportation and exile. Forfeiture of property, search and seizures and press-inspections were also permitted by the government under these draconian laws. Mr. Acharya noted that it is interesting that many leaders of India's national movement were jailed under the press laws.

Independence and After:

Mr. Acharya further explained that the framers of the Constitution deliberately omitted "freedom of the press" from the text of Article 19(1)(a), and that Jawaharlal Nehru did not think the press ought to be afforded such a right. This is despite a report of the Law Commission of India, which recommended that corporations be provided an Article 19 right. But why distrust the press, when citizens are granted the freedom of speech and expression under Article 19(1)(a)? In Mr. Acharya's opinion, this is evidence of the government's divergent approach towards free speech policy; and today, we experience this as a mistrust of the press, of publications, and of online speech.

Mr. Acharya also explained that statutory restrictions on free speech grew at odds with judicial interpretation in the 1950s. Taking the examples of Romesh Thapar v. the State of Madras [AIR 1950 SC 124] and Brij Bhushan v. the State of Delhi [(1950) Supp. SCR 245], Mr. Acharya showed how the judiciary interpreted Article 19 favourably. Despite the government's arguments about a danger to public order, the Supreme Court refused to strike down left wing or right wing speech (Romesh Thapar concerned a left wing publication; Brij Bhushan concerned right wing views), as "public order" was not a ground for restricting speech in the Constitution. The government reacted to the Supreme Court's judgements by enacting the First Amendment to the Constitution: Article 19(2) was amended to insert "public order" as a ground to restrict free speech. Thus, it is possible to see the divergence between free speech jurisprudence and policy in India from the time of Independence. Nehru and Sardar Vallabhbhai Patel supported the amendment, while B.R. Ambedkar supported Romesh Thapar and Brij Bhushan. On the other hand, then-President Rajendra Prasad sought Constitutional protection for the press.

Why Study Free Speech History?

Mr. Acharya noted how the changes in free speech policy continue to affect us, including in the case of content restrictions online. In the 1950s, then-Prime Minister Nehru appointed the First Press Commission, and the newspaper National Herald was established to promote certain (left wing) developmental and social goals. Chalapati Rao was the editor of the National Herald, and a member of the First Press Commission.

At that time, the Commission rejected vertical monopolies of the press. However, today, horizontal monopolies characterize India's press. The First Press Commission also opposed 'yellow journalism' (i.e., sensational journalism and the tabloid press), but this continues today. Decades later, Prime Minister Indira Gandhi called for a "committed bureaucracy, judiciary and press", taking decisive steps to ensure the first two. For instance, Justice Mathew (one of the judges in the Bennett Coleman case) was an admirer of Indira Gandhi. As Kerala's Advocate General, he wanted the Press Registrar to have investigative powers similar to those given in colonial times; he also wanted the attacks on government personalities to be criminalized. The latter move was also supported by M.V. Gadgil, who introduced a Bill in Parliament that sought to criminalise attacks on public figures on the grounds of privacy. Mr. Acharya noted that though Indira Gandhi's moves and motives with regard to a "committed press" are unclear, the fact remains that India's regional and vernacular press was more active in criticizing the Emergency than national press.

Demonstrating the importance of understanding a context's history - both social and legislative - Mr. Acharya noted that, following the striking down of Section 66A in Shreya Singhal & Ors. v. Union of India (Supreme Court, March 24, 2015), elements in the government have stated their wish to introduce and enact a new Section 66A. Such moves show that, despite the striking down of Section 66A, it is still possible for the repressive and mistrustful history of press policy to carry forward in India. This possibility is supported by the colonial and post-Independence press history and policy developed by the government. When looking at how research can impact policy, greater awareness of history and context may allow civil society, academia, and the public at large to predict and prepare for changes in press policy.

Ambikesh Mahapatra, Professor of Chemistry, Jadavpur University

Prof. Mahapatra introduced himself as a victim of the West Bengal administration and ruling party. He stated that though India's citizens have been granted the protection of fundamental rights after Independence, these rights are not fully protected; his experience with the West Bengal ruling party and its abuse of powers under the Information Technology Act, 2000 (am. 2008) ("IT Act") highlights this.

On March 23, 2012, Prof. Mahapatra had forwarded a cartoon to his friends by email. The cartoon poked fun at West Bengal Chief Minister Mamata Banerjee and her ruling party. On the night of April 12, 2012, individuals not residing in the Professor's housing colony confronted him, dragging him to the colony building and assaulting him. These individuals forced Prof. Mahapatra to write a confession about his forwarding of the cartoon and his political affiliations. Though the police arrived at the scene, they did not interfere with the hooligans. Moreover, when the leader of the hooligans brought the Professor to the police and asked that he be arrested, they did so even though they did not have an arrest warrant. At the police station, the hooligans filed a complaint against him. The Professor was asked to sign a memo mentioning the charges against him (Sections 114 and 500, Indian Penal Code, 1860 & Section 66A, IT Act). Prof. Mahapatra noted that the police complaint had been filed by an individual who was neither the receiver nor the sender of the email, but was a local committee member with the Trinamool Congress (the West Bengal ruling party).

The arrest sparked a series of indignant responses across the country. The West Bengal Human Rights Commission took suo motu cognizance of the arrest, and recommended action against the high-handedness of the police. Fifty-six intellectuals appealed to the Prime Minister of India to withdraw the case; former Supreme Court judge Markandey Katju was among those who appealed. Thirty cartoonists' organisations from across the world also appealed to the President and the Prime Minister to withdraw the case.

The West Bengal government paid no heed to the protests, and Chief Minister Mamata Banerjee publicly supported the actions of the police - making public statements against Justice Katju and A.K. Ganguly, former judge of the Supreme Court and head of the West Bengal Human Rights Commission respectively. A charge sheet was framed against Prof. Mahapatra and others, with Section 66A as one of the charges.

The case has been going on for over two years. Recently, on March 10, 2015, the Calcutta High Court upheld the recommendations of the West Bengal Human Rights Commission, and directed the government to implement them. The West Bengal government has preferred an appeal before a division bench, and the case will continue. This is despite the fact that Section 66A has been struck down (by the Supreme Court in Shreya Singhal & Ors. v. Union of India).

Though noting that he was not an expert, Prof. Mahapatra put forward that it seemed that the freedom of expression of the common man depends on the whims of the ruling parties and the State/Central governments. It is of utmost importance, according to him, to protect the common man's freedom of speech, for his recourse against the government and powerful entities is pitifully limited.

Questions & Comments

Q. A participant stated that the core trouble appears to lie in the power struggle of political parties. Political parties wish to retain power and gather support for their views. Despite progressive laws, it is the Executive that implements the laws. So perhaps what is truly required is police and procedural reforms rather than legislative changes.

A. Members of the panel agreed that there is a need for more sensitivity and awareness amongst law enforcement agencies, and that this would be a long overdue and much needed step in protecting the rights of citizens.

Q. A participant was interested in understanding how it might be possible to correct the dichotomy between FoE policy and doctrine. The participant also wanted the panel to comment on instances of progressive policy-making, if any.

A. Members of the panel stated that there is no easy way of correcting this dichotomy between custom and law. Scholars have also argued that the relationship between custom and pernicious social censorship is ambiguous. Towards this, more studies are required to come to a conclusion.

Q. A participant requested clarity on what rights can be created to ensure and support a robust right to freedom of expression, and asked how this might affect the debates surrounding net neutrality.

A. Members of the panel noted that the Internet allows citizens and corporations to regulate speech on their own (private censorship), and this is problematic. Members of the panel also responded that the existing free speech right does not enable diversity of speech. Social and local customs permit social censorship, and this network effect is clearly visible online; individuals experience a chilling effect. Finally, in the context of net neutrality, the interests of content-producers (OTTs, for instance) are different from those of users. They may benefit economically from walled gardens or from non-interference with traffic-routing, but users may not. Therefore, there is a need for greater clarity before coming to a conclusion about potential net neutrality regulation.

Session 2: Current Realities

Dr. Cherian George, Associate Professor, Hong Kong Baptist University
Dr. George began his talk by highlighting that there is no issue as contentious as offensive speech and the question of how it should be dealt with. The debate around free speech is often framed as a battle between those who support democracy and those who oppose it. Yet, this is also a tension within democracy: citizens should not be unjustly excluded from participating in it (the companion rights in Articles 19 and 20, ICCPR). Relevant UN institutions and Article 19 have come up with reports and ideals that should be universally adopted - norms that apply to many areas, including speech. These norms are different from traditional approaches. For example:

Human Rights Norms vs. the Traditional Approach

● Human rights norms: regulate incitement of violence (discrimination, hate, etc.). Traditional approach: the law protects people's feelings from speech that offends.

● Human rights norms: protect minorities, as they are more vulnerable to exploitation and the uprooting of their values. Traditional approach: the law sides with the majority, protecting mainstream values over minority values.

● Human rights norms: allow robust criticism of ideas, religions, and beliefs. Traditional approach: the law protects religion, beliefs, and ideas from criticism.

● Human rights norms: strive for a balance between liberty and equality. Traditional approach: the law aims for order and the maintenance of the status quo.

● Human rights norms: promote harmony through the media. Traditional approach: harmony is enforced by the state.

Commenting on the traditional approach, Dr. George noted that if the state protects feelings of offence against speech, it allows groups to use such protection as a political weapon: "hate spin", which is the giving or taking of offence as a political strategy. Hate spin is normally framed as a "visceral, spontaneous reaction" to a video, writing, or speech, etc. Yet, the spontaneous reaction of indignation to speech or content can consistently be revealed to result from conscious manipulation by middlemen for political purposes.

South Asia is similar to West Asia in that the legal frameworks provide immunity for dangerous speech. In practice, this allows for the incitement of discrimination, hostility, and violence. At the same time, the legal frameworks allow for excessive sympathy for wounded feelings, and the taking of offence often turns into a political strategy. Power enters the equation here: the law allows the powerful to take offence and use hate speech against those not in powerful positions.

Dr. George highlighted a number of legal quandaries surrounding freedom of expression including:

  1. Enforcement gaps: There is a lack of enforcement of existing laws against incitement.
  2. Non-regulated zones: Socio-political research demonstrates that many problems cannot be regulated, and yet the law can only deal with what can be regulated. Hate speech is one of these as hate speech is not in the speech itself, but in the meaning that is produced in the mind of those saying/listening.
  3. Verdict-proof opportunities: Political entrepreneurs can use legislative and judicial processes to mainstream hateful views, regardless of how legislatures and courts ultimately act. The religious right, for instance, can always pit itself morally against "secular" decisions of apex authorities (the Supreme Court, etc.). For example, in the context of the US and Islamophobia, the state legislature in Alabama introduced an anti-Shariah law. The law targets a non-existent threat and appears to be a ploy to normalize anti-Muslim sentiments, including in political rhetoric. Intolerant groups do not need to win a court case in order to introduce and entrench the language of intolerance in public discourse and discussion. This demonstrates that there is a need to begin moving away from a purely legal analysis (interpretation or development) of the laws, and to begin studying these issues through a sociological lens.

Zakir Khan, Article 19, Bangladesh
Mr. Khan introduced Article 19 and its work in Bangladesh and the rest of South Asia. He noted that Article 19 is involved in documenting and analysing laws and regulations affecting freedom of expression, including in Bangladesh. Article 19 also campaigns for changes in law and policy, and responds from a policy perspective to particular instances of government overreach.

Mr. Khan explained that India has the Information Technology Act, 2000 (am. 2008) ("IT Act"), and in Bangladesh, the equivalent legislation is the Information and Communication Technology Act, 2006 ("ICT Act"). The ICT Act was enacted to bring Bangladeshi law in conformity with international law; i.e. in accordance with the UNCITRAL model law on e-commerce and online transactions. The ICT Act deals with hacking, crimes committed with the use of a computer system, breach of data, breach of computer system, and hardware.

Like the IT Act in India, Bangladesh's ICT Act also criminalizes speech and expression online. For instance, Section 57, ICT Act, criminalizes the publication of "fake, obscene or defaming information in electronic form". Similarly, bringing damage to "the state's image" online is criminalized. In 2013, the Bangladesh Ministry of Law amended the ICT Act to increase penalties for online offences, and allow for the detention of suspected offenders, warrantless arrests and indefinite detention without bail. Bloggers and activists have been protesting these changes, and have been targeted for the same.

Mr. Khan noted that Article 19 has developed a tool to report violations online. Individuals who have experienced violations of their rights online can post this information onto a forum, wherein Article 19 tracks and reports on them, as well as creating awareness about the violation. Any blogger or online activist can come and voice concerns and report their stories. Mr. Khan also highlighted that given the ICT Act and the current environment, online activists and bloggers are particularly threatened. Article 19 seeks to create a safe space for online bloggers and activists by creating anonymity tools, and by creating awareness about the distinctions between political agenda and personal ideology.

Chinmayi Arun, Research Director, Centre for Communication Governance (CCG), National Law University (Delhi)
Ms. Arun began by noting that conversations around freedom of expression usually look at the overlap between FoE and content, i.e., the focus is on the speaker and the content. Yet, when one targets the mediator, the focus shifts, as the issue is then approached from the intermediary's perspective. When a structural violation of free speech happens, it either places the middleman in the position of carrying through the violation, or creates a structure through which speech violations are incentivized.

An example of this is the Baazee.com case. At the time of the case, the law was structured in such a way that not only were the perpetrators who created unlawful content punished, but so were the bodies or persons that circulated it. In regulatory terms this is known as "gatekeeper liability". In the Baazee.com case, a private party put obscene content up for sale, and Baazee.com could not and did not verify all of the content that was on sale. The Delhi High Court held Avnish Bajaj, the CEO of Baazee.com, liable on the precedent of strict liability for circulation of obscene content, a standard established in the Ranjit Udeshi case. The standard of strict liability is still the norm for non-online content, but after the Baazee.com case, a Parliamentary Standing Committee created a safe harbour for online intermediaries under Section 79 of the IT Act. As per the provision, if content has been published online but an intermediary has not edited or directly created it, the intermediary can seek immunity from liability for that content. The Parliamentary Standing Committee further stated that intermediaries ought to exercise due diligence. Thus, the Indian legal regime provides online intermediaries with immunity only if the content has not been published or edited by the intermediary and due diligence, as defined by the Rules under the Act, has been exercised. While developing India's legal regime for intermediary liability, the Parliamentary Standing Committee did not focus on the impact of such regulation on online speech.

To a large extent, present research and analysis of freedom of expression is focused on the autonomy of the speaker. An alternative way of understanding the right, offered by Robert Post through his theory of democratic self-governance, is that freedom of expression is more about the value of the speech than the autonomy of the speaker. On this theory, the object of freedom of expression is to ensure diversity of speech in the public sphere, and the question to ask is: "Is the curtailment affecting democratic dialogue?" The Supreme Court of India has recognized in a variety of cases that people have a right to know, listen to and receive information. Ms. Arun explained that if one accepts this theory of speech, the liability of online intermediaries will be seen differently.

Ms. Arun further explained that in Shreya Singhal the notice-and-takedown regime under Section 79 of the IT Act was read down, but the blocking regime under Section 69A was not. Thus, the government can still use intermediaries as proxies to take down legitimate content, without providing individuals with the opportunity to challenge blocking orders, because the Blocking Rules require blocking orders to be kept confidential. Though the blocking regime was not struck down, the Supreme Court created an additional safeguard by requiring that the originator of the content be contacted (to the extent possible) before the government passes and acts upon a blocking order. Ms. Arun noted that, when implemented, this will hopefully provide a means of recourse for individuals and counter, to some extent, the mandated secrecy of blocking orders.

Raman Jit Singh Chima, Asia Consultant, Access Now
Mr. Chima began his presentation by noting that the Internet is plagued by a few founding myths. Jack Goldsmith and Tim Wu (in Who Controls the Internet?: Illusions of a Borderless World) name one: that no laws apply to the Internet - that because of its borderless nature, with data flowing through cables without regard for State borders, countries' laws do not affect it. These cyber-anarchists, among whom John Perry Barlow of the Electronic Frontier Foundation (EFF) is an inspiring figure, also argue that regulation has no role to play on the Internet.

Mr. Chima countered these 'myths', arguing that the law affects the Internet in many ways. The US military and science agencies funded the development of the Internet, so the government was instrumental in its founding, and the US Department of Commerce has agreements with ICANN (Internet Corporation for Assigned Names and Numbers) to govern the Domain Name System. Law, contracts and regulation therefore already apply to the Internet.

Mr. Chima further explained that today organisations like the EFF and civil society in India argue for, and seek to influence, the creation of regulation to protect journalists against unfair and wrongful targeting by the government. This includes moves to protect whistleblowers, and to ensure the openness of the Internet and its protection from acts that violate freedom of expression, access and other rights. Some governments, like India's, also place conditions in the licenses granted to Internet Service Providers (ISPs) to ensure that they bring access to rural, unconnected areas. Such law and regulation are not only common but also good; they help protect the public against online wrongdoing.

Mr. Chima pointed out that when States contemplate policy-making for the Internet, they look to a variety of sources. Governments draw upon existing laws and standards (as India did with the online obscenity offences in Sections 67 and 67A of the IT Act, which are drawn from the offline penal provision, Section 292, IPC) and executive action (regulation, by-laws, changes to procedural law) to create law for the Internet. Additionally, if a government repeats a set of actions consistently over time, such actions may take on the force of law. Mr. Chima also spoke of web developers and standards developers (the technical community), who operate by rules that have the force of law, such as the 'rough consensus and running code' of the IETF (Internet Engineering Task Force). Governments also prescribe conditions ("terms of use") that companies must maintain, permitting or proscribing certain kinds of content on websites and platforms.

Finally, Mr. Chima highlighted international legal and policy standards that play a role in determining the Internet's law and regulation. ICANN, the administrator of the Internet Assigned Numbers Authority (IANA) functions and the governing body for the Domain Name System, functions by a set of rules that operate as law, in the creation of which the international community (governments, companies, civil society and non-commercial users, and the technical community) plays a role. The ITU (International Telecommunication Union) and organisations like INTERPOL also play a role.

Mr. Chima explained that multiple laws also apply when one focuses on issues concerning freedom of expression, and different States set different standards. For instance, in the US the main standards for the Internet came from issues relating to access to certain types of online content. In Reno v. ACLU (1997), the US Supreme Court considered what standards should govern access to obscene and indecent content on the Internet. The judges held that the Internet, as a medium of unprecedented dynamism, deserved the highest protection from governmental overreach.

In Asia, the main legal standards for the Internet came from Internet commerce: the UNCITRAL model law, which prescribed provisions best suited to the smooth commercial use of a fast-growing medium, became the foundation for Internet-related law in Asian states. Predictably, this did not offer the strongest rights protections, focusing instead on putting effective penalties in place. But when Asian states drew on these model laws, many forgot that European states are also bound by the European Convention on Human Rights, the interpretation of which has granted robust protections to Internet-related rights.

Mr. Chima provided the example of Pakistan's new Cybercrime Bill, which contains provisions that are troubling for freedom of expression and offers minimal or no due-process protections. In drafting the law, Pakistan drew largely from the Council of Europe's model cybercrime laws, which are based on the Budapest Convention. But while States in Europe and the US have strong parallel protections for rights, States in Asia and Africa do not.

Mr. Chima concluded that when one talks of freedom of expression online, it is important to also remember the roles of intermediaries and companies. ISPs can be made liable for content that flows through their wires, through legal mechanisms such as license provisions. ISPs can also be made to take further control over their networks, or to make some websites harder to access (as with the Internet Watch Foundation's blacklist). When policy organisations consider this, it is critical that they ask whether industry bodies should be permitted to do this on the basis of government pressure and without public discussion.

Questions & Comments

Q. Participants asked for panel members to talk about the context in which bloggers find themselves in danger in Bangladesh.

A. Panel members stated that the courts are not fair to bloggers, as they often side with the government. It was added that courts have labelled bloggers as atheists, and that all bloggers are subsequently being associated with the label. Further, most people who are outraged do not even know what blogging is, and associate blogging with blasphemy and opposition to religious beliefs. It was also noted that in Bangladesh, while violations of FoE by the State are visible, more violations of bloggers' rights come from non-state actors.

Q. Participants asked if there is anything specific about the Internet that alters how we should consider hate speech online and their affective/visceral impact.

A. Panel members noted that they are still grappling with the question of what difference the Internet makes, but observed that it has complicated an already complex issue, as there is always the question of political entrepreneurs using convenient content to foment fires.

Q. Participants questioned panel members about how the right to offend is protected in jurisdictions across Asia where there is still tension between classical liberalism and communitarian ideologies, and where the individuated nature of rights is not clearly established or entrenched.

A. Panel members responded that when one compares the US, Indonesia and India, the US seems able to strike a balance between free speech and other competing interests because of its commitment both to free speech and to religious tolerance and a plurality of interests. Panel members added that the fabric of civil society also has an impact: Indonesian civil society, for example, is simultaneously religious, secular and pro-democracy, while in India there seems to be a tension between secular and religious groups. In Indonesia, people are turning to religion for comfort while still seeking a world that is both religious and secular.

Q. Participants asked for clarification on ways to approach regulation of hate speech given that hate speech is not just about a particular kind of threatening speech, but encompasses rumours and innuendos.

A. Panel members acknowledged that more research needs to be done in this area, and added that applying a socio-cultural lens to such issues would be beneficial.

Q. Participants asked if panel members had a framework for regulating the content practices of private actors, who are sometimes more powerful than the state and who also enforce censorship.

A. Panel members responded that private censorship is an important issue that needs to be reflected upon in some depth, though a framework is far from being developed even as research is ongoing in the space.

Session 3: Looking Ahead

The third and final session of the conference aimed to identify principles and methods for achieving beneficial and effective regulation of the Internet. One of its core aims was to find the right balance between the dangers of the Internet (and its unprecedented powers of dissemination) and citizens' interest in a robust right to freedom of expression. Mr. Sutirtho Patranobis, Assistant Editor with the Hindustan Times (Sri Lanka desk, previously China correspondent), shared his experience with governmental regulation of online free speech in China and Sri Lanka. Ms. Karuna Nundy, Advocate, Supreme Court of India, analysed the Indian Supreme Court's decision in Shreya Singhal v. Union of India (March 24, 2015) and sought to draw lessons for the current debate on net neutrality in India. Ms. Geeta Seshu, founder and editor of the online magazine The Hoot, offered an expanded definition of freedom of speech, focusing on universal access as the imperative. Finally, Mr. Pranesh Prakash, Policy Director, Centre for Internet & Society, offered his views on net neutrality and the issue of zero-rating, and argued for an increased, cooperative role for civil society in creating awareness on issues relating to the Internet.

Sutirtho Patranobis, Assistant Editor, Hindustan Times
During his career, Mr. Patranobis was the China correspondent for the Hindustan Times. Mr. Patranobis began his presentation by sharing his experiences in China. In China, multiple online platforms have become sources of news for citizens. Chinese citizens, especially the urban young, spend increasing amounts of time on their mobile phones and the Internet, as these are the major sources of news and entertainment in the country.

The Chinese government's attitude towards freedom of expression has been characterized by increasing control over these online platforms. This includes control over global companies like Google and Facebook, which have either negotiated with the Chinese government to find mutually acceptable operating rules (acceptable to the government and the company, but in most cases unfavourable to citizens) or have been blocked or filtered in the country. Mr. Patranobis noted that free speech regulation in China has evolved into a sophisticated mechanism for control, oppression and the suppression of dissent. Sri Lanka has adopted similar approaches to dealing with freedom of expression.

In China, free speech regulations have evolved with the aim of curtailing collective action and dissent. China's censorship programmes work to silence expression that could represent, reinforce or spur social mobilisation. Mr. Patranobis explained that these programmes aim to put an end to all collective activities, current or future, that may be at odds with government policies; any online activity that exposes government action as repressive, corrupt or draconian is therefore met with harsh treatment. Indeed, sharp increases in online censorship and crackdowns can be observed when the government implements controversial policies offline.

Mr. Patranobis went on to discuss the nature of objectionable content and the manner in which different jurisdictions deal with it. Social and cultural context, governmental ideology and political choices dictate what counts as objectionable content in States such as China and Sri Lanka. Meanwhile, media literacy, which plays a big role in ensuring an informed and aware public, is extremely low in Sri Lanka, as well as in many other States in South Asia.

Mr. Patranobis raised the question of how the Internet can be regulated while retaining freedom of expression - noting that the way forward is uncertain. In Sri Lanka, for instance, research by UNESCO shows that the conflicting policy objectives are unresolved; these first need to be balanced before robust freedom of expression can be sustained. The Internet is a tool, after all; a tool that can connect people, that can facilitate the spread of knowledge and information, to lift people from the darkness of poverty. The Internet can also be a tool to spread hate and to divide societies and peoples. Finding the right balance, contextualised according to the needs of the citizens and the State, is key to good regulation.

Karuna Nundy, Advocate, Supreme Court of India
Ms. Nundy focused her presentation on two issues currently raging in India's free speech debates: the Supreme Court's reasoning on Sections 66A and 69A of the IT Act in Shreya Singhal & Ors. v. Union of India (Supreme Court, March 24, 2015), and issues of access and innovation in the call for net neutrality regulation. She stated that the doctrine of the "marketplace of ideas" endorsed by Justices Nariman and Chelameswar in Shreya Singhal speaks to the net neutrality debate.

Ms. Nundy argued that a law can be challenged as unconstitutional if it prohibits acts that are legitimate and constitutional; such an argument refers to the impugned law's "overbroad impact". For instance, the Supreme Court struck down Section 66A of the IT Act on the ground (among others) that the section led to the prohibition and criminalisation of legitimate and protected speech. Cases such as Chintaman Rao v. State of Madhya Pradesh [(1950) SCR 759] and Kameshwar Prasad v. State of Bihar [1962 Supp. (3) SCR 369] speak to this principle. They expand the principle of overbreadth to include the notion of a "chilling effect", i.e., situations where overbroad restriction leads to the prohibition of legitimate, constitutionally protected speech. In such situations, citizens are unsure what constitutes protected speech and what does not, leading to a chilling effect and self-censorship for fear of reprisals.

In Shreya Singhal, the Supreme Court also considered the "reasonable person" doctrine developed under the law of obscenity. India had initially adopted the Hicklin test, under which the determination of what is obscene depended on whether prurient minds (minds with a tendency to be corrupted) would find the impugned material lascivious and corrupting. This test, adopted in Ranjit Udeshi v. State of Maharashtra [AIR 1965 SC 881] and refined by decades of jurisprudence, was put to rest in Aveek Sarkar v. State of West Bengal [AIR 2014 SC 1495], where the Supreme Court adopted the "community standards" test for determining obscene content. According to Ms. Nundy, the "community standards" test rests on the doctrine of the reasonable person. Ms. Nundy also noted that, in effect, more police officers are needed to protect those who produce legitimate content from hecklers.

Quoting from the U.S. decision in Whitney v. California [71 L. Ed. 1095], Ms. Nundy submitted that:

" It is the function of speech to free men from the bondage of irrational fears. To justify suppression of free speech there must be reasonable ground to fear that serious evil will result if free speech is practiced. There must be reasonable ground to believe that the danger apprehended is imminent. There must be reasonable ground to believe that the evil to be prevented is a serious one. "

On the issue of website blocking and the Supreme Court's reasoning on Section 69A of the IT Act in Shreya Singhal, Ms. Nundy explained that the Additional Solicitor General had conceded a number of points during oral arguments. She further explained that website blocking can be ordered when the Central Government is satisfied that it is necessary, and reasons must be recorded in writing. Also, according to the Supreme Court's interpretation of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 ("Blocking Rules"), both the intermediary and the originator of the communication (the content creator) must be given a chance to be heard.

Rule 16 of the Blocking Rules, which mandates confidentiality of all blocking requests and orders, was also discussed in Shreya Singhal. Though some confusion has arisen about the Rule's interpretation, Ms. Nundy submitted that Rule 16 has been read down: there is no longer a strict, all-encompassing requirement of confidentiality. While the identity of the complainant and the exact nature of the complaint must be kept confidential, the blocking order and the reasoning behind it are no longer bound by Rule 16. This is because in §109 of the judgment the Supreme Court accepts that writ petitions can lie against blocking orders, and for writs to lie, affected parties must first be aware of the existence and content of the blocking order. The effect of the Supreme Court's reasoning, Ms. Nundy explained, is therefore that the confidentiality requirement in Rule 16 has been read down.

On net neutrality, Ms. Nundy argued that zero-rating is an efficient solution for providing universal access to the Internet. Services like Internet.org are not strictly market-driven, because there is not a large demand for Facebook or for specific over-the-top (OTT) service providers. In speaking about the marketplace of ideas in Shreya Singhal, the Supreme Court did not indirectly outlaw services seeking to balance access with diversity of speech. Ms. Nundy noted that price discrimination in the provision of telecom, broadband and mobile Internet services already exists; in light of this, the focus should be on the provision of these services on the basis of consumer choice.

Geeta Seshu, The Hoot
Ms. Seshu began her presentation by noting that one's perspective on online censorship cannot be the same as that on traditional censorship. Traditional censorship cuts off an individual's access to the censored material, but on the Internet, material that is censored in traditional media finds free and wide distribution. One's conceptualisation of freedom of expression and curtailment of this right must include access to the medium as a crucial part. To this end, it is important to not forget that access to the Internet is controlled by a limited number of Internet service and content providers. Thus, a large section of the population in India cannot exercise their right to free speech because they do not have access to the Internet.

In this context, it is important to understand the way in which the digital rollout is happening in India. Ms. Seshu explained that the rollout process lacks transparency, and noted the example of the 4G/LTE rollout plan in India. There is, of course, a diversity of content: those that have access to the Internet have the ability to exercise their right to free speech in diverse ways. However, introducing access into the free speech universe highlights many inequalities that exist in the right; for instance, Dalit groups in India have limited access to the Internet, and some kinds of content receive limited airtime.

Importantly, Ms. Seshu argued that the government and other entities use technology to regulate content availability. Policymakers exploit the technology and architecture of networks to monitor, surveil and censor content. For instance, one may see the UID scheme not only as an adaptation of technology to facilitate service provision, but also as a move towards a Big Brother state. Civil society and citizens need to study and respond to the ways in which technology has been used against them. Unfortunately, the debates surrounding regulation do not afford space for Internet users to be part of the discussion. In order to turn this around, it is important that citizens' and users' rights are developed and introduced into the regulatory equation.

Pranesh Prakash, Policy Director, Centre for Internet & Society
Taking up where Ms. Seshu left off, Mr. Prakash wished to explore whether the Internet is merely an enabler of discussion - allowing, for instance, a ruckus to be raised around the consultation paper of the Telecom Regulatory Authority of India (TRAI) on Over-The-Top (OTT) services and net neutrality - or whether it positively adds value. The Internet is, of course, a great enabler. The discussions surrounding OTTs and net neutrality are an example: in response to the TRAI consultation, a campaign titled "Save the Internet" resulted in over 9.5 lakh comments being submitted to the TRAI. It is inconceivable that such a widespread public discussion on so complex a topic as net neutrality could take place without the Internet's facilitation.

But, Mr. Prakash held, it is important to remember that the Internet is the tool, the platform, for such mobilisation. Campaigns and conversations such as those on net neutrality could not take place without the organisations and people involved in them. Civil society organisations have played prominent roles in this regard, creating awareness and well-informed discussions. For Mr. Prakash, civil society organisations play their role best when they create such public awareness, and it is important to play to each stakeholder's strengths: some organisations are effective campaigners, while others (such as CIS) are competent at research, analysis and dissemination.

According to Mr. Prakash, it is equally important to remember that successful discussions, campaigns or debates (such as the ongoing one on net neutrality) do not occur solely because of one organisation's strengths, or indeed because of civil society alone; networks are especially critical in successful campaigns and policy changes. As researchers, we may not always know where our work is read, and it sometimes reaches unexpected venues: one of Mr. Prakash's papers, for instance, was used by the hacker collective Anonymous for a local campaign, and he became aware of it only accidentally. Mr. Prakash noted that civil society must also accept its failures, pointing to the controversy surrounding the Goondas Act in Karnataka. Where there are strong counter-stakeholders (such as the film lobby in south Indian states), civil society's efforts alone may not lead to success.

On net neutrality, Mr. Prakash noted the example of a strategy employed by the Times of India newspaper, which undercut its competitors by slashing its own prices. Such moves are not unknown in the market, and they have their benefits: consumers benefit from the lowered prices. For instance, were a WhatsApp or Facebook pack to be introduced by a telecom operator, consumers might choose to buy this cheap, limited data pack. This is beneficial for consumers and also works to expand access to the Internet. At the same time, diversity of speech and consumer choice is severely restricted, as these companies and telecom operators can create 'walled gardens' of information and services. Mr. Prakash put forth that if we can facilitate competitive zero-rating, and ensure that anti-competitive cross-subsidization does not occur, then perhaps zero-rated products can achieve access without forcing a trade-off between diversity and choice.

Finally, on the issue of website blocking and takedowns under Sections 69A and 79, IT Act, Mr. Prakash noted that the Shreya Singhal judgment does nothing to restrict the judiciary's powers to block websites. According to Mr. Prakash, at the moment, the Shreya Singhal judgment relieves intermediaries of the responsibility to take down content if they receive private complaints about content. After the judgment, intermediaries will lose their immunity under Section 79, IT Act, only if they refuse to comply with takedown requests from government agencies or judicial orders.

But, as Mr. Prakash explained, the judiciary is itself a rogue website-blocker. In the past few years, the judiciary has periodically ordered the blocking of hundreds of websites. Such orders have resulted in the blocking of a large number of legitimate websites (including, at one point, Google Drive and Github). To ensure that our freedom of expression online is effectively protected, Mr. Prakash argued that ways to stop the judiciary from going on such a rampage must be devised.

Questions & Comments

A. Participants and panel members commented that researchers and commentators err by drawing analogies between the Internet and other media like newspapers, couriers, TV, satellite and cable. The architecture of the Internet is very different, even from cable: on the Internet, traffic flows both ways, whereas cable is not bi-directional. Moreover, pricing models for newspapers have nothing in common with those on the Internet. The comparisons drawn in net neutrality debates risk being incorrect, and we must guard against that. Zero-rating and net neutrality issues in high-access countries are also very different from those in low-access countries like India.

B. Participants and panel members commented that access and availability must play a predominant role in thinking about freedom of expression. In India, we are technologically far behind other states, though we have potential. The real end goal is the convergence of services and information, with the user at the centre of the ecosystem. Our technological capabilities include satellite and spectrum; the best spectrum bands are lying vacant and can be re-farmed. For this, the government must be educated.

C. Participants and panel members commented that in high-access states the net neutrality issues centre on competition and innovation (since there is little or no ISP competition and switching costs are high), while in India and France, where there is already competition amongst providers, access plays a crucial role. On the Internet, networking and engineering choices can disrupt the content carried over the network, so that is also a concern.

D. Participants and panel members commented that zero-rating is both a blessing and a curse. Zero-rating would not be detrimental in a market with perfect information and competition. But the reality is information asymmetry and imperfect competition. If today, we were to allow zero-rating, diversity would suffer and we would be left with 'walled gardens'.

Conclusion

The conference addressed a range of issues characteristic of debates surrounding freedom of expression in India and South Asia. Beginning with the conceptual understanding of freedom of expression, panellists advocated an expanded definition, where the right to free speech is teleological. The panellists considered freedom of speech as a tool to ensure diversity of speech, both horizontally and vertically. Towards this end, panellists gave several suggestions:

First, policymakers and scholars must understand freedom of speech as a right of both the speaker and the listener/reader, and carve out a separate listeners' right. Panellists expanded upon this to show the implications for the debate on net neutrality, cross-media ownership and website-blocking, for instance.

Second, there is a need for scholars to examine the historical dichotomy between the policy and jurisprudence of free speech in India and other contexts across South Asia. Such an approach to scholarship and policy research would help predict future government policy (such as in the case of the Indian government's stance towards Section 66A following the Supreme Court's decision in Shreya Singhal v. Union of India) and strategize for the same.

Third, particularly with regard to the Internet, there is a need for policy advocates and policy makers to "bust" the founding myths of the Internet, and look to various domestic and international sources of law and regulation. Studies of regulation of freedom of speech on the Internet in different jurisdictions (Bangladesh, China, Sri Lanka) indicate differing government approaches, and provide examples to learn from. The interpretation and consequences of Shreya Singhal on website-blocking and intermediary liability in India provide another learning platform.

Fourth, panellists discussed the possibilities of cooperation and strategies among civil society and policy organisations in India. Taking the example of the Save the Internet campaign surrounding net neutrality in India, panellists speculated on the feasibility of using the Internet itself as a tool to campaign for governance and policy reform. Together with the audience, the panellists identified several areas that are ripe for research and advocacy, such as net neutrality and zero-rating, and citizens' free speech right as being separate from governmental and corporate interests.

CIS Cybersecurity Series (Part 24) – Shantanu Ghosh

by Purba Sarkar last modified Jul 15, 2015 02:58 PM
CIS interviews Shantanu Ghosh, Managing Director, Symantec Product Operations, India, as part of the Cybersecurity Series.

“Remember that India is also a land where there are a lot of people who are beginning to use computing devices for the first time in their lives. For many people, their smartphone is their first computing device because they have never had computers in the past. For them, the challenge is how do you make sure that they understand that that can be a threat too. It can be a threat not only to their bank accounts, with their financial information, but even to their private lives.”

The Centre for Internet and Society presents the twenty-fourth installment of the CIS Cybersecurity Series.

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

Shantanu Ghosh is the Managing Director of Symantec Product Operations, India. He also runs the Data Centre Security Group for Symantec globally.

This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

A Dissent Note to the Expert Committee for DNA Profiling

by Elonnai Hickok last modified Jul 21, 2016 11:01 AM
The Centre for Internet and Society has participated in the Expert Committee for DNA Profiling constituted by the Department of Biotechnology in 2012 for the purpose of deliberating on and finalizing the draft Human DNA Profiling Bill, and appreciates this opportunity. CIS respectfully dissents from the January 2015 draft of the Bill.

 

Click for DNA Bill Functions, DNA List of Offences, and CIS Note on DNA Bill. A modified version was published by Citizen Matters Bangalore on July 28.


Based on the final draft of the Human DNA Profiling Bill that was circulated on the 13th of January 2015 by the committee, the Centre for Internet and Society is issuing this note of dissent on the following grounds:

The Centre for Internet and Society has made a number of submissions to the committee regarding different aspects of the Bill including recommendations for the functions of the board, offences for which DNA can be collected, and a general note on the Bill. Though the Centre for Internet and Society recognizes that the present form of the Bill contains stronger language regarding human rights and privacy, we do not find these to be adequate and believe that the core concerns or recommendations submitted to the committee by CIS have not been incorporated into the Bill.

The Centre for Internet and Society has foundational objections to the collection of DNA profiles for non-forensic purposes. In its current form, the DNA Bill provides for collection of DNA for the following non-forensic purposes:

  • Section 31(4) provides for the maintenance of indices in the DNA Bank and includes a missing person’s index, an unknown deceased person’s index, a volunteers’ index, and such other DNA indices as may be specified by regulation.
  • Section 38 defines the permitted uses of DNA profiles and DNA samples including: identifying victims of accidents or disasters or missing persons or for purposes related to civil disputes and other civil matters and other offences or cases listed in Part I of the Schedule or for other purposes as may be specified by regulation.
  • Section 39 defines the permitted instances of when DNA profiles or DNA samples may be made available and include: for the creation and maintenance of a population statistics Data Bank that is to be used, as prescribed, for the purposes of identification research, protocol development or quality control provided that it does not contain any personally identifiable information and does not violate ethical norms.
  • Part I of the schedule lists laws, disputes, and offences for which DNA profiles and DNA samples can be used. These include, among others, the Motor Vehicles Act, 1988, parental disputes, issues relating to pedigree, issues relating to assisted reproductive technologies, issues relating to transplantation of human organs, issues relating to immigration and emigration, issues relating to establishment of individual identity, any other civil matter as may be specified by the regulations, medical negligence, unidentified human remains, identification of abandoned or disputed children.

While rejecting non-forensic use entirely, we have specific substantive and procedural objections to the provisions relating to forensic profiling in the present version of the Bill. These include:

  • Over-delegation of powers to the Board: The DNA Board currently has vast powers delegated to it by Section 12, including:
    “authorizing procedures for communication of DNA profiles for civil proceedings and for crime investigation by law enforcement and other agencies, establishing procedure for cooperation in criminal investigation between various investigation agencies within the country and with international agencies, specifying by regulations the list of applicable instances of human DNA profiling and the sources and manner of collection of samples in addition to the lists contained in the Schedule, undertaking any other activity which in the opinion of the Board advances the purposes of this Act.”

    Section 65 gives the Board the power to make regulations for a number of purposes including: "other purposes in addition to identification of victims of accidents, disasters or missing persons or for purposes related to civil disputes and other civil matters and other offences or cases listed in Part I of the Schedule for which records or samples may be used under section 38, other laws, if any, to be included under item (viii) of para B of Part I of the Schedule, other civil matters, if any, to be included under item (vii) of para C of Part I of the Schedule, and authorization of other persons, if any, for collection of non intimate body samples and for performance of non-intimate forensic procedures, under Part III of the Schedule."

    Ideally these powers would lie with the legislative or judicial branch. Furthermore, the Bill establishes no mechanism for accountability or oversight over the functioning of the Board and section 68 specifically states that “no civil court shall have jurisdiction to entertain any suit or proceeding in respect to any matter which the Board is empowered by or under this Act to determine.”

    The above represents only a few instances of the overly broad powers that have been given to the Board. Indeed, the Bill gives the Board the power to make regulations for 37 different aspects relating to the collection, storage, use, sharing, analysis, and deletion of DNA samples and DNA profiles. As a result, the Bill establishes a Board that controls the entire ecosystem of DNA collection, analysis, and use in India without strong external oversight or accountability.
  • Key terms undefined: Section 31(5) states that the "indices maintained in every DNA Data Bank will include information of data based on DNA analysis prepared by a DNA laboratory duly approved by the Board under section 1 of the Act, and of records relating thereto, in accordance with the standards as may be specified by the regulations."

    The term’ DNA analysis’ is not defined in the Act, yet it is a critical term as any information based on such an analysis and associated records can be included in the DNA Database.
  • Low standards for sharing of information: Section 34 empowers the DNA Data Bank Manager to compare a received DNA profile with the profiles stored in the databank and for the purposes of any investigation or criminal prosecution, communicate the information regarding the received DNA profile to any court, tribunal, law enforcement agencies, or DNA laboratory which the DNA Data Bank Manager considers is concerned with it.

    The decision to share compared profiles, and with whom, should be made by an independent third-party authority rather than the DNA Data Bank Manager. Furthermore, this provision is vague: although the intention seems to be that DNA profiles should be matched and the results communicated only in certain cases, the generic wording could take into its ambit every instance of receipt of a DNA profile. For example, the regulations envisaged under section 31(4)(g) may prescribe a DNA Data Bank for medical purposes, but section 34 as currently worded may, as an unintended consequence, allow DNA profiles of patients to be compared and their information released to various agencies by the Data Bank Manager.
  • Missing privacy safeguards: Though the Bill refers to security and privacy procedures that labs are to follow, these have been left to be developed and implemented by the DNA Board. Thus, except for bare minimum standards and penalties addressing the access, sharing, and use of data, the Bill contains no privacy safeguards.

    In our interactions with the committee we have asked that the Bill be brought in line with the nine national privacy principles established by the Report of the Group of Experts on Privacy submitted to the Planning Commission in 2012. This has not been done.



DNA Bill Functions

by Prasad Krishna last modified Jul 17, 2015 01:30 AM

DNA Bill - Functions (2).pdf — PDF document, 4 kB (5087 bytes)

DNA List of Offences

by Prasad Krishna last modified Jul 17, 2015 01:34 AM

DNA Bill - List of Offences (1).pdf — PDF document, 8 kB (8604 bytes)

CIS Note on DNA Bill

by Prasad Krishna last modified Jul 17, 2015 01:37 AM

CIS Note on DNA Bill.pdf — PDF document, 98 kB (100977 bytes)

Best Practices Meet 2015

by Prasad Krishna last modified Jul 17, 2015 01:08 PM

BPM 2015 Agenda.pdf — PDF document, 705 kB (722356 bytes)

Five Nations, One Future

by Prasad Krishna last modified Jul 18, 2015 02:24 AM

FutureMag001.pdf — PDF document, 6119 kB (6266080 bytes)

Aadhaar Number vs the Social Security Number

by Elonnai Hickok last modified Jul 24, 2015 01:24 AM
This blog post outlines the differences between the Aadhaar Number and the Social Security Number.

In response to news items that reported the Government of India running pilot projects to enroll children at the time of birth for Aadhaar numbers - an idea that government officials in the news items claimed was along the lines of the social security number - this note seeks to point out the ways in which the Aadhaar number and the social security number are different.[1]

Governance

SSN is governed by Federal legislation: The issuance, collection, and use of the SSN is governed by a number of Federal and State laws, the most pertinent being the Social Security Act of 1935,[2] which provides legal backing for the number, and the Privacy Act of 1974, which regulates the collection, access, and sharing of the SSN by Federal Executive agencies.[3]

Aadhaar was constituted under the Planning Commission: The UIDAI was constituted as an attached office under the Planning Commission in 2009.[4] The National Identification Authority of India Bill, 2010 has been drafted, but has not been enacted.[5] Though portions of the Information Technology Act (as amended in 2008) apply to the UID scheme, section 43A and the associated Rules (India's data protection standards) do not clearly apply to the UIDAI, as the provision has jurisdiction only over bodies corporate.

Purpose

SSN was created as a record-keeping number for government services: The Social Security Act provides for the creation of a record-keeping scheme - the SSN. Originally, the SSN was used as a means of tracking an individual's earnings in the Social Security system.[6] In 1943, via an executive order, the number was adopted across Federal agencies, and it eventually evolved from a record-keeping scheme into a means of identification. In 1977, the Carter administration clarified that the number could act as a means of validating the status of an individual (for example, whether he or she could legally work in the country) but was not to serve as a national identity document.[7] Today the SSN serves as a number for tracking individuals in the social security system and as one (among other) forms of identification for different services and businesses. On its own, the SSN card does not serve as proof of identity or citizenship, cannot be used to transact, and does not store information.[8]

Aadhaar was created as a biometric-based authenticator and a single unique proof of identity: The Aadhaar number was established as a single proof of identity and address for any resident of India that can be used to authenticate the identity of an individual in transactions with organizations that have adopted the number. The scheme has been promoted as a tool for reducing fraud in the public distribution system and enabling the government to better deliver public benefits.[9]

Applicability

SSN is for citizens and non-citizens authorized to work: The social security number is primarily for citizens of the United States of America. In certain cases, non-citizens who have been authorized by the Department of Homeland Security to work in the US may obtain a Social Security number.[10]

Aadhaar is for residents: The Aadhaar number is available to any resident of India.[11]

Storage, Access, and Disclosure

SSN and applications are stored in the Numident: The Numident is a centralized database containing an individual's original SSN and application, and any re-application for the same. All information stored in the Numident is protected under the Privacy Act. Individuals may request records of their own personal information stored in the Numident. With the exception of the Department of Homeland Security and U.S. Citizenship and Immigration Services, third parties may only request access to Numident records with the consent of the concerned individual.[12] Federal agencies and private entities that collect the SSN for a specific service store the number at the organizational level. The Privacy Act and various state-level laws regulate the disclosure, access, and sharing of the SSN collected by agencies and organizations.

Aadhaar and data generated at multiple sources are stored in the CIDR and processed in the data warehouse: According to the report "Analytics - Empowering Operations", "At UIDAI, data generated at multiple sources would typically come to the CIDR (Central ID Repository), UIDAIs Data centre, through an online mechanism. There could be certain exceptional sources, like Contact centre or Resident consumer surveys, that will not feed into the Data center directly. Data is then processed in the Data Warehouse using Business Intelligence tools and converted into forms that can be accessed and shared easily." Examples of data stored in the CIDR include enrollments, letter delivery, authentication, processing, resident surveys, training, and data from contact centres.[13] It is unclear whether organizations that authenticate individuals via the Aadhaar number store the number at the organizational level. Biometrics are listed as a form of sensitive personal information in the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011; thus, if any body corporate collects biometrics along with the Aadhaar number, the storage, access, and disclosure of this information would be protected as per the Rules, but the Aadhaar number itself is not explicitly protected.[14]
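As a rough illustration of the centralised data flow the quoted report describes (multiple operational sources feeding one repository, which is then processed into shareable summaries), here is a minimal, hypothetical sketch in Python. The source names, record fields and aggregation step are illustrative assumptions only; they do not represent UIDAI's actual systems, data formats or Business Intelligence tools.

```python
# Hypothetical sketch of a centralised data pipeline: records from several
# operational sources are collected into one repository and then summarised.
# Source names and fields are illustrative assumptions, not UIDAI's systems.

from collections import Counter
from typing import Dict, List

def collect(sources: Dict[str, List[dict]]) -> List[dict]:
    """Merge records from all sources into a single central repository,
    tagging each record with the source it came from."""
    repository = []
    for source_name, records in sources.items():
        for record in records:
            repository.append({**record, "source": source_name})
    return repository

def summarise(repository: List[dict]) -> Counter:
    """Aggregate the repository into per-source counts, standing in for the
    'processed and converted into accessible forms' step."""
    return Counter(record["source"] for record in repository)

if __name__ == "__main__":
    sources = {
        "enrolment": [{"id": 1}, {"id": 2}],
        "authentication": [{"id": 3}],
        "contact_centre": [{"id": 4}],
    }
    central_repository = collect(sources)
    print(summarise(central_repository))  # e.g. Counter({'enrolment': 2, ...})
```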

Use by public and private entities

Public and private entities can request SSN: Public and private entities can request the SSN to track individuals in a system or as a form of identifying an individual. Any private business is allowed to request and use the SSN as long as the use does not violate federal or state law. Legally, an individual is only required to provide their SSN to a business if they are engaging in a transaction that requires notification to the Internal Revenue Service or the individual is initiating a transaction that is subject to federal Customer Identification Program rules.[15] Thus, an individual can refuse to provide their SSN, but a private business can also refuse to provide a service.[16]

Any public authority requesting the SSN must provide a disclosure notice to the individual explaining whether provision of the SSN is required or optional. According to the Privacy Act of 1974, no individual can be denied a government service or benefit for not providing the SSN unless Federal law specifically requires the number for a particular service.[17] A number of Federal laws in the U.S. do specifically require the SSN; for example, the Social Security Independence and Program Improvements Act of 1994 allows the SSN to be used for jury selection and allows cross-matching of SSNs and Employer Identification Numbers in investigations into violations of Federal law.[18]

Public and private entities can request Aadhaar: The Aadhaar number can be adopted by any public or private entity as a single means of identifying an individual. The UIDAI has stated that the Aadhaar number is not mandatory,[19] and the Supreme Court of India has clarified that services cannot be denied on the grounds that an individual does not have an Aadhaar number.[20]

Verification

The SSN can be verified only in certain circumstances: The SSA will only respond to requests for SSN verification in certain circumstances:

  • Before issuing a replacement SSN, posting a wage item to the Master Earnings File, or establishing a claims record, the SSA will verify that the name and the number match its records.
  • When legally permitted, the SSA verification system will verify SSNs for government agencies.
  • When legally permitted, the SSA verification system will verify a worker's SSN for pre-registered and approved private employers.
  • If an individual has provided his/her consent, the SSA will verify an SSN request from a third party.

For verification, the SSN must be submitted along with the name it is to be matched against and additional information such as date of birth, father's name, mother's name, etc. When verifying submitted SSNs, the system will respond only with confirmation that the information matches or that it does not. It is important to note that, because the SSN is verified only in certain circumstances, there is no guarantee that the person providing an SSN is the person to whom the number was assigned.[21]

The Aadhaar number can be verified in any transaction: If an organization, department, or platform has adopted the Aadhaar number as a form of authentication, it can send requests for verification to the UIDAI, which will respond with a yes or no answer. When using their Aadhaar number as a form of authentication, individuals can submit their number and demographic information, or their number and biometrics, for verification.[22]
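To illustrate the match/no-match and yes/no verification pattern described in this section, the following is a minimal, hypothetical sketch in Python. It is not the SSA's verification system or the UIDAI Authentication API; the record store, field names and matching rules are assumptions made purely for illustration.

```python
# Hypothetical sketch of a yes/no identity verification check.
# The record store, field names, and matching rules are illustrative
# assumptions, not the actual UIDAI or SSA systems.

from dataclasses import dataclass

@dataclass
class IdentityRecord:
    id_number: str
    name: str
    date_of_birth: str  # "YYYY-MM-DD"

# A toy in-memory "repository" standing in for a central database.
REPOSITORY = {
    "1234-5678-9012": IdentityRecord("1234-5678-9012", "A. Sharma", "1990-01-15"),
}

def verify(id_number: str, name: str, date_of_birth: str) -> bool:
    """Return True ("yes") only if every submitted attribute matches the
    stored record; otherwise return False ("no"). No record data is
    returned to the requester, mirroring the match/no-match responses
    described in the text."""
    record = REPOSITORY.get(id_number)
    if record is None:
        return False
    return record.name == name and record.date_of_birth == date_of_birth

if __name__ == "__main__":
    print(verify("1234-5678-9012", "A. Sharma", "1990-01-15"))  # True  ("yes")
    print(verify("1234-5678-9012", "A. Sharma", "1991-01-15"))  # False ("no")
```

The design point the sketch mirrors is that the verifier returns only a boolean answer and never discloses the underlying record to the requester.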

Lost or stolen

SSN can be replaced: If an individual loses his/her SSN card or their number is used fraudulently, they can apply for a replacement SSN card or a new SSN.[23]

Aadhaar number can be replaced: If an individual has lost their Aadhaar number, there is a process they can follow to have the number re-sent to them. If the number cannot be located by the UIDAI, the individual has the option of re-enrolling for a new Aadhaar number.[24] The UIDAI has built the scheme on the understanding that biometrics are a unique identifier that cannot be lost or stolen, and thus has not created a system to address the possibility of stolen biometrics or their fraudulent use.

Implementation

Legislation and formal roll-out: The SSN program was brought into existence via the Social Security Act, officially rolled out, and eventually adopted across Federal departments.

Bill and pilot studies: The UID scheme was envisioned as being brought into existence via the National Identification Authority of India Bill, 2010, which has not been passed. Thus far, the project has been implemented in pilot phases across States and platforms.

Enrollment

Social Security Administration: The Social Security Administration is the sole body in the US that receives and processes applications for SSNs and issues SSNs.[25]

UIDAI, registrars, and enrolling agencies: The UIDAI is the sole body that issues Aadhaar numbers. Registrars (contracted bodies under the UIDAI) and enrolling agencies (contracted bodies under Registrars) are responsible for receiving and processing enrollments into the UID scheme.

Required supporting documents

SSN requires proof of age, identity, and citizenship: To obtain an SSN, you must be able to provide proof of your age, your identity, and US citizenship. The application form requires the following information:

  • Name to be shown on the card
  • Full name at birth, if different
  • Other names used
  • Mailing address
  • Citizenship or alien status
  • Sex
  • Race/ethnic description (SSA does not receive this information under EAB)
  • Date of birth
  • Place of birth
  • Mother's name at birth
  • Mother's SSN (SSA collects this information for the Internal Revenue Service (IRS) on an original application for a child under age 18. SSA does not retain these data.)
  • Father's name
  • Father's SSN (SSA collects this information for IRS on an original application for a child under age 18. SSA does not retain these data).
  • Whether applicant ever filed for an SSN before
  • Prior SSNs assigned
  • Name on most recent Social Security card
  • Different date of birth if used on an earlier SSN application.
  • Date application completed
  • Phone number
  • Signature
  • Applicant's relationship to the number holder.[26]

Aadhaar requires proof of age, address, birth, and residence and biometric information: The application form requires the following information:

  • Name
  • Date of birth
  • Gender
  • Address
  • Parent/guardian details
  • Email
  • Mobile number
  • Indication of consenting or not consenting to the sharing of information provided to the UIDAI with Public services including welfare services
  • Indication of if the individual wants the UIDAI to facilitate the opening of a bank account linked to the Aadhaar number and permits the sharing of information for this purpose
  • If the individual has no objection to linking their present bank account to the Aadhaar number and the relevant bank details
  • Signature[27]


[1] Sahil Makkar, "PM's idea to track kids from birth hits practical hurdles", Business Standard. April 11th 2015. Available at: http://www.business-standard.com/article/current-affairs/pm-s-idea-to-track-kids-from-birth-hits-practical-hurdles-115041100828_1.html

[2] The Social Security Act of 1935. Available at: http://www.ssa.gov/history/35act.html

[3] The United States Department of Justice, "Overview of the Privacy Act of 1974". Available at: http://www.justice.gov/opcl/social-security-number-usage

[4] Government of India Planning Commission "Notification". Available at: https://uidai.gov.in/images/notification_28_jan_2009.pdf

[5] The National Identification Authority of India Bill 2010. Available at: http://www.prsindia.org/uploads/media/UID/The%20National%20Identification%20Authority%20of%20India%20Bill,%202010.pdf

[6] History of SSA 1993 - 2000. Chapter 6: Program Integrity. Available at: http://www.ssa.gov/history/ssa/ssa2000chapter6.html

[7] Social Security Number Chronology. Available at: http://www.ssa.gov/history/ssn/ssnchron.html

[8] History of SSA 1993 - 2000, Chapter 6: Program Integrity. Available at: http://www.ssa.gov/history/ssa/ssa2000chapter6.html

[9] UID FAQ: Aadhaar Features, Eligibility. Available at: https://resident.uidai.net.in/faqs

[10] Social Security Numbers for Noncitizens. Available at: http://www.ssa.gov/pubs/EN-05-10096.pdf

[11] Aapka Aadhaar. Available at: https://uidai.gov.in/aapka-aadhaar.html

[12] Program Operations Manual System. Available at: https://secure.ssa.gov/poms.nsf/lnx/0203325025

[13] UIDAI, Analytics - Empowering Operations: The UIDAI Experience. Available at: https://uidai.gov.in/images/commdoc/other_doc/uid_doc_30012012.pdf

[14] Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. Available at: http://deity.gov.in/sites/upload_files/dit/files/GSR313E_10511(1).pdf

[15] IdentityHawk, "Who can lawfully request my social security number?" Available at: http://www.identityhawk.com/Who-Can-Lawfully-Request-My-Social-Security-Number

[16] SSA FAQ " Can I refuse to give my social security number to a private business?" Available at: https://faq.ssa.gov/link/portal/34011/34019/Article/3791/Can-I-refuse-to-give-my-Social-Security-number-to-a-private-business

[17] The United States Department of Justice, "Overview of the Privacy Act of 1974". Available at: http://www.justice.gov/opcl/social-security-number-usage

[18] Social Security Number Chronology. Available at: http://www.ssa.gov/history/ssn/ssnchron.html

[19] Aapka Aadhaar. Available at: https://uidai.gov.in/what-is-aadhaar.html

[20] Business Standard, "Aadhaar not mandatory to claim any state benefit, says Supreme Court" March 17th, 2015. Available at: http://www.business-standard.com/article/current-affairs/aadhaar-not-mandatory-to-claim-any-state-benefit-says-supreme-court-115031600698_1.html

[21] Social Security History 1993 - 2000, Chapter 6: Program Integrity. Available at: http://www.ssa.gov/history/ssa/ssa2000chapter6.html

[22] Aapka Aadhaar. Available at: https://uidai.gov.in/auth.html

[23] SSA. New or Replacement Social Security Number Card. Available at: http://www.ssa.gov/ssnumber/

[24] UIDAI, Lost EID/UID Process. Available at: https://uidai.gov.in/images/mou/eiduid_process_ver5_2_27052013.pdf

[25] Social Security. Available at: http://www.ssa.gov/

[26] Social Security Administration, Application for a Social Security Card. Available at: http://www.ssa.gov/forms/ss-5.pdf

[27] Aadhaar enrollment/correction form. Available at: http://hstes.in/pdf/2013_pdf/Genral%20Notification/Aadhaar-Enrolment-Form_English.pdf

Technology Business Incubators

by Prasad Krishna last modified Jul 25, 2015 03:41 PM

TBI Report - CIS.pdf — PDF document, 860 kB (880913 bytes)

First draft of Technology Business Incubators: An Indian Perspective and Implementation Guidance Report

by Vidushi Marda last modified Jul 25, 2015 04:14 PM
Contributors: Sunil Abraham, Vidushi Marda, Udbhav Tiwari and Anumeha Karnatak
The Centre for Internet and Society presents the first draft of its analysis on technology business incubators ("TBI") in India. The report, prepared by Sunil Abraham, Vidushi Marda, Udbhav Tiwari and Anumeha Karnatak, looks at operating procedures, success stories and lessons that can be learnt from TBIs in India.

A technology business incubator (TBI) is an organisational setup that nurtures technology-based and knowledge-driven companies by helping them survive the startup period, which typically lasts the first two to three years of a company’s history. Incubators do this by providing an integrated package of work space, shared office services and access to specialized equipment, along with value-added services like fundraising, legal services, business planning, technical assistance and networking support. The main objective of technology business incubators is to produce successful business ventures that create jobs and wealth in the region, while encouraging an attitude of innovation in the country as a whole.

The primary aspects that this report goes into are the stages of a startup, the motivations of governments and private players in establishing incubators, and the processes they follow in selecting and nurturing talent as well as providing post-incubation support. The report also looks at the role that incubators play in the general economy apart from their function of incubating companies, such as educational or public research roles. A series of case analyses of seven well-established Indian incubators follows, looking into their nurturing processes, success stories and lessons that can be learnt from their establishment. The final section looks into the challenges faced by incubators in developing economies and the measures taken by them to overcome these challenges.

Download the full paper

Decriminalising Defamation in India

by Prasad Krishna last modified Jul 27, 2015 02:14 PM

Criminal Defamation - Summary of Issues.pdf — PDF document, 78 kB (80679 bytes)

Iron out contradictions in the Digital India programme

by Sumandro Chattapadhyay last modified Jul 28, 2015 01:04 AM
The Digital India initiative takes an ambitious 'Phir Bhi Dil Hai Hindustani' approach to develop communication infrastructure, government information systems, and general capacity to digitise public life in India. I of course use 'public life' in the sense of the wide sphere of interactions between people and public institutions.

The article was published in the Hindustan Times on July 15, 2015.


The 'Phir Bhi Dil Hai Hindustani' approach involves putting together Japanese shoes, British trousers, and a Russian cap to make an entertainer with a pure Indian heart. In this case, the analogy must not be understood as different components of the initiative coming from different countries, but as coming from different efforts to use digital technologies for governance in India.

It is deploying the Public Information Infrastructure vision, inclusive of the National Optical Fibre Network (now renamed BharatNet) and the national cloud computing platform titled Meghraj, so passionately conceptualised and pursued by Sam Pitroda. It has chosen the Aadhaar ID and the authentication-as-a-service infrastructure built by Nandan Nilekani, Ram Sewak Sharma and their team as the identity platform for all governmental processes across Digital India projects. It has closely embraced the mandate proposed by the Jaswant Singh-led National Task Force on Information Technology and Software Development for a completely electronic interface for paper-free citizen-government interactions.

The digital literacy and online education aspects of the initiative build upon the National Mission on Education through ICT driven by Kapil Sibal. Two of the three vision areas of the Digital India initiative, namely 'Digital infrastructure as a utility to every citizen' and 'governance and service on demand,' are directly drawn from the two core emphasis clusters of the National e-Governance Plan designed by R. Chandrashekhar and team, namely the creation of the national and state-level network and data infrastructures, and the National Mission Mode projects to enable electronic delivery of services across ministries.

And this is not a bad thing at all. In fact, the need for this programmatic and strategic convergence has been felt for quite some time now, and it is wonderful to see the Prime Minister directly addressing it. However, while drawing benefits from the existing programmes, the Digital India initiative must also deal with the challenges inherited in the process.

Recently circulated documents describe how the institutional framework for Digital India will be headed by a Monitoring Committee overseeing two main drivers of the initiative: the Digital India Advisory Group led by the minister of communication and information technology, and the Apex Committee chaired by the cabinet secretary. While the former will function primarily by guiding the implementation work of the Department of Electronics and Information Technology (DeitY), the latter will lead the activities of both the DeitY and the various sectoral ministries.

Here lies one possible institutional bottleneck that the Digital India architecture inherits from the National e-Governance Plan. Putting the DeitY in the driving seat of the digital transformation agenda, in parallel with all other central government departments, indicates an understanding that the transformation is fundamentally a technical issue. However, what is most often needed is administrative reform at a larger scale and re-engineering of processes at a smaller scale.

Government agencies that have addressed such challenges in the past, such as the department of administrative reforms and public grievances, are not mentioned explicitly within the institutional framework; instead, DeitY has been entrusted with a range of tasks that may be beyond its scope and core skills.

The danger of this is that the Digital India initiative will end up initiating more infrastructural and software projects without transforming the underlying governmental processes. For example, the recently launched eBasta website creates a centralised online shop for publishers of educational materials to make books available for teachers to browse and select for their classes, and for students to download directly, against payment or otherwise. The website has been developed by the Centre for Development of Advanced Computing and DeitY. At the same time, the ministry of human resource development, which is responsible for matters related to public education, has already collaborated with the Central Institute of Educational Technology and the Homi Bhabha Centre for Science Education in TIFR to build a comprehensive platform for multi-media educational resources, the National Repository of Open Educational Resources. The initial plans of the DI initiative are yet to explicitly recognise that the key challenge is not in building new applications and websites, but in aligning existing efforts.

This mismatch, between what the Digital India initiative proposes to achieve and how it plans to achieve it, is further demonstrated in the 'e-Governance Policy Initiatives under Digital India' document. The compilation lists the key policies to govern the design and implementation of the Digital India programmes, but surprisingly fails to mention any policies, acts or pending bills approved or initiated by any previous government. This is remarkably counter-productive, as the existing policy frameworks, such as the Framework for Mobile Governance, the National Data Sharing and Accessibility Policy, and the Interoperability Framework for e-Governance, are suitably placed to complement the new policies around the use of free and open source software for e-governance systems, so as to ensure their transparency, interoperability and inclusive outreach. Several pending bills, like The National Identification Authority of India Bill, 2010, The Electronic Delivery of Services Bill, 2011, and The Privacy (Protection) Bill, 2013, are absolutely fundamental for comprehensive and secure implementation of the various programmes under the Digital India initiative.

The next year will complete a decade of development of national e-governance systems in India, since the launch of the National e-Governance Plan in 2006. Given this history of information systems sometimes partially implemented and sometimes working in isolation, a 'Phir Bhi Dil Hai Hindustani' approach to digitising India is a very pragmatic one. What we surely do not need is increased contradiction among e-governance systems. Nor do we need digital systems that centralise governmental power within one ministry on technical grounds, or that expose citizens to abuse of their digital identity and assets due to the lack of sufficient legal frameworks.

(Sumandro Chattapadhyay is research director, The Centre for Internet and Society. The views expressed are personal.)

FINANCIAL STATEMENTS OF 2013-14.pdf

by Prasad Krishna last modified Jul 28, 2015 01:11 AM

FINANCIAL STATEMENTS OF 2013-14.pdf — PDF document, 7173 kB (7345362 bytes)

Expert Committee Meetings

by Prasad Krishna last modified Aug 04, 2015 01:56 AM
In 2013 the Department of Biotechnology set up an Expert Committee to discuss the Human DNA Profiling Bill. The Expert Committee met four times with an additional meeting by a sub-committee set up by the Expert Committee. The Centre for Internet and Society was a member of the Committee. The zip file contains: Record Note of discussions of the Experts Committee Meeting held on 31st January 2013 at DBT, New Delhi, to discuss the potential privacy concerns on draft Human DNA Profiling Bill; Record Note of the 2nd discussion meeting of the Expert Committee held on 13th May 2013 in DBT to discuss the draft Human DNA Profiling Bill; Minutes of the 3rd meeting of the Expert Committee held on 25th November 2013 in DBT to discuss the draft Human DNA Profiling Bill; Minutes of the 4th meeting of the Expert Committee held on 10th November 2014 in DBT to discuss and finalize the draft Human DNA Profiling Bill; Record Note of discussions of the Experts Sub-Committee Meeting on Human DNA Profiling Bill held on 3rd September 2013 at CDFD, Hyderabad

Expert Committee Meetings.zip — ZIP archive, 2319 kB (2375322 bytes)

Role of Intermediaries in Countering Online Abuse

by Jyoti Panday last modified Aug 02, 2015 04:38 PM
The Internet can be a hostile space and protecting users from abuse without curtailing freedom of expression requires a balancing act on the part of online intermediaries.

This was published as two blog entries on the NALSAR Law Tech Blog. Part 1 can be accessed here and Part 2 here.


As platforms and services coalesce around user-generated content (UGC) and entrench themselves in the digital publishing universe, they are increasingly taking on the duties and responsibilities of protecting rights, including taking reasonable measures to restrict unlawful speech. Arguments around the role of intermediaries in tackling unlawful content usually centre on the issue of regulation: when is it feasible to regulate speech, and how best should this regulation be enforced?

Recently, Twitter found itself at the centre of such questions when an anonymous user of the platform, @LutyensInsider, began posting slanderous and sexually explicit comments about Swati Chaturvedi, a Delhi-based journalist. The online spat, which began in February last year, culminated in Swati filing an FIR against the anonymous user last week. Within hours of the FIR, the anonymous user deleted the tweets and went silent. Predictably, Twitter users hailed this as a much-needed deterrent to online harassment. Swati’s personal victory is worth celebrating; it is an encouragement to the many women bullied daily on the Internet, where harassment is rampant. However, while Swati might be well within her legal rights to counter slander, the rights and liabilities of private companies in such circumstances are often not as clear cut.

Should platforms like Twitter take on the mantle of deciding what speech is permissible? When and how should the limits on speech be drawn? Does this amount to private censorship? The answers are not easy and, as the recent Grand Chamber of the European Court of Human Rights (ECtHR) judgment in Delfi AS v. Estonia confirms, the role of UGC platforms in balancing user rights is an issue far from being settled. In its ruling, the ECtHR reasoned that, because of their role in facilitating expression, requiring online platforms “to take effective measures to limit the dissemination of hate speech and speech inciting violence” was not “private censorship”.

This is problematic because the decision moves the regime away from a framework that grants immunity from liability as long as platforms meet certain criteria and procedures. In other words, the ruling establishes strict liability for intermediaries in relation to manifestly illegal content, even if they have no knowledge of it. The 'obligation' placed on the intermediary does not grant them safe harbour and is not proportionate to the monitoring and blocking capacity thus necessitated. Consequently, platforms might be incentivized to err on the side of caution and restrict comments or confine speech, resulting in censorship. The ruling is especially worrying, as the standard of care placed on the intermediary does not recognize the different roles played by intermediaries in the detection and removal of unlawful content. Further, intermediary liability is its own legal regime and, at the same time, a subset of various legal issues that require an understanding of the variation in scenarios, mediums and technology, both globally and in India.

Law and Short of IT

Earlier this year, in a leaked memo, the Twitter CEO Dick Costolo took personal responsibility for his platform's chronic failure to deal with harassment and abuse. In Swati's case, Twitter did not intervene or take steps to address the harassment. If it had to, Twitter (India), like all online intermediaries, would be bound by the provisions established under Section 79 and the accompanying Rules of the Information Technology Act. These provisions outline the obligations and conditions that intermediaries must fulfill to claim immunity from liability for third party content. Under the regime, upon receiving actual knowledge of unlawful information on their platform, the intermediary must comply with the notice and takedown (NTD) procedure for blocking and removal of content.

Private complainants could invoke the NTD procedure, forcing intermediaries to act as adjudicators of an unlawful act, a role they are clearly ill-equipped to perform, especially when the content relates to political speech or alleged defamation or obscenity. The SC judgment in Shreya Singhal, addressing this issue, read down the provision (Section 79) by holding that a takedown notice can only be effected if the complainant secures a court order to support her allegation. Further, it was held that the scope of restrictions under the mechanism is limited to the specific categories identified under Article 19(2). Effectively, this means Twitter need not take down content in the absence of a court order.

Content Policy as Due Diligence

Another provision, Rule 3(2), prescribes a content policy which, prior to the Shreya Singhal judgment, was a criterion for administering takedowns. This content policy includes an exhaustive list of types of restricted expression, though worryingly, the terms included in it are not clearly defined and go beyond the reasonable restrictions envisioned under Article 19(2). Terms such as “grossly harmful”, “objectionable”, “harassing”, “disparaging” and “hateful” are not defined anywhere in the Rules, and are subjective and contestable, as alternate interpretations and standards could be offered for the same term. Further, this content policy is not applicable to content created by the intermediary.

Prior to the SC verdict in Shreya Singhal, actual knowledge could have been interpreted to mean that the intermediary was called upon to exercise its own judgement under sub-rule (4) to restrict impugned content in order to seek exemption from liability. While the liability that accrued from not complying with takedown requests under the content policy was clear, this is not the case anymore. By reading down Section 79(3)(b), the court has placed limits on the private censorship of intermediaries and the invisible censorship of opaque government takedown requests, as both must adhere to the boundaries set by Article 19(2). Following the SC judgment, intermediaries do not have to administer takedowns without a court order, thereby rendering this content policy redundant. As it stands, the content policy is an obligation that intermediaries must fulfill in order to be exempted from liability for UGC, and this due diligence is limited to publishing rules and regulations, terms and conditions or a user agreement informing users of the restrictions on content. The penalties for not publishing this content policy should be clarified.

Further, having been informed of what is permissible, users agree to comply with the policy outlined by signing up to and using these platforms and services. The requirement of publishing a content policy as due diligence is unnecessary, given that mandating such ‘standard’ terms of use negates the difference between types of intermediaries, which accrue different kinds of liability. It also places an extraordinary power of censorship in the hands of the intermediary, which could easily stifle freedom of speech online. Such heavy-handed regulation could make it impossible to publish critical views about anything without the risk of being summarily censored.

Twitter may have complied with its duties by publishing the content policy, though the obligation does not seem to be an effective deterrent. Strong safe harbour provisions for intermediaries are a crucial element in the promotion and protection of the right to freedom of expression online. Absolving platforms of responsibility for UGC as long as they publish a content policy that is vague and subjective is the very reason why India’s IT Rules are, in fact, in urgent need of improvement.

Size Matters

The standards for blocking, reporting and responding to abuse vary across different categories of platforms. For example, it may be easier to counter trolls and abuse on blogs or forums where the owner or an administrator is monitoring comments and UGC. Platforms usually outline monitoring and reporting policies and procedures, including the recourse available to victims and the action to be taken against violators. However, these measures are not always effective in curbing abuse, as it is possible for users to create new accounts under different usernames. For example, in Swati’s case the anonymous user behind the @LutyensInsider account changed their handle to @gregoryzackim and @gzackim before deleting all tweets. In this case, perhaps the fear of impending criminal charges was enough to silence the anonymous user, which may not always be the case.

Tackling the Trolls

Most large intermediaries have privacy settings which restrict the audience for user posts and prevent strangers from contacting them, as a general measure against online harassment. Platforms also publish monitoring policies outlining the procedures and mechanisms for users to register complaints or report abuse. Often, reporting and blocking mechanisms rely on community standards and users reporting unlawful content. Last week Twitter announced a new feature allowing lists of blocked users to be shared between users. An improvement on the existing blocking mechanism, the feature is aimed at making the service safer for people facing similar issues; still, such efforts may have their limitations.

The mechanisms follow a one-size-fits-all policy. Such community-driven efforts do not address concerns of differences in opinion and subjectivity. Swati, in defending her actions, stressed the “coarse discourse” prevalent on social media, though as this article points out, she might herself be assumed guilty of using offensive and abusive language. Subjectivity and the many interpretations of the same opinion can pave the way for many taking offense online. Earlier this month, Nikhil Wagle’s tweets criticising Prime Minister Narendra Modi as a “pervert” were interpreted as “abusive”, “offensive” and “spreading religious disharmony”. While platforms are within their rights to establish policies for dealing with issues faced by users, there is a real danger of them doing so for political reasons and based on popularity measures, which may chill free speech. When many get behind a particular interpretation of an opinion, lawful speech may also be stifled, as Sreemoyee Kundu found out: a victim of online abuse, she had her account blocked by Facebook owing to multiple reports from a “faceless fanatical mob”. Allowing users to set standards of permissible speech is an improvement, though it runs the risk of mob justice, and platforms need to be vigilant in applying such standards.

While it may be in the interest of platforms to keep a hands-off approach to community policies, certain kinds of content may necessitate intervention by the intermediary. There has been an increase in private companies modifying their content policies to place reasonable restrictions on certain hateful behaviour in order to protect vulnerable or marginalised voices. Twitter's and Reddit's policy changes in addressing revenge porn are reflective of a growing understanding amongst stakeholders that, in order to promote the free expression of ideas, recognition and protection of certain rights on the Internet may be necessary. However, any approach to regulating user content must assess the effect of policy decisions on user rights. Google's stand on tackling revenge porn may be laudable, though its decision to push down 'piracy' sites in its search results could be seen to adversely impact the choice that users have. Terms of service implemented with subjectivity and a lack of transparency can and do lead to private censorship.

The Way Forward

Harassment is damaging because of the feeling of powerlessness it invokes in victims, and online intermediaries represent new forms of power through which users negotiate and manage their online identity. Content restriction policies and practices must address this power imbalance by adopting baseline safeguards and best practices. It is only fair that, based on principles of equality and justice, intermediaries be held responsible for the damage caused to users due to the wrongdoings of other users, or when they fail to carry out their operations and services as prescribed by the law. However, in its present state, the intermediary liability regime in India is not sufficient to deal with online harassment and needs to evolve into a more nuanced form of governance.

Any liability framework must evolve bearing in mind the slippery slope of overbroad regulation and differing standards of community responsibility. A balanced framework would therefore need to include elements of both targeted regulation and soft forms of governance, as liability regimes need to balance fundamental human rights and the interests of private companies. Often, achieving this balance is problematic, given that these companies are expected to be adjudicators and may also be the target of the breach of rights, as was the case in Delfi v Estonia. Global frameworks such as the Manila Principles can be a way forward in developing effective mechanisms. Content restriction practices should always adopt the least restrictive means, distinguishing between the classes of intermediary. They must evolve considering the proportionality of the harm, the nature of the content and the impact on affected users, including the proximity of the affected party to the content uploader.

Further, intermediaries and governments should communicate a clear mechanism for the review and appeal of restriction decisions. Content restriction policies should incorporate an effective right to be heard. In exceptional circumstances when this is not possible, a post facto review of the restriction order and its implementation must take place as soon as practicable. Restrictions on unlawful content imposed for a limited duration or within a specific geography must not extend beyond these limits, and a periodic review should take place to ensure the validity of the restriction. Regular, systematic review of the rules and guidelines governing intermediary liability will go a long way in ensuring that such frameworks are not overly burdensome and remain effective.

Policy Paper on Surveillance in India

by Vipul Kharbanda last modified Aug 03, 2015 03:27 PM
This policy brief analyses the different laws regulating surveillance at the State and Central level in India and calls out ways in which their provisions are not harmonized. The brief then provides recommendations for the harmonization of surveillance law in India.

Introduction

The current legal framework for surveillance in India is a legacy of colonial-era laws drafted by the British. Surveillance activities by the police are an everyday phenomenon and are included as part of their duties in the various police manuals of the different states. It will become clear from the analysis of the laws and regulations below that, whilst the police manuals cover physical surveillance in some detail, they do not discuss the interception of telephone or internet traffic. These issues are dealt with separately under the Indian Telegraph Act and the Information Technology Act and the Rules made thereunder, which are applicable to all security agencies and not just the police. Since Indian law deals with different aspects of surveillance under different legislations, the regulations dealing with this issue do not have uniform standards. This paper therefore argues that the need of the hour is a single legislation which deals with all aspects of surveillance and interception in one place, so that there is uniformity in the laws and practices of surveillance across the country.

Legal Regime

India does not have one integrated policy on surveillance, and law enforcement and security agencies have to rely upon a number of different sectoral legislations to carry out their surveillance activities. These include:

1. Police Surveillance under Police Acts and Model Police Manual

Article 246(3) of the Constitution of India, read with Entry 2, List II, of the VIIth Schedule, empowers the States to legislate in matters relating to the police. This means that the police force is under the control of the state government rather than the Central government. Consequently, States have their own Police Acts to govern the conduct of the police force. Under the authority of these individual State Police Acts, rules are formulated for the day-to-day running of the police. These rules are generally found in the Police Manuals of the individual states. Since a discussion of the Police Manual of each State with its small deviations is beyond the scope of this study, we will discuss the Model Police Manual issued by the Bureau of Police Research and Development.

As per the Model Police Manual, “surveillance and checking of bad characters” is considered to be one of the duties of the police force mentioned in the “Inventory of Police Duties, Functions and Jobs”.[1] Surveillance is also one of the main methods utilized by the police for preventing law and order situations and crimes.[2] As per the Manual, the nature and degree of surveillance depend on the circumstances and the persons on whom surveillance is mounted, and it is only in very rare cases and on rare occasions that round-the-clock surveillance becomes necessary for a few days or weeks.[3]

Surveillance of History Sheeted Persons: Beat Police Officers should be fully conversant with the movements or changes of residence of all persons for whom history sheets of any category are maintained. They are required to promptly report the exact information to the Station House Officer (SHO), who makes entries in the relevant registers. The SHO, on the basis of this information, reports by the quickest means to the SHO in whose jurisdiction the concerned person or persons are going to reside or pass through. When a history-sheeted person is likely to travel by the Railway, intimation of his movements should also be given to the nearest Railway Police Station.[4] It must be noted that the term “history sheet” or “history sheeter” is not defined in the Indian Penal Code, 1860, in most of the State Police Acts, or in the Model Police Manual, but it is generally understood, and defined in the Oxford English Dictionary, as referring to persons with a criminal record.

Surveillance of “Bad Characters”: Keeping tabs on and getting information regarding “bad characters” is part of the duties of a beat constable. In the case of a “bad character” who is known to have gone to another State, the SHO of the station in the other State is informed using the quickest means possible, followed by the sending of a BC Roll 'A' directly to that SHO.[5] When a “bad character” absents himself or goes out of view, whether wanted in a case or not, the information is required to be disseminated to the police stations having jurisdiction over the places likely to be visited by him, and also to the neighbouring stations, whether within the State or outside. If such a person is traced and intimation is received of his arrest or otherwise, arrangements to get a complete and true picture of his activities are required to be made and the concerned record updated.[6]

The Police Manual clarifies the term “bad characters” to mean “offenders, criminals, or members of organised crime gangs or syndicates or those who foment or incite caste, communal violence, for which history sheets are maintained and require surveillance.”[7] A fascinating glimpse into the history of persons who were considered “bad characters” is contained in the article by Surjan Das & Basudeb Chattopadhyay in EPW,[8] wherein they bring out the fact that in colonial times a number of the stereotypes propagated by the British crept into police work as well. It appears that one did not have to be convicted to be a bad character: people with a dark complexion, a strong build, broad chins, deep-set eyes, a broad forehead, short hair, a scanty or goatee beard, marks on the face, a moustache, a blunt nose, white teeth and a monkey-face would normally fit the description of “bad characters”.

Surveillance of Suspicious Strangers: When a stranger of suspicious conduct or demeanour is found within the limits of a police station, the SHO is required to forward a BC Roll to the Police Station in whose jurisdiction the stranger claims to have resided. The receipt of such a roll is required to be immediately acknowledged and replied to. If the suspicious stranger states that he resides in another State, a BC Roll is sent directly to the SHO of the station in the other State.[9] The manual, however, does not define who a “suspicious stranger” is or how to identify one.

Release of Foreign Prisoners: Before a foreign prisoner (whose fingerprints are taken for record) is released, the Superintendent of Police of the district where the case was registered is required to send a report to the Director, I.B., through the Criminal Investigation Department, informing them of the route and conveyance by which such person is likely to leave the country.[10]

Shadowing of convicts and dangerous persons: The Police Manual contains the following rules for shadowing the convicts on their release from jails:

(a) Dangerous convicts who are not likely to return to their native places are required to be shadowed. When a convict is to be shadowed, the fact is entered in the FP register at the DCRB and communicated to the Superintendent of Jails.

(b) The Police Officer deputed for shadowing an ex-convict is required to enter the fact in the notebook. The Police Officers are furnished with a challan indicating the particulars of the ex-convict marked for shadowing. This form is returned by the SHO of the area where the ex-convict takes up his residence, or passes out of view, to the DCRB/OCRS where the jail is situated, where it is put on record for further reference and action, if any. Even though the subjects being shadowed are kept in view, no restraint is to be put upon their movements on any account.[11]

Apart from the provisions discussed above, there are also provisions in the Police Manual regarding surveillance of convicts who have been released on medical grounds as well as surveillance of ex-convicts who are required to report their movements to the police as per the provisions of section 356 of the Code of Criminal Procedure.[12]

As noted above, the various police manuals are issued under the State Police Acts and govern the police forces of the specific states. The fact that each state has its own police manual itself leads to non-uniformity in the standards and practices of surveillance. But it is not only legislation at the State level that leads to this problem; even legislation at the Central level, applicable to the country as a whole, sets differing standards for different aspects of surveillance. In order to explore this further, we shall now discuss the central legislations dealing with surveillance.

2. The Indian Telegraph Act, 1885

Section 5 of the Indian Telegraph Act, 1885, empowers the Central Government and State Governments of India to order the interception of messages in two circumstances: (1) on the occurrence of any public emergency or in the interest of public safety, and (2) if it is considered necessary or expedient to do so in the interest of:[13]

  • the sovereignty and integrity of India; or
  • the security of the State; or
  • friendly relations with foreign states; or
  • public order; or
  • for preventing incitement to the commission of an offence.

The Supreme Court of India has clarified the meaning of the terms 'public emergency' and 'public safety' as follows[14]:

"Public emergency would mean the prevailing of a sudden condition or state of affairs affecting the people at large calling for immediate action. The expression 'public safety' means the state or condition of freedom from danger or risk for the people at large. When either of these two conditions are not in existence, the Central Government or a State Government or the authorised officer cannot resort to telephone tapping even though there is satisfaction that it is necessary or expedient so to do in the interests of it sovereignty and integrity of India etc. In other words, even if the Central Government is satisfied that it is necessary or expedient so to do in the interest of the sovereignty and integrity of India or the security of the State or friendly relations with sovereign States or in public order or for preventing incitement to the commission of an offence, it cannot intercept the message, or resort to telephone tapping unless a public emergency has occurred or the interest of public safety or the existence of the interest of public safety requires. Neither the occurrence of public emergency nor the interest of public safety are secretive conditions or situations. Either of the situations would be apparent to a reasonable person."

In 2007, Rule 419A was added to the Indian Telegraph Rules, 1951, framed under the Indian Telegraph Act, providing that orders for the interception of communications should only be issued by the Secretary in the Ministry of Home Affairs. However, it provided that in unavoidable circumstances such an order could also be issued by an officer, not below the rank of a Joint Secretary to the Government of India, who has been authorised by the Union Home Secretary or the State Home Secretary.[15]

According to Rule 419A, the interception of any message or class of messages is to be carried out with the prior approval of the Head or the second senior-most officer of the authorised security agency at the Central level, and at the State level with the approval of officers authorised in this behalf not below the rank of Inspector General of Police, in the following emergent cases:

  • in remote areas, where obtaining of prior directions for interception of messages or class of messages is not feasible; or
  • for operational reasons, where obtaining of prior directions for interception of message or class of messages is not feasible;

however, the concerned competent authority is required to be informed of such interceptions by the approving authority within three working days and such interceptions are to be confirmed by the competent authority within a period of seven working days. If the confirmation from the competent authority is not received within the stipulated seven days, such interception should cease and the same message or class of messages should not be intercepted thereafter without the prior approval of the Union Home Secretary or the State Home Secretary.[16]

Rule 419A also tries to incorporate certain safeguards to curb the risk of unrestricted surveillance by the law enforcement authorities which include the following:

  • Any order for interception issued by the competent authority should contain reasons for such direction and a copy of such an order should be forwarded to the Review Committee within a period of seven working days;[17]
  • Directions for interception should be issued only when it is not possible to acquire the information by any other reasonable means;[18]
  • The directed interception should include the interception of any message or class of messages that are sent to or from any person or class of persons, or relating to any particular subject, whether such message or class of messages are received with one or more addresses specified in the order, being an address or addresses likely to be used for the transmission of communications from or to one particular person specified or described in the order, or one particular set of premises specified or described in the order;[19]
  • The interception directions should specify the name and designation of the officer or the authority to whom the intercepted message or class of messages is to be disclosed to;[20]
  • The directions for interception would remain in force for sixty days, unless revoked earlier, and may be renewed but the same should not remain in force beyond a total period of one hundred and eighty days;[21]
  • The directions for interception should be conveyed to the designated officers of the licensee(s) in writing by an officer not below the rank of Superintendent of Police or Additional Superintendent of Police or the officer of the equivalent rank;[22]
  • The officer authorized to intercept any message or class of messages should maintain proper records mentioning therein, the intercepted message or class of messages, the particulars of persons whose message has been intercepted, the name and other particulars of the officer or the authority to whom the intercepted message or class of messages has been disclosed, etc.;[23]
  • All the requisitioning security agencies should designate one or more nodal officers not below the rank of Superintendent of Police or the officer of the equivalent rank to authenticate and send the requisitions for interception to the designated officers of the concerned service providers to be delivered by an officer not below the rank of Sub-Inspector of Police;[24]
  • Records pertaining to directions for interception and of intercepted messages should be destroyed by the competent authority and the authorized security and Law Enforcement Agencies every six months unless these are, or are likely to be, required for functional requirements;[25]

According to Rule 419A, service providers that are required by law enforcement to intercept communications must comply with the following[26]:

  • Service providers should designate two senior executives of the company in every licensed service area/State/Union Territory as the nodal officers to receive and handle such requisitions for interception;[27]
  • The designated nodal officers of the service providers should issue acknowledgment letters to the concerned security and Law Enforcement Agency within two hours on receipt of intimations for interception;[28]
  • The system of designated nodal officers for communicating and receiving the requisitions for interceptions should also be followed in emergent cases/unavoidable cases where prior approval of the competent authority has not been obtained;[29]
  • The designated nodal officers of the service providers should forward every fifteen days a list of interception authorizations received by them during the preceding fortnight to the nodal officers of the security and Law Enforcement Agencies for confirmation of the authenticity of such authorizations;[30]
  • Service providers are required to put in place adequate and effective internal checks to ensure that unauthorized interception of messages does not take place, that extreme secrecy is maintained and that utmost care and precaution is taken with regards to the interception of messages;[31]
  • Service providers are held responsible for the actions of their employees. In the case of an established violation of license conditions pertaining to the maintenance of secrecy and confidentiality of information and unauthorized interception of communication, action shall be taken against service providers as per the provisions of the Indian Telegraph Act, and this shall not only include a fine, but also suspension or revocation of their license;[32]
  • Service providers should destroy records pertaining to directions for the interception of messages within two months of discontinuance of the interception of such messages and in doing so they should maintain extreme secrecy.[33]

Review Committee

Rule 419A of the Indian Telegraph Rules requires the establishment of a Review Committee by the Central Government and the State Government, as the case may be, to review directions for the interception of communications, as per the following conditions:[34]

(1) The Review Committee to be constituted by the Central Government shall consist of the following members, namely:

(a) Cabinet Secretary - Chairman

(b) Secretary to the Government of India in charge, Legal Affairs - Member

(c) Secretary to the Government of India, Department of Telecommunications – Member

(2) The Review Committee to be constituted by a State Government shall consist of the following members, namely:

(a) Chief Secretary – Chairman

(b) Secretary Law/Legal Remembrancer in charge, Legal Affairs – Member

(c) Secretary to the State Government (other than the Home Secretary) – Member

(3) The Review Committee meets at least once in two months and records its findings on whether the issued interception directions are in accordance with the provisions of sub-section (2) of Section 5 of the Indian Telegraph Act. When the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above, it may set aside the directions and order the destruction of the copies of the intercepted message or class of messages.[35]

It must be noted that the Unlawful Activities (Prevention) Act, 1967, (which is currently used against most acts of urban terrorism) also allows for the interception of communications but the procedures and safeguards are supposed to be the same as under the Indian Telegraph Act and the Information Technology Act.[36]

3. Telecom Licenses

The telecom sector in India has seen immense activity in the last two decades, ever since it was opened up to private competition. These last twenty years have seen a lot of turmoil and have offered a tremendous learning opportunity for private players as well as the governmental bodies regulating the sector. Currently, any entity wishing to get a telecom license is offered a UL (Unified License), which contains terms and conditions for all the services that a licensee may choose to offer. However, there were a large number of other licenses before the current regime, and since licenses have a long phase-out, we have tried to cover what we believe are the four most important licenses issued to telecom operators, starting with the CMTS License:

Cellular Mobile Telephony Services (CMTS) License

In terms of the National Telecom Policy (NTP) 1994, the first phase of liberalization in mobile telephone services started with the issue of 8 licenses for Cellular Mobile Telephony Services (CMTS) in the 4 metro cities of Delhi, Mumbai, Calcutta and Chennai to 8 private companies in November 1994. Subsequently, 34 licenses for 18 Territorial Telecom Circles were issued to 14 private companies during 1995 to 1998. During this period a maximum of two licenses were granted for CMTS in each service area, and these licensees were called the 1st and 2nd cellular licensees.[37] Consequent upon the announcement of guidelines for Unified Access (Basic & Cellular) Services licenses on 11.11.2003, some of the CMTS operators were permitted to migrate from the CMTS License to the Unified Access Service License (UASL); no new CMTS or Basic service licenses have been awarded since the UASL guidelines were issued.

The important provisions regarding surveillance in the CMTS License are listed below:

Facilities for Interception: The CMTS License requires the Licensee to provide necessary facilities to the designated authorities for interception of the messages passing through its network.[38]

Monitoring of Telecom Traffic: The designated person of the Central/State Government, as conveyed to the Licensor from time to time, in addition to the Licensor or its nominee, has the right to monitor the telecommunication traffic in every MSC or any other technically feasible point in the network set up by the licensee. The Licensee is required to make arrangements for the monitoring of simultaneous calls by Government security agencies. The hardware at the licensee’s end and the software required for monitoring of calls shall be engineered, provided/installed and maintained by the Licensee at the licensee’s cost. In case the security agencies intend to locate the equipment at the licensee’s premises for facilitating monitoring, the licensee is required to extend all support in this regard, including space and entry of the authorised security personnel. The interface requirements as well as the features and facilities defined by the Licensor are to be implemented by the licensee for both data and speech. The Licensee is also required to ensure suitable redundancy in the complete chain of monitoring equipment for trouble-free operation of the monitoring of at least 210 simultaneous calls.[39]

Monitoring Records to be maintained: Along with the monitored call, the following records are to be made available:

  • Called/calling party mobile/PSTN numbers.
  • Time/date and duration of interception.
  • Location of target subscribers. Cell ID should be provided for location of the target subscriber. However, Licensor may issue directions from time to time on the precision of location, based on technological developments and integration of Global Positioning System (GPS) which shall be binding on the LICENSEE.
  • Telephone numbers if any call-forwarding feature has been invoked by target subscriber.
  • Data records for even failed call attempts.
  • CDR (Call Data Record) of Roaming Subscriber.

The Licensee is required to provide the call data records of all the specified calls handled by the system at specified periodicity, as and when required by the security agencies.[40]

Protection of Privacy: It is the responsibility of the Licensee to ensure the protection of the privacy of communication and to ensure that unauthorised interception of messages does not take place.[41]

License Agreement for Provision of Internet Services (ISP License)

Internet services were launched in India on 15th August, 1995 by Videsh Sanchar Nigam Limited. In November, 1998, the Government opened up the sector for providing Internet services by private operators. The major provisions dealing with surveillance contained in the ISP License are given below:

Authorization for monitoring: Monitoring shall only be by the authorization of the Union Home Secretary or Home Secretaries of the States/Union Territories.[42]

Access to subscriber list by authorized intelligence agencies and licensor: The complete and up-to-date list of subscribers will be made available by the ISP on a password-protected website accessible to authorized intelligence agencies.[43] Information such as customer name, IP address, bandwidth provided, address of installation, date of installation, contact number and email of leased line customers shall be included on the website.[44] The licensor or its representatives will also have access to the database relating to the subscribers of the ISP, which is to be available at any instant.[45]

Right to monitor by the central/state government: The designated person of the central/state government or the licensor or nominee will have the right to monitor telecommunications traffic in every node or any other technically feasible point in the network. To facilitate this, the ISP must make arrangements for the monitoring of simultaneous calls by the Government or its security agencies.[46]

Right of DoT to monitor: DoT will have the ability to monitor customers who generate high traffic value and verify specified user identities on a monthly basis.[47]

Provision of mirror images: Mirror images of the remote access information should be made available online for monitoring purposes.[48] A safeguard provided for in the license is that remote access to networks is only allowed in areas approved by the DOT in consultation with the Security Agencies.[49]

Provision of information stored on dedicated transmission link: The ISP will provide the login password to DOT and authorized Government agencies on a monthly basis for access to information stored on any dedicated transmission link from ISP node to subscriber premises.[50]

Provision of subscriber identity and geographic location: The ISP must provide the traceable identity and geographic location of their subscribers, and if the subscriber is roaming – the ISP should try to find traceable identities of roaming subscribers from foreign companies.[51]

Facilities for monitoring: The ISP must provide the necessary facilities for continuous monitoring of the system as required by the licensor or its authorized representatives.[52]

Facilities for tracing: The ISP will also provide facilities for the tracing of nuisance, obnoxious or malicious calls, messages, or communications. These facilities are to be provided specifically to authorized officers of the Government of India (police, customs, excise, intelligence department) when the information is required for investigations or detection of crimes and in the interest of national security.[53]

Facilities and equipment to be specified by government: The types of interception equipment to be used will be specified by the government of India.[54] This includes the installation of necessary infrastructure in the service area with respect to the Internet Telephony Services offered by the ISP, including the processing, routing, directing, managing and authenticating of internet calls, as well as the generation of the Call Detail Record, IP address, called numbers, date, duration, time, and charge of the internet telephony calls.[55]

Facilities for surveillance of mobile terminal activity: The ISP must also provide the government facilities to carry out surveillance of Mobile Terminal activity within a specified area whenever requested.[56]

Facilities for monitoring international gateway: As per the requirements of security agencies, every international gateway location having a capacity of 2 Mbps or more will be equipped with a monitoring center capable of monitoring internet telephony traffic.[57]

Facilities for monitoring in the premises of the ISP: Every office must be at least 10x10, with adequate power and air conditioning, and accessible only to the monitoring agencies. One local exclusive telephone line must be provided, and a central monitoring center must be provided if the ISP has multiple nodal points.[58]

Protection of privacy: The ISP has a responsibility to protect the privacy of the communications transferred over its network. This includes securing the information, protecting against unauthorized interception and unauthorized disclosure, ensuring the confidentiality of information, and protecting against over-disclosure of information, except when consent has been given.[59]

Log of users: Each ISP must maintain an up to date log of all users connected and the service that they are using (mail, telnet, http, etc). The ISPs must also log every outward login or telnet through their computers. These logs as well as copies of all the packets must be made available in real time to the Telecom Authority.[60]

Log of internet leased line customers: A record of each internet leased line customer should be kept along with details of connectivity, and reasons for taking the link should be kept and made readily available for inspection.[61]

Log of remote access activities: The ISP will also maintain a complete audit trail of the remote access activities that pertain to the network for at least six months. This information must be available on request for any agency authorized by the licensor.[62]

Monitoring requirements: The ISP must make arrangements for the monitoring of the telecommunication traffic in every MSC exchange or any other technically feasible point, covering at least 210 calls simultaneously.[63]

Records to be made available:

  • CDRs: When required by security agencies, the ISP must make available records of i) called/calling party mobile/PSTN numbers, ii) time/date and duration of calls, iii) location of target subscribers and, from time to time, their precise location, iv) telephone numbers and, if any call-forwarding feature has been invoked, records thereof, v) data records for failed call attempts, and vi) CDRs of roaming subscribers.[64]
  • Bulk connections: On a monthly basis, and from time to time, information with respect to bulk connections shall be forwarded to DoT, the licensor, and security agencies.[65]
  • Record of calls beyond specified threshold: Calls should be checked and analyzed, and a record maintained of all outgoing calls made by customers, both during the day and night, that exceed a set threshold of minutes. A list of suspected subscribers should be created by the ISP and communicated to DoT and to any officer authorized by the licensor at any point of time.[66]
  • Record of subscribers with calling line identification restrictions: Furthermore, a list of calling line identification restriction subscribers with their complete address and details should be created on a password protected website that is available to authorized government agencies.[67]

Unified Access Services (UAS) License

Unified Access Services operators provide services of collection, carriage, transmission and delivery of voice and/or non-voice messages within their area of operation, over the Licensee’s network, by deploying circuit and/or packet switched equipment. They may also provide Voice Mail, Audiotex services, Video Conferencing, Videotex, E-Mail and Closed User Group (CUG) as Value Added Services over their networks to subscribers falling within their service area on a non-discriminatory basis.

The terms of providing the services are regulated under the Unified Access Service License (UASL) which also contains provisions regarding surveillance/interception. These provisions are regularly used by the state agencies to intercept telephonic and data traffic of subscribers. The relevant terms of the UASL dealing with surveillance and interception are discussed below:

Confidentiality of Information: The Licensee cannot employ bulk encryption equipment in its network. Any encryption equipment connected to the Licensee’s network for specific requirements has to have the prior evaluation and approval of the Licensor or an officer specially designated for the purpose. However, the Licensee has the responsibility to ensure protection of the privacy of communication and to ensure that unauthorised interception of messages does not take place.[68] The Licensee shall take necessary steps to ensure that the Licensee and any person(s) acting on its behalf observe confidentiality of customer information.[69]

Responsibility of the Licensee: The Licensee has to take all necessary steps to safeguard the privacy and confidentiality of any information about a third party and its business to whom it provides the service, and from whom it has acquired such information by virtue of the service provided, and shall use its best endeavors to secure that:

  • No person acting on behalf of the Licensee or the Licensee divulges or uses any such information except as may be necessary in the course of providing such service to the third party; and
  • No such person seeks such information other than is necessary for the purpose of providing service to the third party.[70]

Provision of monitoring facilities: Requisite monitoring facilities/equipment for each type of system used shall be provided by the service provider at its own cost, for monitoring as and when required by the licensor.[71] The license also requires the Licensee to provide the necessary facilities to the designated authorities for interception of the messages passing through its network.[72] The licensor in this case is the President of India, as the head of State; therefore, all references to the licensor can be taken to refer to the Government of India, which usually acts through the Department of Telecommunications (DoT). For monitoring traffic, the licensee company has to provide access to its network and other facilities, as well as to its books of accounts, to the security agencies.[73]

Monitoring by Designated Person: The designated person of the Central/State Government, as conveyed to the Licensor from time to time, in addition to the Licensor or its nominee, has the right to monitor the telecommunication traffic in every MSC/Exchange/MGC/MG or any other technically feasible point in the network set up by the Licensee. The Licensee is required to make arrangements for the monitoring of simultaneous calls by Government security agencies. The hardware at the Licensee’s end and the software required for monitoring of calls shall be engineered, provided/installed and maintained by the Licensee at the Licensee’s cost. However, the respective Government instrumentality bears the cost of the user-end hardware and the leased line circuits from the MSC/Exchange/MGC/MG to the monitoring centres, to be located as per their choice in their premises or in the premises of the Licensee. In case the security agencies intend to locate the equipment at the Licensee’s premises for facilitating monitoring, the Licensee should extend all support in this regard, including space and entry of the authorized security personnel. The Licensee is required to implement the interface requirements as well as the features and facilities defined by the Licensor for both data and speech. The Licensee is to ensure suitable redundancy in the complete chain of monitoring equipment for trouble-free operation of the monitoring of at least 210 simultaneous calls for seven security agencies.[74]

Monitoring Records to be maintained: Along with the monitored call, the following records are to be made available:

  • Called/calling party mobile/PSTN numbers.
  • Time/date and duration of interception.
  • Location of target subscribers. Cell ID should be provided for location of the target subscriber. However, Licensor may issue directions from time to time on the precision of location, based on technological developments and integration of Global Positioning System (GPS) which shall be binding on the LICENSEE.
  • Telephone numbers if any call-forwarding feature has been invoked by target subscriber.
  • Data records for even failed call attempts.
  • CDR (Call Data Record) of Roaming Subscriber.

The Licensee is required to provide the call data records of all the specified calls handled by the system at specified periodicity, as and when required by the security agencies.[75]

List of Subscribers: The complete list of subscribers shall be made available by the Licensee on its website (with password-controlled access), so that authorized Intelligence Agencies are able to obtain the subscriber list at any time, as per their convenience, with the help of the password.[76] The Licensor or its representative(s) have access to the database relating to the subscribers of the Licensee. The Licensee shall also update the list of its subscribers and make it available to the Licensor at regular intervals. The Licensee shall make available, at any prescribed instant, to the Licensor or its authorized representative, details of the subscribers using the service.[77] The Licensee must provide the traceable identity of its subscribers,[78] and should be able to provide the geographical location (BTS location) of any subscriber at a given point of time, upon request by the Licensor or any other agency authorized by it.[79]
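
As an illustration of what "password controlled access" to a subscriber list might look like in practice, the minimal sketch below (using the Flask web framework and HTTP Basic authentication) serves the list only to callers presenting agreed credentials. The endpoint path, the credentials and the data layout are hypothetical; the UASL does not prescribe any particular implementation.

    # Illustrative sketch only: the endpoint, credentials and data layout are
    # hypothetical; the UASL does not prescribe any particular implementation.
    from flask import Flask, Response, jsonify, request

    app = Flask(__name__)

    # In practice credentials would be provisioned and stored securely, not hard-coded.
    AUTHORIZED = {"agency_user": "agency_password"}

    SUBSCRIBERS = [
        {"name": "Example Subscriber", "address": "Example Address", "number": "9800000000"},
    ]

    @app.route("/subscribers")
    def subscriber_list():
        auth = request.authorization
        if not auth or AUTHORIZED.get(auth.username) != auth.password:
            # Ask the client to authenticate using HTTP Basic Auth.
            return Response("Authentication required", 401,
                            {"WWW-Authenticate": 'Basic realm="subscriber-list"'})
        return jsonify(subscribers=SUBSCRIBERS)

    if __name__ == "__main__":
        app.run()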

CDRs for Large Number of Outgoing Calls: The call detail records of subscribers making a large number of outgoing calls, day and night, to various telephone numbers should be analyzed; normally, no incoming calls are observed in such cases. This can be done by running special programs for the purpose.[80] Although this provision does not itself say that it is limited to bulk subscribers (subscribers with more than 10 lines), it appears as a sub-clause of section 41.19, which deals with specific measures for bulk subscribers; it is therefore possible that the provision applies only to bulk subscribers and not to all subscribers.
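
As a rough sketch of the kind of "special program" this clause appears to contemplate, the example below flags subscribers whose outgoing minutes in a given set of call detail records exceed a configurable threshold while showing no incoming calls. The CDR field names and the threshold value are assumptions made for the example, not figures taken from the license.

    # Illustrative sketch only: CDR field names and the threshold are assumptions.
    from collections import defaultdict

    def flag_heavy_callers(cdrs, threshold_minutes=500):
        """Return subscribers whose outgoing minutes exceed the threshold
        and who have no incoming calls in the same set of records."""
        outgoing = defaultdict(float)     # subscriber -> total outgoing minutes
        has_incoming = defaultdict(bool)  # subscriber -> any incoming call seen

        for cdr in cdrs:
            subscriber = cdr["subscriber"]
            if cdr["direction"] == "outgoing":
                outgoing[subscriber] += cdr["duration_seconds"] / 60.0
            elif cdr["direction"] == "incoming":
                has_incoming[subscriber] = True

        return [s for s, minutes in outgoing.items()
                if minutes > threshold_minutes and not has_incoming[s]]

    # Example usage:
    # suspects = flag_heavy_callers(list_of_cdr_dicts, threshold_minutes=500)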

No Remote Access to Suppliers: Suppliers/manufacturers and affiliate(s) are not allowed, under any circumstances, remote access to the Lawful Interception System (LIS), Lawful Interception Monitoring (LIM), the call contents of the traffic, and any other such sensitive sector/data which the licensor may notify from time to time.[81] The Licensee is also not allowed to use the remote access facility for monitoring of content.[82] Further, a suitable technical device is required to be made available at the Indian end to the designated security agency/licensor, in which a mirror image of the remote access information is available online for monitoring purposes.[83]

Monitoring as per the Rules under the Telegraph Act: In order to maintain the privacy of voice and data, monitoring shall be in accordance with the rules made in this regard under the Indian Telegraph Act, 1885.[84] It is interesting to note that monitoring under the UASL is required to be as per the Rules prescribed under the Telegraph Act, but no mention is made of the Rules under the Information Technology Act.

Monitoring from Centralised Location: The Licensee has to ensure that necessary provision (hardware/ software) is available in its equipment for doing lawful interception and monitoring from a centralized location.[85]

Unified License (UL)

The National Telecom Policy - 2012 recognized the fact that the evolution from analog to digital technology has facilitated the conversion of voice, data and video to the digital form which are increasingly being rendered through single networks bringing about a convergence in networks, services and devices. It was therefore felt imperative to move towards convergence between various services, networks, platforms, technologies and overcome the incumbent segregation of licensing, registration and regulatory mechanisms in these areas. It was for this reason that the Government of India decided to move to the Unified License regime under which service providers could opt for all or any one or more of a number of different services.[86]

Provision of interception facilities by Licensee: The UL requires that the requisite monitoring/ interception facilities /equipment for each type of service, should be provided by the Licensee at its own cost for monitoring as per the requirement specified by the Licensor from time to time.[87] The Licensee is required to provide necessary facilities to the designated authorities of Central/State Government as conveyed by the Licensor from time to time for interception of the messages passing through its network, as per the provisions of the Indian Telegraph Act.[88]

Bulk encryption and unauthorized interception: The UL prohibits the Licensee from employing bulk encryption equipment in its network. Licensor or officers specially designated for the purpose are allowed to evaluate any encryption equipment connected to the Licensee’s network. However, it is the responsibility of the Licensee to ensure protection of privacy of communication and to ensure that unauthorized interception of messages does not take place.[89] The use of encryption by the subscriber shall be governed by the Government Policy/rules made under the Information Technology Act, 2000.[90]

Safeguarding of Privacy and Confidentiality: The Licensee shall take necessary steps to ensure that the Licensee and any person(s) acting on its behalf observe confidentiality of customer information.[91] Subject to terms and conditions of the license, the Licensee is required to take all necessary steps to safeguard the privacy and confidentiality of any information about a third party and its business to whom it provides services and from whom it has acquired such information by virtue of the service provided and shall use its best endeavors to secure that: a) No person acting on behalf of the Licensee or the Licensee divulges or uses any such information except as may be necessary in the course of providing such service; and b) No such person seeks such information other than is necessary for the purpose of providing service to the third party.

Provided that the above paragraph does not apply where: a) The information relates to a specific party and that party has consented in writing to such information being divulged or used, and such information is divulged or used in accordance with the terms of that consent; or b) The information is already open to the public and otherwise known.[92]

No Remote Access to Suppliers: Suppliers/manufacturers and affiliate(s) are not allowed, under any circumstances, remote access to the Lawful Interception System (LIS), Lawful Interception Monitoring (LIM), the call contents of the traffic, and any other such sensitive sector/data which the licensor may notify from time to time.[93] The Licensee is also not allowed to use the remote access facility for monitoring of content.[94] Further, a suitable technical device is required to be made available at the Indian end to the designated security agency/licensor, in which a mirror image of the remote access information is available online for monitoring purposes.[95]

Monitoring as per the Rules under the Telegraph Act: In order to maintain the privacy of voice and data, monitoring shall be in accordance with the rules made in this regard under the Indian Telegraph Act, 1885.[96] Just as in the UASL, monitoring under the UL is required to be as per the Rules prescribed under the Telegraph Act, but no mention is made of the Rules under the Information Technology Act.

Terms specific to various services

Since the UL License intends to cover all services under a single license, in addition to the general terms and conditions for interception, it also has terms for each specific service. We shall now discuss the terms for interception specific to each service offered under the Unified License.

Access Service: The designated person of the Central/ State Government, in addition to the Licensor or its nominee, shall have the right to monitor the telecommunication traffic in every MSC/ Exchange/ MGC/ MG/ Routers or any other technically feasible point in the network set up by the Licensee. The Licensee is required to make arrangement for monitoring simultaneous calls by Government security agencies. For establishing connectivity to Centralized Monitoring System, the Licensee at its own cost shall provide appropriately dimensioned hardware and bandwidth/dark fibre upto a designated point as required by Licensor from time to time. In case the security agencies intend to locate the equipment at Licensee’s premises for facilitating monitoring, the Licensee should extend all support in this regard including space and entry of the authorized security personnel.

The interface requirements as well as the features and facilities defined by the Licensor should be implemented by the Licensee for both data and speech. The Licensee should ensure suitable redundancy in the complete chain of Lawful Interception and Monitoring equipment for trouble-free operation of the monitoring of at least 480 simultaneous calls as per requirement, with at least 30 simultaneous calls for each of the designated security/law enforcement agencies. Each MSC of the Licensee in the service area shall have the capacity for provisioning of at least 3000 numbers for monitoring. Presently there are ten (10) designated security/law enforcement agencies. The above capacity provisions and the number of designated security/law enforcement agencies may be amended by the Licensor separately by issuing instructions at any time.

Along with the monitored call, the following records are to be made available:

  • Called/calling party mobile/PSTN numbers.
  • Time/date and duration of interception.
  • Location of target subscribers. Cell ID should be provided for location of the target subscriber. However, Licensor may issue directions from time to time on the precision of location, based on technological developments and integration of Global Positioning System (GPS) which shall be binding on the LICENSEE.
  • Telephone numbers if any call-forwarding feature has been invoked by target subscriber.
  • Data records for even failed call attempts.
  • CDR (Call Data Record) of Roaming Subscriber.

The Licensee is required to provide the call data records of all the specified calls handled by the system at specified periodicity, as and when required by the security agencies.[97]

The call detail records of those subscribers making a large number of outgoing calls, day and night, to various telephone numbers, with normally no incoming calls, are required to be analyzed by the Licensee. The service provider is required to run a special programme, devise an appropriate fraud management and prevention programme, and fix threshold levels of average per-day usage in minutes of the telephone connection; all telephone connections crossing the usage threshold are required to be checked for bona fide use. A record of these checks must be maintained, which may be verified by the Licensor at any time. The list/details of suspected subscribers should be communicated to the respective TERM Cell of DoT and to any other officer authorized by the Licensor from time to time.[98]

The Licensee shall provide location details of mobile customers as per the accuracy and time frame mentioned in the Unified License. It shall be a part of CDR in the form of longitude and latitude, besides the co-ordinate of the BTS, which is already one of the mandated fields of CDR. To start with, these details will be provided for specified mobile numbers. However, within a period of 3 years from effective date of the Unified License, location details shall be part of CDR for all mobile calls.[99]

Internet Service: The Licensee is required to maintain CDRs/IPDRs for Internet services, including Internet Telephony Service, for a minimum period of one year. The Licensee is also required to maintain log-in/log-out details of all subscribers for services provided, such as internet access, e-mail, Internet Telephony, IPTV etc. These logs are to be maintained for a minimum period of one year. For the purpose of interception and monitoring of traffic, copies of all the packets originating from or terminating at the Customer Premises Equipment (CPE) shall be made available to the Licensor/Security Agencies. Further, the list of Internet Leased Line (ILL) customers is to be placed on a password-protected website in the format prescribed in the Unified License.[100]
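
To make the one-year minimum retention requirement concrete, the sketch below lists archived log files that have aged past the retention window and could therefore be considered for deletion. The directory layout and the date-stamped filename convention are assumptions made for the example; the Unified License itself prescribes only the retention period.

    # Illustrative sketch only: the filename convention (YYYY-MM-DD.log) is an assumption.
    from datetime import date, timedelta
    from pathlib import Path

    MIN_RETENTION = timedelta(days=365)   # "minimum period of one year"

    def deletable_logs(log_dir, today=None):
        """List archived log files older than the minimum retention period."""
        today = today or date.today()
        old_files = []
        for path in Path(log_dir).glob("*.log"):
            try:
                file_date = date.fromisoformat(path.stem)   # e.g. "2014-06-01.log"
            except ValueError:
                continue   # skip files that do not follow the naming convention
            if today - file_date > MIN_RETENTION:
                old_files.append(path)
        return old_files

    # Example usage:
    # for f in deletable_logs("/var/log/ipdr"):
    #     print("eligible for deletion:", f)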

Lawful Interception and Monitoring (LIM) systems of requisite capacities are to be set up by the Licensees for Internet traffic including Internet telephony traffic through their Internet gateways and /or Internet nodes at their own cost, as per the requirement of the security agencies/Licensor prescribed from time to time. The cost of maintenance of the monitoring equipment and infrastructure at the monitoring centre located at the premises of the licensee shall be borne by the Licensee. In case the Licensee obtains Access spectrum for providing Internet Service / Broadband Wireless Access using the Access Spectrum, the Licensee shall install the required Lawful Interception and Monitoring systems of requisite capacities prior to commencement of service. The Licensee, while providing downstream Internet bandwidth to an Internet Service provider is also required to ensure that all the traffic of downstream ISP passing through the Licensee’s network can be monitored in the network of the Licensee. However, for nodes of Licensee having upstream bandwidth from multiple service providers, the Licensee may be mandated to install LIM/LIS at these nodes, as per the requirement of security agencies. In such cases, upstream service providers may not be required to monitor this bandwidth.[101]

In case the Licensee has multiple nodes/points of presence and has the capability to monitor the traffic in all the routers/switches from a central location, the Licensor may agree to the traffic being monitored from the said central monitoring location, provided that the Licensee is able to demonstrate to the Licensor/Security Agencies that all routers/switches are accessible from the central monitoring location. Moreover, the Licensee would have to inform the Licensor of every change that takes place in its topology/configuration, and ensure that no such change makes any routers/switches inaccessible from the central monitoring location. Further, office space of 10 feet x 10 feet, with adequate and uninterrupted power supply and air-conditioning, which is physically secured and accessible only to the monitoring agencies, shall be provided by the Licensee at each Internet Gateway location at its cost.[102]

National Long Distance (NLD) Service: The requisite monitoring facilities are required to be provided by the Licensee as per the requirements of the Licensor. The details of leased circuits provided by the Licensee are to be provided monthly to the security agencies and the DDG (TERM) of the Licensed Service Area where the Licensee has its registered office.[103]

International Long Distance (ILD) Service: Office space of 20’x20’ with adequate and uninterrupted power supply and air-conditioning which is physically secured and accessible only to the personnel authorized by the Licensor is required to be provided by the Licensee at each Gateway location free of cost.[104] The cost of monitoring equipment is to be borne by the Licensee. The installation of the monitoring equipment at the ILD Gateway Station is to be done by the Licensee. After installation of the monitoring equipment, the Licensee shall get the same inspected by monitoring /security agencies. The permission to operate/commission the gateway will be given only after this.[105]

The designated person of the Central/ State Government, in addition to the Licensor or its nominee, has the right to monitor the telecommunication traffic in every ILD Gateway / Routers or any other technically feasible point in the network set up by the Licensee. The Licensee is required to make arrangement for monitoring simultaneous calls by Government security agencies. For establishing connectivity to Centralized Monitoring System, the Licensee, at its own cost, is required to provide appropriately dimensioned hardware and bandwidth/dark fibre upto a designated point as required by Licensor from time to time. In case the security agencies intend to locate the equipment at Licensee’s premises for facilitating monitoring, the Licensee should extend all support in this regard including Space and Entry of the authorized security personnel. The Interface requirements as well as features and facilities as defined by the Licensor should be implemented by the Licensee for both data and speech. The Licensee should ensure suitable redundancy in the complete chain of Monitoring equipment for trouble free operations of monitoring of at least 480 simultaneous calls as per requirement with at least 30 simultaneous calls for each of the designated security/ law enforcement agencies. Each ILD Gateway of the Licensee shall have the capacity for provisioning of at least 5000 numbers for monitoring. Presently there are ten (10) designated security/ law enforcement agencies. The above capacity provisions and number of designated security/ law enforcement agencies may be amended by the Licensor separately by issuing instructions at any time.[106]

The Licensee is required to provide the call data records of all the specified calls handled by the system at specified periodicity, as and when required by the security agencies in the format prescribed from time to time.[107]

Global Mobile Personal Communication by Satellite (GMPCS) Service: The designated Authority of the Central/State Government shall have the right to monitor the telecommunication traffic in every Gateway set up in India. The Licensee shall make arrangement for monitoring of calls as specified in the Unified License.[108]

The hardware/software required for monitoring of calls shall be engineered, provided/installed and maintained by the Licensee, at the Licensee’s cost, at the ICC (Intercept Control Centre) to be established at the GMPCS Gateway(s) as well as in the premises of the security agencies. The interface requirements as well as the features and facilities shall be worked out and implemented by the Licensee for both data and speech. The Licensee should ensure suitable redundancy in the complete chain of monitoring equipment for trouble-free operations. The Licensee shall provide suitable training to the designated representatives of the Licensor regarding the operation and maintenance of the monitoring equipment (ICC & MC). Interception of target subscribers using messaging services should also be provided, even if retrieval is carried out using PSTN links. For establishing connectivity to the Centralized Monitoring System, the Licensee shall, at its own cost, provide appropriately dimensioned hardware and bandwidth/dark fibre up to a designated point, as required by the Licensor from time to time.[109] The Licensee also has specific obligations to extend monitored calls to designated security agencies as provided in the UL.[110] Further, the Licensee is required to provide the call data records of all the calls handled by the system at specified periodicity, as and when required by the security agencies.[111] It is the responsibility of the service provider for Global Mobile Personal Communication by Satellite (GMPCS) to provide the facility to carry out surveillance of User Terminal activity.[112]

The Licensee has to make available adequate monitoring facilities at the GMPCS Gateway in India to monitor all traffic (traffic originating/terminating in India) passing through the applicable system. For this purpose, the Licensee shall set up, at its cost, the requisite interfaces, as well as features and facilities, for the monitoring of calls by designated agencies as directed by the Licensor from time to time. In addition to the Target Intercept List (TIL), it should also be possible to carry out specific geographic location based interception, if so desired by the designated security agencies. Monitoring of calls should not be perceptible to mobile users, either during direct monitoring or when a call has been grounded for monitoring. The Licensee shall not prefer any charges for grounding a call for monitoring purposes. The intercepted data is to be pushed to the designated Security Agencies’ server on a fire-and-forget basis. No records shall be maintained by the Licensee regarding monitoring activities and air-time used, beyond the prescribed time limit.

The Licensee has to ensure that any User Terminal (UT) registered in the gateway of another country shall re-register with Indian Gateway when operating from Indian Territory. Any UT registered outside India, when attempting to make/receive calls from within India, without due authority, shall be automatically denied service by the system and occurrence of such attempts along with information about UT identity as well as location shall be reported to the designated authority immediately.

The Licensee is required to have a provision to scan the operation of subscribers, specified by the security/law enforcement agencies, through certain sensitive areas within Indian territory, and shall provide their identity and positional location (latitude and longitude) to the Licensor on an as-and-when-required basis.

Public Mobile Radio Trunking Service (PMRTS): Suitable monitoring equipment prescribed by the Licensor for each type of System used has to be provided by the Licensee at his own cost for monitoring, as and when required.[113]

Very Small Aperture Terminal (VSAT) Closed User Group (CUG) Service: Requisite monitoring facilities/ equipment for each type of system used have to be provided by the Licensee at its own cost for monitoring as and when required by the Licensor.[114] The Licensee shall provide at its own cost technical facilities for accessing any port of the switching equipment at the HUB for interception of the messages by the designated authorities at a location to be determined by the Licensor.[115]

Surveillance of MSS-R Service: The Licensee has to provide, at its own cost, technical facilities for accessing any port of the switching equipment at the HUB for interception of messages by the designated authorities at a location, as and when required.[116] It is the responsibility of the service provider of the INSAT Mobile Satellite System Reporting (MSS-R) service to provide the facility to carry out surveillance of User Terminal activity within a specified area.[117]

Resale of International Private Leased Circuit (IPLC) Service: The Licensee has to take the IPLC from the licensed ILDOs. The interception and monitoring of a Reseller’s circuits will take place at the Gateway of the ILDO from whom the IPLC has been taken by the Licensee. The provisioning for Lawful Interception and Monitoring of the Reseller’s IPLC shall be done by the ILD Operator, and the concerned ILDO shall be responsible for Lawful Interception and Monitoring of the traffic passing through the IPLC. The Reseller shall extend all cooperation in respect of interception and monitoring of its IPLC and shall be responsible for the interception results. The Licensee shall be responsible for interacting, corresponding and liaising with the licensor and the security agencies with regard to security monitoring of the traffic. The Licensee shall, before providing an IPLC to a customer, obtain the details of the services/equipment to be connected at both ends of the IPLC, including the type of terminals, data rate, actual use of the circuit, protocols/interface to be used, etc. The Reseller shall permit only such types of services/protocols on the IPLC for which the concerned ILDO has the capability of interception and monitoring. The Licensee has to pass on any direct request placed on it by security agencies for interception of the traffic on its IPLC to the concerned ILDOs within two hours for necessary action.[118]

4. The Information Technology Act, 2000

The Information Technology Act, 2000, was amended in a major way in 2008 and is the primary legislation which regulates the interception, monitoring, decryption and collection of traffic information of digital communications in India.

More specifically, section 69 of the Information Technology Act empowers the Central Government and the State Governments to issue directions for the monitoring, interception or decryption of any information transmitted, received or stored through a computer resource. Section 69 of the Information Technology Act, 2000, expands the grounds upon which interception can take place as compared to the Indian Telegraph Act, 1885. As such, the interception of communications under section 69 may be carried out in the interest of:[119]

  • The sovereignty or integrity of India
  • Defence of India
  • Security of the State
  • Friendly relations with foreign States
  • Public order
  • Preventing incitement to the commission of any cognizable offence relating to the above
  • For the investigation of any offence

While the grounds for interception are similar to those under the Indian Telegraph Act (except that the prevention of incitement is limited to cognizable offences and the investigation of any offence is added as a ground), the Information Technology Act does not have the overarching condition that interception can only occur in the case of a public emergency or in the interest of public safety.

Additionally, section 69 of the Act mandates that any person or intermediary who fails to assist the specified agency with the interception, monitoring, decryption or provision of information stored in a computer resource shall be punished with imprisonment for a term which may extend to seven years and shall also be liable to a fine.[120]

Section 69B of the Information Technology Act empowers the Central Government to authorise the monitoring and collection of information and traffic data generated, transmitted, received or stored through any computer resource for the purpose of cyber security. According to this section, any intermediary who intentionally or knowingly fails to provide technical assistance to the authorised agency that is required to monitor and collect information and traffic data shall be punished with imprisonment for a term which may extend to three years and shall also be liable to a fine.[121]

The main difference between Section 69 and Section 69B is that the former provides for the interception, monitoring and decryption of information generated, transmitted, received or stored through a computer resource when it is deemed “necessary or expedient” to do so, whereas Section 69B specifically provides a mechanism for the monitoring and collection of the traffic data (metadata) of all communications through a computer resource for the purpose of combating threats to “cyber security”. Directions under Section 69 can be issued by the Secretary to the Ministry of Home Affairs, whereas directions under Section 69B can only be issued by the Secretary of the Department of Information Technology under the Union Ministry of Communications and Information Technology.

Overlap with the Telegraph Act

Thus, while the Telegraph Act only allows for the interception of messages or classes of messages transmitted by a telegraph, the Information Technology Act enables the interception of any information being transmitted or stored in a computer resource. Since a “computer resource” is defined to include a communication device (such as cellphones and PDAs), there is an overlap between the provisions of the Information Technology Act and the Telegraph Act concerning the interception of information sent through mobile phones. This is further complicated by the fact that the UAS License specifically states that it is governed by the provisions of the Indian Telegraph Act, the Indian Wireless Telegraphy Act and the Telecom Regulatory Authority of India Act, but does not mention the Information Technology Act.[122] This does not mean that Licensees under the telecom licenses are not bound by the other laws of India (including the Information Technology Act), but it is an invitation to unnecessary complexity and confusion on an issue as serious as interception. This situation has thankfully been remedied by the Unified License (UL) which, although issued under section 4 of the Telegraph Act, also references the Information Technology Act, thus providing essential clarity with respect to the applicability of the Information Technology Act to the License Agreement.

Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009

The interception of internet communications is mainly covered by the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, framed under the Information Technology Act (the “IT Interception Rules”). In particular, the rules framed under sections 69 and 69B include safeguards stipulating who may issue directions for interception and monitoring, how such directions are to be executed, the duration for which they remain in operation, to whom data may be disclosed, the confidentiality obligations of intermediaries, periodic oversight of interception directions by a Review Committee constituted under the Indian Telegraph Act, the retention of records of interception by intermediaries, and the mandatory destruction of information in appropriate cases.

According to the IT Interception Rules, only the competent authority can issue an order for the interception, monitoring or decryption of any information generated, transmitted, received or stored in any computer resource under sub-section (2) of section 69 of the Information Technology Act.[123] At the State and Union Territory level, the State Secretaries in charge of the respective Home Departments are designated as “competent authorities” to issue interception directions.[124] In unavoidable circumstances, the Joint Secretary to the Government of India, when so authorised by the competent authority, may issue such an order. Interception may also be carried out with the prior approval of the Head or the second senior-most officer of the authorised security agency at the Central level, and, at the State level, with the approval of officers authorised in this behalf not below the rank of Inspector General of Police, in the following emergent cases:

(1) in remote areas, where obtaining of prior directions for interception or monitoring or decryption of information is not feasible; or

(2) for operational reasons, where obtaining of prior directions for interception or monitoring or decryption of any information generated, transmitted, received or stored in any computer resource is not feasible.

However, in the above circumstances, the officer must inform the competent authority in writing within three working days about the emergency and the interception, monitoring or decryption undertaken, and must obtain the approval of the competent authority within a period of seven working days. If the approval of the competent authority is not obtained within the said period of seven working days, such interception, monitoring or decryption shall cease, and the information shall not be intercepted, monitored or decrypted thereafter without the prior approval of the competent authority.[125] If a State wishes to intercept information that is beyond its jurisdiction, it must request permission to issue the direction from the Secretary in the Ministry of Home Affairs.[126]

In order to avoid the risk of unauthorised interception, the IT Interception Rules provide for the following safeguards:

  • If authorised by the competent authority, any agency of the government may intercept, monitor, or decrypt information transmitted, received, or stored in any computer resource only for the purposes specified in section 69(1) of the IT Act.[127]
  • The IT Interception Rules further provide that the competent authority may give any decryption direction to the decryption key holder.[128]
  • The officer issuing an order for interception is required to issue requests in writing to designated nodal officers of the service provider.[129]
  • Any direction issued by the competent authority must contain the reasons for the direction, and a copy of it must be forwarded to the Review Committee within seven working days of its issue.[130]
  • In the case of issuing or approving an interception order, in arriving at its decision the competent authority must consider all alternate means of acquiring the information.[131]
  • The order must relate to information sent or likely to be sent from one or more particular computer resources to another (or many) computer resources.[132]
  • The reasons for ordering interceptions must be recorded in writing, and must specify the name and designation of the officer to whom the information obtained is to be disclosed, and also specify the uses to which the information is to be put.[133]
  • The directions for interception remain in force for a period of 60 days, unless renewed. If renewed, they cannot remain in force for a total period of more than 180 days.[134]
  • Authorized agencies are prohibited from using or disclosing contents of intercepted communications for any purpose other than investigation, but they are permitted to share the contents with other security agencies for the purpose of investigation or in judicial proceedings. Furthermore, security agencies at the union territory and state level will share any information obtained by following interception orders with any security agency at the centre.[135]
  • All records, including electronic records pertaining to interception are to be destroyed by the government agency “every six months, except in cases where such information is required or likely to be required for functional purposes”.[136]
  • The contents of intercepted, monitored, or decrypted information will not be used or disclosed by any agency, competent authority, or nodal officer for any purpose other than its intended purpose.[137]
  • The agency authorised by the Secretary of Home Affairs is required to appoint a nodal officer (not below the rank of superintendent of police or equivalent) to authenticate and send directions to service providers or decryption key holders.[138]

The IT Interception Rules also place the following obligations on the service providers:

  • All records pertaining to directions for interception and monitoring are to be destroyed by the service provider within two months of the discontinuance of the interception or monitoring, unless they are required for any ongoing investigation or legal proceedings.[139]
  • Upon receiving an order for interception, service providers are required to provide all facilities, co-operation, and assistance for interception, monitoring, and decryption. This includes assisting with: the installation of the authorised agency's equipment, the maintenance, testing, or use of such equipment, the removal of such equipment, and any action required for accessing stored information under the direction.[140]
  • Additionally, decryption key holders are required to disclose the decryption key and provide assistance in decrypting information for authorized agencies.[141]
  • Every fifteen days, the officers designated by the intermediaries are required to forward to the nodal officer in charge a list of the interception orders received by them. The list must include details such as the reference and date of the orders of the competent authority.[142]
  • The service provider is required to put in place adequate internal checks to ensure that unauthorised interception does not take place, and to ensure the extreme secrecy of intercepted information is maintained.[143]
  • The contents of intercepted communications are not allowed to be disclosed or used by any person other than the intended recipient.[144]
  • Additionally, as part of these internal checks against unauthorized interception and to maintain extreme secrecy, the service provider must ensure that the interception and related information are handled only by its designated officers.[145]

Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009

The Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, under section 69B of the Information Technology Act, stipulate that directions for the monitoring and collection of traffic data or information can be issued by an order made by the competent authority[146] for any or all of the following purposes related to cyber security:

  • forecasting of imminent cyber incidents;
  • monitoring network application with traffic data or information on computer resource;
  • identification and determination of viruses or computer contaminant;
  • tracking cyber security breaches or cyber security incidents;
  • tracking computer resource breaching cyber security or spreading virus or computer contaminants;
  • identifying or tracking any person who has breached, or is suspected of having breached or likely to breach cyber security;
  • undertaking forensic of the concerned computer resource as a part of investigation or internal audit of information security practices in the computer resources;
  • accessing stored information for enforcement of any provisions of the laws relating to cyber security for the time being in force;
  • any other matter relating to cyber security.[147]

According to these Rules, any direction issued by the competent authority should contain reasons for such direction and a copy of such direction should be forwarded to the Review Committee within a period of seven working days.[148] Furthermore, these Rules state that the Review Committee shall meet at least once in two months and record its finding on whether the issued directions are in accordance with the provisions of sub-section (3) of section 69B of the Act. If the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above, it may set aside the directions and issue an order for the destruction of the copies, including corresponding electronic record of the monitored or collected traffic data or information.[149]

Information Technology (Guidelines for Cyber Cafes) Rules, 2011

The Information Technology (Guidelines for Cyber Cafes) Rules, 2011, were issued under powers granted by section 87(2), read with section 79(2), of the Information Technology Act, 2000.[150] These rules require cyber cafes in India to store and maintain backup logs of each login by any user, to retain such records for a year, and to ensure that the logs are not tampered with. Rule 7 requires the inspection of cyber cafes to determine that the information provided during registration is accurate and remains updated.
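
The Rules require that the logs not be tampered with but do not prescribe how this is to be achieved. One possible approach, sketched below purely as an illustration, is to hash-chain successive entries so that any later alteration of an earlier record breaks the chain and can be detected; the record fields shown are assumptions for the example.

    # Illustrative sketch only: hash-chaining is one possible way to make logs
    # tamper-evident; the Rules themselves do not prescribe any mechanism.
    import hashlib
    import json

    def append_entry(log, user_id, login_time, logout_time):
        """Append a login record whose hash covers the previous record's hash."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = {"user_id": user_id, "login": login_time,
                "logout": logout_time, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        log.append({**body, "hash": digest})
        return log

    def verify_chain(log):
        """Recompute every hash; return False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in log:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

    # Example usage:
    # log = []
    # append_entry(log, "visitor-001", "2014-06-01T10:00", "2014-06-01T11:30")
    # assert verify_chain(log)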

5. The Indian Post Office Act, 1898

Section 26 of the Indian Post Office Act, 1898, empowers the Central Government and the State Governments to intercept postal articles.[151] In particular, section 26 states that, on the occurrence of any public emergency or in the interest of public safety or tranquility, the Central Government, a State Government or any officer specially authorised by the Central or State Government may direct the interception, detention or disposal of any postal article, or class or description of postal articles, in the course of transmission by post. Furthermore, section 26 states that, if any doubt arises as to the existence of a public emergency, or as to public safety or tranquility, a certificate to that effect by the Central Government or a State Government would be considered conclusive proof of such condition being satisfied.

According to this section, the Central Government and the State Governments of India can intercept postal articles where there is a 'public emergency' or in the interest of 'public safety or tranquility'. However, the Indian Post Office Act, 1898, does not cover electronic communications; their interception is instead governed by the Information Technology Act, 2000, and the Indian Telegraph Act, 1885.

6. The Indian Wireless Telegraphy Act, 1933

The Indian Wireless Telegraphy Act was passed to regulate and govern the possession of wireless telegraphy equipment within the territory of India. The Act essentially provides that no person may possess “wireless telegraphy apparatus”[152] except under a licence issued under the Act, and must use the equipment in accordance with the terms of that licence.[153]

One of the major sources of revenue for the Indian State Broadcasting Service was the licence fee for the working of wireless apparatus under the Indian Telegraph Act, 1885. The Service was losing revenue for want of legislation under which persons using unlicensed wireless apparatus could be prosecuted, since it was difficult to trace them in the first place and then to prove that the apparatus had been installed, worked and maintained without a licence. The legislation was therefore proposed in order to prohibit the possession of wireless telegraphy apparatus without a licence.

Presently, the Act is used to prosecute cases related to the illegal possession of, and transmission via, satellite phones. Any person who wishes to use a satellite phone for communication purposes has to obtain a licence from the Department of Telecommunications.[154]

7. The Code of Criminal Procedure

Section 91 of the Code of Criminal Procedure regulates targeted surveillance. In particular, section 91 states that a Court in India or any officer in charge of a police station may summon a person to produce any document or any other thing that is necessary for the purposes of any investigation, inquiry, trial or other proceeding under the Code of Criminal Procedure.[155] Under section 91, law enforcement agencies in India could theoretically access stored data. Additionally, section 92 of the Code of Criminal Procedure regulates the interception of a document, parcel or thing in the possession of a postal or telegraph authority.

Further, section 356(1) of the Code of Criminal Procedure provides that, in certain cases, courts have the power to direct repeat offenders convicted under specified provisions to notify their residence, and any change of or absence from such residence, after release, for a term not exceeding five years from the date of the expiration of the second sentence.

Policy Suggestions

In order to avoid different standards being adopted for different aspects of surveillance and in different parts of the country, there should be a single policy document or surveillance and interception manual containing the rules and regulations for all kinds of surveillance. This would not only help in identifying problems in the law but may also be useful in streamlining the entire surveillance regime. However, this is easier said than done and would require a mammoth effort at the legislative stage. Under the constitutional scheme of India, law and order is a State subject and the police machinery in every State is under the authority of the State government. It would therefore not be possible to enact a single piece of legislation dealing with all aspects of surveillance, since the States are independent in their powers to deal with the police machinery.

Even on the issue of interception, certain State legislations, especially those dealing with organized crime and bootleggers, such as the Maharashtra Control of Organised Crime Act, 1999, and the Andhra Pradesh Control of Organised Crime Act, 2001, contain provisions empowering the State government to intercept communications for the purpose of investigating or preventing criminal activities. Further, even the two central legislations that deal with interception, viz. the Telegraph Act and the Information Technology Act, specifically empower the State governments to intercept communications on the same grounds as the Central Government. Since the interception of communications is mostly undertaken by security and law enforcement agencies, broadly for the maintenance of law and order, State governments cannot be prevented from enacting their own legislation to deal with interception.

Due to the abovementioned legal and constitutional complexities, the major difficulty in achieving harmonization is getting both the Central and State governments onto the same page. Even if the Central government amends the Telegraph Act and the IT Act to bring them in line with each other, the State governments will still be free to do as they please. The best approach to achieving harmonization may therefore be a two-pronged strategy: (i) issue a National Surveillance Policy covering both interception and general surveillance; and (ii) amend the central legislations, i.e. the Telegraph Act and the Information Technology Act, in accordance with the National Surveillance Policy. Once a National Surveillance Policy, based on scientific data and the latest theories of criminology, is issued, it is hoped that State governments will themselves choose to adopt the principles enshrined therein and amend their own legislation dealing with interception to fall in line with the National Surveillance Policy.


[1] Section 6(2)(b) of the Model Police Manual.

[2] Section 191 (D) of the Model Police Manual.

[3] Section 200 (D) of the Model Police Manual.

[4] Section 201 (I) of the Model Police Manual.

[5] Section 201 (II) of the Model Police Manual.

[6] Section 201 (IV) of the Model Police Manual.

[7] Section 193 (III) of the Model Police Manual.

[8] Surjan Das & Basudeb Chattopadhyay, Rural Crime in Police Perception: A Study of Village Crime Note Books, 26(3) Economic and Political Weekly 129, 129 (1991).

[9] Section 201 (III) of the Model Police Manual.

[10] Section 201 (V) of the Model Police Manual.

[11] Section 201 (VII) of the Model Police Manual.

[12] Section 356(1) of the Criminal Procedure Code states as follows:

356. Order for notifying address of previously convicted offender.

(1) When any person, having been convicted by a Court in India of an offence punishable under section 215, section 489A, section 489B, section 489C or section 489D of the Indian Penal Code, (45 of 1860 ) or of any offence punishable under Chapter XII or Chapter XVII of that Code, with imprisonment for a term of three years or upwards, is again convicted of any offence punishable under any of those sections or Chapters with imprisonment for a term of three years or upwards by any Court other than that of a Magistrate of the second class, such Court may, if it thinks fit, at the time of passing a sentence of imprisonment on such person, also order that his residence and any change of, or absence from, such residence after release be notified as hereinafter provided for a term not exceeding five years from the date of the expiration of such sentence.

[13] The Indian Telegraph Act, 1885, http://www.ijlt.in/pdffiles/Indian-Telegraph-Act-1885.pdf

[14] Privacy International, Report: “India”, Chapter 3: “Surveillance Policies”, https://www.privacyinternational.org/reports/india/iii-surveillance-policies

[15] Rule 419A(1), Indian Telegraph Rules, 1951.

[16] Rule 419A(1), Indian Telegraph Rules, 1951.

[17] Rule 419A(2), Indian Telegraph Rules, 1951.

[18] Rule 419A(3), Indian Telegraph Rules, 1951.

[19] Rule 419A(4), Indian Telegraph Rules, 1951.

[20] Rule 419A(5), Indian Telegraph Rules, 1951.

[21] Rule 419A(6), Indian Telegraph Rules, 1951.

[22] Rule 419A(7), Indian Telegraph Rules, 1951.

[23] Rule 419A(8), Indian Telegraph Rules, 1951.

[24] Rule 419A(9), Indian Telegraph Rules, 1951.

[25] Rule 419A(18), Indian Telegraph Rules, 1951.

[26] Ibid.

[27] Rule 419A(10), Indian Telegraph Rules, 1951.

[28] Rule 419A(11), Indian Telegraph Rules, 1951.

[29] Rule 419A(12), Indian Telegraph Rules, 1951.

[30] Rule 419A(13), Indian Telegraph Rules, 1951.

[31] Rule 419A(14), Indian Telegraph Rules, 1951.

[32] Rule 419A(15), Indian Telegraph Rules, 1951.

[33] Rule 419A(19), Indian Telegraph Rules, 1951.

[34] Ibid.

[35] Ibid.

[36] Section 46 of the Unlawful Activities (Prevention) Act, 1967. The Unlawful Activities (Prevention) Act, 1967, has certain additional safeguards, such as not allowing intercepted information to be disclosed or received in evidence unless the accused has been provided with a copy of the same at least 10 days in advance, unless the period of 10 days is specifically waived by the judge.

[37] State-owned Public Sector Undertakings (PSUs) (Mahanagar Telephone Nigam Limited (MTNL) and Bharat Sanchar Nigam Limited (BSNL)) were issued licenses for provision of CMTS as the third operator in various parts of the country. Further, 17 fresh licenses were issued to private companies as the fourth cellular operator in September/October 2001, one each in the 4 Metro cities and 13 Telecom Circles.

[38] Section 45.2 of the CMTS License.

[39] Section 41.09 of the CMTS License.

[40] Section 41.09 of the CMTS License.

[41] Section 44.4 of the CMTS License. Similar provision exists in section 44.11 of the CMTS License.

[42] Section 34.28 (xix) of the ISP License.

[43] Section 34.12 of the ISP License.

[44] Section 34.13 of the ISP License.

[45] Section 34.22 of the ISP License.

[46] Section 34.6 of the ISP License.

[47] Section 34.15 of the ISP License.

[48] Section 34.28 (xiv) of the ISP License.

[49] Section 34.28 (xi) of the ISP License.

[50] Section 34.14 of the ISP License.

[51] Section 34.28 (ix)&(x) of the ISP License.

[52] Section 30.1 of the ISP License.

[53] Section 33.4 of the ISP License.

[54] Section 34.4 of the ISP License.

[55] Section 34.7 of the ISP License.

[56] Section 34.9 of the ISP License.

[57] Section 34.27 (a)(i) of the ISP License.

[58] Section 34.27(a)(ii-vi) of the ISP License.

[59] Section 32.1, 32.2 (i)(ii), 32.3 of the ISP License.

[60] Section 34.8 of the ISP License.

[61] Section 34.18 of the ISP License.

[62] Section 34.28 (xv) of the ISP License.

[63] Section 41.10 of the ISP License.

[64] Section 41.10 of the ISP License.

[65] Section 41.19(i) of the ISP License.

[66] Section 41.19(ii) of the ISP License.

[67] Section 41.19(iv) of the ISP License.

[68] Section 39.1 of the UASL. Similar provision is contained in section 41.4, 41.12 of the UASL.

[69] Section 39.3 of the UASL.

[70] Section 39.2 of the UASL.

[71] Section 23.2 of the UASL. Similar provisions are contained in section 41.7 of the UASL regarding provision of monitoring equipment for monitoring in the “interest of security”.

[72] Section 42.2 of the UASL.

[73] Section 41.20(xx) of the UASL.

[74] Section 41.10 of the UASL.

[75] Section 41.10 of the UASL.

[76] Section 41.14 of the UASL.

[77] Section 41.16 of the UASL.

[78] Section 41.20(ix) of the UASL.

[79] Section 41.20(ix) of the UASL.

[80] Section 41.19(ii) of the UASL.

[81] Section 41.20(xii) of the UASL.

[82] Section 41.20(xiii) of the UASL.

[83] Section 41.20(xiv) of the UASL.

[84] Section 41.20 (xix) of the UASL.

[85] Section 41.20(xvi) of the UASL.

[86] The different services covered by the Unified License are:

a. Unified License (All Services)

b. Access Service (Service Area-wise)

c. Internet Service (Category-A with All India jurisdiction)

d. Internet Service (Category-B with jurisdiction in a Service Area)

e. Internet Service (Category-C with jurisdiction in a Secondary Switching Area)

f. National Long Distance (NLD) Service

g. International Long Distance (ILD) Service

h. Global Mobile Personal Communication by Satellite (GMPCS) Service

i. Public Mobile Radio Trunking Service (PMRTS) Service

j. Very Small Aperture Terminal (VSAT) Closed User Group (CUG) Service

k. INSAT MSS-Reporting (MSS-R) Service

l. Resale of International private Leased Circuit (IPLC) Service

Authorisation for Unified License (All Services) would however cover all services listed at para 2(ii) (b) in all service areas, 2 (ii) (c), 2(ii) (f) to 2(ii) (l) above.

[87] Chapter IV, Para 23.2 of the UL.

[88] Chapter VI, Para 40.2 of the UL.

[89] Chapter V, Para 37.1 of the UL. Similar provision is contained in Chapter VI, Para 39.4 of the UL.

[90] Chapter V, Para 37.5 of the UL.

[91] Chapter V, Para 37.3 of the UL.

[92] Chapter V, Para 37.2 of the UL.

[93] Chapter VI, Para 39.23(xii) of the UL.

[94] Chapter VI, Para 39.23 (xiii) of the UL.

[95] Chapter VI, Para 39.23 (xiv) of the UL.

[96] Chapter VI, Para 39.23 (xix) of the UL.

[97] Chapter VIII, Para 8.3 of the UL.

[98] Chapter VIII, Para 8.4 of the UL.

[99] Chapter VIII, Para 8.5 of the UL.

[100] Chapter IX, Paras 7.1 to 7.3 of the UL. Further obligations have also been imposed on the Licensee to ensure that its ILL customers maintain the usage of IP addresses/Network Address Translation (NAT) syslog, in case of multiple users on the same ILL, for a minimum period of one year.

[101] Chapter IX, Paras 8.1 to 8.3 of the UL.

[102] Chapter IX, Paras 8.4 and 8.5 of the UL.

[103] Chapter X, Para 5.2 of the UL.

[104] Chapter XI, Para 6.3 of the UL.

[105] Chapter XI, Para 6.4 of the UL.

[106] Chapter XI, Para 6.6 of the UL.

[107] Chapter XI, Para 6.7 of the UL.

[108] Chapter XII, Para 7.4 of the UL.

[109] Chapter XII, Para 7.5 of the UL.

[110] Chapter XII, Para 7.6 of the UL.

[111] Chapter XII, Para 7.7 of the UL.

[112] Chapter XII, Para 7.8 of the UL.

[113] Chapter XIII, Para 7.1 of the UL.

[114] Chapter XIV, Para 8.1 of the UL.

[115] Chapter XIV, Para 8.2 of the UL.

[116] Chapter XV, Para 8.1 of the UL.

[117] Chapter XV, Para 8.5 of the UL.

[118] Chapter XVI, Paras 4.1 - 4.4 of the UL.

[119] Section 69 of the Information Technology Act, 2000.

[120] Ibid.

[121] Section 69B of the Information Technology Act, 2000.

[122] Section 32 of the ISP License.

[123] Rule 3, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[124] Rule 2(d), Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[125] Rule 3, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[126] Rule 6, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[127] Rule 4, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[128] Rule 5, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[129] Rule 13, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[130] Rule 7, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[131] Rule 8, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[132] Rule 9, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[133] Rule 10, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[134] Rule 11, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[135] Rule 25(2)&(6), Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[136] Rule 23, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[137] Rule 25, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[138] Rule 12, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[139] Rule 23(2), Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[140] Rule 19, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[141] Rule 17, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[142] Rule 18, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[143] Rules 20 and 21, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[144] Rule 25, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[145] Rule 20, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

[146] Rule 3(1) of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

[147] Rule 3(2) of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

[148] Rule 3(3) of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

[149] Rule 7 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

[150] Introduction to the Information Technology (Guidelines for Cyber Cafe) Rules, 2011.

[151] The Indian Post Office Act, 1898, http://www.indiapost.gov.in/Pdf/Manuals/TheIndianPostOfficeAct1898.pdf

[152] The expression “wireless telegraphy apparatus” has been defined as “any apparatus, appliance, instrument or material used or capable of use in wireless communication, and includes any article determined by rule made under Sec. 10 to be wireless telegraphy apparatus, but does not include any such apparatus, appliance, instrument or material commonly used for other electrical purposes, unless it has been specially designed or adapted for wireless communication or forms part of some apparatus, appliance, instrument or material specially so designed or adapted, nor any article determined by rule made under Section 10 not to be wireless telegraphy apparatus;”

[153] Section 4, Wireless Telegraphy Act, 1933.

[154] Snehashish Ghosh, Indian Wireless Telegraphy Act, 1933, http://cis-india.org/telecom/resources/indian-wireless-telegraphy-act.

[155] The Code of Criminal Procedure, 1973, Section 91, http://www.icf.indianrailways.gov.in/uploads/files/CrPC.pdf

Comparison of the Human DNA Profiling Bill 2012 with: CIS recommendations, Sub-Committee Recommendations, Expert Committee Recommendations, and the Human DNA Profiling Bill 2015

by Elonnai Hickok last modified Aug 10, 2015 03:20 AM
This blog post is a comparison of: 1. the Human DNA Profiling Bill 2012 vs. the Human DNA Profiling Bill 2015; 2. CIS's main recommendations vs. the 2015 Bill; 3. the Sub-Committee recommendations vs. the 2015 Bill; and 4. the Expert Committee recommendations vs. the 2015 Bill.

In 2013, the Expert Committee to discuss the draft Human DNA Profiling Bill was constituted by the Department of Biotechnology. The Expert Committee in turn constituted a Sub-Committee to modify the draft Bill in the light of comments and inputs invited from the members of the Committee.

These changes were then deliberated upon by the Expert Committee. The Record Notes and Meeting Minutes of the Expert Committee and Sub-Committee can be found here. The Centre for Internet and Society was a member of the Expert Committee and sat on the Sub-Committee. In addition to input in meetings, CIS submitted a number of recommendations to the Committee. The Committee has drafted a 2015 version of the Bill and the same is to be introduced to Parliament.

Below is a comparison of: 1. the 2012 Bill vs. the 2015 Bill; 2. CIS's main recommendations vs. the 2015 Bill; 3. the Sub-Committee recommendations vs. the 2015 Bill; and 4. the Expert Committee recommendations vs. the 2015 Bill.

Introduction

  • CIS Recommendation: Recognition that DNA evidence is not infallible.
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from 2012 Bill

Chapter I : Preliminary

  • CIS Recommendation: Inclusion of an 'Objects Clause' that makes clear that (i) the principles of notice, confidentiality, collection limitation, personal autonomy, purpose limitation and data minimization must be adhered to at all times; (ii) DNA profiles merely estimate the identity of persons, they do not conclusively establish unique identity; (iii) all individuals have a right to privacy that must be continuously weighed against efforts to collect and retain DNA; (iv) centralized databases are inherently dangerous because of the volume of information that is at risk; (v) forensic DNA profiling is intended to have probative value; therefore, if there is any doubt regarding a DNA profile, it should not be received in evidence by a court; (vi) once adduced, the evidence created by a DNA profile is only corroborative and must be treated on par with other biometric evidence such as fingerprint measurements.

  • Sub-Committee Recommendation: The Bill will not regulate DNA research. The current draft will only regulate use of DNA for civil and criminal purposes.
  • Expert Committee Recommendation: The Bill will not regulate DNA research. The current draft will only regulate use of DNA for civil and criminal purposes.
  • 2015 Bill: No Change from the 2012 Bill

Chapter II : Definitions

CIS Recommendation:

  • Removal of 2(1)(a) “analytical procedure”
  • Removal of 2(1)(b) “audit”
  • Removal of 2(1)(d) “calibration”
  • Re-drafting of 2(1)(h) “DNA Data Bank”
  • Re-naming of 2(1)(i) “DNA Data Bank Manager” to “National DNA Data Bank Manager”
  • Re-drafting of 2(1)(j) “DNA laboratory”
  • Re-drafting of 2(1)(l) “DNA Profile”
  • Re-drafting of 2(1)(o) “forensic material”
  • Removal of 2(1)(q) “intimate body sample”
  • Removal of 2(1)(v) “non-intimate body sample”
  • Removal of 2(1)(r) “intimate forensic procedure”
  • Removal of 2(1)(w) “non-intimate forensic procedure”
  • Removal of 2(1)(s) “known samples”
  • Re-drafting of 2(1)(y) “offender”
  • Removal of 2(1)(zb) “proficiency testing”
  • Re-drafting of 2(1)(zi) “suspect”
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from the 2012 Bill.

Chapter III : DNA Profiling Board

  • CIS Recommendation:
  1. The board should be made up of no more than five members. The Board must contain at least one ex-Judge or senior lawyer since the Board will perform the legal function of licensing and must obey the tenets of administrative law. To further multi-stakeholder interests, the Board should have an equal representation from civil society – both institutional (e.g NHRC and the State Human Rights Commissions) and non-institutional (well-regarded and experienced civil society persons). The Board should also have privacy advocates. CIS also recommended that the functions of the board be limited to: licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate and a few other limited functions necessary for a regulatory authority. CIS also recommended a 'duty to consult' with affected or impacted individuals, interested individuals, and the public at large.
  • Sub-Committee Recommendation:
  1. Reduce the DNA Profiling Board (Section 4) from 16 members to 11 members and include civil society representation on the Board.
  2. Include either clause 4(f) or (g) i.e. Chief Forensic Scientist, Directorate of Forensic Science, Ministry of Home Affairs, Government of India - ex-officio Member or Director of a Central Forensic Science Laboratory to be nominated by Ministry of Home Affairs, Government of India- ex-officio Member;
  3. Change clause 4(i) i.e., to replace Chairman, National Bioethics Committee of Department of Biotechnology, Government of India- ex-officio Member with Chairman, National Human Rights Commissions or his nominee.
  4. Delete Members mentioned in clause 4(l) i.e. Two molecular biologists to be nominated by the Secretary, Department of Biotechnology, Ministry of Science and Technology, Government of India- Members;
  5. DPB Members with potential conflict of interest in matters under consideration should recuse themselves in deliberations in respect of such matters (Section 7), and they should be liable to be removed from the Board in case they are found to have not disclosed the nature of such interest.
  6. With regard to the establishment of the DNA Profiling Board (clause 3), the committee clarified that the DNA Board needs to be a body corporate.
  7. The functions of the Board should be redrafted with fewer functions, and these should be listed in descending order of priority to sharpen this function – namely regulate process, regulate the labs, regulate databanks.
  • Expert Committee Recommendation:
  1. Accepted sub-committee recommendation to reduce the Board from 16 to 11 members and the detailed changes.
  2. Accepted sub-committee recommendation to include civil society on the Board.
  3. Accepted sub-committee recommendation to reduce the functions of the Board.
  • 2015 Bill:
  1. Addition in 2015 Bill of Section 4 (b) – “Chairman, National Human Rights Commission or his nominee – ex-officio Member” (2015 Bill) Note: This change represents incorporation of CIS's recommendation, sub-committee recommendation, and expert committee recommendation.
  2. Changing of Section 4(h) from: "Director of a State Forensic Science Laboratory to be nominated by Ministry of Home Affairs, Government of India - ex-officio Member" (2012 Bill) to "Director cum Chief Forensic Scientist, Directorate of Forensic Science Services, Ministry of Home Affairs, Government of India - ex-officio Member" (2015 Bill). Note: This change represents partial incorporation of the sub-committee recommendation and expert committee recommendation.
  3. Changing of Section 4(j) from: "Director, National Accreditation Board for Testing and Calibration of Laboratories, New Delhi - ex-officio Member" (2012 Bill) to "Director of a State Forensic Science Lab to be nominated by MHA - ex-officio member" (2015 Bill)
  4. Addition of section 11(4) and 11(5) “(4) The Board shall, in carrying out its functions and activities, consult with all persons and groups of persons whose rights and related interests may be affected or impacted by any DNA collection, storage, or profiling activity. (5) The Board shall, while considering any matter under its purview, co-opt or include any person, group of persons, or organisation, in its meetings and activities if it is satisfied that that person, group of persons, or organisation, has a substantial interest in the matter and that it is necessary in the public interest to allow such participation.” Note: This change represents partial incorporation of CIS's recommendation and Expert Committee recommendation.

Chapter IV : Approval of DNA Laboratories

  • CIS Recommendation: N/A
  • Sub-Committee Recommendation:
  1. Add in section 16 1(d), the words “including audit reports”
  2. Include in section 16(1)(c) that if labs do not file their audit report on an annual basis, the lab will lose approval. If the lab loses its approval, all the materials will be shifted to another lab and the data subject will be informed.
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from the 2012 Bill.

Chapter V : Standards, Quality Control and Quality Assurance

  • CIS Recommendation: N/A
  • Sub-Committee Recommendation:
  1. Section 19(2): DNA laboratory to be headed by a person possessing a doctorate in a subject germane to molecular biology.
  2. Clauses 20 and 30 should be merged into Clause 20 to read as:

“(1). The staff of every DNA laboratory shall possess such qualifications and experience commensurate with the job requirements as may be specified by the regulations.

(2). Every DNA laboratory shall employ such qualified technical personnel as may be specified by the regulations and technical personnel shall undergo regular training in DNA related subjects in such institutions and at such intervals as may be specified by the regulations.

(3). Head of every DNA laboratory shall ensure that laboratory personnel keep abreast of developments within the field of DNA and maintain such records on the relevant qualifications, training, skills and experience of the technical personnel employed in the laboratory as may be specified by the regulations."

Accordingly, change the Title: “Qualification, Recruitment and Training of DNA lab personnel.”

  1. Require DNA labs to have in place an evidence control system (Clause 22). Note: This existed in the DNA 2012 Bill.
  2. Amend Clause 23(1) to read as "Every DNA laboratory shall possess and shall follow a validation process as may be specified by the regulations."
  3. Paraphrase Clause 27 as, “Every DNA laboratory shall have audits conducted annually in accordance with the standards as may be specified by the regulations.” It was agreed that the audits of the DNA Laboratory (clause 27) do not need to be external. Note: This existed in the DNA 2012 Bill.
  4. Bring sections 28-31 on infrastructure and training into Chapter V; the new title of the chapter thus reads "Standards, Quality Control and Quality Assurance Obligations of DNA Laboratory and Infrastructure and Training".
  • Expert Committee Recommendation: N/A
  • 2015 Bill
  1. Changing of Section 20(2) from "(2) Head of every DNA laboratory shall ensure that laboratory personnel keep abreast of developments within the field of DNA and maintain such records on the relevant qualifications, training, skills and experience of the technical personnel employed in the laboratory as may be specified by the regulations made by the Board" (2012) to "Every DNA laboratory shall employ such qualified technical personnel as may be specified by the regulations and technical personnel shall undergo regular training in DNA related subjects in such institutions and at such intervals as may be specified by the regulations" (2015), and addition in the 2015 Bill of Section 20(3): "Head of every DNA laboratory shall ensure that laboratory personnel keep abreast of developments within the field of DNA profiling and maintain such records on the relevant qualifications, training, skills and experience of the technical personnel employed in the laboratory as may be specified by the regulations" (2015). Note: This is as per the Sub-Committee's recommendation.
  2. Amending of Clause 23(1) to read as "Every DNA laboratory shall possess and shall follow a validation process as may be specified by the regulations." Note: This is as per the Sub-Committee's recommendation.
  3. Changing of section 30 from: "Every DNA laboratory shall employ such qualified technical personnel as may be specified by the regulations made by the Board and technical personnel shall undergo regular training in DNA related subjects in such institutions and at such intervals as may be specified by the regulations made by the Board." (2012) to "Every DNA laboratory shall have installed appropriate security system and system for safety of personnel as may be specified by the regulations."
  • Sections 28-31 on infrastructure and training have been brought into Chapter V, and thus the new title of the chapter reads "Standards, Quality Control and Quality Assurance Obligations of DNA Laboratory and Infrastructure and Training". Note: This is as per the Sub-Committee's recommendation.

Chapter VI : DNA Data Bank

  • CIS Recommendation:
  1. Removal of section 32(6), which requires the names of individuals to be connected to their profiles; CIS recommended that DNA profiles, once developed, should be anonymized and retained separately from the names of their owners.
  2. Section 34(2) to be limited to containing only an offenders' index and a crime scene index
  3. Removal of section 36, which allows for international disclosures of DNA profiles of Indians.
  • Sub-Committee Recommendation:
  1. Amend Clause 32(1) to read as: "The Central Government shall, by notification, establish a National DNA Data Bank".
  2. Anonymize the volunteer's database.
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from 2012 Bill.

Chapter VII : Confidentiality of and access to DNA profiles, samples, and records

  • CIS Recommendation:
  1. Re-drafting sections 39 and 40 to specify that DNA can only be used for forensic purposes and to specify the manner in which DNA profiles may be received in evidence.
  2. Removal of section 40
  3. Removal of section 43
  4. Re-draft section 45 as it sets out a post-conviction right related to criminal procedure and evidence. This would fundamentally alter the nature of India's criminal justice system, which currently does not contain specific provisions for post-conviction testing rights. However, courts may re-try cases in certain narrow circumstances when fresh evidence is brought forth that has a nexus to the evidence upon which the person was convicted and if it can be proved that the fresh evidence was not earlier adduced due to bias. Any other fresh evidence that may be uncovered cannot prompt a new trial. Clause 45 is implicated by Article 20(2) of the Constitution of India and by section 300 of the CrPC. The principle of autrefois acquit that informs section 300 of the CrPC specifically deals with exceptions to the rule against double jeopardy that permit re-trials. [See, for instance, Sangeeta Mahendrabhai Patel (2012) 7 SCC 721.]
  • Sub-Committee Recommendation:
  1. Amend Clause 40(f) to read as: "-------to the concerned parties to the said civil dispute or civil matter, with the concurrence of the court and to the concerned judicial officer or authority". Incorporated, but now located at section 39.
  2. Include in Chapter VIII additional sections: Clause 42A: "A person whose DNA profile has been created shall be given a copy of the DNA profile upon request". Clause 42B: "A person whose DNA profile has been created and stored shall be given information as to who has accessed his DNA profile or DNA information".
  • Expert Committee: N/A
  • 2015 Bill:
  1. Addition of the phrase "with the concurrence of the court" in section 39; thus the new clause reads as: "-------to the concerned parties to the said civil dispute or civil matter, with the concurrence of the court and to the concerned judicial officer or authority". Note: This is as per the recommendations of the Sub-Committee.

Chapter VIII : Finance, Accounts, and Audit

  • CIS Recommendation: N/A
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from the 2012 Bill

Chapter IX : Offences and Penalties

  • CIS Recommendation:
  1. The law prohibits the delegation of “essential legislative functions” [In re Delhi Laws, 1951]. The creation of criminal offences must be conducted by a statute that is enacted by Parliament, and when offences are created via delegated legislation, such as Rules, the quantum of punishment must be pre-set by the parent statute.
  2. Since the listing of offences for DNA profiling will directly affect the fundamental right to personal liberty, the identification of these offences should be subject to the democratic process of the legislature rather than be determined by the whims of the executive.
  • Sub-Committee Recommendation:
  1. Ensure that the minimum jail term for any offence under the Act involving access to DNA Data Banks without authorization is one month (Chapter 10(53)). Note: This already existed in the 2012 Bill.
  2. Add to Section 56 the phrase “… or otherwise willfully neglects any other duty cast upon him under the provisions of this Act, shall be punishable …”.
  • Expert Committee: N/A
  • 2015 Bill: No change from 2012 Bill

Chapter X : Miscellaneous

  • CIS Recommendation: N/A
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: N/A
  • 2015 Bill: No change from 2012 Bill

Schedule

  • CIS Recommendation

The creation of a list of offences upon arrest for which DNA samples may lawfully be collected from the arrested person without his consent, including:

  1. Any offence under the Indian Penal Code, 1860 if it is listed as a cognizable offence in Part I of the First Schedule of the Code of Criminal Procedure, 1973; [Alternatively, all cognizable offences under the Indian Penal Code may be listed here]
  2. Every offence punishable under the Immoral Traffic (Prevention) Act, 1956;
  3. Any cognizable offence under the Indian Penal Code, 1860 that is committed by a registered medical practitioner and is not saved under section 3 of the Medical Termination of Pregnancy Act, 1971; [Note that the MTP Act does not itself create or list any offences; it only saves doctors from prosecution for IPC offences if certain conditions are met]
  4. Every offence punishable under the Pre-conception and Pre-natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994;
  5. The offence listed under sub-section (1) of section 31 of the Protection of Women from Domestic Violence Act, 2005;
  6. Every offence punishable under the Protection of Civil Rights Act, 1955;
  7. Every offence punishable under the Scheduled Castes and the Scheduled Tribes (Prevention of Atrocities) Act, 1989.
  • Sub-Committee Recommendation: N/A
  • Expert Committee Recommendation: Incorporation of CIS's recommendation to the schedule regarding instances of when DNA samples can be collected without consent.
  • 2015 Bill:
  1. Addition in 2015 of “Part II: List of specified offences - Any offence under the Indian Penal Code, 1860 if it is listed as a cognizable offence in Part I of the First Schedule of the Code of Criminal Procedure, 1973” (2015). Note: This represents partial incorporation of CIS's recommendation.
  2. Expansion of sources of samples for DNA profiling from "(1) Scene of occurrence or crime (2) Tissue and skeleton remains (3) Clothing and other objects (4) Already preserved body fluids and other samples" (2012) to "1. Scene of occurrence, or scene of crime 2. Tissue and skeleton remains 3. Clothing and other objects 4. Already preserved body fluids and other samples 5. Medical Examination 6. Autopsy examination 7. Exhumation" (2015), and deletion of "Manner of collection of samples for DNA: (1) Medical Examination (2) Autopsy examination (3) Exhumation" (2012)

CIS submission to the UNGA WSIS+10 Review

by Jyoti Panday last modified Aug 09, 2015 04:24 PM
The Centre for Internet & Society (CIS) submitted its comments to the non-paper on the UNGA Overall Review of the Implementation of the WSIS outcomes, evaluating the progress made and challenges ahead.

To what extent has progress been made on the vision of the people-centred, inclusive and development-oriented Information Society in the ten years since the WSIS?
The World Summit on the Information Society (WSIS) in 2003 and 2005 played an important role in encapsulating the potential of knowledge and information and communication technologies (ICT) to contribute to economic and social development. Over the past ten years, most countries have sought to foster the use of information and knowledge by creating an enabling environment for innovation and through efforts to increase access. There have been interventions to promote ICT for development at both the international and national levels through private sector investment, bilateral treaties and national strategies.

However, much of the progress made in the past ten years in terms of getting people connected and reaping the benefits of ICT has not been sufficiently people-centred, nor has it been sufficiently inclusive.

These developments have not been sufficiently people-centred, since governments across the world have been using the Internet as a monumental surveillance tool, invading people's privacy without legitimate justifications, in an arbitrary manner without due care for reasonableness, proportionality, or democratic accountability. These developments have not been sufficiently people-centred, since the largest and most profitable Internet businesses — businesses that have more users than most nation-states have citizens, yet have one-sided terms of service — have eschewed core principles like open standards and interoperability that helped create the Internet and the World Wide Web, and instead promote silos.

We still reside in a world where development has been very lopsided, and ICTs have contributed to reducing some of these gulfs while exacerbating others. For instance, persons with visual impairment are largely yet to reap the benefits of the Information Society due to a lack of attention paid to universal design, while sighted persons have benefited far more; the ability of persons who don't speak a language like English to contribute to global Internet governance discussions is severely limited; and the spread of academic knowledge largely remains behind prohibitive paywalls.

As ICTs have grown both in sophistication and reach, much work remains to achieve the people-centred, inclusive and development-oriented information society envisaged in WSIS. While the diffusion of ICTs has created new opportunities for development, even today less than half the world has access to broadband (with only eleven per cent of the world's population having access to fixed broadband). See International Telecommunication Union, ICT Facts and Figures: The World in 2015.

Ninety per cent of people connected come from the industrialized countries — North America (thirty per cent), Europe (thirty per cent) and the Asia-Pacific (thirty per cent). Four billion people from developing countries remain offline, representing two-thirds of the population residing in developing countries. Of the nine hundred and forty million people residing in Least Developed Countries (LDCs), only eighty-nine million use the Internet and only seven per cent of households have Internet access, compared with the world average of forty-six per cent. See International Telecommunication Union, ICT Facts and Figures: The World in 2015. This digital divide is first and foremost a question of access to basic infrastructure (like electricity).

Furthermore, there is a problem of affordability, which is all the more acute in countries of the South than in countries of the North because of the high cost of connectivity. Linguistic, educational, cultural and content-related barriers also contribute to this digital divide. The growth of restrictive regimes around intellectual property, the security of critical infrastructure in light of ever-growing vulnerabilities, the loss of trust following revelations around mass surveillance, and the lack of consensus on how to tackle these concerns are all proving to be challenges to the vision of an equal and connected information society. The WSIS+10 overall review is a timely and much-needed intervention in assessing the progress made and planning for the challenges ahead.

Two bodies emerged as major outcomes of the WSIS process: the Internet Governance Forum and the Digital Solidarity Fund, both of which have largely failed to achieve their intended goals. The Internet Governance Forum, which is meant to be a leading example of "multi-stakeholder governance", is also a leading example of what the Multi-stakeholder Advisory Group (MAG) itself described in 2010 as a "'black box' approach", with the entire process around the nomination and selection of the MAG being opaque. Indeed, when CIS requested the IGF Secretariat to share information on the nominators, we were told that this information would not be made public. Five years since the MAG lamented its own black-box nature, things have scarcely improved. Further, analysis of MAG membership since 2006 shows that 26 persons have served for 6 years or more, with the majority of them being from government, industry, or the technical community. Unsurprisingly, 36 per cent of the MAG membership has come from the WEOG group, highlighting both deficiencies in the nomination/selection process as well as the need for capacity building in this most important area. The Digital Solidarity Fund failed for a variety of reasons, which we have analysed in a separate document annexed to this response.

What are the challenges to the implementation of WSIS outcomes?

Some of the key areas that need attention going forward and need to be addressed include:

Access to Infrastructure

  • Developing policies aimed at promoting innovation and increasing affordable access to hardware and software, and curbing the ill effects of the currently excessive patent and copyright regimes.
  • Focussing global energies on solutions to last-mile access to the Internet in a manner that is not decoupled from developmental ground realities.
  • This would include policies on spectrum sharing, freeing up underutilized spectrum, and increasing unlicensed spectrum.
  • This would also include governmental policies on increasing competition among Internet providers at the last mile as well as at the backbone (both nationally and internationally), as well as commitments for investments in basic infrastructure such as an open-access national fibre-optic backbone where private sector investment is not sufficient.
  • Developing policies that encourage local Internet and communications infrastructure in the form of Internet exchange points, data centres, community broadcasting.

Access to Knowledges

  • As the Washington Declaration on IP and the Public Interest points out, the enclosure of the public domain and knowledge commons through expansive "intellectual property" laws and policies has only gotten worse with digital technologies, leading to an unjust allocation of information goods and continuing royalty outflows from the global South to a handful of developed countries. This is not sustainable, and urgent action is needed to achieve more democratic IP laws and to prevent extra-judicial enforcement mechanisms, such as digital restrictions management systems, from being incorporated within Web standards.
  • Aggressive development of policies and adoption of best practices to ensure that persons with disabilities are not treated as second-grade citizens, but are able to fully and equally participate in and benefit from the Information Society.
  • Despite the rise of video content on the Internet, much of that has been in parts of the world with already high literacy, and language and illiteracy continue to pose barriers to full usage of the Internet.
  • While the Tunis Agenda highlighted the need to address communities marginalized in Information Society discourse, including youth, older persons, women, indigenous peoples, people with disabilities, and remote and rural communities, not much progress has been seen on this front.

Rights, Trust, and Governance

  • Ensuring effective and sustainable participation especially from developing countries and marginalised communities. Developing governance mechanisms that are accountable, transparent and provide checks against both unaccountable commercial interests as well as governments.
  • Building citizen trust through legitimate, accountable and transparent governance mechanisms.
  • Ensuring cooperation between states, since security is influenced by global foreign policy, is of principal importance to citizens and consumers, and is an enabler of other rights.
  • As the Manila Principles on Intermediary Liability show, uninformed intermediary liability policies, blunt and heavy-handed regulatory measures that fail to meet the principles of necessity and proportionality, and a lack of consistency across these policies have resulted in censorship and other human rights abuses by governments and private parties, limiting individuals' rights to free expression and creating an environment of uncertainty that also impedes innovation online. In developing, adopting, and reviewing legislation, policies and practices that govern the liability of intermediaries, interoperable and harmonized regimes that can promote innovation while respecting users' rights in line with the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the United Nations Guiding Principles on Business and Human Rights are needed and should be encouraged.
  • An important challenge before the Information Society is the rise of the "quantified society", where enormous amounts of data are generated constantly, leading to great possibilities and grave concerns regarding privacy and data protection.
  • Reducing tensions arising from the differences between cultural and digital nationalism including on issues such as data sovereignty, data localisation, unfair trade and the need to have open markets.
  • Currently, there is a lack of internationally recognized venues accessible to all stakeholders for not only discussing but also acting upon many of these issues.

What should be the priorities in seeking to achieve WSIS outcomes and progress towards the Information Society, taking into account emerging trends?
All the challenges mentioned above should be a priority in achieving WSIS outcomes and ensuring that innovation leads to social and economic progress in society. Digital literacy, multilingualism and addressing privacy and user data related issues need urgent attention in the global agenda. Enabling increased citizen participation, thus accounting for the diverse voices that make the Internet a unique medium, should also be treated as a priority. Renewing the IGF mandate and giving it teeth by adopting indicators for development and progress, periodic review and working towards tangible outcomes would be beneficial to achieving the goal of a connected information society.

What are general expectations from the WSIS + 10 High Level Meeting of the United Nations General Assembly?
We would expect the WSIS+10 High Level Meeting to endorse an outcome document that seeks to develop a comprehensive policy framework addressing the challenges highlighted above. It would also be beneficial if the outcome document could identify further steps to assess the progress made so far and actions for overcoming the identified challenges. Importantly, this should not only be aimed at governments, but at all stakeholders. This would be useful as a future road map for regulation and would also allow us to understand the impact of the Internet on society.

What shape should the outcome document take?
The outcome document should be a resolution of the UN General Assembly, with high level policy statements and adopted agreements to work towards identified indicators. It should stress the urgency of reforms needed for ICT governance that is democratic, respectful of human rights and social justice and promotes participatory policymaking. The language should promote the use of technologies and institutional architectures of governance that ensure users’ rights over data and information and recognize the need to restrict abusive use of technologies including those used for mass surveillance. Further, the outcome document should underscore the relevance of the Universal Declaration of Human Rights, including civil, political, social, economic, and cultural rights, in the Information Society.

The outcome document should also acknowledge that certain issues such as security, ensuring transnational rights, taxation, and other such cross jurisdictional issues may need greater international cooperation and should include concrete steps on how to proceed on these issues. The outcome document should acknowledge the limited progress made through outcome-less multi-stakeholder governance processes such as the Internet Governance Forum, which favour status quoism, and seek to enable the IGF to be more bold in achieving its original goals, which are still relevant. It should be frank in its acknowledgement of the lack of consensus on issues such as “enhanced cooperation” and the “respective roles” of stakeholders in multi-stakeholder processes, as brushing these difficulties under the carpet won’t help in magically building consensus. Further, the outcome document should recognize that there are varied approaches to multi-stakeholder governance.

A Review of the Policy Debate around Big Data and Internet of Things

by Elonnai Hickok last modified Aug 17, 2015 08:36 AM
This blog post seeks to review and understand how regulators and experts across jurisdictions are reacting to Big Data and Internet of Things (IoT) from a policy perspective.

Defining and Connecting Big Data and Internet of Things

The Internet of Things is a term that refers to networked objects and systems that can connect to the internet and transmit and receive data. Characteristics of IoT include the gathering of information through sensors, the automation of functions, and the analysis of collected data.[1] Because of the velocity at which data is generated, the volume of data generated, and the variety of data generated by different sources,[2] IoT devices can be understood as generating Big Data and/or relying on Big Data analytics. In this way, IoT devices and Big Data are intrinsically interconnected.

General Implications of Big Data and Internet of Things

Big Data paradigms are being adopted across countries, governments, and business sectors because of the potential insights and change they can bring. From improving an organization's business model and facilitating urban development to allowing for targeted and individualized services and enabling the prediction of certain events or actions, the application of Big Data has been recognized as having the potential to bring about dramatic and large-scale changes.

At the same time, experts have identified risks to the individual that can be associated with the generation, analysis, and use of Big Data. In May 2014, the White House of the United States completed a ninety-day study of how big data will change everyday life. The Report highlights the potential of Big Data while identifying a number of associated concerns: for example, the selling of personal data, identification or re-identification of individuals, profiling of individuals, the creation and exacerbation of information asymmetries, unfair, discriminatory, biased, and incorrect decisions based on Big Data analytics, and a lack of, or misinformed, user consent.[3] Errors in Big Data analytics that experts have identified include statistical fallacies, human bias, translation errors, and data errors.[4] Experts have also discussed fundamental changes that Big Data can bring about. For example, Danah Boyd and Kate Crawford, in the article "Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon", propose that Big Data can change the definition of knowledge and shape the reality it measures.[5] Similarly, a BCS/Oxford Internet Institute conference report titled "The Societal Impact of the Internet of Things" points out that users of Big Data often assume that information and conclusions based on digital data are reliable and in turn replace other forms of information with digital data.[6]

Concerns that have been voiced by the Article 29 Working Party and others specifically about IoT devices have included insufficient security features built into devices such as encryption, the reliance of the devices on wireless communications, data loss from infection by malware or hacking, unauthorized access and use of personal data, function creep resulting from multiple IoT devices being used together, and unlawful surveillance.[7]

Regulation of Big Data and Internet of Things

The regulation of Big Data and IoT is currently being debated in contexts such as the US and the EU. Academics, civil society, and regulators are exploring questions around the adequacy of present regulatory and oversight frameworks to address the changes brought about by Big Data and, where these are inadequate, what forms of or changes in regulation are needed. For example, Kate Crawford and Jason Schultz, in the article "Big Data and Due Process: Towards a Framework to Redress Predictive Privacy Harms", stress the importance of bringing in 'data due process rights', i.e., ensuring fairness in the analytics of Big Data and in how personal information is used.[8] Solon Barocas and Andrew Selbst, in the article "Big Data's Disparate Impact", explore whether present anti-discrimination legislation and jurisprudence in the US is adequate to protect against discrimination arising from Big Data practices, specifically data mining.[9]

The Impact of Big Data and IoT on Data Protection Principles

In the context of data protection, various government bodies, including the Article 29 Data Protection Working Party set up under the Directive 95/46/EC of the European Parliament, the Council of Europe, the European Commission, and the Federal Trade Commission, as well as experts and academics in the field, have called out at least ten different data protection principles and concepts that Big Data impacts:

  1. Collection Limitation: As a result of the generation of Big Data as enabled by networked devices, increased capabilities to analyze Big Data, and the prevalent use of networked systems - the principle of collection limitation is changing.[10]
  2. Consent: As a result of the use of data from a wide variety of sources and the re-use of data which is inherent in Big Data practices - notions of informed consent (initial and secondary) are changing.[11]
  3. Data Minimization: As a result of Big Data practices inherently utilizing all data possible - the principle of data minimization is changing/obsolete.[12]
  4. Notice: As a result of Big Data practices relying on vast amounts of data from numerous sources and the re-use of that data - the principle of notice is changing.[13]
  5. Purpose Limitation: As a result of Big Data practices re-using data for multiple purposes - the principle of purpose limitation is changing/obsolete.[14]
  6. Necessity: As a result of Big Data practices re-using data, the new use or re-analysis of data may not be pertinent to the purpose that was initially specified- thus the principle of necessity is changing.[15]
  7. Access and Correction: As a result of Big Data being generated (and sometimes published) at scale and in real time - the principle of user access and correction is changing.[16]
  8. Opt In and Opt Out Choices: Particularly in the context of smart cities and IoT which collect data on a real time basis, often without the knowledge of the individual, and for the provision of a service - it may not be easy or possible for individuals to opt in or out of the collection of their data.[17]
  9. Personal Information (PI): As a result of Big Data analytics using and analyzing a wide variety of data, new or unexpected forms of personal data may be generated - thus challenging and evolving beyond traditional or specified definitions of personal information.[18]
  10. Data Controller: In the context of IoT, given the multitude of actors that can collect, use and process data generated by networked devices, the traditional understanding of what and who is a data controller is changing.[19]

Possible Technical and Policy Solutions

In a report titled "Internet of Things: Privacy & Security in a Connected World", the Federal Trade Commission of the United States noted that though IoT changes the application and understanding of certain privacy principles, it does not necessarily make them obsolete.[20] Indeed, many of the solutions that have been suggested to address the challenges posed by IoT and Big Data are technical interventions at the device level rather than fundamental policy changes. For example, it has been proposed that IoT devices can be programmed to (see the sketch after this list):

  • Automatically delete data after a specified period of time [21] (addressing concerns of data retention)
  • Ensure that personal data is not fed into centralized databases on an automatic basis [22] (addressing concerns of transfer and sharing without consent, function creep, and data breach)
  • Offer consumers combined choices for consent rather than requiring a one time blanket consent at the time of initiating a service or taking fresh consent for every change that takes place while a consumer is using a service. [23] (addressing concerns of informed and meaningful consent)
  • Categorize and tag data with accepted uses and programme automated processes to flag when data is misused. [24] (addressing concerns of misuse of data)
  • Apply 'sticky policies' - policies that are attached to data and define appropriate uses of the data as it 'changes hands' [25] (addressing concerns of user control of data)
  • Allow for features to only be turned on with consent from the user [26] (addressing concerns of informed consent and collection without the consent or knowledge of the user)
  • Automatically convert raw personal data to aggregated data [27] (addressing concerns of misuse of personal data and function creep)
  • Offer users the option to delete or turn off sensors [28] (addressing concerns of user choice, control, and consent)
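
As a rough illustration of two of the ideas listed above, the Python sketch below shows a device-level store that automatically deletes readings after a retention window and shares only aggregates rather than raw personal data. It is a minimal sketch of the design pattern, not any vendor's actual firmware; the class name, the one-week retention window, and the record layout are all assumptions.

```python
# A minimal, illustrative sketch (not a real IoT SDK): it shows automatic
# deletion after a retention period, and conversion of raw personal readings
# to aggregates before they leave the device. All names are hypothetical.
import time
from statistics import mean

RETENTION_SECONDS = 7 * 24 * 3600  # assumed one-week retention window


class DeviceDataStore:
    """Hypothetical on-device store that enforces retention and aggregation."""

    def __init__(self):
        self._readings = []  # list of (timestamp, value) tuples kept locally

    def record(self, value):
        """Store a raw reading locally, then purge anything past retention."""
        self._readings.append((time.time(), value))
        self._purge_expired()

    def _purge_expired(self):
        """Automatically delete data older than the retention period."""
        cutoff = time.time() - RETENTION_SECONDS
        self._readings = [(ts, v) for ts, v in self._readings if ts >= cutoff]

    def export_aggregate(self):
        """Share only an aggregate, never the raw per-reading data."""
        self._purge_expired()
        if not self._readings:
            return None
        return {"count": len(self._readings),
                "mean": mean(v for _, v in self._readings)}


if __name__ == "__main__":
    store = DeviceDataStore()
    for reading in (21.4, 22.0, 21.7):
        store.record(reading)
    print(store.export_aggregate())  # e.g. {'count': 3, 'mean': 21.7}
```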

Such solutions place the designers and manufacturers of IoT devices in a critical role. Yet some, such as Kate Crawford and Jason Schultz, are not entirely optimistic about the possibility of effective technological solutions, noting in the context of automated decision-making that it is difficult to build in privacy protections as it is unclear when an algorithm will predict personal information about an individual.[29]

Experts have also suggested that more emphasis should be placed on the principles and practices of:

  • Transparency
  • Access and correction
  • Use/misuse
  • Breach notification
  • Remedy
  • Ability to withdraw consent

Others have recommended that certain privacy principles need to be adapted to the Big Data/IoT context. For example, the Article 29 Working Party has clarified that in the context of IoT, consent mechanisms need to include the types of data collected, the frequency of data collection, and the conditions for data collection.[30] The Federal Trade Commission, meanwhile, has warned that adopting a pure "use"-based model has its limitations: it requires a clear (and potentially changing) definition of what use is acceptable and what use is not, and it does not address concerns around the collection of sensitive personal information.[31] In addition to the above, the European Commission has stressed that the right of deletion, the right to be forgotten, and data portability also need to be foundations of IoT systems and devices.[32]
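
As a rough illustration of what such an adapted consent mechanism might record, the hypothetical structure below captures the types of data collected, the frequency of collection, the conditions attached, and the ability to withdraw consent. The field names and Python representation are assumptions made for illustration, not a format prescribed by the Article 29 Working Party or any other regulator.

```python
# A hypothetical consent record for an IoT service, covering the types of data
# collected, the frequency of collection, and the conditions of collection.
# Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class ConsentRecord:
    user_id: str
    data_types: List[str]            # e.g. ["heart_rate", "location"]
    collection_frequency: str        # e.g. "every 5 minutes"
    conditions: List[str]            # e.g. ["only while the app is open"]
    granted_at: datetime = field(default_factory=datetime.utcnow)
    withdrawn_at: Optional[datetime] = None  # supports withdrawal of consent

    def is_active(self) -> bool:
        """Consent is valid only until the user withdraws it."""
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        """Record withdrawal; the service should stop collection from here on."""
        self.withdrawn_at = datetime.utcnow()


consent = ConsentRecord(
    user_id="user-123",
    data_types=["heart_rate"],
    collection_frequency="every 5 minutes",
    conditions=["only during workouts"],
)
consent.withdraw()
assert not consent.is_active()
```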

Possible Regulatory Frameworks

To the question of whether current regulatory frameworks are adequate and whether additional legislation is needed, the FTC has recommended that though specific IoT legislation may not be necessary, horizontal privacy legislation would be useful, as sectoral legislation does not always account for the use, sharing, and reuse of data across sectors. The FTC also highlighted the usefulness of privacy impact assessments and self-regulatory steps to ensure privacy.[33] The European Commission, on the other hand, has concluded that to ensure enforcement of any standard or protocol, hard legal instruments are necessary.[34]

As mentioned earlier, Kate Crawford and Jason Schultz have argued that privacy regulation needs to move away from principles on collection, specific use, disclosure, notice, etc., and focus on elements of due process around the use of Big Data, what they term "procedural data due process". Such due process should be based on values instead of defined procedures and should include, at a minimum, notice, a hearing before an independent arbitrator, and the right to review. Crawford and Schultz more broadly note that there are conceptual differences between privacy law and big data that pose serious challenges, i.e., privacy law is based on causality while big data is a tool of correlation. This difference raises questions about how effective regulation that identifies certain types of information and then seeks to control the use, collection, and disclosure of such information will be in the context of Big Data, something that is varied and dynamic. According to Crawford and Schultz, many regulatory frameworks will struggle with this difference, including the FTC's Fair Information Practice Principles and EU regulation, including the EU's right to be forgotten.[35]

The European Data Protection Supervisor, on the other hand, looks at Big Data as spanning the policy areas of data protection, competition, and consumer protection, particularly in the context of 'free' services. The Supervisor argues that these three areas need to come together to develop ways in which the challenges of Big Data can be addressed. For example, remedy could take the form of data portability, ensuring users the ability to move their data to other service providers, thereby empowering individuals and promoting competitive market structures, or of adopting a 'compare and forget' approach to the retention of customer data. The Supervisor also stresses the need to promote and treat privacy as a competitive advantage, thus placing importance on consumer choice, consent, and transparency.[36]

The European Data Protection reform has been under discussion and is predicted to be enacted by the end of 2015. The reform will apply across European States and to all companies operating in Europe. It proposes heavier penalties for data breaches and seeks to provide users with more control of their data.[37] Additionally, Europe is considering bringing digital platforms under the Network and Information Security Directive, thus treating companies like Google and Facebook as well as cloud providers and service providers as a critical sector. Such a move would require companies to adopt stronger security practices and report breaches to authorities.[38]
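
To make the idea of data portability as a remedy more concrete, the short sketch below shows one way a provider might export a user's records in a machine-readable format that another service could import. The record layout and function name are illustrative assumptions, not a schema prescribed by any regulator.

```python
# A minimal sketch of data portability: exporting a user's data in a
# machine-readable format so it can be moved to another provider. The layout
# is an assumption for illustration only.
import json


def export_user_data(user_id, records):
    """Bundle a user's records into a portable, machine-readable JSON document."""
    portable = {
        "user_id": user_id,
        "format_version": "1.0",
        "records": records,  # e.g. call detail records, preferences, history
    }
    return json.dumps(portable, indent=2, sort_keys=True)


if __name__ == "__main__":
    sample_records = [{"type": "preference", "key": "language", "value": "en"}]
    print(export_user_data("user-123", sample_records))
```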

Conclusion

A review of the different opinions and reactions from experts and policy makers demonstrates the ways in which Big Data and IoT are changing the traditional forms of protection that governments and societies have developed to protect personal data as it increases in value and importance. Some policy makers believe that Big Data needs strong legislative regulation, while others believe that softer forms of regulation such as self- or co-regulation are more appropriate. What is clear is that Big Data is creating a regulatory dilemma, with some policy makers searching for ways to control its unpredictable nature through policy and technology (by merging policy areas, honing existing policy mechanisms, or broadening existing policy mechanisms), while others are ignoring the change that Big Data brings with it and are forging ahead with its use.

Answering the 'how do we regulate Big Data' question requires a re-conceptualization of data ownership and realities. Governments need to first recognize the criticality of their data and the data of their citizens and residents, as well as the contribution that this data makes to a country's economy and security. With the technologies available now, and in the pipeline, data can be used or misused in ways that will have vast repercussions for individuals, society, and a nation. All data, but especially data directly or indirectly related to citizens and residents of a country, needs to be looked upon as owned by the citizens and the nation. In this way, data should be seen as part of the critical national infrastructure of a nation, and accorded the security, protections, and legal backing thereof to prevent the misuse of the resource by the private or public sectors, or by local or foreign governments. This could allow for local data warehousing and bring the physical and access security of data warehouses on par with other critical national infrastructure. Recognizing data as a critical resource answers in part the concern that experts have raised: that Big Data practices make it impossible for data to be categorized as personal, and thus afforded specified forms of protection, because of the unpredictable nature of Big Data. Instead, all data is now recognized as critical.

In addition to being able to generate personal data from anonymized or non-identifiable data, Big Data also challenges traditional divisions of public vs. private data. Indeed, Big Data analytics can take many public data points and derive a private conclusion. The use of Big Data analytics on public data also raises questions of consent. For example, though a license plate is public information, should a company be allowed to harvest license plate numbers, combine this with location, and sell this information to different interested actors? This is currently happening in the United States.[39] Lastly, Big Data raises questions of ownership. A solution to the uncertainty of public vs. private data and the associated questions of consent and ownership could be the creation of a National Data Archive to hold such data. The archive could function with representation from the government, public and private companies, and civil society on its board. In such a framework, for example, companies like Airtel would provide mobile services, but the CDRs and customer data collected by the company would belong to the National Data Archive and be available to Airtel and all other companies within a certain scope for use, as sketched below. This 'open data' approach could enable innovation through the use of data, but within the ambit of national security and the concerns of citizens, a framework that could instill trust in consumers and citizens. Only when backed with strong security requirements, enforcement mechanisms and a proactive, responsive and responsible framework can governments begin to think about ways in which Big Data can be harnessed.
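
As a toy sketch of the scoped-access arrangement described above, the snippet below shows data being deposited into a shared archive and read back only within scopes that a company has been granted. The class, scope labels, and access model are hypothetical; they illustrate the proposal rather than describe any existing system.

```python
# A toy sketch of scoped access to a shared data archive. All names and the
# scope model are hypothetical illustrations of the proposal above.
class NationalDataArchive:
    """Hypothetical shared archive that enforces per-company access scopes."""

    def __init__(self):
        self._records = []   # list of dicts, each tagged with a scope
        self._grants = {}    # company name -> set of allowed scopes

    def deposit(self, record, scope):
        """A service provider deposits a record tagged with a data scope."""
        self._records.append({"scope": scope, "data": record})

    def grant(self, company, scopes):
        """The archive's board grants a company access to specific scopes."""
        self._grants.setdefault(company, set()).update(scopes)

    def read(self, company, scope):
        """Return only records within a scope the company has been granted."""
        if scope not in self._grants.get(company, set()):
            raise PermissionError(f"{company} has no access to scope '{scope}'")
        return [r["data"] for r in self._records if r["scope"] == scope]


archive = NationalDataArchive()
archive.deposit({"caller": "user-123", "duration_s": 42}, scope="cdr")
archive.grant("TelcoA", {"cdr"})
print(archive.read("TelcoA", "cdr"))   # allowed
# archive.read("TelcoB", "cdr") would raise PermissionError
```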


[1] BCS - The Chartered Institute for IT. (2013). The Societal Impact of the Internet of Things. Retrieved May 17, 2015, from http://www.bcs.org/upload/pdf/societal-impact-report-feb13.pdf

[2] Sicular, S. (2013, March 27). Gartner’s Big Data Definition Consists of Three Parts, Not to Be Confused with Three “V”s. Retrieved May 20, 2015, from http://www.forbes.com/sites/gartnergroup/2013/03/27/gartners-big-data-definition-consists-of-three-parts-not-to-be-confused-with-three-vs/

[3] Executive Office of the President. “Big Data: Seizing Opportunities, Preserving Values”. May 2014. Available at: https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_5.1.14_final_print.pdf. Accessed: July 2nd 2015.

[4] Bennett Moses, L., & Chan, J. (2014). Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools (SSRN Scholarly Paper No. ID 2513564). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2513564

[5] Boyd, D., & Crawford, K. (2012). Critical Questions for Big Data. Information, Communication & Society, 15(5). Available at: http://www.tandfonline.com/doi/full/10.1080/1369118X.2012.678878. Accessed: July 2nd 2015.

[6]  The Chartered Institute for IT, Oxford Internet Institute, University of Oxford. “The Societal Impact of the Internet of Things” February 2013. Available at: http://www.bcs.org/upload/pdf/societal-impact-report-feb13.pdf. Accessed: July 2nd 2015.

[7] ARTICLE 29 Data Protection Working Party. (2014). Opinion 8/2014 on the on Recent Developments on the Internet of Things. European Commission. Retrieved May 20, 2015, from http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf

[8] Crawford, K., & Schultz, J. (2013). Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms (SSRN Scholarly Paper No. ID 2325784). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2325784

[9] Barocas, S., & Selbst, A. D. (2015). Big Data’s Disparate Impact (SSRN Scholarly Paper No. ID 2477899). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2477899

[10] Barocas, S., & Selbst, A. D. (2015). Big Data’s Disparate Impact (SSRN Scholarly Paper No. ID 2477899). Rochester, NY: Social Science Research Network. Retrieved from http://papers.ssrn.com/abstract=2477899

[11] Article 29 Data Protection Working Party. “Opinion 8/2014 on the on Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[12] Tene, O., & Polonetsky, J. (2013). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11(5), 239.

[13]  Omer Tene and Jules Polonetsky, Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013).

[14] Article 29 Data Protection Working Party. “Opinion 8/2014 on the on Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[15] Information Commissioner's Office. (2014). Big Data and Data Protection. Information Commissioner's Office. Retrieved May 20, 2015, from https://ico.org.uk/media/for-organisations/documents/1541/big-data-and-data-protection.pdf

[16] Article 29 Data Protection Working Party. “Opinion 8/2014 on the on Recent Developments on the Internet of Things”. September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[17] The Chartered Institute for IT and Oxford Internet Institute, University of Oxford. “The Societal Impact of the Internet of Things”. February 14th 2013. Available at: http://www.bcs.org/upload/pdf/societal-impact-report-feb13.pdf. Accessed: July 2nd 2015.

[18] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”. Boston College Law Review, Volume 55, Issue 1, Article 4. January 1st 2014. Available at: http://lawdigitalcommons.bc.edu/cgi/viewcontent.cgi?article=3351&context=bclr. Accessed: July 2nd 2015.

[19] Article 29 Data Protection Working Party “Opinion 8/2014 on the on Recent Developments on the Internet of Things” September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[20] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[21] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[22] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[23] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[24] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[25] Article 29 Data Protection Working Party “Opinion 8/2014 on the on Recent Developments on the Internet of Things” September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[26] Article 29 Data Protection Working Party “Opinion 8/2014 on the on Recent Developments on the Internet of Things” September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[27] Article 29 Data Protection Working Party “Opinion 8/2014 on the on Recent Developments on the Internet of Things” September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[28] Article 29 Data Protection Working Party “Opinion 8/2014 on the on Recent Developments on the Internet of Things” September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[29] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”. Boston College Law Review, Volume 55, Issue 1, Article 4. January 1st 2014. Available at: http://lawdigitalcommons.bc.edu/cgi/viewcontent.cgi?article=3351&context=bclr. Accessed: July 2nd 2015.

[30]  Article 29 Data Protection Working Party “Opinion 8/2014 on the on Recent Developments on the Internet of Things” September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[31] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[32] Article 29 Data Protection Working Party “Opinion 8/2014 on the on Recent Developments on the Internet of Things” September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[33] Federal Trade Commission. (2015). Internet of Things: Privacy & Security in a Connected World. Federal Trade Commission. Retrieved May 20, 2015, from https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf

[34] Article 29 Data Protection Working Party “Opinion 8/2014 on the on Recent Developments on the Internet of Things” September 16th 2014. Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp223_en.pdf. Accessed: July 2nd 2015.

[35] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”. Boston College Law Review, Volume 55, Issue 1, Article 4. January 1st 2014. Available at: http://lawdigitalcommons.bc.edu/cgi/viewcontent.cgi?article=3351&context=bclr. Accessed: July 2nd 2015.

[36] European Data Protection Supervisor. Preliminary Opinion of the European Data Protection Supervisor, Privacy and competitiveness in the age of big data: the interplay between data protection, competition law and consumer protection in the Digital Economy. March 2014. Available at: https://secure.edps.europa.eu/EDPSWEB/webdav/site/mySite/shared/Documents/Consultation/Opinions/2014/14-03-26_competitition_law_big_data_EN.pdf

[37] SC Magazine. Harmonised EU data protection and fines by the end of the year. June 25th 2015. Available at: http://www.scmagazineuk.com/harmonised-eu-data-protection-and-fines-by-the-end-of-the-year/article/422740/. Accessed: August 8th 2015.

[38] Tom Jowitt, “Digital Platforms to be Included in EU Cybersecurity Law”. TechWeek Europe. August 7th 2015. Available at: http://www.techweekeurope.co.uk/e-regulation/digital-platforms-eu-cybersecuity-law-174415

[39] Adam Tanner. Data Brokers are now Selling Your Car's Location for $10 Online. July 10th 2013. Available at: http://www.forbes.com/sites/adamtanner/2013/07/10/data-broker-offers-new-service-showing-where-they-have-spotted-your-car/

Big Data and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011

by Elonnai Hickok last modified Aug 11, 2015 07:01 AM
Experts and regulators across jurisdictions are examining the impact of Big Data practices on traditional data protection standards and principles. This will be a useful and pertinent exercise for India to undertake as the government and the private and public sectors begin to incorporate and rely on the use of Big Data in decision-making processes and organizational operations. This blog provides an initial evaluation of how Big Data could impact India's current data protection standards.


India currently does not have comprehensive privacy legislation, but the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, framed under section 43A of the Information Technology Act, 2000,[1] define a data protection framework for the processing of digital data by Body Corporate. Big Data practices will impact a number of the provisions found in the Rules:

Scope of Rules: Currently the Rules apply to Body Corporate and digital data. As per the IT Act, Body Corporate is defined as "Any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities."

The present scope of the Rules excludes from their purview a number of actors that do or could have access to Big Data or use Big Data practices. The Rules would not apply to government bodies or individuals collecting and using Big Data. Yet, with technologies such as IoT and the rise of Smart Cities across India, a range of government, public, and private organizations and actors could have access to Big Data.

Definition of personal and sensitive personal data: Rule 2(i) defines personal information as "information that relates to a natural person which either directly or indirectly, in combination with other information available or likely to be available with a body corporate, is capable of identifying such person."

Rule 3 defines sensitive personal information as:

  • Password,
  • Financial information,
  • Physical/physiological/mental health condition,
  • Sexual orientation,
  • Medical records and history,
  • Biometric information

The present definition of personal data hinges on the factor of identification (data that is capable of identifying a person). Yet this definition does not encompass information that is associated with an already identified individual, such as habits, location, or activity.

The definition of personal data also addresses only the identification of 'such person' and does not address data that relates to a particular person but also reveals identifying information about another person, either directly or when combined with other data points.

By listing specific categories of sensitive personal information, the Rules do not account for additional types of sensitive personal information that might be generated or correlated through the use of Big Data analytics.

Importantly, the definitions of sensitive personal information or personal information do not address how personal or sensitive personal information, when anonymized or aggregated, should be treated.

Consent: Rule 5(1) requires that Body Corporate must, prior to collection, obtain consent in writing through letter or fax or email from the provider of sensitive personal data regarding the use of that data.

In a context where services are delivered with little or no human interaction, data is collected through sensors, data is collected on a real time and regular basis, and data is used and re-used for multiple and differing purposes - it is not practical, and often not possible, for consent to be obtained through writing, letter, fax, or email for each instance of data collection and for each use.

Notice of Collection: Rule 5(3) requires Body Corporate to provide the individual, during collection of information, with a notice that details the fact that information is being collected, the purpose for which the information is being collected, the intended recipients of the information, and the name and address of the agency that is collecting the information and the agency that will retain the information. Furthermore, the body corporate should not retain information for longer than is required to meet lawful purposes.

Though this provision acts as an important element of transparency, in the context of Big Data it could prove difficult to communicate the purpose for which data is collected, the intended recipients of the information, and the name and address of the agency collecting and retaining it, as these details are likely to encompass numerous agencies and to change depending upon the analysis being done.

Access and correction: Rule 5(6) provides individuals with the ability to access sensitive personal information held by the body corporate and correct any inaccurate information.

This provision would be difficult to implement effectively in the context of Big Data as vast amounts of data are being generated and collected on an ongoing and real time basis and often without the knowledge of the individual.

Purpose Limitation: Rule 5(5) requires that body corporate use information only for the purpose for which it has been collected.

In the context of Big Data, this provision does not accommodate the re-use of data that is inherent in such practices.

Security: Rule 8 states that any Body Corporate or person on its behalf will be understood to have complied with reasonable security practices and procedures if they have implemented such practices and have in place codes that address managerial, technical, operational and physical security control measures. These codes could follow the IS/ISO/IEC 27001 standard or another government approved and audited standard.

This provision importantly requires that data controllers collecting and processing data have strong security practices in place. In the context of Big Data, the security of the devices generating or collecting data and of the algorithms processing and analysing it is critical. Once data is generated, it might be challenging to ensure that it is transferred to, or analysed by, only organisations that comply with such security practices.

Data Breach: Rule 8 requires that if a data breach occurs, the Body Corporate must be able to demonstrate that it has implemented its documented information security codes.

Though this provision holds a company accountable for the implementation of security practices, it does not address how a company should be held accountable for a large-scale data breach, whose scope and impact in the context of Big Data is far greater.

Opt in and out and ability to withdraw consent: Rule 5(7) requires that a Body Corporate, or any person on its behalf, prior to the collection of information - including sensitive personal information - give the individual the option of not providing the information, as well as the option of withdrawing consent. Such withdrawal must be sent in writing to the body corporate.

The feasibility of such a provision in the context of Big Data is unclear, especially in light of the fact that Big Data practices draw upon large amounts of data, generated often in real time, and from a variety of sources.

Disclosure of Information: Rule 6 maintains that disclosure of sensitive personal data can only take place with permission from the provider of such information or as agreed to through a lawful contract.

This provision addresses disclosure but does not take into account the “sharing” of information enabled through networked devices, or the increasing practice among companies of sharing anonymized or aggregated data.

Privacy Policy: Rule 4 requires that a body corporate have in place a privacy policy on its website that provides clear and accessible statements of its practices and policies, the types of personal or sensitive personal information being collected, the purpose of collection, the usage of the information, disclosure of the information, and the reasonable security practices and procedures that have been put in place to secure the information.

In the context of Big Data where data from a variety of sources is being collected, used, and re-used it is important for policies to 'follow data' and appear in a contextualized manner. The current requirement of having Body Corporate post a single overarching privacy policy on its website could prove to be inadequate.

Remedy: Section 43A of the Act holds that if a body corporate is negligent in implementing and maintaining reasonable security practices and procedures, resulting in wrongful loss or wrongful gain to any person, the body corporate can be held liable to pay compensation to the affected person.

This provision will provide limited remedy for an affected individual in the context of Big Data. Though important in helping to prevent data breaches resulting from negligent data practices, the implementation of reasonable security practices and procedures cannot be the only basis for determining the liability of a Body Corporate, and many of the harms possible through Big Data are not in the form of wrongful loss or wrongful gain to another person. Indeed, many such harms are non-economic in nature, including physical invasion of privacy and discriminatory practices arising from decisions based on Big Data analytics. Nor does the provision address the potential for future damage resulting from a 'Big Data' data breach.

The safeguards noted in the above section are not the only legal provisions that speak to privacy in India. There are over fifty pieces of sectoral legislation with provisions addressing privacy - for example, provisions addressing the confidentiality of health and banking information. The Government of India is also in the process of drafting privacy legislation. In 2012, the Report of the Group of Experts on Privacy provided recommendations for a privacy framework in India. The Report envisioned a framework of co-regulation, with sector-level self-regulatory organizations developing privacy codes (which may not fall below the defined national privacy principles) that are enforced by a privacy commissioner.[2] Perhaps this method would be optimal for the regulation of Big Data, allowing for the needed flexibility and specificity in standards and device development. Though the Report notes that individuals can seek remedy from the courts and that the Privacy Commissioner can issue fines for a violation, the development of privacy legislation in India has yet to clearly integrate the importance of due process and remedy. With the onset of Big Data, this will become more important than ever.

Conclusion

The use and generation of Big Data in India is growing. Plans such as free wifi zones in cities,[3] city-wide CCTV networks with facial recognition capabilities,[4] and the implementation of an identity/authentication platform for public and private services[5] indicate a move towards data generation that is networked and centralized, and in which the line between public and private is blurred by the vast amount of data collected.

In such developments and innovations what is privacy and what role does privacy play? Is it the archaic inhibitor - limiting the sharing and use of data for new and innovative purposes? Will it be defined purely by legislative norms or through device/platform design as well? Is it a notion that makes consumers think twice about using a product or service or is it a practice that enables consumer and citizen uptake and trust and allows for the growth and adoption of these services?

How privacy will be regulated and how it will be perceived is still evolving across jurisdictions, technologies, and cultures - but it is clear that privacy is not being, and cannot be, overlooked. Governments across the world are reforming and considering current and future privacy regulation targeted towards life in a quantified society. As the Indian government begins to roll out initiatives that create a "Digital India", indeed a "quantified India", taking privacy into consideration could facilitate the uptake, expansion, and success of these practices and services. As the Indian government pursues the opportunities possible through Big Data, it will be useful to review existing privacy protections and deliberate on whether, and in what form, future protections for privacy and other rights will be needed.


[1] Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. Available at: http://deity.gov.in/sites/upload_files/dit/files/GSR313E_10511(1).pdf

[2] Group of Experts on Privacy. (2012). Report of the Group of Experts on Privacy. New Delhi: Planning Commission, Government of India. Retrieved May 20, 2015, from http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf

[3] NDTV. “Free Public Wi-Fi Facility in Delhi to Have Daily Data Limit”. NDTV, May 25th 2015. Available at: http://gadgets.ndtv.com/internet/news/free-public-wi-fi-facility-in-delhi-to-have-daily-data-limit-695857. Accessed: July 2nd 2015.

[4] FindBiometrics Global Identity Management. “Surat Police Get NEC Facial Recognition CCTV System”. July 21st 2015. Available at: http://findbiometrics.com/surat-police-nec-facial-recognition-27214/

[5] UIDAI Official Website. Available at: https://uidai.gov.in/

Right to Privacy in Peril

by Vipul Kharbanda last modified Aug 13, 2015 03:32 PM
It seems to have become quite a fad, especially amongst journalists, to use this headline and claim that the right to privacy which we consider so inherent to our being, is under attack. However, when I use this heading in this piece I am not referring to the rampant illegal surveillance being done by the government, or the widely reported recent raids on consenting (unmarried) adults who were staying in hotel rooms in Mumbai. I am talking about the fact that the Supreme Court of India has deemed it fit to refer the question of the very existence of a fundamental right to privacy to a Constitution Bench to finally decide the matter, and define the contours of such right if it does exist.

In an order dated August 11, 2015, the Supreme Court finally gave in to the arguments advanced by the Attorney General, admitting that there is some “unresolved contradiction” regarding the existence of a constitutional “right to privacy” under the Indian Constitution, and requested that the matter be placed before a Constitution Bench of appropriate strength to settle it.

The Supreme Court was hearing a petition challenging the implementation of the Aadhaar Card Scheme of the government, where one of the grounds of challenge was that the scheme violated the right to privacy guaranteed to all citizens under the Constitution of India. To counter this argument, the State (via the Attorney General) challenged the very proposition that the Constitution of India guarantees a right to privacy, relying on an “unresolved contradiction” in judicial pronouncements on the issue, which so far had only been of academic interest. This “unresolved contradiction” arose because in the cases of M.P. Sharma & Others v. Satish Chandra & Others,[1] and Kharak Singh v. State of U.P. & Others,[2] (decided by Benches of eight and six judges respectively) the Supreme Court had categorically denied the existence of a right to privacy under the Indian Constitution.

However somehow the later case of Gobind v. State of M.P. and another,[3] (which was decided by a two Judge Bench of the Supreme Court) relied upon the opinion given by the minority of two judges in Kharak Singh to hold that a right to privacy does exist and is guaranteed as a fundamental right under the Constitution of India.[4] Thereafter a large number of cases have held the right to privacy to be a fundamental right, the most important of which are R. Rajagopal & Another v. State of Tamil Nadu & Others,[5] (popularly known as Auto Shanker’s case) and People’s Union for Civil Liberties (PUCL) v. Union of India & Another.[6] However, as was noticed by the Supreme Court in its August 11 order, all these judgments were decided by two or three Judges only.

The petitioners, on the other hand, made a number of arguments to counter those made by the Attorney General, to the effect that the fundamental right to privacy is well established under Indian law and that there is no need to refer the matter to a Constitutional Bench. These arguments were:

(i) The observations made in M.P. Sharma regarding the absence of right to privacy are not part of the ratio decidendi of that case and, therefore, do not bind the subsequent smaller Benches such as R. Rajagopal and PUCL.

(ii) Even in Kharak Singh it was held that the right of a person not to be disturbed at his residence by the State is recognized to be a part of a fundamental right guaranteed under Article 21. It was argued that this is nothing but an aspect of privacy. The observation in para 20 of the majority judgment (quoted in footnote 2 above) at best can be construed only to mean that there is no fundamental right of privacy against the State’s authority to keep surveillance on the activities of a person. However, they argued that such a conclusion cannot be good law any more in view of the express declaration made by a seven-Judge bench decision of this Court in Maneka Gandhi v. Union of India & Another.[7]

(iii) Both M.P. Sharma (supra) and Kharak Singh (supra) were decided on an interpretation of the Constitution based on the principles expounded in A.K. Gopalan v. State of Madras,[8] which have themselves been declared wrong by a larger Bench in Rustom Cavasjee Cooper v. Union of India.[9]

Other than the points above, it was also argued that world over in all the countries where Anglo-Saxon jurisprudence is followed, ‘privacy’ is recognized as an important aspect of the liberty of human beings. The petitioners also submitted that it was too late in the day for the Union of India to argue that the Constitution of India does not recognize privacy as an aspect of the liberty under Article 21 of the Constitution of India.

However, these arguments of the petitioners were not enough to convince the Supreme Court that there is no doubt regarding the existence and contours of the right to privacy in India. The Court, swayed by the arguments presented by the Attorney General, admitted that questions of far-reaching importance for the Constitution were at issue and needed to be decided by a Constitutional Bench.

Giving some insight into its reasoning for referring this issue to a Constitutional Bench, the Court did seem to suggest that its decision was more an exercise in judicial propriety than an action driven by some genuine contradiction in the law. The Court said that if the observations in M.P. Sharma (supra) and Kharak Singh (supra) were accepted as the law of the land, the fundamental rights guaranteed under the Constitution of India would get “denuded of vigour and vitality”. However, the Court felt that institutional integrity and judicial discipline require smaller benches to follow the decisions of larger benches unless they have very good reasons for not doing so. Since that does not appear to have been done in this case, the Court referred the matter to a larger bench to scrutinize the ratio of M.P. Sharma (supra) and Kharak Singh (supra) and decide the judicial correctness of the subsequent two-judge and three-judge bench decisions that have asserted or referred to the right to privacy.


[1] AIR 1954 SC 300. In para 18 of the Judgment it was held: “A power of search and seizure is in any system of jurisprudence an overriding power of the State for the protection of social security and that power is necessarily regulated by law. When the Constitution makers have thought fit not to subject such regulation to constitutional limitations by recognition of a fundamental right to privacy, analogous to the American Fourth Amendment, we have no justification to import it, into a totally different fundamental right, by some process of strained construction.”

[2] AIR 1963 SC 1295. In para 20 of the judgment it was held: “Nor do we consider that Art. 21 has any relevance in the context as was sought to be suggested by learned counsel for the petitioner. As already pointed out, the right of privacy is not a guaranteed right under our Constitution and therefore the attempt to ascertain the movement of an individual which is merely a manner in which privacy is invaded is not an infringement of a fundamental right guaranteed by Part III.”

[3] (1975) 2 SCC 148.

[4] It is interesting to note that while the decisions in both Kharak Singh and Gobind were given in the context of similar facts (challenging the power of the police to make frequent domiciliary visits both during the day and night at the house of the petitioner) while the majority in Kharak Singh specifically denied the existence of a fundamental right to privacy, however they held the conduct of the police to be violative of the right to personal liberty guaranteed under Article 21, since the Regulations under which the police actions were undertaken were themselves held invalid. On the other hand, while Gobind held that a fundamental right to privacy does exist in Indian law, it may be interfered with by the State through procedure established by law and therefore upheld the actions of the police since they were acting under validly issued Regulations.

[5] (1994) 6 SCC 632.

[6] (1997) 1 SCC 301.

[7] (1978) 1 SCC 248.

[8] AIR 1950 SC 27.

[9] (1970) 1 SCC 248.

Clearing Misconceptions: What the DoT Panel Report on Net Neutrality Says (and Doesn't)

by Pranesh Prakash last modified Jul 21, 2015 12:36 PM
There have been many misconceptions about what the DoT Panel Report on Net Neutrality says: the most popular ones being that they have recommended higher charges for services like WhatsApp and Viber, and that the report is an anti-Net neutrality report masquerading as a pro-Net neutrality report. Pranesh Prakash clears up these and other incorrect notions about the report in this brief analysis.

Background of the DoT panel

In January 2015, the Department of Telecommunications (DoT) formed a panel to look into "net neutrality from public policy objective, its advantages and limitations," as well as the impact of a "regulated telecom services sector and unregulated content and applications sector". After spending a few months collecting and analysing both oral and written testimony from a number of players in this debate, the panel submitted its report to the DoT on July 16 and released it to the public for comments (till August 15, 2015). At the same time, independently, the Telecom Regulatory Authority of India (TRAI) is also considering the same set of issues. TRAI received more than a million responses to its consultation paper — the most it has ever received on any topic — the vast majority of them thanks in part to the great work of the Save the Internet campaign. TRAI is yet to submit its recommendations to the DoT. Once those recommendations are in, the DoT will have to take a call on how to go ahead with these two sets of issues: regulation of certain Internet-based communications services, and net neutrality.

Summary of the DoT panel report

The DoT panel had the tough job of synthesising the feedback from dozens of people and organizations. In this, they have done an acceptable job, although in multiple places the panel has wrongly summarised the opinions of the "civil society" deponents: I was one of the deponents on the day that civil society actors presented their oral submissions, so I know. For instance, the panel report notes in 4.2.9.c that "According to civil society, competing applications like voice OTT services were eroding revenues of the government and the TSPs, creating security and privacy concerns, causing direct as well as indirect losses." I do not recall that being the main thrust of any civil society participant's submission before the panel. That having been said, one might still legitimately claim that none of these or other mistakes (which include errors like "emergency" instead of "emergence", "Tim Burners Lee" instead of "Tim Berners-Lee", etc.) are such that they have radically altered the report's analysis or recommendations.

The report makes some very important points that are worth noting, which can be broken into two broad headings:

On governmental regulation of OTTs

  1. Internet-based (i.e., over-the-top, or "OTT") communications services (like WhatsApp, Viber, and the like) are currently taking advantage of "regulatory arbitrage": meaning that the regulations that apply to non-IP communications services and IP communications services are different. Under the current "unified licence" regime, WhatsApp, Viber, and other such services don't have to get a licence from the government, don't have to abide by anti-spam Do-Not-Disturb regulations, do not have to share any part of their revenue with the government, do not have to abide by national security terms in the licence, and in general are treated differently from other telecom services. The report wishes to bring these within a licensing regime.
  2. The report distinguishes between Internet-based voice calls (voice over IP, or VoIP) and messaging services, and doesn't wish to interfere with the latter. It also distinguishes between domestic and international VoIP calls, and believes only the former need regulation. It is unclear on what bases these distinctions are made.
  3. OTT "application services" do not need special telecom-oriented regulation.
  4. There should be a separation in regulatory terms between the network layer and the service layer. While this doesn't mean much in the short term for Net neutrality, it will be very important in the long term for ICT regulation, and is very welcome.

On Net neutrality

  1. The core principles of Net neutrality — which are undefined in the report, though definitions proposed in submissions they've received are quoted — should be adhered to. In the long-run, these should find place in a new law, but for the time being they can be enforced through the licence agreement between the DoT and telecom providers.
  2. On the contentious issue of zero-rating, a process that involves both ex-ante and ex-post regulation is envisaged to prevent harmful zero-rating, while allowing beneficial zero-rating. Further, the report notes that the supposed altruistic or "public interest" motives of the zero-rating scheme do not matter if they result in harm to competition, distort consumer markets, violate the core tenets of Net neutrality, or unduly benefit an Internet "gatekeeper".

Where does the DoT panel report go wrong?

  1. The proposal by the DoT panel of a licensing regime for VoIP services is a terrible idea. It would presumptively hold all non-holders of a licence to be unlawful, and that should not be the case. While it is in India's national interest to hold VoIP services to account if they do not follow legitimate regulations, it is far better to do this through ex-post regulations rather than an ex-ante licensing scheme. A licensing scheme would benefit Indian VoIP companies (including services like Hike, which Airtel has invested in) over foreign companies like Viber. The report also doesn't say how one would distinguish between OTT communication services and OTT application services, when many apps, such as food ordering apps, include text chat facilities. Further, VoIP need not be provided by a company: I run my own XMPP servers, XMPP being a protocol used for both text and video/voice. Will a licensing regime force me to become a licence-holder, or will it set a high bar? The DoT panel report doesn't say. Will there be a revenue-sharing mechanism, as is currently the case under the Unified Licence? If so, how will it be calculated in the case of services like WhatsApp? These questions too find no answer in the report. All in all, this part of the report's analysis is sadly wanting.
  2. Many important terms are left undefined, and many distinctions that the report draws are left unexplained. For instance, it is unclear on what regulatory basis the report distinguishes between domestic and international VoIP calls — which is an unenforceable (not to mention regulatorily unimportant) distinction — or between regulation of messaging services and VoIP services, or what precisely they mean by "application-agnostic" and "application-specific" network management (since different scholars on this issue mean different things when they say "application").

What does the DoT panel report mean for consumers?

  1. Not too much currently, since the DoT panel report is still just a set of recommendations by an expert body based on (invited) public consultations.
  2. Does it uphold Net neutrality? The DoT panel report is clear that they strongly endorse the "core principles of Net neutrality". On the issue of "zero-rating", the panel proposes some sound measures, saying that there should be a two-part mechanism for ensuring that harmful zero-rating doesn't go through: first, telecom services need to submit zero-rating tariff proposals to an expert body constituted by the DoT; and second, consumers will be able to complain about the harmful usage of zero-rating by any service provider, which may result in a fine. What constitutes harm or a violation of Net neutrality? The panel suggests that any tariff scheme that may harm competition, distort the consumer market, or violate the core principles of Net neutrality is harmful. This makes sense.

  3. Will it increase the cost of access to WhatsApp and Viber? On the one hand, zero-rating of those services could decrease the cost of access to WhatsApp and Viber, but that might not be allowed if the DoT panel recommendations are accepted, since it would possibly be judged to harm competition and distort consumer markets. The DoT panel has also recommended bringing such services within a licensing framework to bridge the "regulatory arbitrage" that they are able to benefit from (meaning that these services don't have to abide by many regulations that a telecom provider has to follow). Whether this will lead to WhatsApp and similar services charging users depends on what kinds of regulations are placed on them, and whether any costs are imposed on them. If the government decides to take the approach it took to ISPs in the late 90s (essentially, charging them Re. 1 as the licence fee) and doesn't impose any revenue sharing (as it currently requires of all telecom services), then there needn't be any overly burdensome costs that WhatsApp-like services will need to pass on to consumers.

What misunderstandings do people have?

  1. There are multiple news reports that the DoT panel has recommended increased charges for domestic VoIP calls, or that ISPs will now be able to double-charge. Both of these are untrue. The DoT panel's recommendations are about "regulatory arbitrage" and licensing, which need not be related to cost.
  2. There is a fear that the exception from Net neutrality of "managed services and enterprise services" is a "loophole", or that exceptions for "emergency services" and "desirable public or government services" are too vague and carry the potential of misuse. If one goes by the examples that the panel cites of managed services (e.g., services an ISP provides for a private company separately from the rest of the Internet, etc.), these fears seem largely misplaced. We must also realize that the panel report is a report, and not legislation, and that the rationale for wanting exemptions from Net neutrality is clear.
  3. The DoT panel has given the go-ahead for zero-rating. Once again, this is untrue. The panel cites instances of zero-rating that aren't discriminatory or violative of Net neutrality and don't harm competition or distort consumer markets (such as zero-rating of all Internet traffic for a limited time period). It then goes on to state that the regulator should not allow zero-rating that violates the core principles of Net neutrality.

What's missing in the Net neutrality debate is nuance. It's become a debate in which you are either for Net neutrality or against it. However, none of the underlying components of Net neutrality — a complex mix of competition policy, innovation policy, the right to freedom of expression, etc. — are absolutes; therefore, it is clear that Net neutrality cannot be an absolute either.

Security: Privacy, Transparency and Technology

by Sunil Abraham last modified Sep 15, 2015 10:53 AM
The Centre for Internet and Society (CIS) has been involved in privacy and data protection research for the last five years. It has participated as a member of the Justice A.P. Shah Committee, which has influenced the draft Privacy Bill being authored by the Department of Personnel and Training. It has organised 11 multistakeholder roundtables across India over the last two years to discuss a shadow Privacy Bill drafted by CIS with the participation of privacy commissioners and data protection authorities from Europe and Canada.

 

The article was co-authored by Sunil Abraham, Elonnai Hickok and Tarun Krishnakumar. It was published by Observer Research Foundation, Digital Debates 2015: CyFy Journal Volume 2.


Our centre’s work on privacy was considered incomplete by some stakeholders because of a lack of focus in the area of cyber security and therefore we have initiated research on it from this year onwards. In this article, we have undertaken a preliminary examination of the theoretical relationships between the national security imperative and privacy, transparency and technology.

Security and Privacy

Daniel J. Solove has identified the tension between security and privacy as a false dichotomy: "Security and privacy often clash, but there need not be a zero-sum tradeoff." [1] Further unpacking this false dichotomy, Bruce Schneier says, "There is no security without privacy. And liberty requires both security and privacy." [2] Effectively, it could be said that privacy is a precondition for security, just as security is a precondition for privacy. A secure information system cannot be designed without guaranteeing the privacy of its authentication factors, and it is not possible to guarantee privacy of authentication factors without having confidence in the security of the system. Often policymakers talk about a balance between the privacy and security imperatives—in other words a zero-sum game. Balancing these imperatives is a foolhardy approach, as it simultaneously undermines both imperatives. Balancing privacy and security should instead be framed as an optimisation problem. Indeed, during a time when oversight mechanisms have failed even in so-called democratic states, the regulatory power of technology [3] should be seen as an increasingly key ingredient to the solution of that optimisation problem.

Data retention is required in most jurisdictions for law enforcement, intelligence and military purposes. Here are three examples of how security and privacy can be optimised when it comes to Internet Service Provider (ISP) or telecom operator logs:

  1. Data Retention: We propose that the office of the Privacy Commissioner generate a cryptographic key pair for each internet user and give one key to the ISP / telecom operator. This key would be used to encrypt logs, thereby preventing unauthorised access. Once there is executive or judicial authorisation, the Privacy Commissioner could hand over the second key to the authorised agency. There could even be an emergency procedure under which the keys are automatically collected by the concerned agencies from the Privacy Commissioner. This would need to be accompanied by a policy that criminalises the possession of unencrypted logs by ISPs and telecom operators. (A minimal sketch of this split-key arrangement appears after this list.)

  2. Privacy-Protective Surveillance: Ann Cavoukian and Khaled El Emam [4] have proposed combining intelligent agents, homomorphic encryption and probabilistic graphical models to provide “a positive-sum, ‘win–win’ alternative to current counter-terrorism surveillance systems.” They propose limiting collection of data to “significant” transactions or events that could be associated with terrorist-related activities, and limiting analysis to wholly encrypted data, which then does not just result in “discovering more patterns and relationships without an understanding of their context” but rather in “intelligent information—information selectively gathered and placed into an appropriate context to produce actual knowledge.” Since fully homomorphic encryption may be infeasible in real-world systems, they have proposed the use of partially homomorphic encryption. Experts such as Prof. John Mallery from MIT are also working on solutions based on fully homomorphic encryption. (A small illustration of computing on encrypted data appears after this list.)

  3. Fishing Expedition Design: Madan Oberoi, Pramod Jagtap, Anupam Joshi, Tim Finin and Lalana Kagal have proposed a standard [5] that could be adopted by authorised agencies, telecom operators and ISPs. Instead of giving authorised agencies complete access to logs, they propose a format for database queries that could be sent to the telecom operator or ISP by authorised agencies. The telecom operator or ISP would then process the query and anonymise/obfuscate the result set in an automated fashion based on applicable privacy policies/regulation. Authorised agencies would then hone in on the subset of the result set that they would like with personal identifiers intact; this smaller result set would then be shared with the authorised agencies.
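
To make the data retention proposal in point 1 above more concrete, here is a minimal sketch, in Python, of how an ISP might encrypt each log record against a per-subscriber public key so that the records become readable only once the escrowed private key is released. It is only an illustration of the split-key idea under stated assumptions, not a description of any deployed system; it assumes the pyca/cryptography package, and the function names are our own. A standard hybrid construction is used: each record is encrypted with a fresh symmetric key, and that key is wrapped with the subscriber's RSA public key.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Privacy Commissioner: generates a key pair per subscriber; the public
    # key is handed to the ISP, the private key stays in escrow.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def isp_encrypt_record(record: bytes, subscriber_public_key):
        """ISP side: encrypt one log record with a fresh symmetric key,
        then wrap that key with the subscriber's public key."""
        data_key = Fernet.generate_key()
        ciphertext = Fernet(data_key).encrypt(record)
        wrapped_key = subscriber_public_key.encrypt(data_key, OAEP)
        return wrapped_key, ciphertext

    def agency_decrypt_record(wrapped_key, ciphertext, subscriber_private_key):
        """Authorised agency side: works only after the Commissioner
        releases the subscriber's private key."""
        data_key = subscriber_private_key.decrypt(wrapped_key, OAEP)
        return Fernet(data_key).decrypt(ciphertext)

    wrapped, blob = isp_encrypt_record(b"2015-08-01 10:02 subscriber-42 -> example.org", public_key)
    assert agency_decrypt_record(wrapped, blob, private_key).endswith(b"example.org")

Under such an arrangement the ISP never holds a key capable of decrypting its own logs, which is what would give a prohibition on storing unencrypted logs its teeth.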

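As a small illustration of the kind of computation on encrypted data mentioned in point 2, the sketch below uses the additively homomorphic Paillier cryptosystem, a partially homomorphic scheme, to total event counts reported by several operators without decrypting any individual report. It assumes the python-paillier package (imported as phe) and is only a toy stand-in for the intelligent-agent architecture that Cavoukian and El Emam describe.

    from phe import paillier

    # Oversight body: generates the key pair and publishes the public key.
    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # Each operator encrypts its own count of flagged events; the plaintext
    # counts never leave the operator.
    reports = [public_key.encrypt(count) for count in (3, 0, 7, 2)]

    # Analyst: adds the ciphertexts directly. The addition happens on
    # encrypted values, so individual contributions remain hidden.
    encrypted_total = reports[0]
    for ciphertext in reports[1:]:
        encrypted_total = encrypted_total + ciphertext

    # Only the key holder learns the aggregate.
    assert private_key.decrypt(encrypted_total) == 12
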
An optimisation approach to resolving the false dichotomy between privacy and security will not allow for a total surveillance regime as pursued by the US administration. Total surveillance brings with it the ‘honey pot’ problem: If all the meta-data and payload data of citizens is being harvested and stored, then the data store will become a single point of failure and will become another target for attack. The next Snowden may not have honourable intentions and might decamp with this ‘honey pot’ itself, which would have disastrous consequences.

If total surveillance will completely undermine the national security imperative, what then should be the optimal level of surveillance in a population? The answer depends upon the existing security situation. If this is represented on a graph with security on the y-axis and the proportion of the population under surveillance on the x-axis, the benefits of surveillance could be represented by an inverted hockey-stick curve. To begin with, there would already be some degree of security. As a small subset of the population is brought under surveillance, security would increase till an optimum level is reached, after which, enhancing the number of people under surveillance would not result in any security pay-off. Instead, unnecessary surveillance would diminish security as it would introduce all sorts of new vulnerabilities. Depending on the existing security situation, the head of the hockey-stick curve might be bigger or smaller. To use a gastronomic analogy, optimal surveillance is like salt in cooking—necessary in small quantities but counter-productive even if slightly in excess.

In India the designers of surveillance projects have fortunately rejected the total surveillance paradigm. For example, the objective of the National Intelligence Grid (NATGRID) is to streamline and automate targeted surveillance; it is introducing technological safeguards that will allow express combinations of result-sets from 22 databases to be made available to 12 authorised agencies. This is not to say that the design of the NATGRID cannot be improved.

Security and Transparency

There are two views on security and transparency: one, security via obscurity, as advocated by vendors of proprietary software; and two, security via transparency, as advocated by free/open source software (FOSS) advocates and entrepreneurs. Over the last two decades, public and industry opinion has swung towards security via transparency. This is based on Linus's Law that “given enough eyeballs, all bugs are shallow.” But does this mean that transparency is a sufficient condition? Unfortunately not, and therefore it is not necessarily true that FOSS and open standards will be more secure than proprietary software and proprietary standards.


The recent detection of the Heartbleed [6] security bug in OpenSSL, [7] which allows more data to be read than should be allowed, and Snowden’s revelations about the compromise of some open cryptographic standards developed by the US National Institute of Standards and Technology (standards which depend on elliptic curves) are stark examples. [8]

At the same time, however, open standards and FOSS are crucial to maintaining the balance of power in information societies, as civil society and the general public are able to resist the powers of authoritarian governments and rogue corporations using cryptographic technology. These technologies allow for anonymous speech, pseudonymous speech, private communication, online anonymity and circumvention of surveillance and censorship. For the media, these technologies enable anonymity of sources and the protection of whistle-blowers—all phenomena that are critical to the functioning of a robust and open democratic society. But these very same technologies are also required by states and by the private sector for a variety of purposes—national security, e-commerce, e-banking, protection of all forms of intellectual property, and services that depend on confidentiality, such as legal or medical services.

In other words, all governments, with the exception of the US government, have common cause with civil society, media and the general public when it comes to increasing the security of open standards and FOSS. Unfortunately, this can be quite an expensive task because the re-securing of open cryptographic standards depends on mathematicians. Of late, mathematical research outputs that can be militarised are no longer available in the public domain because the biggest employers of mathematicians worldwide today are the US military and intelligence agencies. If other governments were to invest a few billion dollars through mechanisms like Knowledge Ecology International’s proposed World Trade Organization agreement on the supply of knowledge as a public good, we would be able to internationalise participation in standard-setting organisations and provide market incentives for greater scrutiny of cryptographic standards and patching of FOSS vulnerabilities. This would go a long way in addressing the trust deficit that exists on the internet today.

Security and Technology

A techno-utopian understanding of security assumes that more technology, more recent technology and more complex technology will necessarily lead to better security outcomes.

This is because the security discourse is dominated by vendors with sales targets who do not present a balanced or accurate picture of the technologies that they are selling. This has resulted in state agencies and the general public having an exaggerated understanding of the capabilities of surveillance technologies that is more aligned with Hollywood movies than everyday reality.

More Technology

Increasing the number of x-ray machines or full-body scanners at airports by a factor of ten or a hundred will make the airport less secure unless human oversight is increased correspondingly. Even with increased human oversight, all that has been accomplished is an increase in the potential locations that can be compromised. The process of hardening a server usually involves stopping non-essential services and removing non-essential software. This reduces the amount of software that must be subjected to audit, continuously monitored for vulnerabilities and patched as soon as possible. Audits, ongoing monitoring and patching all cost time and money, and therefore, for governments with limited budgets, any additional unnecessary technology should be seen as a drain on the security budget. As with the airport example, even when it comes to a single server on the internet, it is clear that, from a security perspective, more technology without a proper functionality and security justification is counter-productive. To reiterate, throwing ever more technology at a problem does not make things more secure; rather, it results in a proliferation of vulnerabilities.

Latest Technology

Reports that a number of state security agencies are contemplating a return to typewriters for sensitive communications in the wake of Snowden’s revelations make it clear that some older technologies are harder to compromise than modern technology. [9] As between iris- and fingerprint-based biometric authentication, it would logically be easier for a criminal to harvest images of irises (the authentication factor) in bulk, using a high-resolution camera fitted with a zoom lens in a public location, than to lift fingerprints en masse.

Complex Technology

Fifteen years ago, Bruce Schneier said, "The worst enemy of security is complexity. This has been true since the beginning of computers, and it’s likely to be true for the foreseeable future." [10] This is because complexity increases fragility; every feature is also a potential source of vulnerabilities and failures. The simpler Indian electronic voting machines used until the 2014 elections are far more secure than the Diebold voting machines used in the 2004 US presidential elections. Similarly, when it comes to authentication, a PIN is harder to defeat without the user’s conscious cooperation than iris- or fingerprint-based biometric authentication.

In the following section of the paper we identify five threat scenarios [11] relevant to India and propose solutions based on the theoretical framing set out above.

Threat Scenarios and Possible Solutions

Hacking the NIC Certifying Authority
One of the critical functions served by the National Informatics Centre (NIC) is that of a Certifying Authority (CA). [12] In this capacity, the NIC issues digital certificates that authenticate web services and allow for the secure exchange of information online. [13] Operating systems and browsers maintain lists of trusted CA root certificates as a means of easily verifying authentic certificates. Certificates issued under India’s Controller of Certifying Authorities are included in the Microsoft Root list and recognised by the majority of programs running on Windows, including Internet Explorer and Chrome. [14] In 2014, the NIC CA’s infrastructure was compromised, and digital certificates were issued in NIC’s name without its knowledge. [15] Reports indicate that NIC did not "have an appropriate monitoring and tracking system in place to detect such intrusions immediately." [16] The implication is that websites could masquerade as another domain using the fake certificates, and a masquerading website could intercept or access users’ personal data. The breach also rendered the web servers and websites of government bodies vulnerable to attack, and end users could no longer be sure that data on these websites was accurate and had not been tampered with. [17] The NIC CA was forced to revoke all 250,000 SSL server certificates issued until that date [18] and has stopped issuing digital certificates for the time being. [19]

Two mitigations are relevant here. Public key pinning is a means through which websites can specify which certifying authorities have issued certificates for that site; it can prevent man-in-the-middle attacks that rely on fake digital certificates. [20] Certificate Transparency allows anyone to check whether a certificate has been properly issued, since certifying authorities must publicly publish information about the digital certificates they have issued. Though this approach does not prevent fake digital certificates from being issued, it allows misuse to be detected quickly. [21]
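
To make the pinning idea concrete, the following minimal Python sketch refuses to proceed unless the certificate presented by a server matches a fingerprint recorded in advance. It is an illustration under stated assumptions rather than a description of any deployed system: the host name and pinned fingerprint are placeholders, only the Python standard library is used, and production schemes (such as browser pin lists) typically pin the hash of the SubjectPublicKeyInfo rather than of the whole certificate.

    import hashlib
    import ssl

    # Placeholder: the SHA-256 fingerprint recorded out-of-band for the site.
    PINNED_SHA256 = "0" * 64

    def certificate_matches_pin(host, port=443):
        """Fetch the server certificate and compare its SHA-256 fingerprint to the pin."""
        pem_cert = ssl.get_server_certificate((host, port))
        der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        return fingerprint == PINNED_SHA256

    if __name__ == "__main__":
        host = "example.gov.in"  # placeholder host
        if not certificate_matches_pin(host):
            raise SystemExit(f"Certificate for {host} does not match the pinned fingerprint")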

‘Logic Bomb’ against Airports
Passenger operations in New Delhi’s Indira Gandhi International Airport depend on a centralised operating system known as the Common User Passenger Processing System (CUPPS). The system integrates numerous critical functions such as the arrival and departure times of flights, and manages the reservation system and check-in schedules. [22] In 2011, a logic bomb attack was remotely launched against the system to introduce malicious code into the CUPPS software. The attack disabled the CUPPS operating system, forcing a number of check-in counters to shut down completely, while others reverted to manual check-in, resulting in over 50 delayed flights. Investigations revealed that the attack was launched by three disgruntled employees who had assisted in the installation of the CUPPS system at the New Delhi Airport. [23] Although in this case the impact of the attack was limited to flight delays, experts speculate that it was meant to take down the entire system. The disruption and damage resulting from the shutdown of an entire airport would be extensive.

Adoption of open hardware and FOSS is one strategy for avoiding and mitigating such vulnerabilities. The use of devices built to open hardware and software specifications should be encouraged, as this enables the FOSS community to be vigilant in detecting and reporting design deviations and to investigate probable vulnerabilities.

Attack on Critical Infrastructure
The Nuclear Power Corporation of India encounters and prevents numerous cyber attacks every day. [24] The best known example of a successful nuclear plant hack is the Stuxnet worm that thwarted the operation of an Iranian nuclear enrichment complex and set back the country’s nuclear programme. [25]

The worm had the ability to spread over the network and would activate when a specific configuration of systems was encountered [26] and connected to one or more Siemens programmable logic controllers. [27] The worm was suspected to have been initially introduced through an infected USB drive into one of the controller computers by an insider, thus crossing the air gap. [28] The worm used information that it gathered to take control of normal industrial processes (to discreetly speed up centrifuges, in the present case), leaving the operators of the plant unaware that they were being attacked. This incident demonstrates how an attack vector introduced into the general internet can be used to target specific system configurations. When the target of a successful attack is a sector as critical and secured as a nuclear complex, the implications for a country’s security and infrastructure are potentially grave.

Security audits and other transparency measures to identify vulnerabilities are critical in sensitive sectors. Incentive schemes such as prizes, contracts and grants could encourage the private sector and academia to identify vulnerabilities in critical infrastructure and promote regular security auditing.

Micro Level: Chip Attacks
Semiconductor devices are ubiquitous in electronic devices. The US, Japan, Taiwan, Singapore, Korea and China are the primary countries hosting manufacturing hubs for these devices. India currently does not produce semiconductors and depends on imported chips. This dependence on foreign semiconductor technology can result in the import and use of compromised or counterfeit chips by critical sectors in India. For example, hardware Trojans, which may be used to access personal information and content on a device, may be inserted into a chip. Such compromises can render equipment in critical sectors vulnerable to attack and threaten national security. [29]

Indigenous production of critical technologies and the development of manpower and infrastructure to support these activities are needed. The Government of India has taken a number of steps towards this. For example, in 2013, the Government of India approved the building of two Semiconductor Wafer Fabrication (FAB) manufacturing facilities [30] and as of January 2014, India was seeking to establish its first semiconductor characterisation lab in Bangalore. [31]

Macro Level: Telecom and Network Switches

The possibility that foreign equipment contains vulnerabilities and backdoors built into its software and hardware gives rise to concerns that India’s telecom and network infrastructure could be hacked and accessed by foreign governments (or non-state actors) using spyware and malware that exploit such weaknesses. In 2013, some firms, including ZTE and Huawei, were barred by the Indian government from participating in a bid to supply technology for the development of its National Optical Fibre Network project due to security concerns. [32] Similar concerns have resulted in the Indian government holding back the conferment of ‘domestic manufacturer’ status on both these firms. [33]

Following reports that Chinese firms were responsible for transnational cyber attacks designed to steal confidential data from overseas targets, there have been moves to establish laboratories to test imported telecom equipment in India. [34] Despite these steps, in a February 2014 incident the state-owned telecommunication company Bharat Sanchar Nigam Ltd’s network was hacked, allegedly by Huawei. [35]


A successful hack of the telecom infrastructure could result in massive disruption of internet and telecommunications services. Large-scale surveillance and espionage by foreign actors would also become possible, placing governmental secrets and individuals’ personal information, among other things, at risk.

While India cannot afford a general ban on the import of foreign telecommunications equipment, a number of steps can be taken to address the risk of built-in security vulnerabilities. Common international criteria for security audits could be developed by states to ensure that products comply with international norms and practices. While India has already established Common Criteria evaluation centres, [36] the government monopoly over the testing function has resulted in only three products being tested so far. A code escrow regime could be set up in which manufacturers are asked to deposit source code with the Government of India for security audits and verification; the source code could then be compared with the shipped software to detect built-in vulnerabilities.
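
The comparison step that such a code escrow regime implies can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a description of any existing audit process: the file names are placeholders, and in practice the comparison presupposes reproducible builds so that the escrowed source can be compiled into an artefact that is bit-for-bit comparable with what ships on the equipment.

    import hashlib
    from pathlib import Path

    def sha256_of(path, chunk_size=1 << 20):
        """Stream a file through SHA-256 and return its hex digest."""
        digest = hashlib.sha256()
        with Path(path).open("rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        shipped = "firmware_shipped.bin"      # placeholder: image extracted from the device
        reference = "firmware_reference.bin"  # placeholder: image built from escrowed source
        if sha256_of(shipped) != sha256_of(reference):
            print("Mismatch: shipped firmware differs from the escrow build")
        else:
            print("Shipped firmware matches the escrow build")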

Conclusion

Cyber security cannot be enhanced without a proper understanding of the relationship between security and other national imperatives such as privacy, transparency and technology. This paper has provided an initial sketch of those relationships, but sustained theoretical and empirical research is required in India so that security practitioners and policymakers avoid the zero-sum framing prevalent in popular discourse and take on the hard task of solving the optimisation problem by shifting policy, market and technological levers simultaneously. These solutions must then be applied in multiple contexts or scenarios to determine how they should be customised to provide maximum security bang for the buck.


[1]. Daniel J. Solove, Chapter 1 in Nothing to Hide: The False Tradeoff between Privacy and Security (Yale University Press: 2011), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1827982.

[2]. Bruce Schneier, “What our Top Spy doesn’t get: Security and Privacy aren’t Opposites,” Wired, January 24, 2008, http://archive.wired.com/politics/security/commentary/securitymatters/2008/01/securitymatters_0124 and Bruce Schneier, “Security vs. Privacy,” Schneier on Security, January 29, 2008, https://www.schneier.com/blog/archives/2008/01/security_vs_pri.html.

[3]. There are four sources of power in internet governance: Market power exerted by private sector organisations; regulatory power exerted by states; technical power exerted by anyone who has access to certain categories of technology, such as cryptography; and finally, the power of public pressure sporadically mobilised by civil society. A technically sound encryption standard, if employed by an ordinary citizen, cannot be compromised using the power of the market or the regulatory power of states or public pressure by civil society. In that sense, technology can be used to regulate state and market behaviour.

[4]. Ann Cavoukian and Khaled El Emam, “Introducing Privacy-Protective Surveillance: Achieving Privacy and Effective Counter-Terrorism,” Information & Privacy Commissioner, Ontario, Canada, September 2013, http://www.privacybydesign.ca/content/uploads/2013/12/pps.pdf.

[5]. Madan Oberoi, Pramod Jagtap, Anupam Joshi, Tim Finin and Lalana Kagal, “Information Integration and Analysis: A Semantic Approach to Privacy” (presented at the third IEEE International Conference on Information Privacy, Security, Risk and Trust, Boston, USA, October 2011), ebiquity.umbc.edu/_file_directory_/papers/578.pdf.

[6]. Bruce Byfield, “Does Heartbleed disprove ‘Open Source is Safer’?,” Datamation, April 14, 2014, http://www.datamation.com/open-source/does-heartbleed-disprove-open-source-is-safer-1.html.

[7]. “Cybersecurity Program should be more transparent, protect privacy,” Centre for Democracy and Technology Insights, March 20, 2009, https://cdt.org/insight/cybersecurity-program-should-be-more-transparent-protect-privacy/#1.

[8]. “Cracked Credibility,” The Economist, September 14, 2013, http://www.economist.com/news/international/21586296-be-safe-internet-needs-reliable-encryption-standards-software-and.

[9]. Miriam Elder, “Russian guard service reverts to typewriters after NSA leaks,” The Guardian, July 11, 2013, www.theguardian.com/world/2013/jul/11/russia-reverts-paper-nsa-leaks and Philip Oltermann, “Germany ‘may revert to typewriters’ to counter hi-tech espionage,” The Guardian, July 15, 2014, www.theguardian.com/world/2014/jul/15/germany-typewriters-espionage-nsa-spying-surveillance.

[10]. Bruce Schneier, “A Plea for Simplicity,” Schneier on Security, November 19, 1999, https://www.schneier.com/essays/archives/1999/11/a_plea_for_simplicit.html.

[11]. With inputs from Pranesh Prakash of the Centre for Internet and Society and Sharathchandra Ramakrishnan of Srishti School of Art, Technology and Design.

[12]. “Frequently Asked Questions,” Controller of Certifying Authorities, Department of Electronics and Information Technology, Government of India, http://cca.gov.in/cca/index.php?q=faq-page#n41.

[13]. National Informatics Centre Homepage, Government of India, http://www.nic.in/node/41.

[14]. Adam Langley, “Maintaining Digital Certificate Security,” Google Security Blog, July 8, 2014, http://googleonlinesecurity.blogspot.in/2014/07/maintaining-digital-certificate-security.html.

[15]. This is similar to the kind of attack carried out against DigiNotar, a Dutch certificate authority. See: http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1246&context=jss.

[16]. R. Ramachandran, “Digital Disaster,” Frontline, August 22, 2014, http://www.frontline.in/the-nation/digital-disaster/article6275366.ece.

[17]. Ibid.

[18]. “NIC’s digital certification unit hacked,” Deccan Herald, July 16, 2014, http://www.deccanherald.com/content/420148/archives.php.

[19]. National Informatics Centre Certifying Authority Homepage, Government of India, http://nicca.nic.in//.

[20]. Mozilla Wiki, “Public Key Pinning,” https://wiki.mozilla.org/SecurityEngineering/Public_Key_Pinning.

[21]. “Certificate Transparency - The quick detection of fraudulent digital certificates,” Ascertia, August 11, 2014, http://www.ascertia.com/blogs/pki/2014/08/11/certificate-transparency-the-quick-detection-of-fraudulent-digital-certificates.

[22]. “Indira Gandhi International Airport (DEL/VIDP) Terminal 3, India,” Airport Technology.com, http://www.airport-technology.com/projects/indira-gandhi-international-airport-terminal-3/.

[23]. “How techies used logic bomb to cripple Delhi Airport,” Rediff, November 21, 2011, http://www.rediff.com/news/report/how-techies-used-logic-bomb-to-cripple-delhi-airport/20111121.htm.

[24]. Manu Kaushik and Pierre Mario Fitter, “Beware of the bugs,” Business Today, February 17, 2013, http://businesstoday.intoday.in/story/india-cyber-security-at-risk/1/191786.html.

[25]. “Stuxnet ‘hit’ Iran nuclear plants,” BBC, November 22, 2010, http://www.bbc.com/news/technology-11809827.

[26]. In this case, systems using Microsoft Windows and running Siemens Step7 software were targeted.

[27]. Jonathan Fildes, “Stuxnet worm ‘targeted high-value Iranian assets’,” BBC, September 23, 2010, http://www.bbc.com/news/technology-11388018.

[28]. Farhad Manjoo, “Don’t Stick it in: The dangers of USB drives,” Slate, October 5, 2010, http://www.slate.com/articles/technology/technology/2010/10/dont_stick_it_in.html.

[29]. Ibid.

[30]. “IBM invests in new $5bn chip fab in India, so is chip sale off?,” ElectronicsWeekly, February 14, 2014, http://www.electronicsweekly.com/news/business/ibm-invests-new-5bn-chip-fab-india-chip-sale-2014-02/.

[31]. NT Balanarayan, “Cabinet Approves Creation of Two Semiconductor Fabrication Units,” Medianama, February 17, 2014, http://articles.economictimes.indiatimes.com/2014-02-04/news/47004737_1_indian-electronics-special-incentive-package-scheme-semiconductor-association.

[32]. Jamie Yap, “India bars foreign vendors from national broadband initiative,” ZD Net, January 21, 2013, http://www.zdnet.com/in/india-bars-foreign-vendors-from-national-broadband-initiative-7000010055/.

[33]. Kevin Kwang, “India holds back domestic-maker status for Huawei, ZTE,” ZD Net, February 6, 2013, http://www.zdnet.com/in/india-holds-back-domestic-maker-status-for-huawei-zte-7000010887/. Also see “Huawei, ZTE await domestic-maker tag,” The Hindu, February 5, 2013, http://www.thehindu.com/business/companies/huawei-zte-await-domesticmaker-tag/article4382888.ece.

[34]. Ellyne Phneah, “Huawei, ZTE under probe by Indian government,” ZD Net, May 10, 2013, http://www.zdnet.com/in/huawei-zte-under-probe-by-indian-government-7000015185/.

[35]. Devidutta Tripathy, “India investigates report of Huawei hacking state carrier network,” Reuters, February 6, 2014, http://www.reuters.com/article/2014/02/06/us-india-huawei-hacking-idUSBREA150QK20140206.

[36]. “Products Certified,” Common Criteria Portal of India, http://www.commoncriteria-india.gov.in/Pages/ProductsCertified.aspx.

Security: Privacy, Transparency and Technology

by Prasad Krishna last modified Aug 19, 2015 02:24 AM

PDF document icon Digital-Debates.pdf — PDF document, 5860 kB (6000742 bytes)

Free Speech Policy in India: Community, Custom, Censorship, and the Future of Internet Regulation

by Bhairav Acharya last modified Aug 23, 2015 10:12 AM
This note summarises my panel contribution to the conference on Freedom of Expression in a Digital Age at New Delhi on 21 April 2015, which was organised by the Observer Research Foundation (ORF) and the Centre for Internet and Society (CIS) in collaboration with the Internet Policy Observatory of the Center for Global Communication Studies (CGCS) at the Annenberg School for Communication, University of Pennsylvania.

Download the Note here (PDF, 103 Kb)


Preliminary

There has been legitimate happiness among many in India at the Supreme Court’s recent decision in the Shreya Singhal case to strike down section 66A of the Information Technology Act, 2000 ("IT Act") for unconstitutionally fettering the right to free speech on the Internet. The judgment is indeed welcome, and it reaffirms the Supreme Court’s proud record of defending the freedom of speech, although it declined to interfere with the government’s stringent powers of website blocking. As the dust settles, there are reports that the government is re-grouping to introduce fresh law, allegedly stronger and designed to secure easier convictions, to make up for its defeat.

Case Law and Government Policy

India’s constitutional courts have a varied history of negotiating the freedom of speech that justifiably demands study. But, in my opinion, inadequate attention is directed to the government’s history of free speech policy. It is possible to discern from the government’s actions over the last two centuries a relatively consistent narrative of governance that seeks to bend the individual’s right to speech to its will. The defining characteristics of this narrative – the government’s free speech policy – emerge from a study of executive and legislative decisions, chiefly in relation to the press, that continue to shape policy regarding the freedom of expression on the Internet.

India’s corpus of free speech case law is not uniform, nor can it be, since, for instance, the foundational issues that attend hate speech are quite different from those that inform contempt of court. So too, Indian free speech policy has been varied, captive to political compulsions and to disparate views regarding the interests of the community, governance and nation-building. There has been consistent tension between the individual and the community, as well as over the role of the government in enforcing the expectations of the community when they are thwarted by law.

Dichotomy between Modern and Native Law

To understand free speech policy, it is useful to go back to the early colonial period in India, when Governor-General Warren Hastings established a system of courts in Bengal’s hinterland to begin the long process of displacing traditional law to create a modern legal system. By most accounts, pre-modern Indian law was not prescriptive, Austinian, or uniform. Instead, there were several legal systems and a variety of competing and complementary legal sources that supported different interpretations of law within most legal systems. J. Duncan M. Derrett notes that the colonial expropriation of Indian law was marked by a significant tension caused by the repeatedly-stated objective of preserving some fields of native law to create a dichotomous legal structure. These efforts were assisted by orientalist jurists such as Henry Thomas Colebrooke, whose interpretation of the dharmasastras heralded a new stage in the evolution of Hindu law.

In this background, it is not surprising that Elijah Impey, a close associate of Hastings, simultaneously served as the first Chief Justice of the Supreme Court at Fort William while overseeing the Sadr Diwani Adalat, a civil court applying Anglo-Hindu law for Hindus, and the Sadr Faujdari Adalat, a criminal court applying Anglo-Islamic law to all natives. By the mid-nineteenth century, this dual system came under strain in the face of increasing colonial pressure to rationalise the legal system to ensure more effective governance, and native protest at the perceived insensitivity of the colonial government to local customs.

Criminal Law and Free Speech in the Colony

In 1837, Thomas Macaulay wrote the first draft of a new comprehensive criminal law to replace indigenous law and custom with statutory modern law. When it was enacted as the Indian Penal Code in 1860 ("IPC"), it represented the apogee of the new colonial effort to recreate the common law in India. The IPC’s enactment coincided with the growth and spread of both the press and popular protest in India. The statute contained the entire gamut of public-order and community-interest crimes to punish unlawful assembly, rioting, affray, wanton provocation, public nuisance, obscenity, defiling a place of worship, disturbing a religious assembly, wounding religious feelings, and so on. It also criminalised private offences such as causing insult, annoyance, and intimidation. These crimes continue to be invoked in India today to silence individual opinion and free speech, including on the Internet. Section 66A of the IT Act utilised a very similar vocabulary of censorship.

Interestingly, Macaulay’s IPC did not feature the common law offences of sedition and blasphemy or the peculiar Indian crime of promoting inter-community enmity; these were added later. Sedition was criminalised by section 124A at the insistence of Barnes Peacock and applied successfully against Indian nationalist leaders including Bal Gangadhar Tilak in 1897 and 1909, and Mohandas Gandhi in 1922. In 1898, the IPC was amended again to incorporate section 153A to criminalise the promotion of enmity between different communities by words or deeds. And, in 1927, a more controversial amendment inserted section 295A into the IPC to criminalise blasphemy. All three offences have been recently used in India against writers, bloggers, professors, and ordinary citizens.

Loss of the Right to Offend

The two amendments of 1898 and 1927, which together proscribed the promotion of inter-community enmity and blasphemy, represent the dismantling of the right to offend in India. But, oddly, they were defended by the colonial government in the interests of native sensibilities. The proceedings of the Imperial Legislative Council reveal that several members, including Indians, were enthusiastic about the amendments. For some, the amendments were a necessary corrective action to protect community honour from subversive speech. The 1920s were a period of ferment in India as the freedom movement intensified and communal tension mounted. In this environment, it was easy to fuse the colonial interest in strong administration with a nationalist narrative that demanded the retrieval of Indian custom to protect native sensibilities from being offended by individual free speech, a right derived from modern European law. No authoritative jurist could be summoned to prove or refute the claim that native custom privileged community honour.

Sadly, the specific incident which galvanised the amendment of 1927, which established the crime of blasphemy in India, would not appear unfamiliar to a contemporary observer. Mahashay Rajpal, an Arya Samaj activist, published an offensive pamphlet about the Prophet Muhammad titled Rangeela Rasool, for which he was arrested and tried but acquitted in the absence of specific blasphemy provisions. With his speech found to be legal, Rajpal was released and given police protection, but Ilam Din, a Muslim youth, stabbed him to death. Instead of supporting its criminal law and strengthening its police forces to implement the decisions of its courts, the colonial administration surrendered to the threat of public disorder and enacted section 295A of the IPC.

Protest and Community Honour

The amendment of 1927 marks an important point of rupture in the history of Indian free speech. It demonstrated the government’s policy intention of overturning the courts to restrict the individual’s right to speech when faced with public protest. In this way, the combination of public disorder and the newly-created crimes of promoting inter-community enmity and blasphemy opened the way for the criminal justice system to be used as a tool by natives to settle their socio-cultural disputes. Both these crimes address group offence; they do not redress individual grievances. In so far as they are designed to endorse group honour, these crimes signify the community’s attempt to subordinate modern law and individual rights.

Almost a century later, the Rangeela Rasool affair has become the depressing template for illegal censorship in India: fringe groups take offence at permissible speech, crowds are marshalled to articulate an imagined grievance, and the government capitulates to the threat of violence. This formula has become so entrenched that governance has grown reflexively suppressive, quick to silence speech even before the perpetrators of lumpen violence can receive affront. This is especially true of online speech, where censorship is driven by the additional anxiety brought by the difficulty of Internet regulation. In this race to be offended the government plays the parochial referee, acting to protect indigenous sensibilities from subversive but legal speech.

The Censorious Post-colony

Independence marked an opportunity to remake Indian governance in a freer image. The Constituent Assembly had resolved not to curb the freedom of speech in Article 19(1)(a) of the Constitution on account of public order. In two cases from opposite ends of the country, where right-wing and left-wing speech were punished by local governments on public order grounds, the Supreme Court acted on the Constituent Assembly’s vision and struck down the laws in question. Free speech, it appeared, would survive administrative concerns, thanks to the guarantee of a new constitution and an independent judiciary. Instead, Prime Minister Jawaharlal Nehru and his cabinet responded with the First Amendment in 1951, merely a year after the Constitution was enacted, to create three new grounds of censorship, including public order. In 1963, a year before he demitted office, the Sixteenth Amendment added a further restriction.

Nehru did not stop at amending the Constitution; he followed shortly after with a concerted attempt to stage-manage the press by de-legitimising certain kinds of permissible speech.

The government constituted the First Press Commission under Justice G. S. Rajadhyaksha, which attacked yellow journalism, seemingly a sincere concern, but defined it to include permissible albeit condemnable speech that was directed at communities, indecent or vulgar, or biased. Significantly, the Commission expected the press to publish only speech that conformed to the developmental and social objectives of the government. In other words, Nehru wanted the press to support his vision of India and used the imperative of nation-building to achieve this goal. So, the individual right to offend communities was taken away by law and policy, and speech that dissented from the government’s socio-economic and political agenda was discouraged by policy. Coupled with the new constitutional ground of censorship on account of public order, the career of free speech in independent India began uncertainly.

How to regulate permissible speech?

Despite the many restrictions imposed by law on free speech, Indian free speech policy has long been engaged with the question of how to regulate the permissible speech that survives constitutional scrutiny. This was significantly easier in colonial India. In 1799, Governor-General Richard Wellesley, the brother of the famous Duke of Wellington who defeated Napoleon at Waterloo, instituted a pre-censorship system to create what Rajeev Dhavan calls a “press by permission” marked by licensed publications, prior restraint, subsequent censorship, and harsh penalties. A new colonial regime for strict control over the publication of free speech was enacted in the form of the Press and Registration of Books Act, 1867, the preamble of which recognises that “the literature of a country is…an index of…the condition of [its] people”. The 1867 Act was diluted after independence but still remains alive in the form of the Registrar of Newspapers.

After surviving Indira Gandhi’s demand for a committed press and the depredations of her regime during the Emergency, India’s press underwent the examination of the Second Press Commission. This was appointed in 1978 under the chairmanship of Justice P. K. Goswami, a year after the Janata government released the famous White Paper on Misuse of Mass Media. When Gandhi returned to power, Justice Goswami resigned and the Commission was reconstituted under Justice K. K. Mathew. In 1982, the Commission’s report endorsed the earlier First Press Commission’s call for conformist speech, but went further by proposing the appointment of a press regulator invested with inspection powers; criminalising attacks on the government; re-interpreting defamation law to encompass democratic criticism of public servants; retaining stringent official secrecy law; and more. It was quickly acted upon by Rajiv Gandhi through his infamous Defamation Bill.

The contours of future Internet regulation

The juggernaut of Indian free speech policy has received temporary setbacks, mostly inflicted by the Supreme Court. Past experience shows us that governments with strong majorities – whether Jawaharlal Nehru’s following independence or Indira Gandhi’s in the 1970s – act on their administrative impulses to impede free speech by government policy. The Internet is a recent and uncontrollable medium of speech that attracts disproportionately heavy regulatory attention. Section 66A of the IT Act may be dead but several other provisions remain to harass and punish online free speech. Far from relaxing its grip on divergent opinions, the government appears poised for more incisive invasions of personal freedoms.

I do not believe the contours of future speech regulation on the Internet need to be guessed at; they can be derived from the last two centuries of India’s free speech policy. When section 66A is replaced – and it will be, whether overtly by fresh statutory provisions or stealthily by policy and non-justiciable committees and commissions – it will be through a regime that obeys the mandate of the First Press Commission to discourage dissenting and divergent speech while adopting the regulatory structures of the Second Press Commission to permit a limited inspector raj and forbid attacks on personalities. The interests of the community, howsoever improperly articulated, will seek precedence over individual freedoms, and the accompanying threat of violence will give new meaning to Bhimrao Ambedkar’s warning of the “grammar of anarchy”.

Net Neutrality and the Law of Common Carriage

by Bhairav Acharya last modified Aug 23, 2015 11:09 AM
Net neutrality makes strange bedfellows. It links the truck operators that dominate India’s highways, such as those that carry vegetables from rural markets to cities, and Internet service providers which perform a more technologically advanced task.

Download PDF


Over the last decade, the truckers have opposed the government’s attempts to impose the obligations of common carriage on them; this has resulted in strikes and temporary price rises. In the years ahead, there is likely to be a similar – yet technologically very different – debate as net neutrality advocates call for an adapted version of common carriage to bind Internet services.

Net neutrality demands a rigorous examination that is not attempted by this short note, which, constrained by space, will only briefly trace the law and policy of net neutrality in the US and compare it with the principles of common carriage in India. Net neutrality defies easy definition. Very simply, the principle demands that Internet users have equal access to all content and applications on the Internet. This can only be achieved if Internet service providers: (i) do not block lawful content; (ii) do not throttle – deliberately slow down or speed up – access to selected content; (iii) do not prioritise certain content over other content for monetary gain; and (iv) are transparent in their management of the networks by which data flows.

Almost exactly a year ago, the District of Columbia Circuit Court of Appeals – a senior court below the US Supreme Court – struck down portions of the ‘Open Internet Order’ that was issued by the Federal Communications Commission (FCC) in 2010. Although sound in law, the Court’s verdict impeded net neutrality and raised crucial questions regarding common carriage, free speech, competition, and other issues. More recently, Airtel’s announcement of its decision to charge certain end-users for VoIP services – subsequently suspended pending a policy decision from the Telecom Regulatory Authority of India (TRAI) – has fuelled the net neutrality debate in India.

Because of its innovative technological history in relation to the Internet, the US has pioneered many legal attempts to regulate the Internet in respect of net neutrality. In 1980, when Internet data flowed through telephone lines, the FCC issued the ‘Computer II’ regime which distinguished basic services from enhanced services. The difference between the two turned on the nature of the transmission. Regular telephone calls involved a pure transmission of data and were hence classified as basic services. On the other hand, access to the Internet required the processing of user data through computers; these were classified as enhanced services. Importantly, because of their essential nature, the Computer II rules bound basic services providers to the obligations of common carriage whereas enhanced services providers were not.

What is common carriage? Common law countries share a unique heritage in respect of their law governing the transport of goods and people. Those that perform such transport are called carriers. The law makes a distinction between common carriers and other carriers. A carrier becomes a common carrier when it “holds itself out” to the public as willing to transport people or goods for compensation. The act of holding out is simply a public communication of an offer to transport; it may be satisfied even by an advertisement. The four defining elements of a common carrier are (i) a holding out of a willingness (a public undertaking) (ii) to transport persons or property (iii) from place to place (iv) for compensation.

Common carriers discharge a public trust. By virtue of their unique position and essential function, they are required to serve their customers equally and without discrimination. The law of carriage of goods and people places four broad duties upon common carriers. Firstly, common carriers are bound to carry everyone’s goods or all people and cannot refuse such carriage unless certain strict conditions are met. Secondly, common carriers must perform their carriage safely without deviating from accepted routes except in exceptional circumstances. Thirdly, common carriers must keep to their schedules; they must be on time. And, lastly, common carriers must assume liability for loss of or damage to goods, or death of or injury to people, during carriage.

The Computer II regime was issued under a telecommunications law of 1934 which retained the classical markers and duties of common carriers. The law extended the principles of common carriage to telephone services providers. In 1980, when the regime was introduced, the FCC did not regard Internet services as equally essential or as discharging the same public trust; hence, enhanced services escaped strict regulation. However, the FCC did require that basic services and enhanced services be offered through separate entities, and that basic services providers that operated the ‘last-mile’ wired transmission infrastructure to users offer these facilities to enhanced services providers on a common carrier basis.

In 1996, the new Telecommunications Act revisited US law after more than sixty years. The new dispensation maintained the broad structure of the Computer II regime: it recognised telecommunications carriers in place of basic services providers, and information-services providers in place of enhanced services providers. Carriers in the industry had already converged telephone and Internet communications into a single service. Hence, when a user engaged a carrier that provided telephone and broadband Internet services, the classification of the carrier would depend on the service being accessed. When a carrier provided broadband Internet access, it was an information-services provider (not a telecommunications carrier) and vice versa. Again, telecommunications carriers were subjected to stricter regulations and liability resembling common carriage.

In 1998, the provision of broadband Internet over wired telephone lines through DSL technologies was determined to be a pure transmission and hence a telecommunications service warranting common carriage regulation. However, in 2002, the FCC issued the ‘Cable Broadband Order’ that treated the provision of broadband through last-mile cable transmission networks as a single and integrated information service. This exempted most cable broadband from the duties of common carriage. The policy was challenged in the US Supreme Court in 2005 in the Brand X case and upheld.

Significantly, the decision in the Brand X case was not made on technological merits. The case arose when a small ISP that had hitherto used regular telephone lines to transmit data wanted equal access to the coaxial cables of the broadcasting majors on the basis of common carriage. Instead of making a finding on the status of cable broadband providers based on the four elements of common carriage, the Court employed an administrative law principle of deferring to the decisions of an expert technical regulator – known as the Chevron deference principle – to rule against the small ISP. Thereafter wireless and mobile broadband were also declared to be information services and saved from the application of common carriage law.

Taking advantage of this exemption from common carriage, which released broadband providers from the duties of equal access and non-discrimination, Comcast began in 2007 to degrade P2P data flows to its users. This throttling was reported to the FCC, which responded with the 2008 ‘Comcast Order’ to demand equal and transparent transmission from Comcast. Instead, Comcast took the FCC to court. In 2010, the Comcast Order was struck down by the DC Circuit Court of Appeals. And, again, the decision in the Comcast case was made on an administrative law principle, not on technological merits.

In the Comcast case, the Court said that as long as the FCC treated broadband Internet access as an information service it could not enforce an anti-discrimination order against Comcast. This is because the duty of anti-discrimination attached only to common carriers which the FCC applied to telecommunications carriers. Following the Comcast case, the FCC began to consider reclassifying broadband Internet providers as telecommunications carriers.

However, in the 2010 ‘Open Internet Order’, the FCC attempted a different regulatory approach. Instead of a classification based on common carriage, the new rules recognised two types of Internet service providers: (i) fixed providers, which transmitted to homes, and, (ii) mobile providers, which were accessed by smartphones. The rules required both types of providers to ensure transparency in network management, disallowed blocking of lawful content, and re-imposed the anti-discrimination requirement to forbid prioritised access or throttling of certain content.

Before the rules were even brought into effect, Verizon challenged the Open Internet Order in the same court that delivered the Comcast judgement; as noted above, that challenge resulted in portions of the Order being struck down. Meanwhile, in India, Airtel’s rollback of its announcement to charge its pre-paid mobile phone users more for VoIP services raises very similar questions. Like the rest of the common law world, India already extends the principles of common carriage to telecommunications. Indian jurisprudence also sustains the distinction between common carriage and private carriage, and applies an anti-discrimination requirement to telecommunications providers through a licensing regime.

TRAI must decide if it wants to continue this distinction. No doubt, the provision of communications services through the telephone and the Internet serves an eminent public good. It was on this basis that President Obama called on the FCC to reclassify broadband Internet providers as common carriers. Telecommunications carriers, such as Airtel, might argue that they have expended large sums of money on network infrastructure that is undermined by the use of high-bandwidth free VoIP applications, and that the law of common carriage must recognise this fact. Still others call for a new approach to net neutrality outside the dichotomy of common and private carriage. Whatever the solution, it must be reached through widespread engagement and participation, for Internet access – as the government’s Digital India project is aware – serves the public interest.

Net Neutrality and the Law of Common Carriage

by Bhairav Acharya last modified Aug 23, 2015 11:06 AM

PDF document icon Net Neutrality and the Law of Common Carriage.pdf — PDF document, 92 kB (94529 bytes)

Privacy, Autonomy, and Sexual Choice: The Common Law Recognition of Homosexuality

by Bhairav Acharya last modified Aug 23, 2015 12:20 PM
In the last few decades, all major common law jurisdictions have decriminalised non-procreative sex – oral and anal sex (sodomy) – to allow private, consensual, and non-commercial homosexual intercourse.

Download PDF

Anti-sodomy statutes across the world, often drafted in the same anachronistic vein as section 377 of the Indian Penal Code, 1860 (“IPC”), have either been repealed or struck down on the grounds that they invade individual privacy and discriminate detrimentally against homosexual people.

This is not an examination of India’s laws against homosexuality; it does not review the Supreme Court of India’s judgment in Suresh Koushal v. Naz Foundation (2014) 1 SCC 1 nor the Delhi High Court’s judgment in Naz Foundation v. Government of NCT Delhi 2009 (160) DLT 277, which the former overturned – in my view, wrongly. This note simply provides a legal history of the decriminalisation of non-procreative sexual activity in the United Kingdom and the United States. Same-sex marriage is also not examined.

In the United Kingdom

The Wolfenden Report

In England, following a campaign of arrests of non-heterosexual persons and subsequent protests in the 1950s, the government responded to public dissatisfaction by appointing the Departmental Committee on Homosexual Offences and Prostitution chaired by John Frederick Wolfenden. The report of this committee (“Wolfenden Report”) was published in 1957 and recommended that:

“…homosexual behaviour between consenting adults in private should no longer be a criminal offence.”

The Report further observed that it was not the function of a State to punitively scrutinise the private lives of its citizens:

“(T)he law’s function is to preserve public order and decency, to protect the citizen from what is offensive or injurious, and to provide sufficient safeguards against exploitation and corruption of others… It is not, in our view, the function of the law to intervene in the private life of citizens, or to seek to enforce any particular pattern of behaviour.”

The Sexual Offences Act, 1967

The Wolfenden Report was accepted and, in its pursuance, the Sexual Offences Act, 1967 was enacted to, for the first time in common law jurisdictions, partially decriminalise homosexual activity – described in English law as ‘buggery’ or anal sex between males.
Section 1(1) of the original Sexual Offences Act, as notified on 27 July 1967 stated –
"Notwithstanding any statutory or common law provision, but subject to the provisions of the next following section, a homosexual act in private shall not be an offence provided that the parties consent thereto and have attained the age of twenty one years."
A ‘homosexual act’ was defined in section 1(7) as –
“For the purposes of this section a man shall be treated as doing a homosexual act if, and only if, he commits buggery with another man or commits an act of gross indecency with another man or is a party to the commission by a man of such an act.”
The meaning of ‘private’ was also set forth rather strictly in section 1(2) –
“An act which would otherwise be treated for the purposes of this Act as being done in private shall not be so treated if done –
(a) when more than two persons take part or are present; or
(b) in a lavatory to which the public have or are permitted to have access, whether on
payment or otherwise.”
Hence, by 1967, English law permitted:

  • as between two men,
  • both twenty-one years or older,
  • anal sex (buggery),
  • and other sexual activity (“gross indecency”)
  • if, and only if, a strict prescription of privacy was maintained,
  • that excluded even a non-participating third party from being present,
  • and restricted the traditional conception of public space to exclude even lavatories.

However, the benefit of Section 1 of the Sexual Offences Act, 1967 did not extend beyond England and Wales; nor did it apply to mentally unsound persons, members of the armed forces, merchant ships, or members of merchant ship crews whether on land or otherwise.

Developments in Scotland and Northern Ireland

Over the years, the restrictions in the original Sexual Offences Act, 1967 were lifted. In 1980, the Criminal Justice (Scotland) Act, 1980 partially decriminalised homosexual activity in Scotland on the same lines as the Act of 1967 did for England and Wales. One year later, in 1981, an Irishman, Jeffrey Dudgeon, successfully challenged the continued criminalisation of homosexuality in Northern Ireland before the European Court of Human Rights (“ECHR”) in the case of Dudgeon v. United Kingdom (1981) 4 EHRR 149. Interestingly, Dudgeon was not decided on the basis of detrimental discrimination or inequality, but on the ground that the continued illegality of homosexuality violated the petitioner’s right to privacy guaranteed by Article 8 of the 1950 European Convention on Human Rights (“European Convention”). In a 15-4 majority judgement, the ECHR found that “…moral attitudes towards male homosexuality…cannot…warrant interfering with the applicant’s private life…” Following Dudgeon, the Homosexual Offences (Northern Ireland) Order, 1982 came into effect and brought some semblance of uniformity to the sodomy laws of the United Kingdom.

Equalising the age of consent

However, protests continued against the unequal age of consent required for consensual homosexual sex (21 years) as opposed to that for heterosexual sex (16 years). In 1979, a government policy advisory recommended that the age of consent for homosexual sex be reduced to 18 years, two years older than that for heterosexual sex, but the recommendation was never acted upon. In 1994, an attempt to statutorily equalise the age of consent at 16 years was defeated in the largely conservative House of Commons, although a separate legislative proposal to reduce it to 18 years was carried and enacted under the Criminal Justice and Public Order Act, 1994. The unequal ages of consent then prompted a challenge against UK law in the ECHR in 1994; four years later, in Sutherland v. United Kingdom [1998] EHRLR 117, the ECHR found that the unequal age of consent violated Articles 8 and 14 of the European Convention, relating to privacy and discrimination respectively. Sutherland was significant in two ways: it forced the British government to once again introduce legislation to equalise the ages of consent; and, significantly, it affirmed a homosexual human right on the ground of anti-discrimination (as opposed to privacy).

To meet its European Convention commitments, the House of Commons passed, in June 1998, a bill for an equal age of sexual consent, but it was rejected by the more conservative House of Lords. In December 1998, the government reintroduced the equal age of consent legislation, which again passed the House of Commons and was again defeated in the House of Lords. Finally, in 1999, the government invoked the statutory superiority of the House of Commons, reintroduced the legislation for the third time and passed it unilaterally, resulting in the enactment of the Sexual Offences (Amendment) Act, 2000, which equalised the age of sexual consent for both heterosexuals and homosexuals at 16 years of age.

Uniformity of equality

However, by this time, different UK jurisdictions observed separate legislation regarding homosexual activity. The privacy conditions stipulated in the original Sexual Offences Act, 1967 remained, although they had been subject to varied interpretation by English courts. To resolve this, the UK Parliament enacted the Sexual Offences Act, 2003, which repealed all earlier conflicting legislation, removed the strict privacy conditions attached to homosexual activity and re-drafted sexual offences in a gender-neutral manner. A year later, the Civil Partnership Act, 2004 gave same-sex couples the same rights and responsibilities as those of a civil marriage. And, in 2007, the Equality Act (Sexual Orientation) Regulations came into force to prohibit general discrimination against homosexual persons in the same manner as such prohibition exists in respect of race, religion, disability, sex and so on.

In the United States

Diversity of state laws

Sodomy laws in the United States of America have followed a different trajectory. A different political and legal system leaves individual US States with wide powers to draft and follow their own constitutions and laws. Accordingly, by 1961 all US States had their own individual anti-sodomy laws, with different definitions of sodomy and homosexuality. In 1962, Illinois became the first US State to repeal its anti-sodomy law. Many States followed suit over the next decades including Connecticut (1971); Colorado and Oregon (1972); Delaware, Hawaii and North Dakota (1973); Ohio (1974); New Hampshire and New Mexico (1975); California, Maine, Washington and West Virginia (1976); Indiana, South Dakota, Wyoming and Vermont (1977); Iowa and Nebraska (1978); New Jersey (1979); Alaska (1980); and, Wisconsin (1983).

Bowers v. Hardwick

However, not all States repealed their anti-sodomy laws. Georgia was one such State: it retained a statutory bar to any oral or anal sex between any persons of any sex, contained in Georgia Code Annotated §16-6-2 (1984) (“Georgia statute”), which provided, in pertinent part, as follows:

“(a) A person commits the offense of sodomy when he performs or submits to any sexual act involving the sex organs of one person and the mouth or anus of another… (b) A person convicted of the offense of sodomy shall be punished by imprisonment for not less than one nor more than 20 years”

In 1982, a police officer arrested Michael Hardwick in his bedroom for sodomy, an offence which carried a prison sentence of up to twenty years. His case went all the way up to the US Supreme Court which, in 1986, pronounced its judgement in Bowers v. Hardwick 478 US 186 (1986). Although the Georgia statute was framed broadly to include even heterosexual sodomy (anal or oral sex between a man and a woman or two women) within its ambit of prohibited activity, the Court chose to frame the issue at hand rather narrowly. Justice Byron White, speaking for the majority, observed at the outset –

“This case does not require a judgment on whether laws against sodomy between consenting adults in general, or between homosexuals in particular, are wise or
desirable. It raises no question about the right or propriety of state legislative decisions to repeal their laws that criminalize homosexual sodomy, or of state-court decisions invalidating those laws on state constitutional grounds. The issue presented is whether the Federal Constitution confers a fundamental right upon homosexuals to engage in sodomy…”

Privacy and autonomy

Interestingly, Hardwick’s case against the Georgia statute was not grounded on an equality-discrimination argument (since the Georgia statute prohibited even heterosexual sodomy but was only enforced against homosexuals) but on a privacy argument that sought to privilege and immunise private consensual non-commercial sexual conduct from intrusive State intervention. To support this privacy claim, a long line of cases was relied upon that restricted the State’s ability to intervene in, and so upheld the sanctity of, the home, marriage, procreation, contraception, child rearing and so on [See, Carey v. Population Services 431 US 678 (1977), Pierce v. Society of Sisters 268 US 510 (1925) and Meyer v. Nebraska 262 US 390 (1923) on child rearing and education; Prince v. Massachusetts 321 US 158 (1944) on family relationships; Skinner v. Oklahoma ex rel. Williamson 316 US 535 (1942) on procreation; Loving v. Virginia 388 US 1 (1967) on marriage; Griswold v. Connecticut 381 US 479 (1965) and Eisenstadt v. Baird 405 US 438 (1972) on contraception; and Roe v. Wade 410 US 113 (1973) on abortion]. Further, the Court was pressed to declare a fundamental right to consensual homosexual sodomy by reading it into the Due Process clause of the Fourteenth Amendment to the US Constitution.

The nine-judge Court split 5-4 to rule against all of Hardwick’s propositions and uphold the constitutionality of the Georgia statute. The Court’s majority agreed that the cases cited by Hardwick had indeed evolved a right to privacy, but disagreed that this privacy extended to homosexual persons since “(n)o connection between family, marriage, or procreation on the one hand and homosexual activity on the other has been demonstrated…”. In essence, the Court’s majority held that homosexuality was distinct from procreative human sexual behaviour; that homosexual sex could, by virtue of this distinction, be separately categorised and discriminated against; and, hence, that homosexual sex did not qualify for the benefit of intimate privacy protection that was available to heterosexuals. What reason did the Court give to support this discrimination? Justice White, speaking for the majority, gives us a clue: “Proscriptions against that (homosexual) conduct have ancient roots.” Justice White was joined in his majority judgement by Chief Justice Burger, Justice Powell, Justice Rehnquist and Justice O’Connor. His rationale was underscored by Chief Justice Burger, who also wrote a short concurring opinion wherein he claimed:

“Decisions of individuals relating to homosexual conduct have been subject to state intervention throughout the history of Western civilization. Condemnation of those practices is firmly rooted in Judeo-Christian moral and ethical standards. Blackstone described “the infamous crime against nature” as an offense of “deeper malignity” than rape, a heinous act “the very mention of which is a disgrace to human nature,” and “a crime not fit to be named.” … To hold that the act of homosexual sodomy is somehow protected as a fundamental right would be to cast aside millennia of moral teaching.”

The majority’s “wilful blindness”: Blackmun’s dissent

The Court’s dissenting opinion was delivered by Justice Blackmun, in which Justice Brennan, Justice Marshall and Justice Stevens joined. At the outset, Justice Blackmun disagreed with the issue as framed by the majority led by Justice White: “This case is (not) about “a fundamental right to engage in homosexual sodomy,” as the Court purports to declare…”, and further pointed out that the Georgia statute proscribed not just homosexual sodomy, but oral or anal sex committed by any two persons: “…the Court’s almost obsessive focus on homosexual activity is particularly hard to justify in light of the broad language Georgia has used.” When considering the issue of privacy for intimate sexual conduct, Justice Blackmun criticised the findings of the majority: “Only the most wilful blindness could obscure the fact that sexual intimacy is a sensitive, key relationship of human existence, central to family life, community welfare, and the development of human personality…” And when dealing with the ‘historical morality’ argument that was advanced by Chief Justice Burger, the minority observed:

“The assertion that “traditional Judeo-Christian values proscribe” the conduct involved cannot provide an adequate justification for (§)16-6-2 (of the Georgia Statute). That certain, but by no means all, religious groups condemn the behavior at issue gives the State no license to impose their judgments on the entire citizenry. The legitimacy of secular legislation depends instead on whether the State can advance some justification for its law beyond its conformity to religious doctrine.”

The states respond, privacy is upheld

Bowers was argued and decided over five years in the 1980s. At the time, the USA was witnessing a neo-conservative wave in its society and government, which was headed by a conservative Republican. The HIV/AIDS issue had achieved neither the domestic nor the international proportions it now occupies, and the linkages between HIV/AIDS, homosexuality and the right to health were still unclear. In the years after Bowers, several more US States repealed their sodomy laws.

In some US States, sodomy laws that were not legislatively repealed were judicially struck down. In 1998, the Georgia State Supreme Court, in Powell v. State of Georgia S98A0755, 270 Ga. 327, 510 S.E. 2d 18 (1998), heard a challenge to the same sodomy provision of the Georgia statute that had been upheld by the US Supreme Court in Bowers. In a complete departure from the US Supreme Court’s findings, the Georgia Supreme Court first considered whether the Georgia statute violated individual privacy: “It is clear from the right of privacy appellate jurisprudence…that the “right to be let alone” guaranteed by the Georgia Constitution is far more extensive than the right of privacy protected by the U.S. Constitution…”

Having established that an individual right to privacy existed to protect private consensual sodomy, the Georgia Court then considered whether there was a ‘legitimate State interest’ that justified the State’s restriction of this right. The justifications that were offered by the State included the possibility of child sexual abuse, prostitution and moral degradation of society. The Court found that there already were a number of legal provisions to deter and punish rape, child abuse, trafficking, prostitution and public indecency. Hence: “In light of the existence of these statutes, the sodomy statute’s raison d’ etre can only be to regulate the private sexual conduct of consenting adults, something which Georgians’ right of privacy puts beyond the bounds of government regulation.” By a 2-1 decision, Chief Justice Benham leading the majority, the Georgia Supreme Court struck down the Georgia statute for arbitrarily violating the privacy of individuals. Interestingly, the subjects of the dispute were not homosexual, but two heterosexual adults – a man and a woman. Similar cases where a US State’s sodomy laws were judicially struck down include:

  • Campbell v. Sundquist 926 S.W.2d 250 (1996) – [Tennessee – by the Tennessee Court of Appeals on privacy violation; appeal to the State Supreme Court expressly denied];
  • Commonwealth v. Bonadio 415 A.2d 47 (1980) – [Pennsylvania – by the Pennsylvania Supreme Court on both equality and privacy violations];
  • Doe v. Ventura MC 01-489, 2001 WL 543734 (2001) – [Minnesota – by the Hennepin County District Judge on privacy violation; no appellate challenge];
  • Gryczan v. Montana 942 P.2d 112 (1997) – [Montana – by the Montana Supreme Court on privacy violation];
  • Jegley v. Picado 80 S.W.3d 332 (2001) – [Arkansas – by the Arkansas Supreme Court, on privacy violation];
  • Kentucky v. Wasson 842 S.W.2d 487 (1992) [Kentucky – by the Kentucky Supreme Court on both equality and privacy violations];
  • Massachusetts v. Balthazar 366 Mass. 298, 318 NE2d 478 (1974) and GLAD v. Attorney General 436 Mass. 132, 763 NE2d 38 (2002) – [Massachusetts – by the Superior Judicial Court on privacy violation];
  • People v. Onofre 51 NY 2d 476 (1980) [New York – by the New York Court of Appeals on privacy violation]; and,
  • Williams v. Glendenning No. 98036031/CL-1059 (1999) – [Maryland – by the Baltimore City Circuit Court on both privacy and equality violations; no appellate challenge].

Lawrence v. Texas

These developments made for an uneven field in the matter of legality of homosexual sex with the sodomy laws of most States being repealed by their State legislatures or subject to State judicial invalidation, while the sodomy laws of the remaining States were retained under the shade of constitutional protection afforded by Bowers. Texas was one such State which maintained an anti-sodomy law contained in Texas Penal Code Annotated § 21.06(a) (2003) (“Texas statute”) which criminalised sexual intercourse between two people of the same sex. In 1998, the Texas statute was invoked to arrest two men engaged in private, consensual, non-commercial sodomy. They subsequently challenged the constitutionality of the Texas statute, their case reaching the US Supreme Court. In 2003, the US Supreme Court, in Lawrence v. Texas 539 US 558 (2003) pronounced on the validity of the Texas statute. Interestingly, while the issue under consideration was identical to that decided in Bowers, the Court this time around was presented with detailed arguments on the equality-discrimination aspect of same-sex sodomy laws – which the Bowers Court majority did not consider. The Court split 6-3; the majority struck down the Texas statute. Justice Kennedy, speaking for himself and 4 other judges of the majority, found instant fault with the Bowers Court for framing the issue in question before it as simply whether homosexuals had a fundamental right to engage in sodomy.

Privacy, intimacy, home

This mistake, Justice Kennedy claimed, “…discloses the Court’s own failure… To say that the issue in Bowers was simply the right to engage in certain sexual conduct demeans…the individual…just as it would demean a married couple were it to be said marriage is simply about the right to have sexual intercourse. Their penalties and purposes (of the laws involved)…have more far-reaching consequences, touching upon the most private human conduct, sexual behavior, and in the most private of places, the home.” Justice Kennedy, joined by Justice Stevens, Justice Souter, Justice Ginsburg and Justice Breyer, found that the Texas statute violated the right to privacy granted by the Due Process clause of the US Constitution:

“The petitioners are entitled to respect for their private lives. The State cannot demean their existence or control their destiny by making their private sexual conduct a crime. “It is a promise of the Constitution that there is a realm of personal liberty which the government may not enter.”” [The quote is c.f. Planned Parenthood of Southeastern Pa. v. Casey 505 US 833 (1992)]

Imposed morality is defeated

With the privacy argument established as controlling, Justice Kennedy went to some length to refute the ‘historical morality’ argument that was put forward in Bowers by then Chief Justice Burger: “At the outset it should be noted that there is no longstanding history in this country of laws directed at homosexual conduct as a distinct matter… The sweeping references by Chief Justice Burger to the history of Western civilization and to Judeo-Christian moral and ethical standards did not take account of other authorities pointing in an opposite direction.” To illustrate these other authorities, Justice Kennedy referenced the ECHR’s decision in Dudgeon supra, which was reached five years before Bowers: “Authoritative in all countries that are members of the Council of Europe (21 nations then, 45 nations now), the decision (Dudgeon) is at odds with the premise in Bowers that the claim put forward was insubstantial in our Western civilization.”

The Court then affirmed that morality could not be a compelling ground to infringe upon a fundamental right: “Our obligation is to define the liberty of all, not to mandate our own moral code”. The lone remaining judge of the majority, Justice O’Connor, based her decision not on the right to privacy but on equality-discrimination considerations. Interestingly, Justice O’Connor sat on the Bowers Court and ruled with the majority in that case. Basing her decision on equal protection grounds allowed her to concur with the majority in Lawrence but not overturn her earlier position in Bowers which had rejected a right to privacy claim. It also enabled her to strike down the Texas statute while not conceding homosexuality as a constitutionally guaranteed private liberty. There were three dissenters: The chief dissent was delivered by Justice Scalia, in which he was joined by Chief Justice Rehnquist and Justice Thomas. Bowers was not merely distinguished by the majority, it was overruled:

“Bowers was not correct when it was decided, and it is not correct today. It ought not to remain binding precedent. Bowers v. Hardwick should be and now is overruled.”

Mastering the Art of Keeping Indians Under Surveillance

by Bhairav Acharya last modified Aug 23, 2015 12:26 PM
In its first year in office, the National Democratic Alliance government has been notably silent on the large-scale surveillance projects it has inherited. This ended last week amidst reports the government is hastening to complete the Central Monitoring System (CMS) within the year.

The article was published in The Wire on May 30, 2015.


In a statement to the Rajya Sabha in 2009, Gurudas Kamat, the erstwhile United Progressive Alliance’s junior communications minister, said the CMS was a project to enable direct state access to all communications on mobile phones, landlines, and the Internet in India. He meant the government was building ‘backdoors’, or capitalising on existing ones, to enable state authorities to intercept any communication at will, besides collecting large amounts of metadata, without having to rely on private communications carriers.

This is not new. Legally sanctioned backdoors have existed in Europe and the USA since the early 1990s to enable direct state interception of private communications. But the laws of those countries also subject state surveillance to a strong regime of state accountability, individual freedoms, and privacy. This regime may not be completely robust, as Edward Snowden’s revelations have shown, but at least it exists on paper. The CMS is not illegal by itself, but it is coloured by the compromised foundation of Indian surveillance law upon which it is built.

Surveillance and social control

The CMS is a technological project. But technology does not exist in isolation; it is contextualised by law, society, politics, and history. Surveillance and the CMS must be seen in the same contexts.

The great sociologist Max Weber claimed the modern state could not exist without monopolising violence. It seems clear that the state entertains an equally strong desire to monopolise communications technologies. The state has historically shaped the way in which information is transmitted, received, and intercepted. From the telegraph and radio to telephones and the Internet, the state has constantly endeavoured to control communications technologies.

Law is the vehicle of this control. When the first telegraph line was laid down in India, its implications for social control were instantly realised; so the law swiftly responded by creating a state monopoly over the telegraph. The telegraph played a significant role in thwarting the Revolt of 1857, even as Indians attempted to destroy the line; so the state consolidated its control over the technology to obviate future contests.

This controlling impulse was exercised over radio and telephones, which are also government monopolies, and is expressed through the state’s surveillance prerogative. On the other hand, because of its open and decentralised architecture, the Internet presents the single greatest threat to the state’s communications monopoly and dilutes its ability to control society.

Interception in India

The power to intercept communications arises with the regulation of telegraphy. The first two laws governing telegraphs, in 1854 and 1860, granted the government powers to take possession of telegraphs “on the occurrence of any public emergency”. In 1876, the third telegraph law expanded this threshold to include “the interest of public safety”. These are vague phrases and their interpretation was deliberately left to the government’s discretion.

This unclear formulation was replicated in the Indian Telegraph Act of 1885, the fourth law on the subject, which remains in force today. The 1885 law included a specific power to wiretap. Incredibly, this colonial surveillance provision survived untouched for 87 years even as countries across the world balanced their surveillance powers with democratic safeguards.

The Indian Constitution requires all deprivations of free speech to conform to any of nine grounds listed in Article 19(2). Public emergencies and public safety are not listed. So Indira Gandhi amended the wiretapping provision in 1972 to insert five grounds copied from Article 19(2). However, the original unclear language on public emergencies and public safety remained.

Indira Gandhi’s amendment was ironic because one year earlier she had overseen the enactment of the Defence and Internal Security of India Act, 1971 (DISA), which gave the government fresh powers to wiretap. These powers were not subject to even the minimal protections of the Telegraph Act. When the Emergency was imposed in 1975, Gandhi’s government bypassed her earlier amendment and, through the DISA Rules, instituted the most intensive period of surveillance in Indian history.

Although DISA was repealed, the tradition of having parallel surveillance powers for fictitious emergencies continues to flourish. Wiretapping powers are also found in the Maharashtra Control of Organised Crime Act, 1999 which has been copied by Karnataka, Andhra Pradesh, Arunachal Pradesh, and Gujarat.

Procedural weaknesses

Meanwhile, the Telegraph Act with its 1972 amendment continued to weather criticism through the 1980s. The wiretapping power was largely exercised free of procedural safeguards such as the requirements to exhaust other less intrusive means of investigation, minimise information collection, limit the sharing of information, ensure accountability, and others.

This changed in 1996 when the Supreme Court, on a challenge brought by the PUCL, ordered the government to create a minimally fair procedure. The government eventually fell in line, and a new rule, 419A, was inserted into the Indian Telegraph Rules, 1951, in 2007.

Unlike the United States, where a wiretap can only be ordered by a judge when she decides the state has legally made its case for the requested interception, an Indian wiretap is sanctioned by a bureaucrat or police officer. Unlike the United Kingdom, which also grants wiretapping powers to bureaucrats but subjects them to two additional safeguards, an independent auditor and a judicial tribunal, an Indian wiretap is only reviewed by a committee of the original bureaucrat’s colleagues. Unlike most of the world, which restricts this power to grave crime or serious security needs, an Indian wiretap can even be obtained by the income tax department.

Rule 419A certainly creates procedure, but its lack of crucial safeguards impugns its credibility. Worse, the contours of rule 419A were copied in 2009 to create flawed procedures to intercept the content of Internet communications and collect metadata. Unlike rule 419A, these new rules, issued under sections 69(2) and 69B(3) of the Information Technology Act, 2000, have not been constitutionally scrutinised.

Three steps to tap

Despite its monopoly, the state does not own the infrastructure of telephones. It is dependent on telecommunications carriers to physically perform the wiretap. Indian wiretaps take place in three steps: a bureaucrat authorises the wiretap; a law enforcement officer serves the authorisation on a carrier; and, the carrier performs the tap and returns the information to the law enforcement officer.

There are many moving parts in this process, and so there are leaks. Some leaks are cynically motivated, such as the leak of Amar Singh’s lewd conversations in 2011. But others serve a public purpose: Niira Radia’s conversations were allegedly leaked by a whistleblower to reveal serious governmental culpability. Ironically, leaks have created accountability where the law has failed.

The CMS will prevent leaks by installing servers on the transmission infrastructure of carriers to divert communications to regional monitoring centres. Regional centres, in turn, will relay communications to a centralised monitoring centre where they will be analysed, mined, and stored. Carriers will no longer perform wiretaps; and, since this obviates their costs of compliance, they are willing participants.

In its annual report of 2012, the Centre for the Development of Telematics (C-DOT), a state-owned R&D centre tasked with designing and creating the CMS, claimed the system would intercept 3G video, ILD, SMS, and ISDN PRI communications made through landlines or mobile phones – both GSM and CDMA.

There are unclear reports of an expansion to intercept Internet data, such as emails and browsing details, as well as instant messaging services; but these remain unconfirmed. There is also a potential overlap with another secretive Internet surveillance programme being developed by the Defence R&D Organisation called NETRA, no details of which are public.

Culmination of surveillance

In its present state, Indian surveillance law is unable to bear the weight of the CMS project, and must be vastly strengthened to protect privacy and accountability before the state is given direct access to communications.

But there is a larger way to understand the CMS in the context of Indian surveillance. Christopher Bayly, the noted colonial historian, writes that when the British set about establishing a surveillance apparatus in colonised India, they came up against an established system of indigenous intelligence gathering. Colonial rule was at its most vulnerable at this point of intersection between foreign surveillance and indigenous knowledge, and the meeting of the two was riven by suspicion. So the colonial state simply co-opted the interface by creating institutions to acquire local knowledge.

The CMS is also an attempt to co-opt the interface between government and the purveyors of communications; because if the state cannot control communications, it cannot control society. Seen in this light, the CMS represents the natural culmination of the progression of Indian surveillance. No challenge against it that does not question the construction of the modern Indian state will be successful.

The Four Parts of Privacy in India

by Bhairav Acharya last modified Aug 23, 2015 01:04 PM
Privacy enjoys an abundance of meanings. It is claimed in diverse situations every day by everyone against other people, society and the state.

Traditionally traced to classical liberalism’s public/private divide, there are now several theoretical conceptions of privacy that collaborate and sometimes contend. Indian privacy law is evolving in response to four types of privacy claims: against the press, against state surveillance, for decisional autonomy, and in relation to personal information. The Indian Supreme Court has selectively borrowed competing foreign privacy norms, primarily American, to create an unconvincing pastiche of privacy law in India. These developments are undermined by a lack of theoretical clarity and the continuing tension between individual freedoms and communitarian values.

This was published in Economic & Political Weekly, 50(22), 30 May 2015. Download the full article here.

The Four Parts of Privacy in India

by Bhairav Acharya last modified Aug 23, 2015 01:02 PM

Acharya - The Four Parts of Privacy in India (EPW Insight).pdf — PDF document, 610 kB (625400 bytes)

Multi-stakeholder Advisory Group Analysis

by Jyoti Panday last modified Apr 12, 2016 10:02 AM
This analysis has been done to see the trend in the selection and rotation of the members of the Multistakeholder Advisory Group (MAG) of the Internet Governance Forum (IGF). The MAG has been functional for nine years, from 2006 to 2015. The analysis is based on data procured, collated and organised by Pranesh Prakash and Jyoti Panday. Shambhavi Singh, a law student at NLU Delhi who was interning with CIS at the time, also assisted with the organisation and analysis of the data.

The researcher collected the data from the lists of members available in the public domain for 2010-2015. The lists prior to 2010 were procured by the Centre for Internet and Society from the UN Secretariat of the Internet Governance Forum (IGF).

This research is based solely on the membership lists, and the nature of each member's stakeholding has been analysed in the light of the MAG terms of reference. No data has been made available regarding the nomination process or the criteria on which a particular member has been re-appointed to the MAG (the IGF Secretariat does not share this data).

According to the analysis, over this period the MAG has had around 182 members from various stakeholder groups.

We have divided the members into five stakeholder groups: Government, Civil Society, Industry, Technical Community and Academia. Any overlap between two or more of these groups has also been taken into account; for example, a member of the Internet Society (ISOC) may belong to both the Civil Society and the Technical Community groups.

According to the MAG Terms of Reference[1], it is the prerogative of the UN Secretary-General to select MAG members. The general policy is that MAG members are appointed for a period of one year, renewable for two further consecutive years depending on their engagement in MAG activities.

There is also a policy of rotating off one-third of the MAG members every year to maintain diversity and bring new viewpoints into consideration. In exceptional circumstances, a person may continue beyond three years if there is a lack of candidates fitting the desired profile.

However, the exception appears to have become the norm: around 49 members have continued beyond three years, serving anywhere from four years up to as long as eight. No doubt some of them are exceptional talents who are difficult to replace, but the lack of transparency in the nomination system makes it difficult to determine the basis on which these members continued beyond the usual term.
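A count of this kind can be reproduced from the yearly membership lists. The sketch below is illustrative only: it assumes the lists have been flattened into (year, member, stakeholder group) records, and the sample entries and field layout are invented, not the researchers' actual dataset or method.

```python
from collections import defaultdict

# Hypothetical flattened records: one (year, member, group) entry per year served.
memberships = [
    (2006, "A. Example", "Government"),
    (2007, "A. Example", "Government"),
    (2008, "A. Example", "Government"),
    (2009, "A. Example", "Government"),
    (2010, "B. Example", "Civil Society"),
]

# Collect the distinct years each member appears in, and remember their group.
years_served = defaultdict(set)
group_of = {}
for year, member, group in memberships:
    years_served[member].add(year)
    group_of[member] = group

# Members who continued beyond the usual three-year term.
long_serving = {m: len(y) for m, y in years_served.items() if len(y) > 3}
print(f"{len(long_serving)} members served more than 3 years")

# Break the long-serving members down by stakeholder group.
by_group = defaultdict(int)
for member in long_serving:
    by_group[group_of[member]] += 1
for group, count in sorted(by_group.items(), key=lambda kv: -kv[1]):
    print(group, count)
```

The table that follows breaks these long-serving members down by stakeholder group.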

S. No. | Stakeholder | Number of years | Total Members continuing beyond 3 years
1 | Civil Society | 8, 6, 6, 4, 4 | 5
2 | Government/Industry | 4, 5 | 2
3 | Technical Community/Civil Society | 8, 8, 8, 6, 6, 4, 4, 4, 4, 4 | 10
4 | Industry/Civil Society | 8, 6 | 2
5 | Industry | 8, 7, 7, 6, 6, 4 | 6
6 | Industry/Tech Community/Civil Society | 8 | 1
7 | Government | 7, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4 | 19
8 | Academia | 6, 6, 5 | 3
9 | Industry/Tech Community | 6 | 1

Of the members who continued beyond 3 years, around 39% are from Government and related agencies. The next largest group is Technical Community/Civil Society with around 20% representation, followed by Industry at 12%, Civil Society at 10%, Academia at 6%, Government/Industry at 4%, Industry/Civil Society at 4%, and 2% each from Industry/Technical Community and Industry/Technical Community/Civil Society.
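As a quick sanity check, these shares can be re-derived from the totals in the table above (49 long-serving members in all). The snippet below simply recomputes the percentages from those counts and is not part of the original analysis.

```python
# Counts of members continuing beyond 3 years, taken from the table above.
beyond_three_years = {
    "Civil Society": 5,
    "Government/Industry": 2,
    "Technical Community/Civil Society": 10,
    "Industry/Civil Society": 2,
    "Industry": 6,
    "Industry/Tech Community/Civil Society": 1,
    "Government": 19,
    "Academia": 3,
    "Industry/Tech Community": 1,
}

total = sum(beyond_three_years.values())  # 49
for group, count in beyond_three_years.items():
    print(f"{group}: {count / total:.0%}")
# Government: 19/49, roughly 39%; Technical Community/Civil Society: 10/49, roughly 20%; and so on.
```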


Table with overlapping interests merged

S. No. | Stakeholder | Total Members continuing beyond 3 years
1 | Civil Society | 7 + 9 + 1 + 1 = 18
2 | Government | 19
3 | Tech Community | 9 + 1 + 1 + 1 = 12
4 | Industry | 6 + 2 + 1 + 1 + 2 = 13
5 | Academia | 3

When the overlapping categories are split up, so that, for example, a Technical Community/Civil Society member is counted once in the Technical Community group and once in the Civil Society group, the stakeholder representation is approximately as follows:

  • Government: 29%
  • Civil Society: 28%
  • Industry: 20%
  • Technical Community: 17%
  • Academia: 5%

This clearly shows that stakeholders from academia generally did not stay on the MAG beyond three years. Even when all members who have ever been on the MAG are taken into consideration, only around 8% of the representation has been from the academic community. This needs to be taken into account when new MAG members are selected in 2016.

The researcher has also looked at MAG representation by gender and by UN Regional Group. The results of the analysis are as follows:

The ratio of male to female members in the MAG is approximately 16:9, which works out to roughly 64% and 36% respectively.


Turning to the UN Regional Groups, the analysis yielded the following results:

The Western European and Others Group (WEOG) has the highest representation in the MAG, with a large number of members from Switzerland, the USA and the UK. It is followed by the Asia Pacific Group with 20% representation. The third largest is the African Group with 19% representation, followed by the Latin American and Caribbean Group (GRULAC) and the Eastern European Group with 13% and 12% representation respectively.


The representation of developed, developing and Least Developed Countries (LDCs) is as follows:

Developed countries have approximately 42% representation and developing countries 53%, while LDCs have a mere 5%. There should be an effort to improve LDC representation, as LDCs lag furthest behind in global ICT penetration.[2]



[1] Intgovforum.org, 'MAG Terms Of Reference' (2015) <http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference> accessed 13 July 2015.

[2] ICT Facts And Figures (1st edn, International Telecommunication Union 2015) <http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2015.pdf> accessed 11 July 2015.

Supreme Court Order is a Good Start, but is Seeding Necessary?

by Elonnai Hickok and Rohan George — last modified Sep 07, 2015 01:21 PM
This blog post seeks to unpack the ‘seeding’ process in the UIDAI scheme, understand the implications of the Supreme Court order on this process, and identify questions regarding the UID scheme that still need to be clarified by the court in the context of the seeding process.

Introduction

On August 11th 2015, in the writ petition Justice K.S Puttaswamy (Retd.) & Another vs. Union of India & Others1, the Supreme Court of India issued an interim order regarding the constitutionality of the UIDAI scheme. In response to the order, Dr. Usha Ramanathan published an article titled 'Decoding the Aadhaar judgment: No more seeding, not till the privacy issue is settled by the court' which, among other points, highlights concerns around the seeding of Aadhaar numbers into service delivery databases. She writes that "'seeding' is a matter of grave concern in the UID project. This is about the introduction of the number into every data base. Once the number is seeded in various databases, it makes convergence of personal information remarkably simple. So, if the number is in the gas agency, the bank, the ticket, the ration card, the voter ID, the medical records and so on, the state, as also others who learn to use what is called the 'ID platform', can 'see' the citizen at will."2

Building on this statement, this article seeks to unpack the 'seeding' process in the UIDAI scheme, understand the implications of the Supreme Court order on this process, and identify questions regarding the UID scheme that still need to be clarified by the Court in the context of the seeding process.

What is Seeding?

In the UID scheme, data points within the databases of service providers and banks are organized by individual Aadhaar numbers through a process known as 'seeding'. The UIDAI has released two documents on the seeding process: "Approach Document for Aadhaar Seeding in Service Delivery Databases version 1.0" (Version 1.0)3 and "Standard Protocol Covering the Approach & Process for Seeding Aadhaar Number in Service Delivery Databases June 2015 Version 1.1" (Version 1.1)4.

According to Version 1.0 "Aadhaar seeding is a process by which UIDs of residents are included in the service delivery database of service providers for enabling Aadhaar based authentication during service delivery."5 Version 1.0 further states that the "Seeding process typically involves data extraction, consolidation, normalization, and matching".6 According to Version 1.1, Aadhaar seeding is "a process by which the Aadhaar numbers of residents are included in the service delivery database of service providers for enabling de-duplication of database and Aadhaar based authentication during service delivery".7 There is an extra clause in Version 1.1's definition of seeding which includes "de-duplication" in addition to authentication.

Though not directly stated, it is envisioned that the Aadhaar number will be seeded into the databases of service providers and banks to enable cash transfers of funds. This was alluded to in the Version 1.1 document with the UIDAI stating "Irrespective of the Scheme and the geography, as the Aadhaar Number of a given Beneficiary finally has to be linked with the Bank Account, Banks play a strategic and key role in Seeding."8

How does the seeding process work?

The seeding process itself can be done through manual/organic processes or algorithmic/inorganic processes. In the inorganic process, the Aadhaar database is matched with the data available to the service provider - namely the database of beneficiaries, KYR+ data from enrolment agencies, and the EID-UID database from the UIDAI. Once compared and a match is found - for example between KYR fields in the service delivery database and KYR+ fields in the Aadhaar database - the Aadhaar number is seeded into the service delivery database.9
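As an illustration of what such an algorithmic (inorganic) match might look like, the sketch below matches simplified beneficiary records against KYR+/EID-UID style records on name and date of birth and writes the Aadhaar number into the matched beneficiary record. The schemas, field names and matching rule are invented for illustration and are not taken from the UIDAI documents.

```python
# Illustrative, simplified records -- not the actual UIDAI or departmental schemas.
kyr_plus = [  # KYR+ / EID-UID style data made available for matching
    {"aadhaar": "XXXX-XXXX-1234", "name": "a sharma", "dob": "1980-01-01"},
    {"aadhaar": "XXXX-XXXX-5678", "name": "b devi", "dob": "1975-05-20"},
]

beneficiaries = [  # service delivery (e.g. PDS) database, not yet seeded
    {"ration_card": "RC-001", "name": "A Sharma", "dob": "1980-01-01", "aadhaar": None},
    {"ration_card": "RC-002", "name": "C Kumar", "dob": "1990-11-11", "aadhaar": None},
]

def normalise(name: str) -> str:
    """Crude normalisation of names before matching."""
    return " ".join(name.lower().split())

# Index the KYR+ records by (normalised name, date of birth).
index = {(normalise(r["name"]), r["dob"]): r["aadhaar"] for r in kyr_plus}

# 'Seed' the Aadhaar number into the beneficiary record when a match is found.
for record in beneficiaries:
    key = (normalise(record["name"]), record["dob"])
    if key in index:
        record["aadhaar"] = index[key]

print(beneficiaries)
# RC-001 is now seeded with XXXX-XXXX-1234; RC-002 remains unmatched and would
# have to be seeded organically (manually).
```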

Organic seeding can be carried out via a number of methods, but the method recommended by the UIDAI is door-to-door collection of Aadhaar numbers from residents, which are subsequently uploaded into the service delivery database either manually or through the use of a tablet or smartphone. Perhaps demonstrating that technology cannot be used as a 'patch' for a broken or premature system, organic (manual) seeding is suggested as the preferred process by the UIDAI due to challenges such as lack of digitization of beneficiary records, lack of standardization in name and address records, and incomplete data.10

According to the 1.0 Approach Paper, to facilitate the seeding process the UIDAI has developed an in-house software application known as Ginger. Service providers that adopt the Aadhaar number must move their existing databases onto the Ginger platform, which then organizes the present and incoming data in the database by individual Aadhaar numbers. This 'organization' can be done automatically or manually. Once organized, data can be queried by Aadhaar number by persons on the 'control' end of the Ginger platform.11

In practice this means that during an authentication, in which the UIDAI responds to a service provider with a 'yes' or 'no' response, the UIDAI would have access to at least two sets of data: (1) transaction data (the date, time, device number, and Aadhaar number of the individual authenticating); and (2) data associated with an individual Aadhaar number within a database that has been seeded with Aadhaar numbers (historical and incoming). According to the Approach Document version 1.0, "The objective here is that the seeding process/utility should be able to access the service delivery data and all related information in at least the read-only mode."12 The Version 1.1 document states that "Software application users with authorized access should be able to access data online in a seamless fashion while providing service benefit to residents."13

What are the concerns with seeding?

With the increased availability of data analysis and processing technologies, organisations can link disparate data points stored across databases so that the data can be related and analysed to derive holistic, intrinsic, and/or latent assessments. This can allow for deeper and more useful insights than otherwise standalone data would yield. In the context of the government linking data, such "relating" can be useful - enabling the government to build a holistic and more accurate picture and to develop data-informed policies through research14. Yet allowing disparate data points to be merged and linked to each other raises questions about privacy and civil liberties - as well as more fundamental questions about purpose, access, consent and choice. To name a few concerns, linked data can be used to create profiles of individuals, it can facilitate surveillance, it can enable new and unintended uses of data, and it can be used for discriminatory purposes.
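To make the linking concern concrete, the following minimal sketch shows how two independently seeded databases become trivially joinable once they share the Aadhaar number as a common key; the databases, fields and values are hypothetical and purely illustrative.

```python
# Two hypothetical databases, each seeded with the same identifier.
pds_db = {
    "XXXX-XXXX-1234": {"ration_card": "RC-001", "monthly_foodgrain_kg": 25},
}
bank_db = {
    "XXXX-XXXX-1234": {"account": "SB-9988", "last_transfer": "2015-08-01"},
}

# Convergence: anyone with read access to both databases can build a combined
# profile simply by joining on the shared Aadhaar number.
profiles = {
    aadhaar: {**pds_db.get(aadhaar, {}), **bank_db.get(aadhaar, {})}
    for aadhaar in pds_db.keys() | bank_db.keys()
}

print(profiles["XXXX-XXXX-1234"])
# A single record now combines ration entitlements with bank account details.
```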

The fact that the seeding process is meant to facilitate the extraction, consolidation, normalization and matching of data so that it can be queried by Aadhaar number, and that existing databases can be transposed onto the Ginger platform, gives rise to Dr. Ramanathan's concerns. She argues that anyone with access to the 'control' end of the Ginger platform can access all data associated with an Aadhaar number, that convergence can now easily be initiated with databases on the Ginger platform, and that profiling of individuals can take place through the linking of data points via the Ginger platform.

How does the Supreme Court Order impact the seeding process and what still needs to be clarified?

In the interim order the Supreme Court lays out four welcome clarifications and limitations on the UID scheme:

  1. The Union of India shall give wide publicity in the electronic and print media including radio and television networks that it is not mandatory for a citizen to obtain an Aadhaar card;
  2. The production of an Aadhaar card will not be a condition for obtaining any benefits otherwise due to a citizen;
  3. The Unique Identification Number or the Aadhaar card will not be used by the respondents for any purpose other than the PDS Scheme and in particular for the purpose of distribution of foodgrains, etc. and cooking fuel, such as kerosene. The Aadhaar card may also be used for the purpose of the LPG Distribution Scheme;
  4. The information about an individual obtained by the Unique Identification Authority of India while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a Court for the purpose of criminal investigation.15

In some ways, the court order addresses some of the concerns regarding the seeding of Aadhaar numbers by limiting the scope of the seeding process to the PDS scheme, but there are still a number of aspects of the scheme as they pertain to the seeding process that need to be addressed by the court.

These include:

The Process of Seeding

Prior to the Supreme Court interim order, the above concerns were quite broad in scope as Aadhaar could be adopted by any private or public entity - and the number was being seeded in databases of banks, the railways, tax authorities, etc. The interim order, to an extent, lessens these concerns by holding that  "The Unique Identification Number or the Aadhaar card will not be used by the respondents for any purpose other than the PDS Scheme…".

However, the Court could perhaps have been more specific regarding what is included under the PDS scheme, because the scheme itself is broad. That said, the restrictions put in place by the court create a form of purpose limitation and a boundary of proportionality on the UID scheme. By limiting the purpose of the Aadhaar number to use in the PDS system, the Aadhaar number can only be seeded into the databases of entities involved in the PDS Scheme, rather than any entity that had adopted the number. Despite this, the seeding process is an issue in itself for the following reasons:

Access: Embedding service delivery databases and bank databases with the Aadhaar number allows the UIDAI or authorized users to access information in these databases. According to Version 1.1 of the seeding document, the UIDAI is carrying out the seeding process through 'seeding agencies'. These agencies can include private companies, public limited companies, government companies, PSUs, semi-government organizations, and NGOs that have been registered and operating in India for at least three years.16 Though these agencies are under contract with the UIDAI, it is unclear what information they would be able to access. This ambiguity leaves the data collected by the UIDAI open to potential abuse and unauthorized access. Thus, the Court ruling fails to provide clarity on the access that the seeding process enables for the UIDAI and for private parties.

Consent: Upon enrolling for an Aadhaar number, individuals have the option of consenting to the UIDAI sharing information in three instances:
  • "I have no objection to the UIDAI sharing information provided by me to the UIDAI with agencies engaged in delivery of welfare services."
  • "I want the UIDAI to facilitate opening of a new Bank/Post Office Account linked to my Aadhaar Number. I have no objection to sharing my information for this purpose."
  • "I have no objection to linking my present bank account provided here to my Aadhaar number."17
Aside from the vague and sweeping language of the actions users consent to, which raises questions about how informed an individual is about the information he consents to share, at no point is an individual provided the option of consenting to the UIDAI accessing data - historic or incoming - that is stored in the database of a service provider in the PDS system seeded with the Aadhaar number. Furthermore, as noted earlier, the fact that the UIDAI concedes that a beneficiary has to be linked with a bank account raises questions of consent, as linking one's bank account with one's Aadhaar number is an optional part of the enrollment process. Thus, even with the restrictions from the court order, if individuals want to use their Aadhaar number to access benefits, they must also seed their number into their bank accounts. On this point, an order from the Finance Ministry clarified that the seeding of Aadhaar numbers into databases is a voluntary decision, but that if a beneficiary provides their number on a voluntary basis, it can be seeded into a database.18

Withdrawing Consent: The Court also did not directly address whether individuals could withdraw consent after enrolling in the UID scheme - and, if they did, whether Aadhaar numbers should be 'unseeded' from PDS-related databases. Similarly, the Court did not clarify whether services that have seeded the Aadhaar number, but are not PDS-related, now need to unseed the number. Though news items indicate that in some cases (not all) organizations and government departments not involved in the PDS system are stopping the seeding process19, there is no indication of departments undertaking an 'unseeding' process. Nor is there any indication of the UIDAI allowing enrolled individuals to 'un-enroll' from the scheme. In being silent on issues around consent, the court order inadvertently overlooks the risk of function creep possible through the seeding process, which "allows numerous opportunities for expansion of functions far beyond those stated to be its purpose"20.

Verification and liability: According to Version 1.0 and Version 1.1 of the seeding documents, "no seeding is better than incorrect seeding". This is because incorrect seeding can lead to inaccuracies in the authentication process and result in individuals entitled to benefits being denied them. To avoid errors in the seeding process, the UIDAI has suggested several steps, including using the "Aadhaar Verification Service", which verifies an Aadhaar number submitted for seeding against the Aadhaar number and demographic data, such as gender and location, in the CIDR. Though it recognizes the importance of accuracy in the seeding process, the UIDAI takes no responsibility for the same. According to Version 1.1 of the seeding document, "the responsibility of correct seeding shall always stay with the department, who is the owner of the database."21 This replicates a disturbing trend in the implementation of the UID scheme, where the UIDAI 'initiates' different processes through private sector companies but does not take responsibility for such processes.22
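The kind of check implied by such a verification step can be sketched as follows. This is only an illustration: the CIDR lookup here is a stand-in dictionary, not the actual Aadhaar Verification Service API, and the fields and values are hypothetical.

```python
# Stand-in for CIDR demographic records -- hypothetical, for illustration only.
cidr = {
    "XXXX-XXXX-1234": {"gender": "F", "district": "Mysuru"},
}

def verify_before_seeding(aadhaar: str, gender: str, district: str) -> bool:
    """Return True only if the submitted number exists and its demographic
    fields match -- 'no seeding is better than incorrect seeding'."""
    record = cidr.get(aadhaar)
    return (
        record is not None
        and record["gender"] == gender
        and record["district"] == district
    )

# Seed only on a successful verification; otherwise leave the record untouched.
if verify_before_seeding("XXXX-XXXX-1234", "F", "Mysuru"):
    print("verified: safe to seed")
else:
    print("mismatch: do not seed")
```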

The Scope of the UIDAI's mandate and the necessity of seeding

Aside from the problems within the seeding process itself, there is a question of the scope of the UIDAI's mandate and the role that seeding plays in fulfilling this. This is important in understanding the necessity of the seeding process.

On its official website, the UIDAI states that its mandate is "to issue every resident a unique identification number linked to the resident's demographic and biometric information, which they can use to identify themselves anywhere in India, and to access a host of benefits and services."23 Though the Supreme Court order clarifies the use of the Aadhaar number, it does not address the actual legality of the UIDAI's mandate - as there is no enabling statute in place - and it does not clarify or confirm the scope of that mandate.

In Version 1.0 of the seeding document, the UIDAI states that the "Aadhaar numbers of enrolled residents are being 'seeded' ie. included in the databases of service providers that have adopted the Aadhaar platform in order to enable authentication via the Aadhaar number during a transaction or service delivery."24 This statement is only partially correct. For merely providing and authenticating an Aadhaar number, seeding is not necessary, as the Aadhaar number submitted for verification only needs to be compared with the records in the CIDR to complete authentication. Yet, in an example justifying the need for seeding, the Version 1.0 document states: "A consolidated view of the entire data would facilitate the social welfare department of the state to improve the service delivery in their programs, while also being able to ensure that the same person is not availing double benefits from two different districts."25 For this purpose, too, seeding is unnecessary, as it would be simple to correlate PDS usage with an Aadhaar number within the PDS database itself. Even if limited to the PDS system, seeding in the databases of service providers is only necessary for the creation of, and access to, comprehensive information about an individual in order to determine eligibility for a service. Further, seeding is only necessary in the databases of banks if the Aadhaar number moves from being an identity factor to a transactional factor - something that the UIDAI seems to envision, as the Version 1.1 seeding document states that Aadhaar is sufficient to transfer payments to an individual and thus plays a key role in cash transfers of benefits.26
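As a hedged illustration of this point, double benefits can be detected within the PDS transaction records alone, without seeding any other database: once the number presented at authentication is recorded against each transaction, duplicates are a simple query over that one log. The log format and values below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical PDS transaction log: one entry per benefit drawn in a month.
pds_log = [
    {"aadhaar": "XXXX-XXXX-1234", "district": "Mysuru", "month": "2015-08"},
    {"aadhaar": "XXXX-XXXX-1234", "district": "Tumakuru", "month": "2015-08"},
    {"aadhaar": "XXXX-XXXX-5678", "district": "Mysuru", "month": "2015-08"},
]

# Flag numbers drawing benefits from more than one district in the same month,
# using only the PDS database itself -- no bank or other databases involved.
seen = defaultdict(set)
for entry in pds_log:
    seen[(entry["aadhaar"], entry["month"])].add(entry["district"])

duplicates = {key: districts for key, districts in seen.items() if len(districts) > 1}
print(duplicates)
# e.g. {('XXXX-XXXX-1234', '2015-08'): {'Mysuru', 'Tumakuru'}}
```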

Conclusion

Despite the fact that adherence to the interim order from the Supreme Court has been ad hoc27, the order does provide a number of welcome limitations and clarifications to the UID Scheme. Yet, despite this limited clarification from the Supreme Court and further clarification from the Finance Ministry's order, the process of seeding and its necessity remain unclear. Is the UIDAI taking fully informed consent for the seeding process and what it will enable? Should the UIDAI be liable for the accuracy of the seeding process? Is the seeding of service provider and bank databases necessary for the UIDAI to fulfill its mandate? Is the UIDAI's mandate to provide an identifier and an authentication-of-identity mechanism, or is it to provide authentication of an individual's eligibility to receive services? Is this mandate backed by law and by adequate safeguards? Can the court order be interpreted to mean that, to deliver services in the PDS system, the UIDAI will need access to bank accounts or other transactions/information stored in a service provider's database to verify the claims of the user?

Many news items reflect a concern about convergence arising out of the UID scheme.28 To be clear, the process of seeding is not the same as convergence: seeding enables convergence, which in turn can enable profiling, surveillance, and so on. That said, the seeding process needs to be examined more closely by the public and the court to ensure that society can reap the benefits of seeding while avoiding the problems it may pose.


[1]. Justice K.S Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at:  http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[2]. Usha Ramanathan. Decoding the Aadhaar judgment: No more seeding, not till the privacy issue is settled by the court. The Indian Express. August 12th 2015. Available at: http://indianexpress.com/article/blogs/decoding-the-aadhar-judgment-no-more-seeding-not-till-the-privacy-issue-is-settled-by-the-court/

[3]. UIDAI. Approach Document for Aadhaar Seeding in Service Delivery Databases. Version 1.0. Available at: https://authportal.uidai.gov.in/static/aadhaar_seeding_v_10_280312.pdf

[4]. UIDAI. Standard Protocol Covering the Approach & Process for Seeding Aadhaar Numbers in Service Delivery Databases. Available at: https://uidai.gov.in/images/aadhaar_seeding_june_2015_v1.1.pdf

[5]. Version 1.0 pg. 2

[6]. Version 1.0 pg. 19

[7]. Version 1.1 pg. 3

[8]. Version 1.1 pg. 7

[9]. Version 1.1 pg. 5 -7

[10]. Version 1.1 pg. 7-13

[11]. Version 1.0 pg 19-22

[12]. Version 1.0 pg. 4

[13]. Version 1.1 pg. 5, figure 3.

[14]. David Card, Raj Chetty, Martin Feldstein, and Emmanuel Saez. Expanding Access to Administrative Data for Research in the United States. Available at: http://obs.rc.fas.harvard.edu/chetty/NSFdataaccess.pdf

[15]. Justice K.S Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at:  http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[16]. Version 1.1 pg. 18

[17]. Aadhaar Enrollment Form from Karnataka State. http://www.karnataka.gov.in/aadhaar/Downloads/Application%20form%20-%20English.pdf

[18]. Business Line. Aadhaar only for foodgrains, LPG, kerosene, distribution. August 27th 2015. Available at: http://www.thehindubusinessline.com/economy/aadhaar-only-for-foodgrains-lpg-kerosene-distribution/article7587382.ece

[19]. Bharti Jain. Election Commission not to link poll rolls to Aadhaar. The Times of India. August 15th 2015. Available at: http://timesofindia.indiatimes.com/india/Election-Commission-not-to-link-poll-rolls-to-Aadhaar/articleshow/48488648.cms

[20]. Graham Greenleaf. 'Access all areas': Function creep guaranteed in Australia's ID Card Bill (No.1). Computer Law & Security Review. Volume 23, Issue 4. 2007. Available at: http://www.sciencedirect.com/science/article/pii/S0267364907000544

[21]. Version 1.1 pg. 3

[22]. For example, the UIDAI depends on private companies to act as enrollment agencies and collect, verify, and enroll individuals in the UID scheme. Though the UID enters into MOUs with these organizations, the UID cannot be held responsible for the security or accuracy of data collected, stored, etc. by these entities. See draft MOU for registrars: https://uidai.gov.in/images/training/MoU_with_the_State_Governments_version.pdf

[23]. Justice K.S Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at:  http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[24]. Version 1.0 pg.3

[25]. Version 1.0  pg.4

[26]. Version 1.1 pg. 3

[27]. For example, there are reports of Aadhaar being introduced for different services such as education. See: Tanu Kulkarni. Aadhaar may soon replace roll numbers. The Hindu. August 21st, 2015. For example: http://www.thehindu.com/news/cities/bangalore/aadhaar-may-soon-replace-roll-numbers/article7563708.ece

[28]. For example see: Salil Tripathi. A dangerous convergence. July 31st. 2015. The Live Mint. Available at: http://www.livemint.com/Opinion/xrqO4wBzpPbeA4nPruPNXP/A-dangerous-convergence.html

Are we Throwing our Data Protection Regimes under the Bus?

by Rohan George — last modified Sep 10, 2015 02:02 PM
In this blog post Rohan examines why the principle of consent is providing us increasingly less of an aegis in protecting our data.

Consent is complicated. What we think of as reasonably obtained consent varies substantially with the circumstances. For example, in trying rape cases, the UK justice system has moved to recognise complications like alcohol and its effect on explicit consent[1]. Yet in contract law, consent may be implied simply when one person accepts another's work on a contract without objection[2]. These situations highlight the differences between the various forms of informed consent and the implications for its validity.

Consent has emerged as a key principle in regulating the use of personal data, and different countries have adopted different regimes, ranging from comprehensive regimes like the EU's to more sectoral approaches like that of the USA. However, in our modern epoch, characterised by now-commonplace big data analytics, many commentators have challenged the efficacy and relevance of consent in data protection. I argue that we may even risk throwing our data protection regimes under the proverbial bus should we continue to focus on consent as a key pillar of data protection.

Consent as a tool in Data Protection Regimes

In fact, even a cursory review of current data protection laws around the world shows the extent of the law's reliance on consent. In the EU, for example, Article 7 of the Data Protection Directive, passed in 1995, provides that data processing is only legitimate when "the data subject has unambiguously given his consent"[3]. Article 8, which guards against the processing of sensitive data, provides that such prohibitions may be lifted when "the data subject has given his explicit consent to the processing of those data"[4]. Even as the EU attempts to strengthen data protection within the bloc with the proposed reforms to data protection[5], the focus on the consent of the data subject remains strong. There are proposals for an "unambiguous consent by the data subject"[6] requirement to be put in place. Such consent would be mandatory before any data processing can occur[7].

Despite the very different overall approach that the USA takes to data protection and privacy, consent is an equally integral part of its data protection frameworks. In his book Protectors of Privacy[8], Abraham Newman describes two main types of privacy legislation: comprehensive and limited. He argues that places like the EU have adopted comprehensive regimes, which primarily seek to protect individuals because of the "informational and power asymmetry" between individuals and organisations[9]. On the other hand, he classifies the American approach as limited, focusing on more sectoral protections and principles of fair information practice instead of overarching legislation[10]. These sectoral laws include the Fair Credit Reporting Act[11] (which governs consumer credit reporting), the Privacy Act[12] (which governs data collected by the Federal government) and the Electronic Communications Privacy Act[13] (which deals with email communications), among others. However, the Federal Trade Commission describes itself as having only "limited authority over the collection and dissemination of personal data collected online"[14].

This is because the general data processing that is commonplace in today's era of big data is regulated only by the privacy protections that come from the Federal Trade Commission's (FTC) Fair Information Practice Principles (FIPPs). Expectedly, consent is equally important under the FTC's FIPPs. The FTC describes the principle of consent as "the second widely-accepted core principle of fair information practice"[15], in addition to the principle of notice. Other guidelines on fair data processing, published by organisations like the Organisation for Economic Cooperation and Development[16] (OECD) or the Canadian Standards Association[17] (CSA), also include consent as a key mechanism in data protection.

The origins of consent in privacy and data protection

Given the clearly extensive reliance on consent in data protection, it seems prudent to examine the origins of consent in privacy and data protection. Just why does consent have so much weight in data protection?

One reason is that data protection, along with inextricably linked concerns about privacy, could be said to be rooted in protecting private property. It was argued that the “early parameters of what was to become the right to privacy were set in cases dealing with unconventional property claims”[18], such as unconsented publication of personal letters[19] or photographs[20]. It was the publication of Brandeis and Warren’s well-known article “The Right to Privacy”[21], that developed “the current philosophical dichotomy between privacy and property rights”[22], as they asserted that privacy protections ought to be recognised as a right in and of themselves and needed separate protection[23]. Indeed, it was Warren and Brandeis who famously borrowed Justice Cooley's expression that privacy is the “right to be let alone”[24].

On the other side of the debate are scholars like Epstein and Posner, who see privacy protections as part of protecting personal property under tort law[25]. However, the central point is that most scholars seem to acknowledge the relationship between privacy and private property. Even Brandeis and Warren themselves argued that one general aim of privacy is “to protect the privacy of private life, and to whatever degree and in whatever connection a man's life has ceased to be private”[26].

It is also important to locate the idea of consent within the domain of privacy and private property protections. Ostensibly, consent seems to have the effect of lessening the privacy protections afforded in a particular situation to a person, because by acquiescing to the situation, one could be seen as waiving their privacy concerns. Brandeis and Warren concur with this position as they acknowledge how “the right to privacy ceases upon the publication of the facts by the individual, or with his consent”[27]. They assert that this is “but another application of the rule which has become familiar in the law of literary and artistic property”[28].

Perhaps the most eloquent articulation of the importance of consent in privacy comes from Sir Edward Coke's idea that "every man's house is his castle"[29]. Though the 'Castle Doctrine' has been used as a justification for protecting one's property with the use of force[30], I think that implied in the idea of the 'Castle Doctrine' is that consent is necessary in order to preserve privacy. If not, why would anyone be justified in preventing trespass, other than to prevent unconsented entry or use of their property? The doctrine of "Volenti non fit injuria"[31], or 'to one who consents no injury is done', is thus the very embodiment of the role of consent in protecting private property. And as conceptions of private property develop to recognise that the data one gives out is part of one's private property, for example in US v. Jones, which led scholars to assert that "people should be able to maintain reasonable expectations of privacy in some information voluntarily disclosed to third parties"[32], so does consent act as an important aspect of privacy protection.

Yet, linking privacy with private property is not universally accepted as the conception of privacy. For instance, Alan Westin, in his book Privacy and Freedom[33], describes privacy as “the right to control information about oneself”[34]. Another scholar, Ruth Gavison, contends instead that “our interest in privacy is related to our concern over our accessibility to others: the extent to which we are known to others, the extent to which others have physical access to us, and the extent to which we are the subject of others' attention”[35].

While these alternative accounts of privacy's foundations differ from the property-based conception, consent can still be located within them. Regarding Westin’s argument, I think that implicit in the right to control one’s information are ideas about individual autonomy, which is exercised through giving or withholding one’s consent. Similarly, Gavison herself states that privacy functions to advance “liberty, autonomy and selfhood”[36]. Consent plays a key role in upholding the liberty, autonomy and selfhood that privacy affords us. Clearly, therefore, it is far from unfounded to claim that consent is an integral part of protecting privacy.

Consent, Big Data and Data protection

Given the solid underpinnings of the principle of consent in privacy protection, it is hardly a coincidence that consent became an integral part of data protection. However, with the rise of big data practices, one quickly finds that consent ceases to work effectively as a tool for protecting privacy. In a big data context, Solove argues that privacy regulation rooted in consent is ineffective, because garnering consent amidst the ubiquitous data collection underlying the online services one uses as part of daily life is unmanageable[37]. Additionally, the secondary uses of one’s data are difficult to assess at the point of collection, and meaningful consent for secondary use is consequently difficult to obtain[38]. This section examines these two primary consequences of prioritising consent amidst big data practices.

Consent places unrealistic and unfair expectations on the Individual

As noted by Tene and Polonetsky, the first concern is that current privacy frameworks which emphasize informed consent “impose significant, sometimes unrealistic, obligations on both organizations and individuals”[39]. The premise behind this argument stems from the way consent is typically garnered by organisations, especially for the use of their services. An examination of the terms of use policies of banks, online video streaming websites, social networking sites, and online fashion or more general online shopping websites reveals a deluge of information that the user has to comprehend. Moreover, there are too many “entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity”[40].

As Cate and Mayer-Schönberger note in the Microsoft Global Privacy Summit Summary Report, “almost everywhere that individuals venture, especially online, they are presented with long and complex privacy notices routinely written by lawyers for lawyers, and then requested to either “consent” or abandon the use of the desired service”[41]. In some cases, organisations try to simplify these policies for the users of their service, but such initiatives make up the minority of terms of use policies. Tene and Polonetsky assert that “it is common knowledge among practitioners in the field that privacy policies serve more as liability disclaimers for businesses than as assurances of privacy for consumers”[42].

However, it is equally important to consider the principle of consent from the perspective of companies. At a time when many businesses have to comply with numerous regulations and processes in the name of ‘compliance’[43], the obligations attached to obtaining consent can burden some businesses. Firms have to gather consent while also maintaining a good user or customer experience, which is a tricky balance to strike. Requiring consent at every stage, for example, may make the user experience much worse: imagine having to give consent for your profile to be uploaded every time you achieved a high score in a video game. At the same time, “organizations are expected to explain their data processing activities on increasingly small screens and obtain consent from often-uninterested individuals”[44]. Given these factors, it is somewhat understandable that companies garner consent for all possible (secondary) uses upfront, as it is otherwise not feasible to keep returning to users for consent.

Nonetheless, this results in situations where “data processors can perhaps too easily point to the formality of notice and consent and thereby abrogate much of their responsibility”[45]. The totality of the situation shows the odds stacked against the individual. It could even be argued that this is one manifestation of the informational and power asymmetry that exists between individuals and organisations[46], because users may unwittingly agree to unfair, unclear or even unknown terms, conditions and data practices. Not only are individuals greatly misinformed about the data collected about them, but the vast majority of people do not even read these Terms and Conditions or End User License Agreements[47]. Solove also argues that “people often lack enough expertise to adequately assess the consequences of agreeing to certain present uses or disclosures of their data”[48].

While the organisational practice of providing extensive and complicated terms of use policies is not illegal, the fact that, by one estimation, it would take 76 working days to review the privacy policies you have agreed to online[49], or, by another, that in the USA the opportunity cost society incurs in reading privacy policies is $781 billion[50], should not go unnoticed. I do think it is unfair for the law to put users into such situations, where they are “forced to make overly complex decisions based on limited information”[51]. There have been laudable attempts by some government organisations, like Canada’s Office of the Privacy Commissioner and the USA’s Federal Trade Commission, to provide guidance to firms on making their privacy policies more accessible[52]. However, such guidance is hard to enforce. Therefore, when users have neither the expertise nor the rigour to review privacy policies effectively, the consent they provide will naturally be far from informed.
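To see how estimates of this kind are put together, the back-of-envelope Python sketch below multiplies an assumed reading time per policy by an assumed number of policies encountered per year and an assumed hourly value of time. Every input is an illustrative assumption made for this article, not a figure taken from McDonald and Cranor or the BBC, so the outputs will differ from the estimates cited above.

# Back-of-envelope estimate of the time and opportunity cost of reading
# privacy policies. All inputs are illustrative assumptions.

MINUTES_PER_POLICY = 10        # assumed average reading time per policy
POLICIES_PER_YEAR = 1_460      # assumed: roughly 4 distinct sites or services per day
HOURLY_VALUE_OF_TIME = 20.0    # assumed opportunity cost of an hour, in USD
INTERNET_USERS = 220_000_000   # assumed number of adult internet users

hours_per_person = MINUTES_PER_POLICY * POLICIES_PER_YEAR / 60
working_days_per_person = hours_per_person / 8            # 8-hour working days
national_cost = hours_per_person * HOURLY_VALUE_OF_TIME * INTERNET_USERS

print(f"{hours_per_person:.0f} hours, or {working_days_per_person:.0f} working days, per person per year")
print(f"Illustrative national opportunity cost: ${national_cost / 1e9:.0f} billion")

The point of the exercise is not the particular numbers but their shape: any plausible combination of inputs produces a burden no individual could realistically bear.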

Secondary use, Aggregation and Superficial Consent

What amplifies this informational asymmetry is the potential for the aggregation of individuals’ data and the subsequent secondary use of the data collected. “Even if people made rational decisions about sharing individual pieces of data in isolation, they greatly struggle to factor in how their data might be aggregated in the future”[53].

This has to do with the prevalence of big data analytics that characterizes our modern epoch, and it has major implications for the nature and meaningfulness of the consent users provide. By definition, “big data analysis seeks surprising correlations”[54], and some of its most insightful results are counterintuitive and nearly impossible to conceive of at the point of primary data collection. One noteworthy example comes from the USA, with Walmart's predictive analytics. By studying the purchasing patterns of its loyalty card holders[55], the company ascertained that, prior to a hurricane, the most popular items people tend to buy are actually Pop Tarts (a pre-baked toaster pastry) and beer[56]. These correlations are highly counterintuitive and far from what people expect to be necessities before a hurricane. The insights allowed Walmart stores to be stocked with the most relevant products at the time of need. This is just one example of how data might be repurposed and aggregated for a novel purpose, but the question about the nature of the consent Walmart obtained for the collection and analysis of its loyalty card holders' shopping habits still stands.

One reason secondary uses make consent less meaningful has been articulated by De Zwart et al, who observe that “the idea of consent becomes unworkable in an environment where it is not known, even by the people collecting and selling data, what will happen to the data”[57]. Taken together with Solove’s aggregation effect, two points become apparent:

  1. Data we consent to have collected about us may be aggregated with other data we may have revealed in the past. While each piece may be innocuous on its own, there is a risk that future aggregation will create new information which one may find intrusive and would not have consented to. Current data protection regimes make it hard to provide such consent, because there is no way for the user to know how their past and present data may be aggregated in the future.
  2. Data we consent to have collected for one specific purpose may be used in a myriad of other ways. The user has virtually no way of knowing how their data might be repurposed, because often the collectors of that data do not know either[58].

Therefore, as big data practices of aggregation, repurposing and secondary use become commonplace, regulators' reliance on principles of purpose limitation and the mechanism of consent for robust data protection seems suboptimal at the very least.

Other problems with the mechanism of consent in the context of Big Data

On one end of the spectrum are situations where organisations garner consent for future secondary uses at the time of data collection. As discussed earlier, this is currently the common practice for organisations and the likelihood of users providing informed consent is low.

However, it is equally important to consider situations at the other end of the spectrum, where obtaining user consent for secondary use becomes too expensive and cumbersome[59]. As a result, potentially socially valuable secondary uses of data for research and innovation, or simply for “the practice of informed and reflective citizenship”[60], may not take place. While potential social research may be hindered by the consent requirement, the more pressing reality is that one cannot give meaningful consent to unknown secondary uses of data. Essentially, not knowing what you are consenting to scarcely provides the individual with any semblance of strong privacy protection, and so the consent that individuals provide is superficial at best.

Many scholars also point to the binary nature of consent as it stands today[61]. Solove notes that consent decisions in data protection are far more nuanced than this binary framing suggests[62], while Cate and Mayer-Schönberger go further, asserting that “binary choice is not what the privacy architects envisioned four decades ago when they imagined empowered individuals making informed decisions about the processing of their personal data”. This dichotomous nature of consent further reduces its usefulness in data protection regimes.

Whether data collection is opted into or opted out of also has a bearing on the nature of the consent obtained. Many argue that regulations offering opt-out choices are not effective, as “opt-out consent might be the product of mere inertia or lack of awareness of the option to opt out”[63]. This is in line with initiatives around the world to make the gathering of consent more explicit by offering opt-in rather than opt-out choices. Notable articulations of the impetus to embrace opt-in regimes include that of former FTC chairman Jon Leibowitz as early as 2007[64], and opt-in consent is being actively considered by the EU in the reform of its data protection laws[65].

However, as Solove rightly points out, opt-in consent is problematic as well[66]. There are a few reasons for this. First, many data collectors have the “sophistication and motivation to find ways to generate high opt-in rates”[67] by “conditioning products, services, or access on opting in”[68]. In essence, they leave individuals no choice but to opt into data collection, because using the particular product or service is dependent, or ‘conditional’, on explicit consent. A pertinent example of this is the end-user license agreement for Apple’s iTunes Store[69]. Solove rightly notes that “if people want to download apps from the store, they have no choice but to agree. This requirement is akin to an opt-in system — affirmative consent is being sought. But hardly any bargaining or choosing occurs in this process”[70]. Second, as stated earlier, obtaining consent runs the risk of impeding potential innovation or research because it is too cumbersome or expensive to obtain[71].

Third, as Tene and Polonetsky argue, “collective action problems threaten to generate a suboptimal equilibrium where individuals fail to opt into societally beneficial data processing in the hope of free-riding on others’ good will”[72]. A useful example comes from another context in which obtaining consent is the difference between life and death: organ donation. The gulf in consenting donors between countries with an opt-in regime for organ donation and countries with an opt-out regime is staggering. Even countries that are culturally similar, such as Austria and Germany, exhibit vast differences in donation rates: Austria at 99% compared to just 12% in Germany[73]. This suggests that, in terms of obtaining consent (especially for socially valuable actions), opt-in methods may be limiting, because many people simply go along with whatever default is presented to them, even when the cost of changing it is low[74].

The above section demonstrates how consent may be of limited use as a tool for data protection regimes, especially in a big data context. That said, consent is not in itself a useless or outdated concept. The issues raised above articulate the difficulties that extensive reliance on consent poses in a big data context, and consent should still remain a part of data protection regimes. However, there are both better ways to obtain consent (for organisations that collect data) and other areas, beyond the moment of data collection, on which to focus regulatory attention.

What can organisations do better to obtain more meaningful consent

Organisations that collect data could alter the way they obtain user consent. Most people can attest to having checked a box lying surreptitiously next to the words ‘I agree’, thereby agreeing to the Terms and Conditions or End-User License Agreement for a particular service or product. This reflects the need for both parties to assent to the terms of a contract in order for the contract to be valid[75]. Some of the more common types of online agreement that users enter into are Clickwrap and Browsewrap agreements. A Clickwrap agreement is “formed entirely in an online environment such as the Internet, which sets forth the rights and obligations between parties”[76]; such agreements “require a user to click "I agree" or “I accept” before the software can be downloaded or installed”[77]. Browsewrap agreements, on the other hand, “try to characterize your simple use of their website as your ‘agreement’ to a set of terms and conditions buried somewhere on the site”[78].

Because Browsewrap agreements do not “require a user to engage in any affirmative conduct”[79], the kind of consent they obtain is highly superficial. In fact, many argue that such agreements are somewhat unscrupulous, because users are seldom aware that they exist[80], hidden as they often are in small print[81] or below the download button[82]. Courts have begun to consider unfair those terms and practices which “hold website users accountable for terms and conditions of which a reasonable Internet user would not be aware just by using the site”[83]. For example, in In re Zappos.com, Inc., Customer Data Security Breach Litigation, the court said of the company's Terms of Use (a browsewrap agreement):

“The Terms of Use is inconspicuous, buried in the middle to bottom of every Zappos.com webpage among many other links, and the website never directs a user to the Terms of Use. No reasonable user would have reason to click on the Terms of Use”[84]

Clearly, courts recognise that consent or assent can be obtained in ways that are far from transparent or deliberate. Organisations that collect data should be aware of this and consider other options for obtaining consent.

A few commentators have suggested that organisations switch to Clickwrap or clickthrough agreements to obtain consent. Undergirding this suggestion is the fact that courts have, on numerous occasions, upheld the validity of Clickwrap agreements, in cases such as Groff v. America Online, Inc.[85] and Hotmail Corporation v. Van Money Pie, Inc.[86]. These cases built upon the precedent-setting case of ProCD v. Zeidenberg, in which the court ruled that “Shrinkwrap licenses are enforceable unless their terms are objectionable on grounds applicable to contracts in general”[87]. Shrinkwrap licenses are end user license agreements printed on the shrinkwrap of a software product, which a user will necessarily notice and have the opportunity to read before opening and using the product; the rules that govern them have since been applied to clickthrough agreements. As Bayley rightly notes, the validity of clickthrough agreements depends on “reasonable notice and opportunity to review—whether the placement of the terms and click-button afforded the user a reasonable opportunity to find and read the terms without much effort”[88].

From the perspective of companies and other organisations that seek to garner consent from users to collect and process their data, Clickwrap agreements might be one useful option for obtaining more meaningful and informed consent. In fact, Bayley contends that clear Clickwrap agreements are “the “best practice” mechanism for creating a contractual relationship between an online service and a user”[89]. He suggests the following checklist for acquiring clear and informed consent via contractual agreement[90] (a rough sketch of this checklist in code follows the list):

  1. Conspicuously present the TOS to the user prior to any payment (or other commitment by the user) or installation of software (or other changes to a user’s machine or browser, like cookies, plug-ins, etc.)
  2. Allow the user to easily read and navigate all of the terms (i.e. be in a normal, readable typeface with no scroll box)
  3. Provide an opportunity to print, and/or save a copy of, the terms
  4. Offer the user the option to decline as prominently and by the same method as the option to agree
  5. Ensure the TOS is easy to locate online after the user agrees.
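As a rough illustration of how a product team might operationalise Bayley's checklist, the hypothetical Python sketch below models a consent screen as a record and accepts it only if all five conditions are met. The class, field and function names are invented for this example and are not drawn from Bayley or any real service.

# Hypothetical check of a consent flow against Bayley's five conditions for
# a clear Clickwrap agreement. All names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ConsentScreen:
    terms_shown_before_commitment: bool  # 1. TOS presented before payment/installation
    full_text_readable: bool             # 2. all terms readable, no cramped scroll box
    can_save_or_print: bool              # 3. user can print or save a copy of the terms
    decline_as_prominent_as_agree: bool  # 4. "Decline" offered as prominently as "Agree"
    terms_url_after_agreement: str       # 5. stable location of the TOS after agreement

def is_valid_clickwrap(screen: ConsentScreen) -> bool:
    """Return True only if every one of the five conditions is met."""
    return (screen.terms_shown_before_commitment
            and screen.full_text_readable
            and screen.can_save_or_print
            and screen.decline_as_prominent_as_agree
            and bool(screen.terms_url_after_agreement))

# Example: a flow that hides the decline option fails the check.
screen = ConsentScreen(True, True, True, False, "https://example.com/terms")
assert not is_valid_clickwrap(screen)

The value of framing the checklist this way is that it can be tested automatically during design reviews, rather than argued over after a dispute arises.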

These principles make a lot of sense for organisations, as they require relatively minor procedural changes rather than more transformational efforts to alter the way they validate their data processing practices entirely.

Herzfeld adds two further suggestions to this list. First, organisations should not allow any use of their product or service until there has been an “express and active manifestation of assent”[91]. Second, they should institute processes by which users reiterate their consent and assent to the terms of use[92]. He goes further to propose a baseline that organisations should follow: “companies should always provide at least inquiry notice of all terms, and require counterparties to manifest assent, through action or inaction, in a manner that reasonable people would clearly understand to be assent”[93].

While obtaining informed and meaningful consent is neither foolproof nor a process with widely accepted, clear steps, it is clear that current efforts by organisations may be insufficient. As Cate and Mayer-Schönberger note, “data processors can perhaps too easily point to the formality of notice and consent and thereby abrogate much of their responsibility”[94]. One thing organisations can do, both to ensure more meaningful and informed consent (from the perspective of users) and to prevent potential legal action over unscrupulous or unfair terms, is to change the way they obtain consent from opt-out to opt-in.

Conclusion – how should regulation change

In conclusion, the current emphasis on and extensive use of consent in data protection appears to be of limited effectiveness in protecting against the illegitimate processing of data in a big data context. More people are using online services extensively, and organisations are realizing the value of collecting and analysing user data to carry out data-driven analytics for insights that can improve their products. Data protection has never been more crucial.

However, not only does emphasising consent seem less relevant, because the consent organisations obtain is seldom informed, but it may even jeopardise the aims of data protection. Commentators are quick to point out how nimble firms are at acquiring consent in newer ways that may comply with the law but still allow them to maintain their advantageous position of asymmetric power. Kuner, Cate, Millard and Svantesson, all eminent scholars in the field, have asked the prescient question: “Is there a proper role for individual consent?”[95] They believe consent still has a role, but that finding this role in the big data context is challenging[96]. However, there is surprising consensus on the approach that should be taken as data protection regimes shift away from consent.

In fact, the alternative is staring us in the face: data protection regimes have to look elsewhere, to other points along the data analysis process, for aspects to regulate in order to ensure legitimate and fair processing of data. One compelling idea which had broad-based support during the aforementioned Microsoft Privacy Summit was that “new approaches must shift responsibility away from data subjects toward data users and toward a focus on accountability for responsible data stewardship”[97], i.e., creating regulations to guide data processing instead of data collection. De Zwart et al. suggest that regulation must instead “focus on the processes involved in establishing algorithms and the use of the resulting conclusions”[98].

This might involve regulations requiring data collectors to publish the queries they run on the data: a solution that balances maintaining the ‘trade secret’ of the firm, which has creatively designed an algorithm, with ensuring fairness and legitimacy in data processing. One manifestation of this approach is the concept of procedural data due process, which “would regulate the fairness of Big Data’s analytical processes with regard to how they use personal data (or metadata derived from or associated with personal data) in any adjudicative process, including processes whereby Big Data is being used to determine attributes or categories for an individual”[99]. While there is debate regarding the usefulness of data due process, the idea is just one part of a family of ideas surrounding alternatives to consent in data protection. The main point is that “greater transparency should be required if there are fewer opportunities for consent or if personal data can be lawfully collected without consent”[100].
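As a minimal sketch of what ‘publishing the queries run on the data’ could look like in practice, the hypothetical Python wrapper below records a plain-language description of every query executed against a dataset in a log that could be released publicly, while the underlying analytic code stays private. The class and field names are invented for this illustration.

# Hypothetical query-transparency wrapper: records a description of every
# query run over a dataset so the log can be published for scrutiny,
# without disclosing the firm's underlying algorithms.
import datetime
import json

class AuditedDataset:
    def __init__(self, records):
        self.records = records
        self.query_log = []          # this list is what would be published

    def query(self, description, predicate):
        """Run a query and log what was asked, when, and how many rows matched."""
        result = [r for r in self.records if predicate(r)]
        self.query_log.append({
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "description": description,
            "rows_returned": len(result),
        })
        return result

    def publish_log(self):
        return json.dumps(self.query_log, indent=2)

data = AuditedDataset([{"age": 34, "bought": "beer"}, {"age": 29, "bought": "pop tarts"}])
data.query("customers under 30 who bought pop tarts",
           lambda r: r["age"] < 30 and r["bought"] == "pop tarts")
print(data.publish_log())

The published log reveals what was asked of the data and how much of it was touched, which is the kind of information a data due process review would need, without exposing the raw records themselves.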

It is also worth considering exactly what constitutes a single use of a group's or an individual's data, and what types of uses or processes require a “greater form of authorization”[101]. Certain data processes could require special affirmative consent, while less intimate matters would not. Canada’s Office of the Privacy Commissioner has released a privacy toolkit for organisations which provides some exceptions to the consent principle, one of which applies where data collection “is clearly in the individual’s interests and consent is not available in a timely way”[102]. Some therefore suggest that “if notice and consent are reserved for more appropriate uses, individuals might pay more attention when this mechanism is used”[103].

Another option for regulators is to consider the development and implementation of a sticky privacy policies regime. This refers to “machine-readable policies [that] can stick to data to define allowed usage and obligations as it travels across multiple parties, enabling users to improve control over their personal information”[104]. Sticky privacy policies seem to alleviate the risk of repurposed, unanticipated uses of data, because users who consent to giving out their data will be consenting to how it is used thereafter. The counterargument is that sticky policies place even greater obligations on users to decide how they would like their data used, not just at one point in time but for the long term. Expecting organisations to state their purposes for all future uses of individuals' data, or individuals to give informed consent to such uses, seems far-fetched from both perspectives.
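A minimal sketch of the sticky-policy idea follows, assuming a machine-readable policy object attached to each record that any downstream party is expected to check before use. The schema, field names and function are invented for illustration and do not reproduce the architecture described by Pearson and Casassa Mont.

# Hypothetical sticky-policy check: a machine-readable policy travels with
# the data, and any downstream use must be allowed by that policy.

record = {
    "value": {"email": "user@example.com", "purchases": ["pop tarts"]},
    "policy": {                       # the "sticky" part, attached to the data itself
        "allowed_purposes": ["order_fulfilment", "fraud_detection"],
        "may_share_with_third_parties": False,
        "delete_after_days": 365,
    },
}

def use_allowed(record, purpose, is_third_party=False):
    """Return True only if the attached policy permits this use of the record."""
    policy = record["policy"]
    if is_third_party and not policy["may_share_with_third_parties"]:
        return False
    return purpose in policy["allowed_purposes"]

print(use_allowed(record, "order_fulfilment"))      # True: purpose was consented to
print(use_allowed(record, "targeted_advertising"))  # False: purpose never consented to

Even in this toy form, the tension noted above is visible: someone still has to decide, up front, which purposes belong in the allowed list.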

Still another solution draws on the noted scholar Helen Nissenbaum’s work on privacy. She argues that “the benchmark of privacy is contextual integrity”[105]: “Contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it”[106]. On this line of thinking, legislators should instead focus their attention on what constitutes appropriateness in particular contexts, although this could be a challenging task as contexts merge and understandings of appropriateness shift with circumstances.
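One way to make this concrete: contextual integrity evaluates an information flow in terms of its context, the sender, the recipient and the type of information transmitted. The hypothetical Python sketch below encodes a couple of invented context norms as data and checks proposed flows against them; it illustrates the structure of the framework, not any canonical set of norms.

# Hypothetical contextual-integrity check: a flow is acceptable only if it
# matches a norm registered for its context. The norms here are invented
# examples, not a canonical list.

CONTEXT_NORMS = {
    # context: set of (sender, recipient, information_type) flows deemed appropriate
    "healthcare": {("patient", "doctor", "symptoms"),
                   ("doctor", "specialist", "diagnosis")},
    "retail":     {("customer", "retailer", "purchase_history")},
}

def flow_is_appropriate(context, sender, recipient, information_type):
    """Return True if this flow matches an appropriate norm for its context."""
    return (sender, recipient, information_type) in CONTEXT_NORMS.get(context, set())

# Sharing a diagnosis with an advertiser violates the healthcare context's norms.
print(flow_is_appropriate("healthcare", "doctor", "specialist", "diagnosis"))   # True
print(flow_is_appropriate("healthcare", "doctor", "advertiser", "diagnosis"))   # False

The hard regulatory work, of course, lies in agreeing on the norms table itself, which is exactly the challenge identified above when contexts merge.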

While there is little consensus on the many possible ways to focus regulatory attention on data processing and the uses of data collected, there is broader support for a shift away from consent, as exemplified by the Microsoft Privacy Summit:

“There was broad general agreement that privacy frameworks that rely heavily on individual notice and consent are neither sustainable in the face of dramatic increases in the volume and velocity of information flows nor desirable because of the burden they place on individuals to understand the issues, make choices, and then engage in oversight and enforcement.”[107]

I think Cate and Mayer-Schönberger provide the most fitting conclusion to this article, and to the debate I have presented: “in short, ensuring individual control over personal data is not only an increasingly unattainable objective of data protection, but in many settings it is an undesirable one as well.”[108] By continuing to lean so heavily on consent, we might very well be throwing the entire data protection regime under the bus.


[1] Gordon Rayner and Bill Gardner, “Men Must Prove a Woman Said ‘Yes’ under Tough New Rape Rules - Telegraph,” The Telegraph, January 28, 2015, sec. Law and Order, http://www.telegraph.co.uk/news/uknews/law-and-order/11375667/Men-must-prove-a-woman-said-Yes-under-tough-new-rape-rules.html.

[2] Legal Information Institute, “Implied Consent,” accessed August 25, 2015, https://www.law.cornell.edu/wex/implied_consent.

[3] European Parliament, Council of the European Union, Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, 1995, http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:31995L0046.

[4] See supra note 3.

[5] European Commission, “Stronger Data Protection Rules for Europe,” European Commission Press Release Database, June 15, 2015, http://europa.eu/rapid/press-release_MEMO-15-5170_en.htm.

[6] Council of the European Union, “Data Protection: Council Agrees on a General Approach,” June 15, 2015, http://www.consilium.europa.eu/en/press/press-releases/2015/06/15-jha-data-protection/.

[7] See supra note 6.

[8] Abraham L. Newman, Protectors of Privacy: Regulating Personal Data in the Global Economy (Ithaca, NY: Cornell University Press, 2008).

[9] See supra note 8, at 24.

[10] Ibid.

[11] 15 U.S.C. §1681.

[12] 5 U.S.C. § 552a.

[13] 18 U.S.C. § 2510-22.

[14] Federal Trade Commission, “Privacy Online: A Report to Congress,” June 1998, https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv-23a.pdf: 40.

[15] See supra note 14, at 8.

[16] Organisation for Economic Cooperation and Development, “2013 OECD Privacy Guidelines,” 2013, http://www.oecd.org/internet/ieconomy/privacy-guidelines.htm.

[17] Canadian Standards Association, “Canadian Standards Association Model Code,” March 1996, https://www.cippguide.org/2010/06/29/csa-model-code/.

[18] Mary Chlopecki, “The Property Rights Origins of Privacy Rights | Foundation for Economic Education,” August 1, 1992, http://fee.org/freeman/the-property-rights-origins-of-privacy-rights.

[19] See Pope v. Curl (1741), available here.

[20] See Prince Albert v. Strange (1849), available here.

[21] Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review 4, no. 5 (December 15, 1890): 193–220, doi:10.2307/1321160.

[22] See supra note 18.

[23] Ibid.

[24] See supra note 21.

[25] See for example, Richard Epstein, “Privacy, Property Rights, and Misrepresentations,” Georgia Law Review, January 1, 1978, 455. And Richard Posner, “The Right of Privacy,” Sibley Lecture Series, April 1, 1978, http://digitalcommons.law.uga.edu/lectures_pre_arch_lectures_sibley/22.

[26] See supra note 21, at 215.

[27] See supra note 21, at 218.

[28] Ibid.

[29] Adrienne W. Fawcett, “Q: Who Said: ‘A Man’s Home Is His Castle’?,” Chicago Tribune, September 14, 1997, http://articles.chicagotribune.com/1997-09-14/news/9709140446_1_castle-home-sir-edward-coke.

[30] Brendan Purves, “Castle Doctrine from State to State,” South Source, July 15, 2011, http://source.southuniversity.edu/castle-doctrine-from-state-to-state-46514.aspx.

[31] “Volenti Non Fit Injuria,” E-Lawresources, accessed August 25, 2015, http://e-lawresources.co.uk/Volenti-non-fit-injuria.php.

[32] Bryce Clayton Newell, “Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 16, 2013), http://papers.ssrn.com/abstract=2341182.

[33] Alan Westin, Privacy and Freedom (Ig Publishing, 2015).

[34] Helen Nissenbaum, “Privacy as Contextual Integrity,” Washington Law Review 79 (2004): 119.

[35] Ruth Gavison, “Privacy and the Limits of Law,” The Yale Law Journal 89, no. 3 (January 1, 1980): 421–71, doi:10.2307/795891: 423.

[36] Ibid.

[37] Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 4, 2012), http://papers.ssrn.com/abstract=2171018: 1888.

[38] Ibid, at 1889.

[39] Omer Tene and Jules Polonetsky, “Big Data for All: Privacy and User Control in the Age of Analytics,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, September 20, 2012), http://papers.ssrn.com/abstract=2149364: 261.

[40] See supra note 37, at 1881.

[41] Fred H. Cate and Viktor Mayer-Schönberger, “Notice and Consent in a World of Big Data - Microsoft Global Privacy Summit Summary Report and Outcomes,” Microsoft Global Privacy Summit, November 9, 2012, http://www.microsoft.com/en-us/download/details.aspx?id=35596: 3.

[42] See supra note 39.

[43] See for example, US Securities and Exchange Commission, “Corporation Finance Small Business Compliance Guides,” accessed August 26, 2015, https://www.sec.gov/info/smallbus/secg.shtml and Australian Securities & Investments Commission, “Compliance for Small Business,” accessed August 26, 2015, http://asic.gov.au/for-business/your-business/small-business/compliance-for-small-business/.

[44] See supra note 39.

[45] See supra note 41.

[46] See supra note 8, at 24.

[47] See for example, James Daley, “Don’t Waste Time Reading Terms and Conditions,” The Telegraph, September 3, 2014, and Robert Glancy, “Will You Read This Article about Terms and Conditions? You Really Should Do,” The Guardian, April 24, 2014, sec. Comment is free, http://www.theguardian.com/commentisfree/2014/apr/24/terms-and-conditions-online-small-print-information.

[48] See supra note 37, at 1886.

[49] Alex Hudson, “Is Small Print in Online Contracts Enforceable?,” BBC News, accessed August 26, 2015, http://www.bbc.com/news/technology-22772321.

[50] Aleecia M. McDonald and Lorrie Faith Cranor, “The Cost of Reading Privacy Policies,” I/S: A Journal of Law and Policy for the Information Society 4 (2008): 541.

[51] See supra note 41, at 4.

[52] For Canada, see Office of the Privacy Commissioner of Canada, “Fact Sheet: Ten Tips for a Better Online Privacy Policy and Improved Privacy Practice Transparency,” October 23, 2013, https://www.priv.gc.ca/resource/fs-fi/02_05_d_56_tips2_e.asp. And Office of the Privacy Commissioner of Canada, “Privacy Toolkit - A Guide for Businesses and Organisations to Canada’s Personal Information Protection and Electronic Documents Act,” accessed August 26, 2015, https://www.priv.gc.ca/information/pub/guide_org_e.pdf.

For USA, see Federal Trade Commission, “Internet of Things: Privacy & Security in a Connected World,” Staff Report (Federal Trade Commission, January 2015), https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf.

[53] See supra note 37, at 1889.

[54] See supra note 39, at 261.

[55] Jakki Geiger, “The Surprising Link Between Hurricanes and Strawberry Pop-Tarts: Brought to You by Clean, Consistent and Connected Data,” The Informatica Blog - Perspectives for the Data Ready Enterprise, October 3, 2014, http://blogs.informatica.com/2014/03/10/the-surprising-link-between-strawberry-pop-tarts-and-hurricanes-brought-to-you-by-clean-consistent-and-connected-data/#fbid=PElJO4Z_kOu.

[56] Constance L. Hays, “What Wal-Mart Knows About Customers’ Habits,” The New York Times, November 14, 2004, http://www.nytimes.com/2004/11/14/business/yourmoney/what-walmart-knows-about-customers-habits.html.

[57] M. J. de Zwart, S. Humphreys, and B. Van Dissel, “Surveillance, Big Data and Democracy: Lessons for Australia from the US and UK,” Http://www.unswlawjournal.unsw.edu.au/issue/volume-37-No-2, 2014, https://digital.library.adelaide.edu.au/dspace/handle/2440/90048: 722.

[58] Ibid.

[59] See supra note 41, at 3.

[60] Julie E. Cohen, “What Privacy Is For,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 5, 2012), http://papers.ssrn.com/abstract=2175406.

[61] See supra note 37, at 1901.

[62] Ibid.

[63] See supra note 37, at 1899.

[64] Jon Leibowitz, “So Private, So Public: Individuals, The Internet & The paradox of behavioural marketing” November 1, 2007, https://www.ftc.gov/sites/default/files/documents/public_statements/so-private-so-public-individuals-internet-paradox-behavioral-marketing/071031ehavior_0.pdf: 6.

[65] See supra note 5.

[66] See supra note 37, at 1898.

[67] Ibid.

[68] Ibid.

[69] Ibid.

[70] Ibid.

[71] See supra note 41, at 3.

[72] See supra note 39, at 261.

[73] Richard H. Thaler, “Making It Easier to Register as an Organ Donor,” The New York Times, September 26, 2009, http://www.nytimes.com/2009/09/27/business/economy/27view.html.

[74] Ibid.

[75] The Oxford Introductions to U.S. Law: Contracts, 1 edition (New York: Oxford University Press, 2010): 67.

[76] Francis M. Buono and Jonathan A. Friedman, “Maximizing the Enforceability of Click-Wrap Agreements,” Journal of Technology Law & Policy 4, no. 3 (1999), http://jtlp.org/vol4/issue3/friedman.html.

[77] North Carolina State University, “Clickwraps,” Software @ NC State Information Technology, accessed August 26, 2015, http://software.ncsu.edu/clickwraps.

[78] Ed Bayley, “The Clicks That Bind: Ways Users ‘Agree’ to Online Terms of Service,” Electronic Frontier Foundation, November 16, 2009, https://www.eff.org/wp/clicks-bind-ways-users-agree-online-terms-service.

[79] Ibid, at 2.

[80] Ibid.

[81] See Nguyen v. Barnes & Noble Inc., (9th Cir. 2014), available here.

[82] See Specht v. Netscape Communications Corp.,(2d Cir. 2002), available here.

[83] See supra note 78, at 2.

[84] See In Re: Zappos.com, Inc., Customer Data Security Breach Litigation, No. 3:2012cv00325: pg 8 line 23-26, available here.

[85] See Groff v. America Online, Inc., 1998, available here.

[86] Hotmail Corp. v. Van$ Money Pie, Inc., 1998, available here.

[87] ProCD Inc. v. Zeidenberg, (7th. Cir. 1996), available here.

[88] See supra note 78, at 1.

[89] See supra note 78, at 2.

[90] Ibid.

[91] Oliver Herzfeld, “Are Website Terms Of Use Enforceable?,” Forbes, January 22, 2013, http://www.forbes.com/sites/oliverherzfeld/2013/01/22/are-website-terms-of-use-enforceable/.

[92] Ibid.

[93] Ibid.

[94] See supra note 41, at 3.

[95] Christopher Kuner et al., “The Challenge of ‘big Data’ for Data Protection,” International Data Privacy Law 2, no. 2 (May 1, 2012): 47–49, doi:10.1093/idpl/ips003: 49.

[96] Ibid.

[97] See supra note 41, at 5.

[98] See supra note 57, at 723.

[99] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 1, 2013), http://papers.ssrn.com/abstract=2325784: 109.

[100] See supra note 41, at 13.

[101] See supra note 41, at 5.

[102] See supra note 52, Privacy Toolkit, at 14.

[103] See supra note 41, at 6.

[104] Siani Pearson and Marco Casassa Mont, “Sticky Policies: An Approach for Managing Privacy across Multiple Parties,” Computer, 2011.

[105] See supra note 34, at 138.

[106] See supra note 34, at 118.

[107] See supra note 41, at 5.

[108] See supra note 41, at 4.

CIS Comments and Recommendations to the Human DNA Profiling Bill, June 2015

by Elonnai Hickok, Vipul Kharbanda and Vanya Rakesh — last modified Sep 02, 2015 05:09 PM
The Centre for Internet & Society (CIS) submitted clause-by-clause comments on the Human DNA Profiling Bill that was circulated by the Department of Biotechnology on June 9, 2015.

The Centre for Internet and Society is a non-profit research organisation that works on policy issues relating to privacy, freedom of expression, accessibility for persons with diverse abilities, access to knowledge, intellectual property rights and openness. It engages in academic research to explore and affect the shape and form of the Internet and its relationship with society, with particular emphasis on South-South dialogues and exchange. The Centre for Internet and Society was also a member of the Expert Committee constituted in 2013 by the Department of Biotechnology to discuss the draft Human DNA Profiling Bill.

Missing aspects from the Bill

The Human DNA Profiling Bill, 2015 has overlooked the following crucial factors:

  • Objects Clause

An ‘objects clause’ in the main body of a statute, detailing the intention of the legislature and containing principles to inform the statute's application, is an enforceable mechanism for giving direction to the statute and can be a formidable primary aid in statutory interpretation. [See, for example, section 83 of the Patents Act, 1970 that directly informed the Order of the Controller of Patents, Mumbai, in the matter of NATCO Pharma and Bayer Corporation in Compulsory Licence Application No. 1 of 2011.] Therefore, the Bill should incorporate an objects clause that makes clear that

“DNA profiles merely estimate the identity of persons, they do not conclusively establish unique identity, therefore forensic DNA profiling should only have probative value and not be considered as conclusive proof.

The Act recognises that all individuals have a right to privacy that must be continuously weighed against efforts to collect and retain DNA and in order to protect this right to privacy the principles of notice, confidentiality, collection limitation, personal autonomy, purpose limitation and data minimization must be adhered to at all times.”

  • Collection and Consent

The Bill does not contain provisions specifying the instances in which DNA samples can be collected from individuals without consent (nor does it establish or refer to an authorization procedure for such collection), the instances in which DNA samples can be collected only with informed consent, and how and in what instances individuals can withdraw their consent. The issue of whether DNA samples can be collected without the consent of the individual is a vexed one and raises complex questions relating to individual privacy as well as the right against self-incrimination. The question of whether an accused can be made to give samples of blood, semen, etc., which had been at issue in a wide gamut of decisions in India, has finally been settled by section 53 of the Code of Criminal Procedure, which allows the collection of medical evidence from an accused, thus laying to rest any claims based on the right against self-incrimination. However, there are still issues concerning the right to privacy and its violation through the non-consensual collection of DNA samples. This is an issue that needs to be addressed in the Act itself; leaving it unaddressed would only lead to a lack of clarity and protracted court cases. An illustration of this problem is the Bill's allowance for the collection of intimate body samples. Stringent safeguards are needed here, since without them the collection of intimate body samples would be an outright infringement of privacy. Further, maintaining a database of convicts and suspects is one thing; collecting and storing intimate samples of individuals is a gross violation of citizens' right to privacy and, without adequate mechanisms regarding consent and security, stands at a huge risk of being misused.

  • Privacy Safeguards

Presently, the Bill is being introduced without comprehensive privacy safeguards in place on issues such as consent, collection and retention, as is evident from the comments made below. Though the DNA Board is given the responsibility of recommending best practices pertaining to privacy (clause 13(l)), this is not adequate given that India does not have a comprehensive privacy legislation. Though section 43A and the associated Rules of the Information Technology Act would apply to the collection, use, and sharing of DNA data by DNA laboratories (as they would fall under the definition of ‘body corporate’ under the IT Act), the National and State Data Banks and the DNA Board would not fall within the definition of ‘body corporate’ under the IT Act and would thus not be covered by the provision or Rules. Safeguards are needed to protect against the invasion of informational and physical privacy at the level of these State-controlled bodies. The fact that the Bill is to be introduced into Parliament prior to the enactment of a privacy legislation in India is significant, as according to the Record Notes of the 4th Meeting of the Expert Committee, “the Expert Committee also discussed and emphasized that the Privacy Bill is being piloted by the Government. That Bill will over-ride all the other provisions on privacy issues in the DNA Bill.”

  • Lack of restriction on type of analysis to be performed

The Bill currently does not provide any restriction on the types of analysis that can be performed on a DNA sample or profile. This could allow DNA samples to be analyzed for purposes beyond the basic identification of an individual, such as for health, genetic, or racial purposes. As a form of purpose limitation, the Bill should narrowly define the types of analysis that can be performed on a DNA sample.

  • Purpose Limitation

The Bill does not explicitly restrict the use of a DNA sample or DNA profile to the purpose it was originally collected and created for. This could allow for the re-use of samples and profiles for unintended purposes.

  • Annual Public Reporting

The Bill does not require the DNA Board to publicly disclose information on an annual basis regarding the functioning and financial aspects of matters contained within the Bill. Such disclosure is crucial in ensuring that the public is able to make informed decisions. Categories that could be included in such reports are: the number of DNA profiles added to each index within the databank, the total number of DNA profiles contained in the database, the number of DNA profiles deleted from the database, the number of matches between crime scene DNA profiles and stored DNA profiles, the number of cases in which DNA profiles were used and the percentage in which they assisted in the final conclusion of the case, and the number and categories of DNA profiles shared with international entities.

  • Elimination Index

An elimination index containing the profiles of medical professionals, police, laboratory personnel and others working on a case is necessary so that any accidental contamination of collected samples by such personnel can be identified and excluded.

Clause by Clause Recommendations

As stated, the Human DNA Profiling Bill, 2015 is intended to regulate the use of DNA analysis of human body substances profiles and to establish the DNA Profiling Board for laying down standards for laboratories, the collection of human body substances, and the custody trail from collection to reporting, and also to establish a National DNA Data Bank.

Comment:

  1. As stated, the purpose of the Human DNA Profiling Bill is, broadly, to regulate the use of DNA analysis and to establish a DNA Data Bank. Despite this, the majority of provisions in the Bill pertain to the collection, use, access, etc. of DNA samples and profiles for civil and criminal purposes. The result is an 'unbalanced Bill', with the majority of provisions focusing on issues related to forensic use. At the same time, the Bill is not a comprehensive forensic bill, resulting in legislative gaps.
  2. Additionally, the Bill contains provisions beyond the stated purpose. These include:
  • Facilitating the creation of a Data Bank for statistical purposes (Clause 33(e))
  • Establishing state and regional level databanks in addition to a national level databank (Clause 24)
  • Developing procedure and providing for the international sharing of DNA profiles with foreign Governments, organizations, institutions, or agencies. (Clause 29)

Recommendation:

  • The Bill should ideally be limited to regulating the use of DNA samples and profiles for criminal purposes. If the scope remains broad, all purposes should be equally and comprehensively regulated.
  • The stated purpose of the Bill should address all aspects of the Bill. Provisions beyond the scope of the Bill should be removed.

Chapter 1: Preliminary

  • Clause 2: This clause defines the terms used in the Bill.

Comment: A number of terms are incomplete and some terms used in the Bill have not been included in the list of definitions.

Recommendation:

  • The term “DNA Data Bank Manager” defined in clause 2(1)(g) must be renamed “National DNA Data Bank Manager”.
  • The definition of “DNA laboratory” in clause 2(1)(h) should refer to the specific clauses that empower the Central Government and State Governments to license and recognise DNA laboratories. This is a drafting error.
  • The definition of “DNA profile” in clause 2(1)(i) is too vague. Merely the results of an analysis of a DNA sample may not be sufficient to create an actual DNA profile. Further, the results of the analysis may yield DNA information that, because of incompleteness or lack of information, is inconclusive. These incomplete bits of information should not be recognised as DNA profiles. The definition should be amended to clearly specify the contents of a complete and valid DNA profile, which should contain, at a minimum, numerical representations of 17 or more loci of short tandem repeats sufficient to estimate the biometric individuality of a person (a minimal illustration of such a profile follows this list). The definition also does not restrict analysis to forensic DNA profiles: this means additional information, such as health-related information, could be analyzed and stored against the wishes of the individual, even though such information plays no role in solving crimes.
  • The term “known sample” that is defined in clause 2(1)(m) is not used anywhere outside the definitions clause and should be removed.
  • The definition of “offender” in clause 2(1)(q) is vague because it does not specify the offenses for which an “offender” needs to be convicted. It is also linked to an unclear definition of the term “under trial”, which does not specify the nature of pending criminal proceedings and, therefore, could be used to describe simple offenses such as, for example, failure to pay an electricity bill, which also attracts criminal penalties.
  • The term “proficiency testing” that is defined in clause 2(1)(t) is not used anywhere in the text of the DNA Bill and should be removed.
  • The definitions of “quality assurance”, “quality manual” and “quality system” serve no enforceable purpose since they are used only in relation to the DNA Profiling Board’s rule making powers under Chapter IX, clause 58. Their inclusion in the definitions clause is redundant. Accordingly, these definitions should be removed.
  • The term “suspect” defined in clause 2(1)(za) is vague and imprecise. The standard by which suspicion is to be measured, and by whom suspicion may be entertained – whether police or others, has not been specified. The term “suspect” is not defined in either the Code of Criminal Procedure, 1973 ("CrPC") or the Indian Penal Code, 1860 ("IPC").
  • The term volunteer defined in clause 2(zf) only addresses consent from the parent or guardian of a child or an incapable person. This term should be amended to include informed consent from any volunteer.
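As a rough illustration only, and not language drawn from the Bill, the Python sketch below represents a forensic DNA profile purely as numerical allele values at named short tandem repeat (STR) loci and checks that at least 17 loci are present. The locus names and allele values are illustrative examples.

# Illustrative representation of a forensic STR profile: numerical allele
# values at named loci, with a completeness check of at least 17 loci.
# Locus names and values are examples only, not prescribed by the Bill.

MIN_LOCI = 17

example_profile = {
    "D3S1358": (15, 16), "vWA": (17, 18), "FGA": (21, 24), "D8S1179": (13, 13),
    "D21S11": (29, 30), "D18S51": (14, 17), "D5S818": (11, 12), "D13S317": (9, 11),
    "D7S820": (10, 10), "TH01": (6, 9), "TPOX": (8, 8), "CSF1PO": (10, 12),
    "D16S539": (11, 13), "D2S1338": (19, 23), "D19S433": (13, 14),
    "D12S391": (18, 20), "D1S1656": (12, 15),
}

def is_complete_profile(profile: dict) -> bool:
    """Treat a profile as 'complete and valid' only if it covers at least 17 loci."""
    return len(profile) >= MIN_LOCI

print(is_complete_profile(example_profile))  # True: 17 loci recorded

A definition narrowed in this way confines a profile to identification data and leaves no room for storing health-related or other genetic information.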

Chapter II: DNA Profiling Board

  • Clause 4: This clause addresses the composition of the DNA Profiling Board.

Comment: The size and composition of the Board constituted under clause 4 is extremely large. The number of members remains at 15, as it was in the 2012 Bill.

Recommendation: Drawing from the experiences of other administrative and regulatory bodies in India, the size of the Board should be reduced to no more than five members. The Board must contain at least:

  • One ex-Judge or senior lawyer
  • Civil society – both institutional and non-institutional
  • Privacy advocates

Note: The reduction of the size of the Board was agreed upon by the Expert Committee from 16 members (2012 Bill) to 11 members. This recommendation has not been incorporated.

  • Clause 5(1): The clause specifies the term of the Chairperson of the DNA Profiling Board to be five years and also states that the person shall not be eligible for re-appointment or extension of the term so specified.

Comment: The Chairperson of the Board, who is first mentioned in clause 5(1), is not provided for through a due and proper appointment process anywhere in the Bill.

Recommendation: Clause 4 should be amended to mention the appointment of the Chairperson and other Members.

  • Clause 7: The clause requires members to react on a case-by-case basis to the business of the Board by excusing themselves from deliberations and voting where necessary.

Comment: This clause addresses the issue of conflict of interest only in narrow cases and does not provide a penalty if a member fails to adhere to the prescribed procedure.

Recommendation: The Bill should require members to make full and public disclosures of their real and potential conflicts of interest and the Chairperson must have the power to prevent such members from voting on interested matters. Failure to follow such anti-collusion and anti-corruption safeguards should attract criminal penalties.

  • Clause 12(5): The clause states that the board shall have the power to co-opt such number of persons as it may deem necessary to attend the meetings of the Board and take part in the proceedings of the board, but such persons will not have the right to vote.

Comment: While serving on the Expert Committee, CIS provided language   regarding how the Board could consult with the public. This language has not been fully incorporated.

Recommendation: As per the recommendation of CIS, the following language should be adopted in the Bill: The Board, in carrying out its functions and activities, shall be required to consult with all persons and groups of persons whose rights and related interests may be affected or impacted by any DNA collection, storage, or profiling activity. The Board shall, while considering any matter under its purview, co-opt or include any person, group of persons, or organisation, in its meetings and activities if it is satisfied that that person, group of persons, or organisation, has a substantial interest in the matter and that it is necessary in the public interest to allow such participation. The Board shall, while consulting or co-opting persons, ensure that meetings, workshops, and events are conducted at different places in India to ensure equal regional participation and activities.

  • Clause 13: The clause lays down the functions to be performed by the DNA Profiling Board, which include its role in the regulation of the DNA Data Banks, DNA Laboratories and the techniques to be adopted for collection of DNA samples.

Comment: While serving on the Expert Committee, CIS recommended that the functions of the DNA Profiling Board should be limited to licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate and a few other limited functions necessary for a regulatory authority.

Furthermore, this clause delegates a number of functions to the Board that place it in the role of both manager and regulator for issues pertaining to DNA profiling, including the functioning of DNA Data Banks and DNA Laboratories, ethical concerns, privacy concerns, etc.

Recommendation: As per CIS’s recommendations the functions of the Board should be limited to licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate and a few other limited functions necessary for a regulatory authority.

Towards this, the Board should be composed of separate Committees to address these different functions. At a minimum, there should be a Committee addressing regulatory issues pertaining to the functioning of Data Banks and Laboratories, and an Ethics Committee to provide independent scrutiny of ethical issues. Additionally:

  • Clause 13(j) allows the Board to disseminate best practices concerning the collection and analysis of DNA samples to ensure quality and consistency. The process for collection of DNA samples and analysis should be established in the Bill itself or by regulations. Best practices are not enforceable and do not formalize a procedure.
  • Clause 13(q)  allows the Board to establish procedure for cooperation in criminal investigation between various investigation agencies within the country and with international agencies. This procedure, at the minimum, should be subject to oversight by the Ministry of External Affairs.

Chapter III: Approval of DNA Laboratories

  • Clause 15: This clause states that every DNA Laboratory has to make an application before the Board for the purpose of undertaking DNA profiling and also for renewal.

Comment: Though the Bill requires DNA Laboratories to make an application for undertaking DNA profiling, it does not clarify that a laboratory must receive approval before collecting or analysing DNA samples and profiles.

Recommendation: The Bill should clarify that all DNA Laboratories must receive approval for functioning prior to the collection or analysis of any DNA samples and profiles.

Chapter IV: Standards, Quality Control and Quality Assurance Obligations of DNA Laboratory and Infrastructure and Training

  • Clause 19: This clause defines the obligations of a DNA laboratory. Sub-section (d) maintains that one such obligation is the sharing of the 'DNA data' prepared and maintained by the laboratory with the State DNA Data Bank and the National DNA Data Bank.

Comment: ‘DNA Data’ is a new term that has not been defined under Clause 2 of the Bill. It is thus unclear what data would be shared between State DNA Data Banks and the National DNA Data Bank: DNA samples? DNA profiles? Associated records? It is also unclear in what manner and on what basis the information would be shared.

Recommendation: The term ‘DNA Data’ should be defined to clarify what information will be shared between State and National DNA Data Banks. The flow of and access to data between the State DNA Data Bank and National DNA Data Bank should also be established in the Bill.

  • Clause 22: The clause lays down the measures to be adopted by a DNA Laboratory, and 22(h) includes a provision requiring annual audits to be conducted according to prescribed standards.

Comment:

  • The definition of "audit" in the Explanation under clause 22 (Chapter IV) is relevant for evaluating training programmes and laboratory conditions. However, the term "audit" is subsequently used in an entirely different manner in Chapter VII, which relates to financial information and transparency.
  • The standards for the destruction of DNA samples have not been included within the list of measures that DNA laboratories must take.

Recommendation:

  • The definition of ‘audit’ must be amended or removed as it is being used in different contexts. The term “audit” has a well established use for financial information that does not require a definition.
  • Standards for the destruction of DNA samples should be developed and included as a measure DNA laboratories must take.
  • Clause 23: This clause lays down the sources for collection of samples for the purpose of DNA profiling. 23(1)(a) includes collection from bodily substances and 23(1)(c) includes clothing and other objects. Explanation (b) provides a definition of 'intimate body sample'.

Comment:

  • Permitting the collection of DNA samples from bodily substances and from clothing and other objects allows for broad collection of DNA samples without contextualizing such collection. In contrast, 23(1)(b) (scene of occurrence or scene of crime) limits the collection of samples to a specific context.
  • This clause also raises the issue of consent and invasion of privacy of an individual. If “intimate body samples” are to be taken of individuals, then this would be an invasion of the person’s right to bodily privacy if such collection is done without the person’s consent (except in the specific instance when it is done in pursuance of section 53 of the Criminal Procedure Code).

Recommendation:

  • Sources for the collection of DNA samples should be contextualized to prevent broad, unaccounted for, or unregulated collection. Clauses 23(1)(a) and 23(1)(c) should be deleted and replaced with contexts in which DNA collection would be permitted.
  • The Bill should specify the circumstances in which non-intimate samples can be collected and the process for the same.
  • The Bill should specify that intimate body samples can only be taken with informed consent, except as per section 53 of the Criminal Procedure Code.
  • The Bill should require that any individual who has a sample taken (intimate or non-intimate) is provided with notice of their rights and of the future uses of their DNA sample and profile.

Chapter V: DNA Data Bank

  • Clause 24: This clause addresses the establishment of DNA Data Banks at the State and National levels. 24(5) establishes that the National DNA Data Bank will receive data from State DNA Data Banks and store the approved DNA profiles as per regulations.

Comment:

  • As noted previously, ‘DNA Data’ is a new term that has not been defined in the Bill. It is thus unclear what data would be shared between State DNA data banks and the National DNA data bank - DNA samples? DNA profiles? associated records?
  • The process for sharing Data between the State and National Data Banks is not defined.

Recommendation:

  • The term ‘DNA Data’ should be defined to clarify what information will be shared between State and National DNA Data Banks.
  • The process for the National DNA Data Bank receiving DNA data from State DNA Data Banks and DNA laboratories needs to be defined in the Bill or by regulation. This includes specifying how frequently information will be shared etc.
  • Clause 25: This clause establishes standards for the maintenance of indices by DNA databanks. 25(1) states that every DNA Data Bank needs to maintain the prescribed indices for various categories of data including an index for a crime scene, suspects, offenders, missing persons, unknown deceased persons, volunteers, and other indices as may be specified by regulation. 25(2) states that in addition to the indices, the DNA Data Bank should contain information regarding each of the DNA profiles. It can either be the identity of the person from whose bodily substance the profile was derived in case of a suspect or an offender, or the case reference number of the investigation associated with such bodily substances in other cases. 25(3) states that the indices maintained shall include information regarding the data which is based on the DNA profiling and the relevant records.

Comment:

  • 25(1): The creation of multiple indices cannot be justified and must be limited, since the collection of biological source material is an invasion of privacy that must be conducted only under strict conditions, when the potential harm to individuals is outweighed by the public good. This balance may only be struck when dealing with the collection and profiling of samples from certain categories of offenders. The implications of collecting and profiling DNA samples from corpses, suspects, missing persons and others are vast. Specifically, a 'volunteer' index could be used for racial, community or religious profiling.
  • 25(2): This clause requires the names of individuals to be connected to their profiles, and hence accessible to persons having access to the databank.
  • 25(3): The clause states that only information related to DNA profiling will be stored in an index. Yet it is unclear what such information might be. This could allow inconsistencies in the data stored in an index and could allow unnecessary information to be stored in an index.

Recommendation:

  • 25(1) Ideally, DNA databanks should be created for dedicated purposes. This would mean that a databank for forensic purposes should contain only an offenders' index and a crime scene index, while a databank for missing persons would contain only a missing persons index, and so on. If numerous indices are going to be contained in one databank, the Bill needs to recognize the sensitivity of each index as well as the differences between indices, and lay down appropriate and strict conditions for the collection of data for each index, the addition of data into the index, and the use, access, and retention of data within the index.
  • 25(2) DNA profiles, once developed, should be maintained with complete anonymity and retained separately from the names of their owners (a minimal sketch of such separation follows this list). This amendment becomes even more important considering that an "offender" may be convicted by a lower court and have his or her profile included in the data bank, but may be acquitted later. Until such a person is acquitted, his or her profile, together with the identifying information, would remain in the data bank, which is an invasion of privacy.
  • 25(3) What information will be stored in indices should be clearly defined in the Bill and should be tailored appropriately to each category of index.
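To make the separation recommended in 25(2) concrete, the sketch below (in Python, purely illustrative) keeps DNA profiles under opaque references in one store and identity or case details in a separate, access-controlled store. The class, field names and the authorisation flag are our own assumptions and do not appear in the Bill.

```python
import secrets

class PseudonymisedDataBank:
    """Illustrative only: profiles are stored under opaque references,
    while identifying details live in a separate, access-controlled map."""

    def __init__(self):
        self.profiles = {}        # reference -> DNA profile (e.g. a tuple of marker values)
        self.identity_store = {}  # reference -> identity or case number (restricted access)

    def add_profile(self, profile, identity_or_case_ref):
        reference = secrets.token_hex(8)   # opaque reference, not derived from identity
        self.profiles[reference] = profile
        self.identity_store[reference] = identity_or_case_ref
        return reference

    def expunge(self, reference):
        """Remove both the profile and its identity link, e.g. on acquittal."""
        self.profiles.pop(reference, None)
        self.identity_store.pop(reference, None)

    def resolve_identity(self, reference, authorised=False):
        """Identity is revealed only when an authorisation flag is set
        (standing in for a court order or equivalent safeguard)."""
        if not authorised:
            raise PermissionError("identity resolution requires authorisation")
        return self.identity_store[reference]
```

Under such an arrangement, routine matching would operate only on the profile store, and the identity mapping would be consulted only on an authorised request.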
  • Clause 28: This clause addresses the comparison and communication of DNA profiles. 28(1) states that a DNA profile entered in the offenders' or crime scene index shall be compared by the DNA Data Bank Manager against profiles contained in the DNA Data Bank, and the DNA Data Bank Manager will communicate such information to any court, tribunal, law enforcement agency, or approved DNA laboratory which he may consider appropriate for the purpose of investigation. 28(2) allows for any information relating to a person's DNA profile contained in the suspects' index or offenders' index to be communicated to authorised persons.

Comment:

  • 28(1)(a)-(c) allow the DNA Data Bank Manager to communicate the following: (1) that the DNA profile is not contained in the Data Bank, and what information is not contained; (2) that the DNA profile is contained in the Data Bank, and what information is contained; and (3) that, in the opinion of the Manager, the DNA profile is similar to one stored in the Data Bank. These options of communication are problematic because they (1) allow all associated information to be communicated, even if such information is not necessary, and (2) allow the DNA Data Bank Manager to communicate that a profile is 'similar' without defining what 'similar' would constitute.
  • 28(1) only addresses the comparison of DNA profiles entered into the offenders' index or the crime scene index against all other profiles entered into the DNA Data Bank.
  • 28(1) gives the DNA Data Bank manager broad discretion in determining if information should be communicated and requires no accountability for such a decision.
  • 28(2) only addresses information in the suspect's and offender's index and does not address information in any other index.

Recommendation:

  • Rather than allowing for broad searches across the entire database, the Bill should be clear about which profiles can be compared against which indices. Such distinctions must take into consideration if a profile was taken on consent and what was consented to.
  • Ideally, the response from the DNA Data Bank Manager should be limited to a 'yes' or 'no' answer, and further information should be revealed only on receipt of a court order (see the sketch following this list).
  • The Bill should define what constitutes a 'similar' profile.
  • A process for determining if information should be communicated should be established in the Bill and followed by the DNA Data Bank Manager. The Manager should also be held accountable through oversight mechanisms for such decisions. This is particularly important, as a DNA laboratory would be a private body.
  • Information stored in any index should be disclosed only to authorized parties.
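As an illustration of the 'yes or no, plus court order' model recommended above, the hedged sketch below compares a query profile only against explicitly permitted indices, returns a bare match/no-match answer, and records every request for later oversight. The index names, the equality-based matching rule and the audit log structure are assumptions made for this sketch, not provisions of the Bill.

```python
from datetime import datetime, timezone

# Hypothetical example data bank: index name -> {reference: profile}.
# A profile is modelled here simply as a tuple of marker values.
data_bank = {
    "crime_scene": {"cs-001": (11, 14, 9, 30)},
    "offenders":   {"of-042": (11, 14, 9, 30)},
    "volunteers":  {"vo-007": (12, 15, 8, 29)},
}

audit_log = []  # record of every comparison request, for oversight


def has_match(query_profile, permitted_indices, requester):
    """Return only True/False; which record matched is never disclosed
    without a further, court-authorised request."""
    found = any(
        profile == query_profile
        for index in permitted_indices
        for profile in data_bank.get(index, {}).values()
    )
    audit_log.append({
        "requester": requester,
        "indices": list(permitted_indices),
        "time": datetime.now(timezone.utc).isoformat(),
        "result": found,
    })
    return found


if __name__ == "__main__":
    # A query may only touch the indices relevant to the case,
    # and the requester learns nothing beyond a match/no-match.
    print(has_match((11, 14, 9, 30), ["crime_scene", "offenders"], "agency-X"))
```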
  • Clause 29: This clause provides for comparison and sharing of DNA profiles with foreign Governments, organisations, institutions or agencies. 29(1) allows the DNA Data Bank Manager to run a comparison of the received profile against all indices in the databank and communicate specified responses through the Central Bureau of Investigation.

Comment: This clause allows for international disclosures of the DNA profiles of Indians through a procedure that is to be established by the Board (see clause 13(q)).

Recommendation: The disclosure of DNA profiles of Indians to international entities should be done via the Mutual Legal Assistance Treaty (MLAT) process, as this is the typical process followed when sharing information with international entities for law enforcement purposes.

  • Clause 30: This clause provides for the permanent retention of information pertaining to a convict in the offenders’ index and the expunging of such information in case of a court order establishing acquittal of a person, or the conviction being set aside.

Comment: This clause addresses only the retention and expunging of the records of a convict stored in the offenders' index upon the receipt of a court order or the conviction being set aside. This implies that records in all other indices, including volunteers, can be retained permanently. This clause also does not address situations where an individual's DNA profile is added to the databank but the case never goes to court.

Recommendation: The Bill should establish retention and deletion standards for each index that it creates. Furthermore, the Bill should require the immediate destruction of DNA samples once a DNA profile for identification purposes has been created, with an exception for samples stored in the crime scene index.
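The idea of per-index retention and deletion standards can be expressed as a simple rule table, as in the sketch below; the retention periods, index names and helper functions are hypothetical and are shown only to illustrate what such standards might look like.

```python
from datetime import date, timedelta

# Hypothetical retention rules, one entry per index.
# None means the record is retained until expunged by a qualifying court order.
RETENTION = {
    "offenders":       None,
    "suspects":        timedelta(days=365),
    "volunteers":      timedelta(days=90),
    "missing_persons": timedelta(days=5 * 365),
}


def should_destroy_sample(index_name):
    """Samples (as opposed to profiles) are destroyed immediately after profiling,
    except those in the crime scene index."""
    return index_name != "crime_scene"


def is_due_for_deletion(index_name, entered_on, today=None):
    """Return True if a record in the given index has outlived its retention period."""
    today = today or date.today()
    period = RETENTION.get(index_name)
    if period is None:
        return False  # removed only on a qualifying court order
    return today - entered_on > period


if __name__ == "__main__":
    print(should_destroy_sample("volunteers"))                                     # True
    print(is_due_for_deletion("volunteers", date(2015, 1, 1), date(2015, 6, 1)))   # True
```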

Chapter VI: Confidentiality of and Access to DNA Profiles, Samples, and Records

  • Clause 33: This provision lays down the cases and the persons to which information pertaining to DNA profiles, samples and records stored in the DNA Data Bank shall be made available. Specifically, 33(e) permits disclosure for the creation and maintenance of a population statistics Data Bank.

Comment:

  • This clause addresses disclosure of information in the DNA Data Bank, but does not directly address the use of DNA samples or DNA profiles. This allows for the possibility of re-use of samples and profiles.
  • There is no limitation on the information that can be disclosed. The clause allows for any information stored in the Data Bank to be disclosed for a number of circumstances/to a variety of people.
  • There is no authorization process for the disclosure of such information. Of the circumstances listed – an authorization process is mentioned only for the disclosure of information in the case of investigations relating to civil disputes or other civil matters with the concurrence of the court. This implies that there is no procedure for authorizing the disclosure of information for identification purposes in criminal cases, in judicial proceedings, for facilitating prosecution and adjudication of criminal cases, for the purpose of taking defence by an accused in a criminal case, and for the creation and maintenance of a population statistics Data Bank.

Recommendation:

  • The Bill should establish an authorization process for the disclosure of information stored in a data bank. This process must limit the disclosure of information to what is necessary and proportionate for achieving the requested purpose.
  • Clause 33(e) should be deleted as the non-consensual disclosure of DNA profiles for the study of population genetics is specifically illegal. The use of the database for statistical purposes should be limited to purposes pertaining to understanding effectiveness of the databank.
  • Clause 33(f) should be deleted as it is not necessary for DNA profiles to be stored in a database to be useful for civil purposes. Instead samples for civil purposes are only needed as per the relevant case and specified persons.
  • Clause 33(g) should be deleted as it allows the scope of cases in which DNA can be disclosed to be expanded as prescribed.
  • Clause 34: This clause allows for access to information for operation maintenance and training.
  • Comment: This clause would allow individuals in training access to data stored on the database for training purposes. This places the security of the databank and the data stored in the databank at risk.
  • Recommendation: Training of individuals should be conducted via simulation only.
  • Clause 35: This clause allows for access to information in the DNA Data Bank for the purpose of a one time keyboard search. A one time keyboard search allows for information from a DNA sample to be compared with information in the index without the information from the DNA sample being included in the index. The clause allows for an authorized individual to carry out such a search on information obtained from a DNA sample lawfully collected for the purpose of criminal investigation, except if the DNA sample was submitted for elimination purposes.
  • Comment: The purpose of this clause is unclear as is the scope. The clause allows for the sample to be compared against 'the index' without specifying which index. The clause also allows for 'information obtained from a DNA sample' rather than a profile.  Thus, the clause appears to allow for any information derived from a DNA sample collected for a criminal investigation to be compared against all data within the databank – without recording such information. Such a comparison is vast in scope and open to abuse.
  • Recommendation: To ensure that this provision is not used for conducting searches outside the scope of the original purpose, only DNA profiles, rather than 'information derived from a sample', should be allowed to be compared; only the indices relevant to the sample should be compared; and the search should be authorized and justified (a sketch of such a narrowed search follows below).
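A narrowed one-time keyboard search along the lines recommended above might look like the sketch below: the query is restricted to a DNA profile, it is compared only against named indices, the search requires an authorisation token, and the query profile is never written into any index. All names, data structures and the authorisation check are assumptions for illustration.

```python
def one_time_keyboard_search(query_profile, indices, data_bank, authorisation):
    """Compare a profile against the named indices without retaining it.

    `data_bank` maps index name -> {reference: profile}; `authorisation`
    stands in for whatever approval the Bill would require.
    """
    if not authorisation.get("approved"):
        raise PermissionError("search not authorised")

    matches = [
        (index, reference)
        for index in indices
        for reference, profile in data_bank.get(index, {}).items()
        if profile == query_profile
    ]
    # Crucially, query_profile is never added to data_bank.
    return matches


if __name__ == "__main__":
    bank = {"crime_scene": {"cs-9": (10, 12, 7)}}
    print(one_time_keyboard_search((10, 12, 7), ["crime_scene"], bank,
                                   {"approved": True, "case": "hypothetical"}))
```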
  • Clause 36: This clause addresses the restriction of access to information in the crime scene index if the individual is a victim of a specified offense or if the person has been eliminated as a suspect in an investigation.

Comment:

  • This clause only addresses restriction of access to the crime scene index and does not address restriction of access to other indices.
  • This clause only restricts access to the index for a certain category of individuals and for a specific status of a person. Oddly, the clause does not include authorization or rank as a means for determining or restricting access.

Recommendation:

  • This clause should be amended to lay down standards for restriction of access for all indices.
  • Access to all information in the databank should be restricted by default and permission should be based on authorization rather than category or status of individual.
  • Clause 38: This clause sets out a post-conviction right related to criminal procedure and evidence.

Comment: This clause would fundamentally alter the nature of India’s criminal justice system, which currently does not contain specific provisions for post-conviction testing rights.

Recommendation: This clause should be deleted and the issue of post-conviction rights related to criminal procedure and evidence referred to the appropriate legislation. Clause 38 is implicated by Article 20(2) of the Constitution of India and by section 300 of the CrPC. The principle of autrefois acquit that informs section 300 of the CrPC specifically deals with exceptions to the rule against double jeopardy that permit re-trials. [See, for instance, Sangeeta Mahendrabhai Patel (2012) 7 SCC 721.] The person must be duly accorded a right to know the authorized persons to whom information relating to his or her DNA profile contained in the offenders' index shall be communicated, and rules may provide for this. Alternatively, this right could be limited only to accused persons whose trial is still at the stage of production of evidence in the Trial Court. This suggestion is made because unless the right, as it currently stands, is limited in some manner, every convict with the means to engage a lawyer would ask for DNA analysis of the evidence in his or her case, flooding the system with useless requests and risking a breakdown of the entire machinery.

Chapter VII: Finance, Accounts, and Audit

Clause 39: This clause allows the Central Government to make grants and loans to the DNA Board after due appropriation by Parliament.

Comment: This clause allows the Central Government to grant and loan money to the DNA Board, but does not require any proof or justification for the sum of money being given.

Recommendation: This clause should require a formal cost benefit analysis, and financial assessment prior to the giving of any grants or loans.

Chapter VIII: Offences and Penalties

Chapter IX: Miscellaneous

Clause 53: This clause protects the Central Government and the Members of the Board from suit, prosecution, or other legal proceedings for actions that they have taken in good faith.

Comment: Though it is important to take into consideration whether an action has been taken in good faith, absolving the Government and the Board from accountability for their actions leaves the individual with little recourse. This is particularly true as the Central Government and the Board are given broad powers under the Bill.

Recommendation: If the Central Government and the Board are to be protected for actions taken in good faith, their powers should be limited. Specifically, they should not have the ability to widen the scope of the Bill.

Clause 57: This clause states that the Central Government will have the powers to make Rules for a number of defined issues.

Comment: 57(d) allows regulations to be created regarding the use of a population statistics Data Bank created and maintained for the purposes of identification research and protocol development or quality control.

Recommendation: 57(d) should be deleted, as the creation and maintenance of a population statistics Data Bank for the purposes of identification research and protocol development or quality control is beyond the scope of the Bill.

  • Clause 58: This clause empowers the Board to make regulations regarding a number of aspects related to the Bill.
  • Comment: There are a number of functions that the Board can make regulations for that should be defined within the Bill itself, to ensure that the scope of the Bill does not expand without Parliamentary oversight and approval.
  • Recommendation: 58(2)(g) should be deleted as it allows the Board to create regulations for other relevant uses of DNA techniques and technologies, 58(2)(u) should be deleted as it allows the Board to include new categories of indices to databanks, and 58(2) (aa) should be deleted as it allows the Board to decide which other indices a DNA profile may be compared with in the case of sharing of DNA profiles with foreign Governments, organizations, or institutions.

Clause 61: This clause states that no civil court will have jurisdiction to entertain any suit or proceeding in respect of any matter which the Board is empowered to determine and no injunction shall be granted.

Comment: This clause in practice will limit the recourse that individuals can take and will exclude the Board from the oversight of civil or criminal courts.

Recommendation: The power to collect, store and analyse human DNA samples has wide-reaching consequences for the people whose samples are being utilised for this purpose, especially if their samples are being labelled under specific indices such as the offenders' index. The individual should therefore have a right to approach a court of law to safeguard his or her rights. This provision barring the jurisdiction of the courts should therefore be deleted.

Schedule

  • Schedule A: The schedule relates to clause 33(f), which allows information in relation to DNA profiles, DNA samples, and records in a DNA Data Bank to be communicated, with the concurrence of the court, in cases of investigations relating to civil disputes or other civil matters or to the offences or cases listed in the schedule.

Comment: As 33(f) requires the concurrence of the court for disclosure of information, it is unclear what purpose the schedule serves. If the Schedule is meant to serve as a guide to the Court on appropriate instances for the disclosure of information stored in the DNA databank – the schedule is too general by listing entire Acts, while at the same time being too specific by naming specific Acts. Ideally, courts should use principles and the greater public interest to reach a decision as to whether or not disclosure of information in the DNA databank is appropriate. At a minimum these principles should include necessity (of the disclosure) and proportionality (of the type/amount of information disclosed).

Recommendation: As we recommended the deletion of clause 33(f) as it is not necessary to databank DNA profiles for civil purposes, the schedule should also be deleted.

  • Note: The schedule differs drastically from previous drafts and from the discussions held in the Expert Committee and the recommendations agreed upon. As per the Meeting Minutes of the Expert Committee meeting held on November 10th 2014: "The Committee recommended incorporation of the comments received from the members of the Expert Committee appropriately in the draft Bill...Point no. 1 suggested by Mr. Sunil Abraham in the Schedule of the draft Bill to define the cases in which DNA samples can be collected without consent by incorporating point no. 1 (i.e. 'Any offence under the Indian Penal Code, 1860 if it is listed as a cognizable offence in Part I of the First Schedule of the Code of Criminal Procedure, 1973')."

Download CIS submission here. See the cover letter here.

CIS Human DNA Profiling Bill 2015

by Prasad Krishna last modified Sep 02, 2015 05:04 PM

CIS_Human_DNA_Profiling_Bill_Comments.pdf — PDF document, 200 kB (204983 bytes)

Cover Letter for DNA Profiling Bill 2015

by Prasad Krishna last modified Sep 02, 2015 05:05 PM

CIS Cover Letter.pdf — PDF document, 105 kB (107663 bytes)

Data Flow in the Unique Identification Scheme of India

by Vidushi Marda last modified Sep 03, 2015 05:02 PM
This note analyses the data flow within the UID scheme and aims at highlighting vulnerabilities at each stage. The data flow within the UID Scheme can be best understood by first delineating the organizations involved in enrolling residents for Aadhaar. The UIDAI partners with various Registrars, usually a department of the central or state government, and some private sector agencies like LIC, through a Memorandum of Understanding for assisting with the enrollment process of the UID project.

Many thanks to Elonnai Hickok for her invaluable guidance, input and feedback


These Registrars then appoint Enrollment Agencies that enroll residents, at enrolment centres that they set up, by collecting the necessary data and sharing it with the UIDAI for de-duplication and issuance of an Aadhaar number. The data flow process of the UID is described below:[1]

Data Capture

  • Filling out an enrollment form – To enroll for an Aadhaar number, individuals are required to provide proof of address and proof of identity. These documents are verified by an official at the enrollment center.

Vulnerability: Though an official is responsible for verifying these documents, it is unclear how this verification is completed. It is possible for fraudulent proof of address and proof of identity to be verified and approved by this official.

  • The 'introducer' system: For individuals who do not have a Proof of Identity, Proof of Address etc the UIDAI has established an 'introducer' system. The introducer verifies that the individual is who they claim to be and that they live where they claim to live.

Vulnerability: This introducer is akin to the introducer concept in banking, except that here the introducer must be approved by the Registrar and need not know the person being enrolled. This leads to questions about the authenticity and validity of the data collected and verified by an 'introducer'. The Home Ministry indicated in 2012 that this must be reviewed.[2]

  • Categories of data for enrollment: The UIDAI has a standard enrollment form and list of documents required for enrollment. This includes: name, address, birth date, gender, proof of address and proof of identity. Some MoUs (Memoranda of Understanding) permit the Registrars to collect information in addition to what is required by the UIDAI. This could be any information the Registrar deems necessary for any purpose.

Vulnerability: The fact that a Registrar may collect any information it deems necessary and for any purpose leads to concerns regarding (1) informed consent, as individuals are placed in a position of having to provide this information because it is coupled with the Aadhaar enrollment process; (2) unauthorized collection: though the MoU between the UIDAI and the Registrar authorizes the Registrar to collect additional information, if the information is personal in nature and the Registrar is a body corporate it must be collected as per the Information Technology Rules, 2011 under section 43A, and it is unclear if Registrars that are body corporates are collecting data in accordance with these rules; and (3) misuse, since Registrars are permitted to collect any data they deem necessary for any purpose.[3]

  • Verification of Resident's Documents: True copies of original documents, after verification, are sent to the Registrar for "permanent storage".[4]

Vulnerability: It is unclear to what extent and in what form this storage takes place. There is no clarity on who is responsible for the data once collected, and the permissible uses of such data are also unclear. The contracts between the UIDAI and the Registrars claim that guidelines must be followed, while the guidelines state that "The documents are required to be preserved by Registrar till the UIDAI finalizes its document storage agency" and that the "Registrars must ensure that the documents are stored in a safe and secure manner and protected from unauthorized access."[5] The questions of what constitutes "unauthorized access" and "secure storage", when data is transferred to the UIDAI, and when and why the UIDAI will access it remain unanswered. Moreover, there is nothing about deleting documents once the MoU lapses. The guidelines in question were also developed post facto.

  • Data collection for enrollment: After verification of proof of address and proof of identity, data collection is completed by operators at the enrollment agency. This includes the digitization of enrollment forms and the collection of biometrics. Enrollment information is manually collected, entered into computers running software provided by the UIDAI, and then transferred to the UIDAI. Biometrics are collected through devices that have been provided by third parties such as Accenture and L-1 Identity Solutions.

Vulnerability: After data is collected by enrollment operators, it is possible for data leakage to occur at the point of collection or during transfer to the Registrar and the UIDAI. Data operators are, moreover, not answerable to the UIDAI but to a private agency; a fact which has been a cause of concern even within the government.[6] There have also been instances of sub-contracting, which leads to further complications in respect of accountability. Misuse[7] and loss of data are a very real possibility, and irregularities have been reported as well.[8] By relying on technology provided by third parties (in many cases foreign third parties), data collected by these devices is also available to these companies, while at the same time the companies are not regulated by Indian law.

  • Import pre-enrolment data into Aadhaar enrollment client, Syncing NPR/census data into the software: The National Population Register (NPR) enrolls usual residents, and is governed by the Citizenship Rules, which prescribe a penalty for non disclosure of information.

Vulnerability: Biometrics do not form part of the Rules that govern NPR data collection, the Citizenship Rules, 2003. In many ways, collection of biometrics without amending the citizenship laws amounts to a worrying situation. The NPR hands over the information it collects to the UIDAI, and biometrics collected as part of the UID process are included in the NPR, leading to concerns surrounding the legality and security of such data.

  • Resident’s consent: for “whether the resident has agreed to share the captured information with organizations engaged in delivery of welfare services.”

Vulnerability: This allows the UIDAI to use data in an almost unfettered fashion. The enrolment form reads, "I have no objection to the UIDAI sharing information provided by me to the UIDAI with agencies engaged in delivery of welfare services." This is a weak basis for informed consent: the clause is vague about what information would be shared and with whom. Why is it necessary for the UIDAI to share this information when the organization is only supposed to be a passive intermediary? This goes beyond the mandate of the UIDAI, which is only to provide and authenticate the number.

  • Biometric exceptions: The operator checks if the resident's eyes/hands are amputated/missing, and after the Supervisor verifies the same, the record is marked as an exception and only the individual's photograph is recorded.

Vulnerability: There has been widespread misuse of this clause, with data being fabricated to fall into this category, making it unreliable as a whole. In March 2013, 3.84 lakh numbers were cancelled as they were based on fraudulent use of the exception clause.[9]

  • Operator checks if resident wants Aadhaar enabled bank account: The UID project was touted as a scheme that would ensure access to benefits and subsidies provided through cash transfers, as well as enabling financial inclusion. Subsequently, an Aadhaar-linked bank account was made essential to avail of these benefits. The operator at this point checks whether the resident would like to open such a bank account.

Vulnerability: The data provided at the time of linking the UID with a bank account cannot be corrected or retracted. Although this was envisioned as a means of financial inclusion, it now poses a threat of exclusion.

  • Capturing biometrics- The UIDAI scheme includes assigning each individual a unique identification number after collecting their demographic and biometric information. One Time Passwords are used to manually override a situation in which biometric identification fails.[10] The UIDAI data collection process was revamped in 2012 to include best finger detection and multiple try method.[11]

Vulnerabilities: The collection process is not always accurate; in fact, 70% of the residents who enrolled in Salt Lake will have to re-enroll due to discrepancies at the time of enrollment.[12] Further, a large number of people in India are unable to give biometric information due to manual labour, cataracts, etc.

After such data is entered, the Operator shows such data to the Resident or Introducer or Head of the Family (as the case may be) for validation.

  • Operator Sign off – Each set of data needs to be verified by an Operator whose fingerprint is already stored in the system.

Vulnerability: Vesting the authority to sign off in an operator allows for signing off on inaccurate or fraudulent data. For example, the issuance of Aadhaar numbers to biometric exceptions highlights issues surrounding misuse and the unreliability of this function.[13]

After this, the enrolment operator gets the supervisor's sign-off for any exceptions that might exist, and the acknowledgement and consent for enrolment are stored. Any correction to specified data can be made within 96 hours.

Document Storage, Back up and Sync

After gathering and verifying all the information about the resident, the Enrolment Agency Operator will store photocopies of the documents of the resident. These Agencies also back up data "from time to time" (recommended to be twice a day), and maintain it for a minimum of 60 days. They also sync with the server every 7-10 days. (A sketch of this schedule, expressed as simple checks, follows the vulnerability note below.)

Vulnerability: The security implications of third-party operators storing information are greatly exacerbated by the fact that these operators use technology and devices from companies with close ties to intelligence agencies in other countries; L-1 Identity Solutions has close ties with America's CIA, Accenture with French intelligence, etc.[14]
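The operational rules stated above (backups roughly twice a day, local retention of at least 60 days, a server sync every 7 to 10 days) can be expressed as simple schedule checks, as in the hedged sketch below; the thresholds are taken from the text above, while the function and its field names are our own illustration.

```python
from datetime import datetime, timedelta

BACKUP_INTERVAL = timedelta(hours=12)   # "recommended to be twice a day"
LOCAL_RETENTION = timedelta(days=60)    # backups kept locally for a minimum of 60 days
SYNC_WINDOW = timedelta(days=10)        # sync with the server every 7-10 days


def schedule_status(last_backup, last_sync, oldest_backup, now=None):
    """Report which of the stated operational rules are overdue or satisfied."""
    now = now or datetime.now()
    return {
        "backup_overdue": now - last_backup > BACKUP_INTERVAL,
        "sync_overdue": now - last_sync > SYNC_WINDOW,
        "oldest_backup_past_minimum_retention": now - oldest_backup > LOCAL_RETENTION,
    }


if __name__ == "__main__":
    print(schedule_status(last_backup=datetime(2015, 9, 1, 8, 0),
                          last_sync=datetime(2015, 8, 20),
                          oldest_backup=datetime(2015, 6, 15),
                          now=datetime(2015, 9, 2, 9, 0)))
```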

Transfer of Demographic and Biometric Data Collected to CIDR

"First mile logistics" includes transferring data using the Secure File Transfer Protocol (SFTP) client provided by the UIDAI or through a "suitable carrier" such as India Post. (A hedged sketch of an SFTP upload appears after the vulnerability note below.)

Vulnerability: There is no engagement between the UIDAI and the enrolling agencies; the Registrars engage the private enrolment agencies, not the UIDAI. Further, the scope of people authorized to collect information, the information that can be collected, how such information is stored, etc., are all vague. In 2009, a notification claimed that the UIDAI owns the database,[15] but there is no indication of how it may be used or how instances of identity fraud would be dealt with.
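As an illustration of the SFTP leg of "first mile logistics", the sketch below uploads an already encrypted enrolment packet over SFTP using the third-party paramiko library. The host name, credentials and file paths are placeholders invented for this sketch; nothing here reflects the UIDAI's actual client, endpoints or packet format.

```python
import paramiko

# Placeholder values for illustration only; real endpoints and credentials
# would be issued by the receiving agency, not hard-coded.
HOST = "sftp.example.invalid"
USERNAME = "enrolment_agency"
KEY_PATH = "/path/to/private_key"          # hypothetical key file
LOCAL_PACKET = "enrolment_packet.enc"      # assumed to be encrypted before transfer
REMOTE_PATH = "/incoming/enrolment_packet.enc"


def upload_packet():
    """Upload one enrolment packet over SFTP and check the size on the server."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.RejectPolicy())  # in practice, pin the host key
    client.connect(HOST, username=USERNAME, key_filename=KEY_PATH)
    try:
        sftp = client.open_sftp()
        sftp.put(LOCAL_PACKET, REMOTE_PATH)
        remote_size = sftp.stat(REMOTE_PATH).st_size
        print(f"uploaded {LOCAL_PACKET}: {remote_size} bytes on server")
    finally:
        client.close()


if __name__ == "__main__":
    upload_packet()
```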

Data De-duplication and Aadhaar Generation at CIDR

On receiving biometric information, de-duplication is carried out to ensure that each individual is issued only one UID number. (A minimal sketch of such a check follows the vulnerabilities below.)

Vulnerability:

  • This de-duplication is carried out by private companies, some of which are not of Indian origin and thus are not bound by Indian law. Also, the volume of Aadhaar numbers rejected due to quality or technical reasons is a cause for worry, the count reaching 9 crores in May 2015.[16]
  • The MoUs promise Registrars access to information contained in the Aadhaar letter, although individuals are assured that such letters are sent only to them.[17]
  • General compliance and de-duplication has been an issue, with over 34,000 people being issued more than one Aadhaar number,[18] and innumerable examples of faulty Aadhaar cards being issued.[19]
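De-duplication is, at its core, a comparison of a newly submitted biometric template against the gallery of templates already enrolled, with a sufficiently close match blocking the issuance of a second number. The minimal sketch below uses a cosine-similarity comparison over feature vectors purely to illustrate this idea; the actual matchers, thresholds and data formats used at the CIDR are not public in this form and are not represented here.

```python
import math

MATCH_THRESHOLD = 0.98  # hypothetical similarity threshold


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def is_duplicate(new_template, enrolled_templates):
    """Return the UID of an existing enrolment if the new template matches one,
    otherwise None (meaning a fresh number may be issued)."""
    for uid, template in enrolled_templates.items():
        if cosine_similarity(new_template, template) >= MATCH_THRESHOLD:
            return uid
    return None


if __name__ == "__main__":
    gallery = {"xxxx-0001": [0.11, 0.93, 0.40], "xxxx-0002": [0.72, 0.15, 0.60]}
    print(is_duplicate([0.11, 0.92, 0.41], gallery))  # matches xxxx-0001 in this toy example
```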

[1] Enrolment Process Essentials : UIDAI , (December 13,2012), http://nictcsc.com/images/Aadhaar%20Project%20Training%20Module/English%20Training%20Module/module2_aadhaar_enrolment_process17122012.pdf

[2] UIDAI to review biometric data collection process of 60 crore resident Indians: P Chidambaram, Economic Times, (Jan 31, 2012), http://articles.economictimes.indiatimes.com/2012-01-31/news/31010619_1_biometrics-uidai-national-population-register.

[3]See: an MoU signed between the UIDAI and the Government of Madhya Pradesh. Also see: Usha Ramanathan, “States as handmaidens of UIDAI”, The Statesman (August 8, 2013).

[4]http://nictcsc.com/images/Aadhaar%20Project%20Training%20Module/English%20Training%20Module/module2_aadhaar_enrolment_process17122012.pdf

[5] Document Storage Guidelines for Registrars – Version 1.2, https://uidai.gov.in/images/mou/D11%20Document%20Storage%20Guidelines%20for%20Registrars%20final%2005082010.pdf

[6] Arindham Mukherjee, Lola Nayar, Aadhaar,A Few Basic Issues, Outlook India, (December 5, 2011), http://dataprivacylab.org/TIP/2011sept/India4.pdf.

[7] Aadhaar: UIDAI probing several cases of misuse of personal data, The Hindu, (April 29, 2012), http://www.thehindubusinessline.com/economy/aadhar-uidai-probing-several-cases-of-misuse-of-personal-data/article3367092.ece.

[8] Harsimran Julka, UIDAI wins court battle against HCL technologies, The Economic Times, (October 4, 2011), http://articles.economictimes.indiatimes.com/2011-10-04/news/30242553_1_uidai-bank-guarantee-hp-and-ibm.

[9] Chetan Chauhan, UIDAI cancels 3.84 lakh fake Aadhaar numbers, The Hindustan Times, (December 26, 2012), http://www.hindustantimes.com/newdelhi/uidai-cancels-3-84-lakh-fake-aadhaar-numbers/article1-980634.aspx.

[10] Usha Ramanathan, “Inclusion project that excludes the poor”, The Statesman (July 4, 2013).

[11] UIDAI to Refresh Data Collection Process, Zee News, (February 7, 2012) http://zeenews.india.com/news/delhi/uidai-to-refresh-data-collection-process_757251.html.

[12] Snehal Sengupta, Queue up again to apply for Aadhaar, The Telegraph, (February 27, 2015), http://www.telegraphindia.com/1150227/jsp/saltlake/story_5642.jsp#.VayjDZOqqko

[13] Chauhan, supra note 9.

[14] Usha Ramanathan, Three Supreme Court Orders Later, What’s the Deal with Aadhaar? Yahoo News, (April 13, 2015), https://in.news.yahoo.com/three-supreme-court-orders-later--what-s-the-deal-with-aadhaar-094316180.html.

[15] Usha Ramanathan, "Threat of Exclusion and of Surveillance", The Statesman (July 2, 2013).

[16] Over 9 Crore Aadhaar enrolments rejected by UIDAI, Zee News (May 8, 2015).

[17] Usha Ramanathan, “States as handmaidens of UIDAI”, The Statesman (August 8, 2013).

[18] Surabhi Agarwal, Duplicate Aadhar numbers within estimate, Live Mint (March 5, 2013).

[19] Usha Ramanathan, “Outsourcing enrolment, gathering dogs and trees”, The Statesman (August 7, 2013).

The seedy underbelly of revenge porn

by Prasad Krishna last modified Sep 27, 2015 02:25 PM
Intimate photos posted by angry exes are becoming part of an expanding online body of dirty work.

The article by Sandhya Soman was published in the Times of India on August 23, 2015.


Three lakh 'Likes' aren't easy to come by. But Geeta isn't gloating. She's livid, and waiting for the day a video-sharing site will take down the popular clip of her having sex with her vengeful ex-husband. "Every other day somebody calls or messages to say they've seen me," says Geeta.

She is not alone. Two weeks ago, law student Shrutanjaya Bhardwaj Whatsapped women he knew asking if any of them had come across cases of online sexual harassment. In a few hours, his phone was filled with tales of harassment by ex-boyfriends and strangers. Instances ranged from strangers publishing morphed photographs on Facebook, to ex-husbands and boyfriends circulating intimate photos and videos on porn sites. Of the 40 responses, around 25 were cases of abuse by former partners. "I have heard friends talking about the problem, but never realized it was this bad," says Bhardwaj.

These days, revenge is best served online - it travels faster and has potential for greater damage. But despite the widespread nature of the crime, many targets hesitate to complain for fear of being shamed and blamed. "A 15-year-old girl is going to worry about how her parents will react if she talks about it," says Chinmayi Arun, research director, Centre for Communication Governance at Delhi National Law University. There is also fear of harassment by the police, says Rohini Lakshane, researcher, Centre for Internet and Society. Worst of all is the waiting. "Even if a police complaint is filed, it takes ages to find out who shot it, who uploaded it and where it is circulated. Such content is mirrored across many sites," she says.

Geeta is familiar with the routine. Her harassment started with photographs sent to family, friends and colleagues. After an acrimonious divorce, several videos were released in 2013. "There were some 25-30 videos on various sites.

After an FIR was filed, the police wrote to websites and some of the links were removed," says Geeta, who has been flagging content on a popular site, which has not yet responded to her privacy violation report. "My face is seen clearly on it. People even come up to me in restaurants saying they've seen it. How do I get on with my life?" asks a distraught Geeta. She also recently filed an affidavit supporting the controversial porn ban PIL in a last-ditch effort to erase the abuse that began after her divorce.

The cyber cell officer in charge of her case says he had got websites to shut down several URLs but was thwarted by the repeal of section 66A of the IT Act that dealt with offensive messages sent electronically. When asked why section 67 (cyber pornography) of the same act and various sections in the criminal law couldn't be used, the officer says that only 66A is applicable to the evidence he has. "I asked for more links and she sent them to me. We'll see if other sections can be applied," he says. Lawyers and activists argue that existing laws, like sections 354A (sexual harassment), 354C (voyeurism), 354D (stalking) and 509 (outraging modesty) of the IPC, are good enough.

Though there are no official statistics for what is popularly referred to as 'revenge' porn, there is a flood of such images online. Lakshane, who studied consent in amateur pornography for the NGO-run EroTICs India project in 2014, found everything from clandestinely shot clips to exhibitionist ones where faces are blurred or cropped.

Social activist Sunita Krishnan has raised the red flag over several video clips, including two that show gang rape, which were circulated on Whatsapp. Some of the content she came across showed familiarity between the man and woman, indicating an existing relationship. In one clip, the man says: "How dare you go with that fellow. What you did it to him, do it to me."

Most home-grown clips end up on desi sites with servers abroad, making it difficult to take down content. Some do have a policy of asking for consent of people in the frame. But Lakshane, who wanted to test this policy, says when she approached one website that has servers abroad saying that she had a sexually explicit video, the reply was a one-liner asking her to send it. "They didn't ask for any consent emails," she says. In lieu of payment, they offered her a free account on another file-sharing site, which seemed to partner with the site. With no financial links to those submitting videos, sites like these make money out of subscriptions from consumers, or ads.

A few months ago, the CBI arrested a man from Bengaluru for uploading porn clips, using high-end editing software and cameras. Kaushik Kuonar allegedly headed a syndicate and was supposed to be behind the rape clips reported by Krishnan. "I am skeptical of the idea of amateur porn being randomly available across the Internet. There seem to be people like the man in Bengaluru who are apparently sourcing, distributing and making money out of it," says Chinmayi Arun. "He had 474 clips, including some of rape," adds Krishnan.

Social media companies, meanwhile, say they're working with authorities to prevent such violations. A Facebook spokesperson says the company removes content that violates its community standards. It also works with the women and child development ministry to help women stay safe online. Google, Microsoft, Twitter and Reddit have promised to remove links to revenge porn on request, while countries like Japan and Israel have made it illegal.

In India, the National Commission for Women started a consultation on online harassment but is yet to submit a report. In the absence of clarity, activists like Krishnan endorse the banning of porn sites. Not all agree with sweeping solutions. Lakshane says sometimes a court order helps to get tech companies to act faster on requests as in the case of a 2012 sex tape scandal where Google removed search results to 360 web pages. Also, the term 'revenge' porn, she says, is a misnomer as the videos are meant to shame women. "These are not movies where actors get paid. Somebody else is making money off this gross violation of privacy."

Human DNA Profiling Bill 2012 v/s 2015 Bill

by Vanya Rakesh last modified Sep 06, 2015 02:10 PM
This entry compares the provisions of the Human DNA Profiling Bill introduced in 2012 with those of the 2015 Bill

A comparison of changes that have been introduced in the Human DNA Profiling Bill, June 2015.

  • Definitions:

1. 2012 Bill: The definition of "analytical procedure" was included under clause 2 (1) (a) and was defined as an orderly step by step procedure designed to ensure operational uniformity.

2015 Bill: This definition has been included under the Explanation under clause 22 which provides for measures to be taken by DNA Laboratory.

2. 2012 Bill: The definition of "audit" was earlier defined under clause 2 (1) (b) and was defined as an inspection used to evaluate, confirm or verify activity related to quality.

2015 Bill: This definition has been included under the Explanation under clause 22 which provides for measures to be taken by DNA Laboratory.

3. 2012 Bill: There was no definition of "bodily substance".

2015 Bill: Clause 2(1) (b) defines bodily substance to be any biological material of or from a body of the person (whether living or dead) and includes intimate/non-intimate body samples as well.

4. 2012 Bill: The definition of "calibration" was included under clause 2 (1) (d) in the previous Bill.

2015 Bill: The definition has been removed from the definition clause and has been included as an explanation under clause 22.

5. 2012 Bill: Previously "DNA Data Bank" was defined under clause 2(1)(h) as a consolidated DNA profile storage and maintenance facility, whether in computerized or other form, containing the indices as mentioned in the Bill.

2015 Bill: However, in this version, the definition has been shortened under clause 2(1) (f) to mean a DNA Data Bank established under clause 24.

6. 2012 Bill: Previously, a "DNA Data Bank Manager" was defined under clause 2(1) (i) as the person responsible for supervision, execution and maintenance of the DNA Data Bank.

2015 Bill: In the new Bill, it is defined under clause 2(1) (g) as a person appointed under clause 26.

7. 2012 Bill: Under clause 2(1) (j), the definition of "DNA laboratory" was defined to be any laboratory established to perform DNA procedures.

2015 Bill: Under clause 2(1) (h), "DNA laboratory" is now defined as any laboratory established to perform DNA profiling.

8. 2012 Bill: "DNA procedure" was defined under clause 2(1) (k) as a procedure to develop a DNA profile for use in the applicable instances as specified in the Schedule.

2015 Bill: This definition has been removed from the Bill.

9. 2012 Bill: There was no definition of "DNA Profiling".

2015 Bill: DNA profiling has been defined under clause 2(1) (j) as a procedure to develop a DNA profile for human identification.

10. 2012 Bill: "DNA testing" was defined under clause 2(1) (n) as the identification and evaluation of biological evidence using DNA technologies for use in the applicable instances.

2015 Bill: This definition has been removed.

11. 2012 Bill: "forensic material" was defined under clause 2(1) (o) as biological material of or from the body of a person, living or dead, and representing an intimate body sample or non-intimate body sample.

2015 Bill: This definition has been included under the definition of "bodily substance" under clause 2(1) (b).

12. 2012 Bill: "intimate body sample" was defined under clause 2(1) (q).

2015 Bill: This has been removed from the definitions clause and has been included as an explanation under clause 23 which addresses sources and manner of collection of samples for DNA profiling.

13. 2012 Bill: "intimate forensic procedure" was defined under 2(1) (r).

2015 Bill: This has been removed from the definitions clause and has been included as an explanation under clause 23 which addresses sources and manner of collection of samples for DNA profiling.

14. 2012 Bill: "non-intimate body sample" was defined under clause 2(1) (v) in the 2012 Bill.

2015 Bill: The definition of "non-intimate body sample" has not been included in the definitions clause and has been included as an Explanation under clause 23 which addresses sources and manner of collection of samples for DNA profiling.

15. 2012 Bill: "non-intimate forensic procedure" was defined under clause 2(1) (w) in the 2012 Bill.

2015 Bill: The definition of "non-intimate forensic procedure" has not been included in the definitions clause and has been included as an Explanation under clause 23 which addresses sources and manner of collection of samples for DNA profiling.

16. 2012 Bill: "undertrial" was defined under clause 2(1) (zk) as a person against whom a criminal proceeding is pending in a court of law.

2015 Bill: The definition now covers a person against whom charges have been framed for a specified offence in a court of law, under clause 2(1) (zc).

  • DNA Profiling Board:

1. 2012 Bill: Under clause 4 (a), the Bill stated that a renowned molecular biologist must be appointed as the Chairperson.

2015 Bill: Under clause 4 addressing Composition of the Board, the Bill states that the Board shall consist of a Chairperson who shall be appointed by the Central Government and must have at least fifteen years' experience in the field of biological sciences.

2. 2012 Bill: Under clause 4 (i), the Chairman of National Bioethics Committee of Department of Biotechnology, Government of India was to be included as a member under the DNA Profiling Board.

2015 Bill: This member has been removed from the composition.

3. 2012 Bill: Under clause 4 (m), the minimum experience required of the person from the field of genetics was not specified in the 2012 Bill.

2015 Bill: In this Bill under clause 4 (m), it has been stated that such a person must have minimum experience of twelve years in the field.

4. 2012 Bill: The minimum experience required of the 2 people from the field of biological sciences was not specified in the 2012 Bill under clause 4 (l).

2015 Bill: Under clause 4 (l), it has been stated that such 2 people must have minimum experience of twelve years in the field.

5. The following members have been included in the 2015 Bill-

i. Chairman of National Human Rights Commission or his nominees, as an ex-officio member under clause 4 (a).

ii. Secretary to Government of India, Ministry of Law and Justice or his nominees (not below rank of Joint Secretary), as an ex-officio member under clause 4 (b).

6. 2012 Bill: Under clause 5, the term of the members was not uniform and varied for all members.

2015 Bill: The term of the people from the field of biological sciences and the person from the field of genetics has been stated to be five years from the date of their entering upon office, and they would be eligible for re-appointment for not more than 2 consecutive terms.

Also, the age of a Chairperson or a member cannot exceed seventy years.

The term of members under clauses (c), (f), (h), and (i) of clause 4 is 3 years and for others the term shall continue as long as they hold the office.

  • Chief Executive Officer:

2012 Bill: Earlier it was stated in the Bill under clause 10 (3) that such a person should be a scientist with understanding of genetics and molecular biology.

2015 Bill: The Bill states under clause 11 (3) that the CEO shall be a person possessing qualifications and experience in science or as specified under regulations. The specific experience has been removed.

A new clause, 12(5), addresses the power of the Board to co-opt persons to attend its meetings and take part in its proceedings; however, such a person shall not have voting rights. Also, such a person shall be entitled to specified allowances for attending the meetings.

  • Officers and Other Employees of Board:

2012 Bill: The Bill stated under clause 11 (3) that the Board may appoint consultants required to assist in the discharge of its functions on such terms and conditions as may be specified by the regulations.

2015 Bill: The 2015 Bill states under clause 12 (3) that the Board may appoint experts to assist in discharging its functions and may hold consultations with people whose rights may be affected by DNA profiling.

  • Functions of the Board:

2012 Bill: 26 functions were stated in the 2012 Bill.

2015 Bill: The number of the functions has been reduced to 22 with a few changes based on recommendations of Expert Committee.

  • Power of Board to withdraw approval:

2015 Bill: The circumstances in which the Board could withdraw its approval have not been changed from the 2012 Bill (previously under clause 16). There is an addition to the list, provided under clause 17 (1) (d), wherein the Board can also withdraw its approval in case the DNA laboratory fails to comply with any directions issued by the DNA Profiling Board or any such regulatory Authority under any other Act.

  • Obligations of DNA Laboratory:

2015 Bill: There is an addition to the list of obligations to be undertaken by a DNA laboratory under clause 19 (d). The laboratory has an additional obligation to share the DNA data prepared and maintained by it with the State DNA Data Bank and the National DNA Data Bank.

  • Qualification and experience of Head, technical and managerial staff and employees of DNA Laboratory:

2012 Bill: The previous Bill clearly mandated under clause 19 (2) the qualifications of the Head of every DNA laboratory to be a person possessing educational qualifications of Doctorate in Life Sciences from a recognised University with knowledge and understanding of the foundation of molecular genetics as applied to DNA work and such other qualifications as may be specified by regulations made by the Board.

2015 Bill: The provision has been generalized and provides under clause 20 (1) for a person to possess the specified educational qualifications and experience.

  • Measures to be taken by DNA Laboratory:

2012 Bill: In the previous Bill, there were separate clauses with regard to security, minimization of contamination, evidence control system, validation process, analytical procedure, equipment calibration and maintenance, audits of laboratory to be followed by a DNA Laboratory.

2015 Bill: In the 2015 Bill, these measures to be adopted by a DNA Laboratory have been included under a single clause, clause 22.

  • Infrastructure and training:

2012 Bill: The specific provisions regarding infrastructure, fee, recruitment, training and installing of security system in the DNA Laboratory were present in the Bill under clauses 28-31.

2015 Bill: These provisions have been removed from the 2015 Bill.

  • Sources and manner of collection of samples for DNA profiling:

2012 Bill: Part II of the Schedule in the Bill provided for sources and manner of collection of samples for DNA Profiling.

The sources included tissue and skeletal remains, and already preserved body fluids and other samples.

Also, it provided for a list of the manner in which the profiling can be done:

(1) Medical Examination (2) Autopsy examination (3) Exhumation

Also, provision for collection of intimate and non-intimate body samples was provided as an Explanation.

2015 Bill: Under Clause 23, the sources include bodily substances and other sources as specified in Regulations. The other sources remain unchanged.

Also, provision for collection of intimate and non-intimate body samples is addressed in clause 23(2).

The explanation to the provision states what would be implied by the terms medical practitioner, intimate body sample, intimate forensic procedure, non-intimate body sample and non-intimate forensic procedure.

  • DNA Data Bank:

- Establishment:

2012 Bill: The Bill did not specify any location for establishment of the National DNA Data Bank.

2015 Bill: The Bill states under clause 24 (1) that the Central Government shall establish a National DNA Data Bank at Hyderabad.

-Maintenance of indices of DNA Data Bank:

2012 Bill: Under clause 32(6)(a), apart from the DNA profiles, every DNA Data Bank shall contain the identity of the person from whose body the substances were taken, in the case of a profile in the offenders' index.

2015 Bill: Clause 25(2)(a) states that the DNA Data Bank shall contain this identity for profiles in the suspects' or offenders' index.

  • DNA Data Bank Manager:

2012 Bill: The Bill stated under clause 33(1) that a DNA Data Bank Manager shall be appointed to conduct all operations of the National DNA Data Bank; the functions were not specified.

2015 Bill: The Bill states specifically, under clause 26(1), that a DNA Data Bank Manager shall be appointed for the execution, maintenance and supervision of the National DNA Data Bank.

- Qualification:

2012 Bill: In the previous Bill, it was stated under clause 33 (3) that the DNA Data Bank Manager must be a scientist with understanding of computer applications and statistics.

2015 Bill: The Bill states under clause 26(2) that the DNA Data Bank Manager must possess an educational qualification in science and such experience as may be prescribed by the regulations.

  • Officers and other employees of the National DNA Data Bank:

2012 Bill: The Bill stated under clause 34 (3) that the Board may appoint consultants required to assist in the discharge of the functions of the DNA Data Banks.

2015 Bill: The Bill provides under clause 27(3) that the Board may appoint experts required to assist in the discharge of the functions of the DNA Data Banks.

  • Comparison and Communication of DNA profiles:

2015 Bill: The new Bill specifically addresses the comparison and communication of DNA profiles, such as those in the offenders' or crime scene index, under clause 28(1). There is also an additional provision, clause 29(3), which states that the National DNA Data Bank Manager may, on the request of a court, tribunal, law enforcement agency or DNA laboratory, communicate a DNA profile through the Central Bureau of Investigation to the government of a foreign state, an international organisation or an institution of government.

  • Use of DNA profiles and DNA samples and records:

2012 Bill: The Bill provided under clause 39 that all DNA profiles, samples and records would be used solely for the purpose of facilitating identification of the perpetrator of an offence listed under the Schedule. The proviso to this provision allowed such samples to be used to identify victims of accidents or disasters, missing persons, or for the purposes of a civil dispute.

2015 Bill: Under clause 32, the Bill restricts the use of all DNA profiles, samples and records solely to facilitating the identification of a person under the Act.

  • DNA Profiling Board Fund:

2012 Bill: The Bill stated under clause 47 (2) that the financial power for the application of monies of the Fund shall be delegated to the Board in such manner as may be prescribed and as may be specified by the regulations made by the Board.

Also, the Bill stated that the Fund shall be applied for meeting remuneration requirements to be paid to the consultants under clause 47 (3) (c).

2015 Bill: This provision has not been included in the Bill. Also, the Bill does not include the provision of paying the remuneration to the experts from the Fund.

  • Delegation of Powers:

2012 Bill: The Bill provided under clause 61 that the Board may delegate its powers and functions to the Chairperson or any other Member or officer of the Board, subject to such conditions as may be necessary.

2015 Bill: This provision has not been included in the 2015 Bill.

  • Powers of Board to make rules:

2012 Bill: The Bill provided for an exhaustive list consisting of 33 powers listed under clause 65.

2015 Bill: The Bill provides for a list of 27 powers of the Board under clause 57.

  • Schedule:

2012 Bill: The list of offences to which human DNA profiling would be applicable included offences under any other law as may be specified by the regulations made by the Board.

2015 Bill: This provision has been removed from the 2015 Bill.

Responsible Data Forum: Discussion on the Risks and Mitigations of releasing Data

by Vanya Rakesh last modified Sep 06, 2015 02:29 PM

The Responsible Data Forum hosted a discussion on 26 August 2015 on the risks and mitigations of releasing data.

The discussion centred on whether adequate measures are adopted to mitigate risks to people and communities when data is prepared for release or sharing.

The discussion covered the following concerns:

  • What is risk? Risks in releasing development data and PII
  • What kinds of risks are there?
  • Risk to whom?
  • Risks in dealing with PII, discussed by way of several examples
  • What is missing from the world?

The starting point is responsibility: whoever creates a dataset is responsible for ensuring that no harm comes to the people connected to it, and a balance must be struck between putting the data to good use on the one hand and protecting data subjects, sources and managers on the other.

Risk was defined as the “probability of something happening multiplied by the resulting cost or benefit if it does” (Oxford English Dictionary). It therefore involves a cost or benefit, a probability, and a subject. When assessing probability, all possible risks must be considered in terms of how much harm could result and how likely that harm is to occur.
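As a rough illustration of that definition, the sketch below scores a few invented release scenarios as probability multiplied by impact; the scenarios, probabilities and impact scores are hypothetical and only show the shape of the calculation, not figures from the forum.

```python
# Minimal sketch of the "probability x impact" heuristic described above.
# All scenarios and numbers are invented for illustration only.
scenarios = {
    "re-identification of a data subject": (0.30, 8),   # (probability, impact on a 0-10 scale)
    "release of an aggregate statistic":   (0.05, 2),
    "leak of raw PII":                     (0.10, 10),
}

for name, (probability, impact) in scenarios.items():
    expected_harm = probability * impact
    print(f"{name}: expected harm = {expected_harm:.2f}")
```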

One example discussed was that of Syria, where bakeries were targeted by the government because the bombers knew where the bakeries were, making them easy targets. Against this backdrop, it was discussed how local context is an important consideration in any secure data release mechanism.

Another example of bad practice was the leak of information in the Ashley Madison case, which was reportedly followed by several suicides.

  • Kinds of risk:
  1. physical harm:

The next point of discussion concerned the kinds of physical risks data subjects face when data relating to them is released or shared. These included:

  1. security issues
  2. hate speech
  3. voter issues
  4. police action

PII risk thus cuts both ways: some run the risk of being identified from the PII itself, while others run the risk of being identified as the releaser of the information.

  2. Legal harms: to illustrate the legal harms posed by releasing or sharing data, an example was discussed of an image-marking exercise around a military camp, in which people joined in, marked military equipment, and participants from that country could then be identified.
  3. Reputational harm, primarily to the organization.
  4. Privacy breach, which can lead to all sorts of harms.
  • Risk to whom?

Those at risk include the data subjects, and also:

  1. data collectors
  2. the data processing team
  3. the person releasing the data
  4. the person using the data

The likelihood of risk ranges from low through medium to high; at worst, the community as a whole is at risk.

  • PII:

- PII is any data which can be used to identify a specific individual. It includes not only names, addresses and phone numbers, but also data sets that do not identify an individual on their own yet can do so when combined with other information.

For example, in some places a social security number must be shared for an HIV status check-up, so one needs to be aware of the wider environment of data sets the information feeds into. Similarly, where a population is small and people of a street, village or town are identified by religion, even such a data set can put them at risk.

Awareness of the demographics is therefore important: ascertain how many people live in the place, understand the environment, and decide accordingly what data set should be created.
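One way such awareness can be operationalised before release is sketched below: count how often each combination of quasi-identifiers occurs and flag combinations that single out one person. This is my own illustration of a k-anonymity-style check, not something prescribed in the forum; the records and field names (village, religion, age band) are invented.

```python
# Minimal sketch of a pre-release check on quasi-identifiers.
# Records and field names are invented for illustration only.
from collections import Counter

records = [
    {"village": "A", "religion": "X", "age_band": "30-40"},
    {"village": "A", "religion": "X", "age_band": "30-40"},
    {"village": "A", "religion": "Y", "age_band": "60-70"},  # a unique combination
]

quasi_identifiers = ("village", "religion", "age_band")
counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)

k = min(counts.values())  # smallest group size, i.e. the data set's k-anonymity
singletons = [combo for combo, n in counts.items() if n == 1]
print(f"k = {k}; combinations that identify exactly one person: {singletons}")
```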

- Another way to mitigate risks at the time of release or sharing is partial release to limited groups only, such as academic researchers or the data subjects themselves.

- Different examples were discussed to show how irresponsible release of data has affected data subjects, and why work is needed to mitigate the harms caused in such cases.

For example, in the New York City taxi case, data about every taxi ride was released, including pickup and drop-off locations, times and fares. The release becomes especially problematic when trips can be linked to sensitive destinations such as strip clubs: re-identification becomes possible, and people need protection against such insinuations.

This shows how data sets can lead to re-identification even when identification was never intended. The actors involved must therefore understand their responsibilities when collecting or releasing data, and mitigate the associated risks accordingly.
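The taxi example is essentially a linkage attack: an "anonymised" record is joined with auxiliary knowledge about where and when someone was. The sketch below uses invented records, zones and timestamps purely to illustrate that joining step; it is not drawn from the actual New York data.

```python
# Minimal sketch of a linkage attack on a released trips table.
# All records, zones and timestamps are invented for illustration only.
trips = [
    {"pickup_time": "2014-03-01 23:10", "pickup_zone": "Zone-12",
     "dropoff_zone": "Zone-47", "fare": 18.5},
    {"pickup_time": "2014-03-01 23:10", "pickup_zone": "Zone-03",
     "dropoff_zone": "Zone-05", "fare": 7.0},
]

# Auxiliary knowledge: a photo or sighting places a known person at Zone-12 at 23:10.
sighting = {"pickup_time": "2014-03-01 23:10", "pickup_zone": "Zone-12"}

matches = [t for t in trips
           if t["pickup_time"] == sighting["pickup_time"]
           and t["pickup_zone"] == sighting["pickup_zone"]]
print("Re-identified trip(s):", matches)
```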

- A concern was raised over the collection and processing of information about genetic diseases in a small population, since in practice it is not possible to guarantee that the data will not be released, exposed or re-identified. Even with experts' best efforts, people cannot realistically be promised that they will never be identified, so informing them of these risks is crucial. One suggested way of mitigating risk is to involve the people concerned and let them know; awareness of the potential impact of a data breach or of identification is very important.

- Another factor is the context in which the information was collected, which can change over time. For example, many human rights funders want information on their websites changed or removed as contexts, circumstances and situations shift; here too, the risks of collecting and releasing data are shaped by the changing context.

  • What is missing from the world?

Though risks are being recognised, and that recognition is an ongoing process, what is missing are uniform guidelines, rules or laws. There are no common policies for informed consent or for mitigating risks collectively in a uniform manner. The principles of necessity, proportionality and informed consent must be adopted.

Connected Choices

by Melissa Hathaway — last modified Sep 09, 2015 01:26 AM

Modern societies are in the middle of a strategic, multi-dimensional competition for money, power and control over all aspects of the Internet and the Internet economy. Ms. Hathaway will discuss the increasing pace of discord and the competing interests that are unfolding in the current debate concerning the control and governance of the Internet and its infrastructure. Some countries are more prepared for and committed to winning tactical battles than others on the road to asserting themselves as an Internet power. Some are acutely aware of what is at stake; the question is whether they will be the master or the victim of these multi-layered power struggles as subtle and not-so-subtle connected choices are being made. Understanding this debate requires an appreciation of the entangled economic, technical, regulatory, political, and social interests implicated by the Internet. Those states that are prepared for and understand the many facets the Internet presents will likely end up on top.

Anonymity in Cyberspace

by Sunil Abraham last modified Sep 09, 2015 01:31 AM

While security threats require one to be identifiable in cyberspace, the need for privacy and for freedom of speech without being targeted calls for means of anonymous browsing and the ability to express oneself without being identified. Where do we draw the line, and how do we strike the balance? The group will dwell on the need for anonymity with respect to various actors such as governments, commercial entities and employers. Apart from security and privacy, the presentation will also cover social and technological perspectives.

DIDP Request #11: NETmundial Principles

by Aditya Garg — last modified Sep 14, 2015 03:08 PM
The Centre for Internet & Society (CIS) followed up on the implementation of the NETmundial Principles that ICANN has been endorsing by sending them a second request under their Documentary Information Disclosure Policy. This request and their response have been described in this blog post.

22 July 2015

To:

Mr. Fadi Chehade, CEO and President

Mr. Steve Crocker, Chairman of the Board

Mr. Cherine Chalaby, Chair, Finance Committee of the Board

Mr. Xavier Calvez, Chief Financial Officer

Sub: Details of documents within ICANN regarding implementation of NETmundial Principles and documents modified within ICANN as a result of the same

It is our understanding that ICANN is one of the founding members of the NETmundial Initiative and has hence been credited in public forums for championing the Initiative.[1]

Mr. Fadi Chehade, CEO and President of ICANN, has maintained that it is time for the global community to act and implement the Principles set forth in the initiative.[2]

ICANN itself, in response to one of our earlier requests, has acknowledged that "NETmundial Principles are high-level statements that permeate through the work of any entity – particularly a multistakeholder entity like ICANN."[3]

We, therefore, request all existing documents within ICANN which represent its efforts to implement the NETmundial Principles in its working. Additionally, we would also like to request all the documents which were modified as a result of ICANN’s support of the NETmundial Initiative, highlighting the modifications so made.

We look forward to the receipt of this information within the stipulated period of 30 days. Please feel free to contact us in the event of any doubts regarding our queries.

Thank you very much.

Warm regards,
Aditya Garg,
1st Year, National Law University, Delhi for Centre for Internet & Society

ICANN Response

ICANN in their response pointed to an earlier DIDP request that we had sent in, and replied along the same lines. They brought to our attention that ICANN is not responsible for the implementation of the NETmundial Principles, despite being one of the founding members of the Initiative. They reiterated their earlier statement that ICANN is not the “…home for implementation of the NETmundial Principles or the evolution of multistakeholder participation in Internet governance.” They have failed to provide us with documentary proof of the implementation of these principles, and have only pointed to statements indicating a potential prospective adoption of the said initiative[4]; the responses have been near identical to those for the earlier DIDP request, which you can find here.

Further, ICANN claims that the information we seek falls within the scope of the exceptions to disclosure they lay down, as it is not within their operational activities, an explanation that fails to satisfy us. As always, they have used the wide scope of their non-disclosure policy to avoid providing us with the requisite information.

The request can be found here, and ICANN’s response has been linked here.


[1]. See McCarthy, I’m Begging You To Join, The Register (12 December 2014), http://www.theregister.co.uk/2014/12/12/im_begging_you_to_join_netmundial_initiative_gets_desperate/

[2]. See NETmundial Initiative Goes Live, Global Internet Community Invited to Participate (Press Release), https://www.netmundial.org/press-release-1

[3]. See Response to Documentary Information Disclosure Policy Request No. 20141228-1-NETmundial, https://www.icann.org/en/system/files/files/cis-netmundial-response-27jan15-en.pdf

[4]. Such as Objective 4.3 of their Strategic Five Year Plan. “Demonstrate leadership by implementing best practices in multistakeholder mechanisms within the distributed Internet governance ecosystem while encouraging all stakeholders to implement the principles endorsed at NETmundial” at https://www.icann.org/en/system/files/files/strategic-plan-2016-2020-10oct14-en.pdf

DIDP Request #12: Revenues

by Aditya Garg — last modified Sep 14, 2015 03:32 PM
The Centre for Internet & Society (CIS) sought information from ICANN on their revenue streams by sending them a second request under their Documentary Information Disclosure Policy. This request and their response have been described in this blog post.

CIS Request

22 July 2015

To:

Mr. Cherine Chalaby, Chair, Finance Committee of the Board

Mr. Xavier Calvez, Chief Financial Officer

Mr. Samiran Gupta, ICANN India

All other members of Staff involved in accounting and financial tasks

Sub: Raw data with respect to granular income/revenue statements of ICANN from 1999-2011

We would like to thank ICANN for their prompt response to our earlier requests. We appreciate that the granular Revenue Details for FY14 have been posted online.[1] We also appreciate that a similar document has been posted for FY13.[2]

We hope that one for FY12 will be posted soon, as noted by you in your Response to our Request No. 20141222-1.[3]

As also noted by you in the same request, similar reports cannot be prepared for FY99 to FY11 since “[i]t would be extremely time consuming and overly burdensome to cull through the raw data in order to compile the reports for the prior years”.[4]

Additionally, it was also mentioned that the “relevant information is available in other public available documents”.[5]

Hence, we would like to request the raw data for the years FY99 to FY11, for our research on accountability and transparency mechanisms in Internet governance, specifically of ICANN. Additionally, we would also like to request the links to the public documents where this information is available.

We look forward to the receipt of this information within the stipulated period of 30 days. Please feel free to contact us in the event of any doubts regarding our queries.
Thank you very much.
Warm regards,
Aditya Garg,  
I Year, National Law University, Delhi
For Centre for Internet & Society
W: http://cis-india.org

ICANN Response

ICANN referred to our earlier DIDP request (see here) in which we had sought a detailed report of their granular income and revenue statements from 1999 to 2014. They refused to disclose the data on the grounds that compiling it would be ‘time consuming’ and ‘overly burdensome’, one of the grounds for refusal under their exceptions to disclosure.

Our request may be found here, and their response is linked to here.


[1]. See FY14 Revenue Detail By Source, https://www.icann.org/en/system/files/files/fy2014-revenue-source-01may15-en.pdf.

[2]. See FY13 Revenue Detail By Source, https://www.icann.org/en/system/files/files/fy2013-revenue-source-01may15-en.pdf

[3]. See Response to Documentary Information Disclosure Policy Request No. 20141222-1, https://www.icann.org/en/system/files/files/cis-response-21jan15-en.pdf.

[4]. Id.

[5]. See Response to Documentary Information Disclosure Policy Request No. 20141222-1, https://www.icann.org/en/system/files/files/cis-response-21jan15-en.pdf.

India’s digital check

by Sunil Abraham last modified Sep 15, 2015 02:55 PM
All nine pillars of Digital India directly correlate with policy research conducted at the Centre for Internet and Society, where I have worked for the last seven years. This allows our research outputs to speak directly to the priorities of the government when it comes to digital transformation.

The article was originally published by DNA on July 8, 2015.


Broadband Highways and Universal Access to Mobile Connectivity: The first two pillars have been combined in this paragraph because they both require spectrum policy and governance fixes. Shyam Ponappa, a distinguished fellow at our Centre, calls for the leveraging of shared spectrum and also shared backhaul infrastructure. Plurality in spectrum management (for example, unlicensed spectrum) should be promoted for accelerating backhaul or last-mile connectivity, and also for community or local-government broadband efforts. Other ideas that have been considered by Ponappa include getting state-owned telcos to exit completely from the last mile and focus only on running an open-access backhaul through Bharat Broadband Limited. Network neutrality regulations are also required to mitigate free speech, diversity and competition harms as ISPs and TSPs innovate with business models such as zero-rating.

Public Internet Access Programme: Continuing investments into Common Service Centres (CSCs) for almost a decade may be questionable, and therefore a citizens' audit should be undertaken to determine how the programme may be redesigned. The reinventing of post offices is very welcome; however, public libraries are also in urgent need of reinvention. CSCs, post offices and public libraries should all leverage long-range WiFi for Internet and intranet, empowering BYOD [Bring Your Own Device] users. Applications will take time to develop, so the immediate emphasis should be on locally caching Indic-language content. State Public Library Acts need to be amended to allow for the borrowing of digital content. Flat-fee licensing regimes must be explored to increase access to knowledge and culture. Commons-based peer production efforts like Wikipedia and Wikisource need to be encouraged.

e-Governance: Reforming Government through Technology: DeitY, under the leadership of free software advocate Secretary RS Sharma, has accelerated adoption and implementation of policies supporting non-proprietary approaches to intellectual property in e-governance. Policies exist and are being implemented for free and open source software, open standards and electronic accessibility for the disabled. The proprietary software lobby headed by Microsoft and industry associations like NASSCOM have tried to undermine these policies but have failed so far.

The government should continue to resist such pressures. Universal adoption of electronic signatures within government so that there is a proper audit trail for all communications and transactions should be made an immediate priority. Adherence to globally accepted data protection principles such as minimisation via “form simplification and field reduction” for Digital India should be applauded. But on the other hand the mandatory requirement of Aadhaar for DigiLocker and eSign amounts to contempt of the Supreme Court order in this regard.

e-Kranti — Electronic Delivery of Services: The 41 mission mode projects listed are within the top-down planning paradigm with a high risk of failure — the funds reserved for these projects should instead be converted into incentives for those public, private and public-private partnerships that accelerate adoption of e-governance. The dependency on the National Informatics Centre (NIC) for implementation of e-governance needs to be reduced, and SMEs need to be able to participate in the development of e-governance applications. The funds allocated for this area to DeitY have also produced a draft bill for Electronic Services Delivery. This bill was supposed to give RTI-like teeth to e-governance services by requiring each government department and ministry to publish service level agreements [SLAs] for each of their services and prescribing punitive action for responsible institutions and individuals when there was no compliance with the SLAs.

Information for All: The open data community and the Right to Information movement in India are not happy with the rate of implementation of National Data Sharing and Accessibility Policy (NDSAP). Many of the datasets on the Open Data Portal are of low value to citizens and cannot be leveraged commercially by enterprise. Publication of high-value datasets needs to be expedited by amending the proactive disclosure section of the Right to Information Act 2005.

Electronics Manufacturing: Mobile patent wars have begun in India, with seven big-ticket cases filed at the Delhi High Court. Our Centre has written an open letter to the previous minister for HRD and the current PM requesting them to establish a device-level patent pool with a compulsory license of 5%. This would replicate India’s success in becoming the pharmacy of the developing world and the leading provider of generic medicines, a success enabled by the patent policy established in the 1970s. In a forthcoming paper with Prof Jorge Contreras, my colleague Rohini Lakshané will map around fifty thousand patents associated with mobile technologies. We estimate that around a billion US dollars would be collected in royalties for the rights-holders, whilst eliminating legal uncertainties for manufacturers of mobile technologies.

IT for Jobs: Centralised, top-down, government-run human resource development programmes are not useful. Instead, the government needs to focus on curriculum reform and restructuring of the education system. Mandatory introduction of free and open source software will give Indian students the opportunity to learn by reading world-class software. They will then grow up to become computer scientists rather than computer operators. All projects at academic institutions should be contributions to existing free software projects — these projects could be global or national, for example, a local government’s e-governance application. The budget allocated for this pillar should instead be used to incentivise research by giving micro-grants and prizes to those students who make key software contributions or publish in peer-reviewed academic journals or participate in competitions. This would be a more systemic approach to dealing with the skills and knowledge deficit amongst Indian software professionals.

Early Harvest Programmes: Many of the ideas here are very important. For example, secure email for government officials — if this were developed and deployed in a decentralised manner, it would prevent future surveillance of the Indian government by the NSA. But a few of the other low-hanging fruit identified here don’t really contribute to governance. For example, biometric attendance for bureaucrats is just glorified bean-counting — it does not really contribute to more accountability, transparency or better governance.


The author works for the Centre for Internet and Society, which receives funds from the Wikimedia Foundation, which in turn has zero-rating alliances with telecom operators in many countries across the world.

Sustainable Smart Cities India Conference 2015, Bangalore

by Vanya Rakesh last modified Sep 21, 2015 02:24 AM
Nispana Innovative Platforms organized the Sustainable Smart Cities India Conference 2015 in Bangalore on 3rd and 4th September 2015. The event saw participation from people across various sectors, including Government Representatives from Ministries, Municipalities and Regulatory Authorities, as well as Project Management Companies, Engineers, Architects, Consultants, Handpicked Technology Solution Providers and Researchers. National and international experts and stakeholders were also present to discuss the opportunities and challenges in creating smart and responsible cities as well as citizens, and in creating a roadmap for converting the smart cities vision into a reality best suited to India.

The objective of the conference was to discuss the meaning of a smart city, the promises made, the challenges and possible solutions for implementation of ideas by transforming Indian Cities towards a Sustainable and Smart Future.

Smart Cities Mission

Considering the pace of rapid urbanization in India, it has been estimated that the urban population will rise by more than 400 million people by the year 2050[1] and that urban India will contribute nearly 75% of the country’s GDP by 2030. It has been realized that well-planned cities are of utmost importance to foster such growth. For this, the Indian government has come up with the Smart Cities initiative to drive economic growth and improve the quality of life of people by enabling local area development and harnessing technology, especially technology that leads to smart outcomes.

Initially, the Mission aims to cover 100 cities across the country (shortlisted on the basis of a Smart Cities Proposal prepared by every city), and its duration will be five years (FY2015-16 to FY2019-20). The Mission may be continued thereafter in the light of an evaluation to be done by the Ministry of Urban Development (MoUD), incorporating the learnings into the Mission. The initiative focuses on area-based development in the form of redevelopment, or of developing new areas (greenfield), to accommodate the growing urban population and ensure comprehensive planning to improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged.[2]

What is being done?

The Smart City Mission will be operated as a Centrally Sponsored Scheme (CSS), and the Central Government proposes to give financial support to the Mission to the extent of Rs. 48,000 crore over five years, i.e., on average Rs. 100 crore per city per year. The Government has come up with two missions for the purpose of achieving urban transformation: the Atal Mission for Rejuvenation and Urban Transformation (AMRUT) and the Smart Cities Mission. The vision is to preserve India’s traditional architecture, culture and ethnicity while implementing modern technology to make cities livable, use resources in a sustainable manner and create an inclusive environment. Additionally, Foreign Direct Investment regulations have been relaxed to invite foreign capital and help into the Smart City Mission.

What is a Smart City?

Over the two-day conference, various speakers shared a common sentiment that the Government’s mission does not clearly define what the idea of a Smart City encompasses. There is no universally accepted definition of a Smart City, and its conceptualization varies from city to city and country to country.

The broad global consensus is that a smart city is one which is livable, sustainable and inclusive: a city with mobility, healthcare, smart infrastructure, smart people, traffic management, efficient waste and resource management, and so on.

There is also a global debate at the United Nations regarding development goals, one of which, gender equality, is very important for the smart city initiative. On this view, a smart city must be one where women live free from violence, are enabled to participate, and are economically empowered.

Promises

The promises of the Smart City mission include:

a sustainable future; a reduced carbon footprint; adequate water supply; assured electricity supply; proper sanitation, including solid waste management; efficient urban mobility and public transport; affordable housing, especially for the poor; robust IT connectivity and digitalization; good governance, especially e-governance and citizen participation; a sustainable environment; safety and security of citizens, particularly women, children and the elderly; and health and education.

The vision is to preserve country’s traditional architecture, culture & ethnicity while implementing modern technology. It was discussed how the Smart City Mission is currently attracting global investment, will create new job opportunities, improve communications and infrastructure, decrease pollution and ultimately improve the quality of living.

Challenges

The main challenges in implementing these objectives relate to housing, dealing with existing cities, and adopting the idea of retrofitting.

Other challenges include eradicating urban poverty, controlling environmental degradation, formulating a fool-proof plan, putting proper waste management mechanisms in place, widening roads without sacrificing pedestrians and cyclists, and building cities which are inclusive and cater to the needs of women, children and disabled people.

The top challenges will include devising such a fool-proof plan for developing smart cities, meaningful public-private partnership, increasing the share of renewable energy, water supply, effective waste management, traffic management, meeting power demand, urban mobility, ICT connectivity, e-governance and so on, all while preparing for the new threats that can emerge with the implementation of these new technologies.

What needs to be done?

The following suggestions were made by the experts for successfully implementing the government’s vision of creating smart cities in India.

  • Focus on the 4 P’s: Public-Private-People Partnership since people very much form a part of the cities.
  • Integration of organizations, government bodies, and the citizens. The Government can opt for a sentiment analysis.
  • Active participation by state governments since Land is a state subject under the Constitution. There must be a detailed framework to monitor the progress and the responsibilities must be clearly demarcated.
  • Detailed plans, policies and guidelines
  • Strengthen big data initiatives
  • Resource maximization
  • Make citizens smart by informing them and creating awareness
  • Need for competent people to run the projects
  • Visionary leadership
  • Create flexible and shared spaces for community development.

National/International case studies

Several national and international case studies were discussed to list down practical challenges to enable the selected Indian cities learn from their mistakes or include successful schemes in their planning from its inception.

  • Amsterdam Smart City: It is said to be a global village which was transformed into a smart city by involving the people. They took views of the citizens to make the plan a success. The role of big data and open data was highly emphasized. Also, it was suggested that there must be alignment with respect to responsibilities with the central, state and district government to avoid overlap of functions. The city adopted smart grid integration to make intelligent infrastructure and subsidized initiatives to make the city livable.
  • GIFT City, Gujarat: This is an ICT based sustainable city which is a Greenfield development. It is strategically situated. One of the major features of the City is a utility tunnel for providing repair services and the top of the tunnel can be utilized as a walking/jogging track. The city has smart fire safety measures, wide roads to control traffic, smart regulations.
  • TEL AVIV Smart City, Israel: It has been described as the Mediterranean’s cool city, with young and free-spirited people. The city comprises a creative class with the 3 T’s: talent, technology and tolerance. The city welcomes startups and focuses on G2G, G2C and C2C initiatives, adopting technologically equipped initiatives for effective governance and community-building programmes.

Participation

Organisations represented at the conference included:

  • Foundation for Futuristic Cities: The conference saw participation from this think tank based out of Hyderabad working on establishing vibrant smart cities for a vibrant India. They are currently working on developing a "Smart City Protocol" for Indian cities collaborating with Technology, Government and Corporate partners by making a framework for Smart Cities, Big Data and predictive analytics for safe cities, City Sentiment Analysis, Situation Awareness Tools and mobile Apps for better city life by way of Hackathons and Devthons.
  • Centre for SMART cities, Bangalore: This is a research organization which aims to address the challenge of collaborating and sharing knowledge, resources and best practices that exist both in the private sector and governments/municipal bodies in a usable form and format.
  • BDP – India (Studio Leader – Urbanism): The Organization is based out of Delhi and is involved in providing services relating to master planning, urbanism, design and landscape design. The team includes interior designers, engineers, urbanists, sustainability experts, lighting designers, etc. The vision is to help build and create high quality, effective and inspiring built spaces.
  • UN Women: This United Nations organization works on gender equality, women’s empowerment and the elimination of discrimination. It strives to strengthen the rights of women by working with women, men, feminists, women’s networks, governments, local authorities and civil society to create national strategies to advance gender equality in line with national and international priorities. The UN negotiated the 2030 Agenda for Sustainable Development in August 2015 (to be formally adopted by world leaders in September 2015); it features 17 sustainable development goals, one of them being the achievement of gender equality and the empowerment of all women and girls.
  • Elematic India Pvt. Ltd.: The Company is a leading supplier of precast concrete technology worldwide providing smart solutions for concrete buildings to help enable build smart cities with safe infrastructure.

Conclusion

The event discussed in great detail what a smart city would look like in a country like India, where every city has different demographics, needs and resources.

The participants shared the understanding that a city is not gauged by its length and width, but by the broadness of its vision and the height of its dreams. The initiative of creating smart cities would echo across the country as a whole and would not be limited to the urban centres. Hence, the plan must be inclusive in implementation, and right from its inception the people and their needs must be given due consideration to make it a success. The question of the road ahead, of how exactly this would happen, resonated with many. The first step, as suggested by the experts, is to involve the citizens: primarily by informing them, taking their suggestions and planning the project for every city accordingly. While focusing on cities made better by human ingenuity and technology, along with building mechanisms for housing, commerce, transportation and utilities, it must not be forgotten that technology is timely, but culture is timeless. The cities must not be faceless, and community space must be built with walkable areas and smart utilization of limited resources. It must also be ensured that the cities cater not only to the needs of the elite and skilled population, but also to the less privileged. Adequate urban mapping must be done to ensure placement of community facilities such as restrooms, trash bins and information kiosks.

A story shared from personal experience by an expert architect in building green infrastructure was instrumental in setting the tone of the conference and is bound to stay with many of the participants. The architect’s son, a small child from Baroda, left his father speechless when he asked about the absence of butterflies from the big city of Mumbai, since he used to play with butterflies every morning in his hometown in Gujarat. The incident was genuinely thought-provoking and left every architect, government representative and engineer thinking: before they set out to build smart cities with technologically equipped infrastructure and utilities, can we, as a country, come together and ensure we build a smart city with butterflies? Can we pay equal attention to sustainability, the environment and the requirements of a community in the smart city envisioned by the Government, to make the city livable and inclusive?

Questions that I, as a participant, am left with are:

  • Building a greenfield project is comparatively easier than upgrading existing cities into smart ones, which requires planning and optimum utilization of resources. The role of local bodies needs to be strengthened, which would primarily require a skilled workforce, from planning through to execution. Therefore, what must be done to make current cities “smarter”, and how can ordinary citizens be encouraged and funded to redefine and prioritize local needs?
  • The conference touched upon the need for a well-planned policy framework to govern the smart cities; however, what was missing was a discussion on the kinds of policies that would be required for every city to ensure governance and monitor operations. Chalking out well-thought-out urban policies is the first step towards implementation of the project and requires deliberation.
  • The Government’s plans seem to cater to the needs of only a handful of sections of society; they must also focus on the safety of women, chalk out initiatives to build basic utilities like public toilets, plan infrastructure keeping disabled individuals in mind, and so on.

This is of paramount importance, since the Government needs to consider who the potential inhabitants of these future smart cities would be and what their particular needs are. Before the cities are made better by the use of technology, there is a requirement for more toilets as a basic utility. Thus, instead of focusing on technological advancement as the sole foundation for making people’s lives easier, the cities must provide utilities which are accessible, in order to develop livable smart cities. Hence, what measures would the Government and other bodies involved in the plan take to ensure that these urban enclaves do not overlook the underprivileged?

Another issue that went unnoticed during the two-day event pertains to the fundamental rights of individuals within the city: for example, the right to privacy, the right to access services and utilities, the right to security, and so on. These basic rights must be given due recognition by smart city developers to uphold the spirit of internationally accepted human rights principles. It is therefore important to ask how these future cities are going to address the rights of their people.

Apart from plans for waste management, another important factor that must not be overlooked is sustainability: maximizing the available resources in the best possible ways and adopting techniques to halt the fast-paced degradation of the environment.

The conference could have suggested more measures of this kind, such as rainwater harvesting and better sewage management in existing cities.

The importance of big data in building smart cities was also emphasized by many experts. However, the question of regulating the data being generated and released was not discussed. The use of big data analytics involves massive streams of data, which require regulation and control over their generation and use to ensure the information is not misused in any way. In such a scenario, how would these cities regulate and govern big data techniques so as to make infrastructure and utilities technologically efficient on the one hand, while using the large data sets in a monitored fashion on the other?

Answers to these crucial issues and questions would have brought a lot of clarity to the minds of the officials, planners and potential residents of the smart cities in India.


[1] 2014 revision of the World Urbanization Prospects, United Nations, Department of Economic and Social Affairs, July 2014, Available at : http://www.un.org/en/development/desa/publications/2014-revision-world-urbanization-prospects.html

[2] Smart Cities, Mission Statement and Guidelines, Ministry of Urban Development, Government of India, June 2015, Available at : http://smartcities.gov.in/writereaddata/SmartCityGuidelines.pdf

Peering behind the veil of ICANN’s DIDP (I)

by Padmini Baruah — last modified Oct 15, 2015 02:42 AM
One of the key elements of the process of enhancing democracy and furthering transparency in any institution which holds power is open access to information for all the stakeholders. This is critical to ensure that there is accountability for the actions of those in charge of a body which utilises public funds and carries out functions in the public interest.

ICANN is the body which “...coordinates the Internet Assigned Numbers Authority (IANA) functions, which are key technical services critical to the continued operations of the Internet's underlying address book, the Domain Name System (DNS)”.[1] This centrality in regulating the Internet (a public good if there ever was one) makes it vital that ICANN’s decision-making processes, financial flows, and operations are open to public scrutiny. ICANN itself echoes the same belief, and upholds “...a proven commitment to accountability and transparency in all of its practices”,[2] which is captured in its By-Laws and Affirmation of Commitments. In furtherance of this, ICANN has created its own Documentary Information Disclosure Policy, under which it promises to “...ensure that information contained in documents concerning ICANN's operational activities, and within ICANN's possession, custody, or control, is made available to the public unless there is a compelling reason for confidentiality.”[3]

ICANN has a vast array of documents that are already in the public domain, listed here. These include annual reports, budgets, registry reports, speeches, operating plans, correspondence, etc. However, their Documentary Information Disclosure Policy falls short of meeting international standards for information disclosure. In this piece, I have focused on an examination of their defined conditions for non-disclosure of information, which seem to undercut the very transparency that the DIDP aims to uphold. The obvious comparison is with the right to information laws that governments the world over have enacted in furtherance of democracy. While ICANN cannot be equated to a democratically elected government, it nonetheless exercises sufficient regulatory power over the functioning of the Internet to owe a similar degree of information to all the stakeholders in the Internet community. I therefore examine ICANN’s conditions for non-disclosure and compare them to the analogous exclusions in India’s Right to Information Act, 2005.

ICANN’ꜱ Defined Conditions for Non-Disclosure versus Exclusions in Indian Law :

ICANN, in its DIDP policy, identifies a lengthy list of conditions as sufficient grounds for non-disclosure of information. One of the most important indicators of a strong transparency law is said to be minimal exclusions.[4] However, as seen from the comparison below, ICANN’s exclusions are extensive and vast, and this has been a barrier to the free flow of information. An analysis of their responses to various DIDP requests (available here) shows that the conditions for non-disclosure have been invoked in over 50 of the 85 requests responded to (as of 11.09.2015); i.e., well over half of the requests that ICANN receives are met with its non-disclosure conditions.

In contrast, an analysis of India’s Right to Information Act, considered to be among the better drafted transparency laws of the world, reveals a much narrower list of exclusions that come in the way of a citizen obtaining any kind of information sought. The table below compares the two lists:

1. ICANN[5]: Information provided by or to a government or international organization which was to be kept confidential or would materially affect ICANN’s equation with the concerned body.
India: Information, disclosure of which would prejudicially affect the sovereignty and integrity of India, the security, “strategic, scientific or economic” interests of the State, relations with a foreign State, or lead to incitement of an offence[6]; information received in confidence from a foreign government.[7]
Analysis: The threshold for both bodies is fairly similar for this exclusion.

2. ICANN: Internal (staff/Board) information that, if disclosed, would or would be likely to compromise the integrity of ICANN's deliberative and decision-making process.
India: Cabinet papers, including records of deliberations of the Council of Ministers, Secretaries and other officers, provided that such decisions, the reasons thereof, and the material on the basis of which the decisions were taken shall be made public after the decision has been taken and the matter is complete or over (unless subject to these exemptions).[8]
Analysis: The Indian law is far more transparent, as it ultimately allows the records of internal deliberation to be made public after the decision is taken.

3. ICANN: Information related to the deliberative and decision-making process between ICANN, its constituents, and/or other entities with which ICANN cooperates that, if disclosed, would or would be likely to compromise the integrity of the deliberative and decision-making process.
India: No similar provision in Indian law.
Analysis: This is an additional restriction that ICANN introduces on top of the one above, which in itself is quite broad.

4. ICANN: Records relating to an individual's personal information.
India: Information which relates to personal information the disclosure of which has no relationship to any public activity or interest, or which would cause unwarranted invasion of the privacy of the individual (provided that information which cannot be denied to Parliament or a State Legislature shall not be denied under this exemption).[9]
Analysis: Again, the Indian law contains a proviso for information with a “relationship to any public activity or interest”.

5. ICANN: Proceedings of internal appeal mechanisms and investigations.
India: Information which has been expressly forbidden to be published by any court of law or tribunal, or the disclosure of which may constitute contempt of court.[10]
Analysis: While ICANN prohibits the disclosure of all such proceedings, in India the exemption extends only to information that the court prohibits from being made public.

6. ICANN: Information provided to ICANN by a party that, if disclosed, would or would be likely to materially prejudice the commercial interests, financial interests, and/or competitive position of such party or was provided to ICANN pursuant to a nondisclosure agreement or nondisclosure provision within an agreement.
India: Information including commercial confidence, trade secrets or intellectual property, the disclosure of which would harm the competitive position of a third party, unless the competent authority is satisfied that larger public interest warrants the disclosure of such information.[11]
Analysis: This is fairly similar in both lists.

7. ICANN: Confidential business information and/or internal policies and procedures.
India: No separate provision in Indian law; this is encapsulated in the provision above.
Analysis: This is fairly similar in both lists.

8. ICANN: Information that, if disclosed, would or would be likely to endanger the life, health, or safety of any individual or materially prejudice the administration of justice.
India: Information, the disclosure of which would endanger the life or physical safety of any person or identify the source of information or assistance given in confidence for law enforcement or security purposes.[12]
Analysis: This is fairly similar in both lists.

9. ICANN: Information subject to any kind of privilege, which might prejudice any investigation.
India: Information, the disclosure of which would cause a breach of privilege of Parliament or the State Legislature[13]; information which would impede the process of investigation or apprehension or prosecution of offenders.[14]
Analysis: This is fairly similar in both lists.

10. ICANN: Drafts of all correspondence, reports, documents, agreements, contracts, emails, or any other forms of communication.
India: No similar provision in Indian law.
Analysis: This exclusion is not present in Indian law, and it is extremely broadly worded, coming in the way of full transparency.

11. ICANN: Information that relates in any way to the security and stability of the Internet.
India: No similar provision in Indian law.
Analysis: This is perhaps necessary given ICANN’s role as the IANA Functions Operator. However, given the large public interest in this matter, there should be some proviso to make information in this regard available to the public as well.

12. ICANN: Trade secrets and commercial and financial information not publicly disclosed by ICANN.
India: Information including commercial confidence, trade secrets or intellectual property, the disclosure of which would harm the competitive position of a third party, unless the competent authority is satisfied that larger public interest warrants the disclosure of such information.[15]
Analysis: This is fairly similar in both cases.

13. ICANN: Information requests: which are not reasonable; which are excessive or overly burdensome; complying with which is not feasible; or which are made with an abusive or vexatious purpose or by a vexatious or querulous individual.
India: No similar provision in Indian law.
Analysis: Of all the DIDP exclusions, this is the most loosely worded. The terms in this clause are not clearly defined, and it can effectively be used to deflect any request sent to ICANN because of its extreme subjectivity. What amounts to ‘reasonable’? Whom is the process going to ‘burden’? What lens does ICANN use to define a ‘vexatious’ purpose? Where do we look for answers?

14. ICANN: No similar provision in ICANN’s DIDP.
India: Information available to a person in his fiduciary relationship, unless the competent authority is satisfied that the larger public interest warrants the disclosure of such information.[16]

15. ICANN: No similar provision in ICANN’s DIDP.
India: Information which providing access to would involve an infringement of copyright subsisting in a person other than the State.[17]
Thus, the net cast by the DIDP exclusions is wider even than that of a democratic state’s transparency law. Clearly, the exclusions above have effectively allowed ICANN to dodge most of the requests coming its way. One can only hope that ICANN realises that these exclusions stand in the way of the transparency it is so committed to, and does away with this unreasonably wide range of exceptions on the road to the IANA transition.


[1] https://www.icann.org/resources/pages/welcome-2012-02-25-en

[2] https://www.icann.org/resources/accountability

[3] https://www.icann.org/resources/pages/didp-2012-02-25-en

[4] Shekhar Singh, India: Grassroot Initiatives in The Right to Know 19, 44 (Ann Florin ed., 2007)

[5] In a proviso, ICANN’s DIDP states that all these exemptions can be overridden if the larger public interest is higher. However, this has not yet been reflected in their responses to any DIDP requests.

[6] Section 8(1)(a), Right to Information Act, 2005.

[7] Section 8(1)(f), Right to Information Act, 2005.

[8] Section 8(1)(i), Right to Information Act, 2005.

[9] Section 8(1)(j), Right to Information Act, 2005.

[10] Section 8(1)(b), Right to Information Act, 2005.

[11]. Section 8(1)(d), Right to Information Act, 2005.

[12] Section 8(1)(g), Right to Information Act, 2005.

[13] Section 8(1)(c), Right to Information Act, 2005.

[14] Section 8(1)(h), Right to Information Act, 2005.

[15]. Section 8(1)(d), Right to Information Act, 2005.

[16] Section 8(1)(e), Right to Information Act, 2005.

[17] Section 9, Right to Information Act, 2005.

Hits and Misses With the Draft Encryption Policy

by Sunil Abraham last modified Sep 26, 2015 04:46 PM
Most encryption standards are open standards. They are developed by open participation in a publicly scrutable process by industry, academia and governments in standard setting organisations (SSOs) using the principles of “rough consensus” – sometimes established by the number of participants humming in unison – and “running code” – a working implementation of the standard. The open model of standards development is based on the Free and Open Source Software (FOSS) philosophy that “many eyes make all bugs shallow”.

The article was published in the Wire on September 26, 2015.


This model has largely been a success, but as Edward Snowden’s revelations have shown, the US, with its large army of mathematicians, has managed to compromise some of the standards developed under public and peer scrutiny. Once a standard is developed, its success or failure depends on voluntary adoption by various sections of the market – the private sector, government (since in most markets the scale of public procurement can shape the market) and end-users. This process of voluntary adoption usually results in the best standards rising to the top. Mandates on high-quality encryption standards and minimum key sizes are an excellent idea within the government context, to ensure that state, military, intelligence and law enforcement agencies are protected from foreign surveillance and from traitors within. In other words, these mandates are based on a national security imperative.

However, similar mandates for corporations and ordinary citizens are based on a diametrically opposite imperative – surveillance. These mandates therefore usually require the use of standards that governments can compromise, typically via brute force (wherein supercomputers generate and attempt every possible key), and smaller key-lengths, since the smaller the key-length, the quicker it is for supercomputers to break in. Such mandates, unlike the ones for state, military, intelligence and law enforcement agencies, interfere with the market-based voluntary adoption of standards and are therefore examples of inappropriate regulation that will undermine the security and stability of information societies.
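
To give a sense of scale, here is a small back-of-the-envelope sketch of the brute-force search described above. The guessing rate is a made-up assumption for illustration, not a figure from the draft policy or any real system; only the relationship between key length and search effort matters.

```python
# Illustrative arithmetic only: how key length changes the cost of trying
# every possible key. GUESSES_PER_SECOND is a hypothetical attacker speed.
GUESSES_PER_SECOND = 10**12          # assumed: a trillion keys per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (40, 56, 128, 256):
    keyspace = 2 ** bits
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: {keyspace:.2e} possible keys, "
          f"~{years:.2e} years to exhaust")

# At this (generous) rate, 40- and 56-bit keys fall within seconds to hours,
# while 128-bit and larger keys remain far beyond exhaustive search - which
# is why mandated small key sizes matter.
```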

Plain-text storage requirement

First, the draft policy mandates that Business to Business (B2B) users and Consumer to Consumer (C2C) users store an equivalent plain text (decrypted) version of their encrypted communications and stored data for 90 days from the date of transaction. This requirement is impossible to comply with, for three reasons. Foremost, encryption for web sessions is based on dynamically generated keys, and users are often not even aware that their interactions with web servers (including webmail such as Gmail and Yahoo Mail) are encrypted. Next, from a usability perspective, this would require additional manual steps that no one has the time for as part of their daily use of technology. Finally, the stored plain text would become a honey pot for attackers. In effect, this requirement is as good as saying “don’t use encryption”.
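
The point about dynamically generated keys can be made concrete with a minimal sketch of ephemeral key agreement of the kind TLS uses. This is not a real TLS implementation (the function name and labels are mine); it only shows that each connection negotiates a fresh key that is never written to disk, so there is no durable key, and no stored plain text, for a user to retain for 90 days. It assumes the pyca/cryptography package.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def new_session_key() -> bytes:
    """Ephemeral key agreement: a fresh key pair per connection, as TLS does."""
    client = X25519PrivateKey.generate()
    server = X25519PrivateKey.generate()
    shared = client.exchange(server.public_key())
    # Both sides derive the same session key and discard it afterwards; it is
    # never written to disk, so there is nothing to retain for 90 days.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session key demo").derive(shared)

# Two "visits" to the same site negotiate unrelated keys.
print(new_session_key().hex())
print(new_session_key().hex())
```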

Second, the policy mandates that B2C entities and “service providers located within and outside India, using encryption” shall provide readable plain-text along with the corresponding encrypted information, using the same software/hardware used to produce the encrypted information, when demanded in line with the provisions of the laws of the country. From the perspective of lawful interception and targeted surveillance, it is indeed important that corporations cooperate with Indian intelligence and law enforcement agencies in a manner that is compliant with international and domestic human rights law. However, there are three circumstances where this is unworkable: 1) when the service providers are FOSS communities like the TOR project, which don’t retain any user data and, as far as we know, don’t cooperate with any government; 2) when the service provider offers consumers solutions based on end-to-end encryption and therefore does not hold the private keys required for decryption; and 3) when the Indian market is too small for a foreign provider to take requests from the Indian government seriously.
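
A minimal sketch of the second circumstance, under the assumption of a generic end-to-end design (this is not WhatsApp's, Signal's or any particular provider's actual protocol): the sender encrypts to the recipient's public key, so the relaying service only ever holds ciphertext and has nothing it could decrypt on demand. It again assumes the pyca/cryptography package.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared: bytes) -> bytes:
    # Both endpoints derive the same message key from the ECDH shared secret.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"e2e demo").derive(shared)

recipient_key = X25519PrivateKey.generate()   # exists only on the recipient's device

# Sender: ephemeral key + Diffie-Hellman + AES-GCM, then hands the result to the service.
ephemeral = X25519PrivateKey.generate()
msg_key = derive_key(ephemeral.exchange(recipient_key.public_key()))
nonce = os.urandom(12)
ciphertext = AESGCM(msg_key).encrypt(nonce, b"meet at 6 pm", None)

# The service provider relays/stores only this tuple; without recipient_key it
# cannot produce the plain text that a decryption mandate would demand.
relayed = (ephemeral.public_key(), nonce, ciphertext)

# Recipient: re-derives the key from its own private key and decrypts.
eph_pub, nonce, ct = relayed
key = derive_key(recipient_key.exchange(eph_pub))
print(AESGCM(key).decrypt(nonce, ct, None))   # b'meet at 6 pm'
```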

Where it is technically possible for the service provider to cooperate with Indian law enforcement and intelligence, greater compliance can be ensured by Indian participation in multilateral and multi-stakeholder internet governance policy development to ensure greater harmonisation of substantive and procedural law across jurisdictions. Options here for India include reform of the Mutual Legal Assistance Treaty (MLAT) process and standardisation of user data request formats via the Internet Jurisdiction Project.

Regulatory design

Governments don’t have unlimited regulatory capability or capacity. They have to be conservative when designing regulation so that a high degree of compliance can be ensured. The draft policy mandates that citizens only use those encryption algorithms and key sizes that “will be prescribed by the government through notification from time to time.” This would be near impossible to enforce, given the burgeoning multiplicity of encryption technologies available and the number of citizens who will get online in the coming years. Similarly, the mandates that “service providers located within and outside India…must enter into an agreement with the government”, that “vendors of encryption products shall register their products with the designated agency of the government” and that “vendors shall submit working copies of the encryption software / hardware to the government along with professional quality documentation, test suites and execution platform environments” would be impossible to enforce, for two reasons: cloud-based providers will not submit their software, since they would want to protect their intellectual property from competitors; and smaller and non-profit service providers may not comply, since they can’t be threatened with bans or block orders.

This approach to regulation is inspired by license raj thinking, and enforcing it would require a capability and capacity that we don’t have. It would be more appropriate to have a “harms”-based approach wherein the government targets only those corporations that don’t comply with legitimate law enforcement and intelligence requests for user data and interception of communication.

Also, while the “Technical Advisory Committee” is the appropriate mechanism to ensure that policies remain technologically neutral, it does not appear that the annexure of the draft policy, i.e. “Draft Notification on modes and methods of Encryption prescribed under Section 84A of Information Technology Act 2000”, has been properly debated by technical experts. According to my colleague Pranesh Prakash, “of the three symmetric cryptographic primitives that are listed – AES, 3DES, and RC4 – one, RC4, has been shown to be a broken cipher.”
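
For context on the annexure's choice of primitives, here is a minimal sketch, assuming the pyca/cryptography package, of what using AES (one of the listed primitives that is still considered sound, here in its authenticated GCM mode) looks like in practice; RC4 is avoided in modern deployments because of well-documented statistical biases in its keystream.

```python
# A minimal sketch: authenticated encryption with AES-256-GCM. The message
# and associated data below are examples only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
nonce = os.urandom(12)                      # must be unique per message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"comments on the draft policy", b"header")
assert aesgcm.decrypt(nonce, ciphertext, b"header") == b"comments on the draft policy"
print("round trip OK; tampering with ciphertext or header raises InvalidTag")
```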

The draft policy also doesn’t take into account the security requirements of the IT, ITES, BPO and KPO industries that handle foreign intellectual property and personal information that is protected under European or American data protection law. If clients of these Indian companies feel that the Indian government would be able to access their confidential information, they will take their business to competing countries such as the Philippines.

And the good news is…

On the other hand, the second objective of the policy, which encourages “wider usage of digital Signature by all entities including Government for trusted communication, transactions and authentication”, is laudable, but it should ideally have been a mandate for all government officials, as this would ensure non-repudiation: government officials would not be able to deny authorship of their communications or of the approvals they grant for the various applications and files that they process.
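
How a digital signature delivers non-repudiation can be shown with a short sketch (the file note and workflow below are hypothetical, and the pyca/cryptography package is assumed): only the holder of the private key can produce a valid signature over an approval, so authorship cannot later be denied.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

official_key = Ed25519PrivateKey.generate()          # held only by the official
approval = b"File No. 123/2015: sanction granted"    # hypothetical file note

signature = official_key.sign(approval)

# Anyone holding the published public key can verify the approval; verify()
# raises InvalidSignature if the note or the signature has been altered.
official_key.public_key().verify(signature, approval)
print("verified: authorship of the approval cannot be repudiated")
```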

Second, the setting up of “testing and evaluation infrastructure for encryption products” is also long overdue. The initiation of “research and development programs … for the development of indigenous algorithms and manufacture of indigenous products” is slightly utopian, because it will be a long time before indigenous standards are as good as the global state of the art, but it is nonetheless notable as an important start.

The more important step for the government is to ensure high quality Indian participation in global SSOs and contributions to global standards. This has to be done through competition and market-based mechanisms wherein at least a billion dollars from the last spectrum auction should be immediately spent on funding existing government organisations, research organisations, independent research scholars and private sector organisations. These decisions should be made by peer-based committees and based on publicly verifiable measures of scientific rigour such as number of publications in peer-reviewed academic journals and acceptance of “running code” by SSOs.

Additionally, the government needs to start making mathematics a viable career in India, by either employing mathematicians directly or funding academic and independent research organisations that employ them. The basis of all encryption standards is mathematics, and we urgently need the tribe of Indian mathematicians in this country to grow dramatically.

Cyber 360 Agenda

by Prasad Krishna last modified Oct 02, 2015 03:41 PM

PDF document icon Agenda & Speakers - Cyber 360 conference-1.pdf — PDF document, 886 kB (907878 bytes)

Open Governance and Privacy in a Post-Snowden World : Webinar

by Vanya Rakesh last modified Oct 04, 2015 11:09 AM
On 10th September 2015, the OGP Support Unit, the Open Government Guide, and the World Bank held a webinar on “Open Governance and Privacy in a Post-Snowden World”, presented by Carly Nyst, independent consultant and former Legal Director of Privacy International, and Javier Ruiz, Policy Director of the Open Rights Group. This is a summary of the key issues discussed by the speakers and the participants.

See Open Governance and Privacy in a Post-Snowden World


Summary

The webinar discussed how government surveillance has become a key issue in the 21st century, thanks to Edward Snowden. The main concern raised was what a democracy should look like in the present day: should the state’s use of technology enable state surveillance or an open government? Typically, a balance must be achieved between the privacy of the individual and the security of the state, particularly as the former is primarily about social rights and the collective interest of citizens.

At the international level, the right to privacy has been recognized as a basic human right and an enabler of other individual freedoms. This right encapsulates the protection of personal data, with citizens having the authority to choose whether or not to share or reveal their personal data. With technological advances enabling the collection, storage and sharing of personal data, the right to privacy and data protection frameworks have become of utmost importance and relevance to open government efforts. It is therefore important for governments to be transparent in handling the sensitive data that they collect and use.

Many countries have also introduced laws to balance the right to privacy and the right to information. The role of the private sector and of NGOs in enabling an open and transparent government must also be duly addressed at the national level.

Key Questions:

  • Why should the government release information?

There are multiple reasons for doing so including:

  • For the purposes of research and public policy (which relates to healthcare, social issues, economics, national statistics, census, etc.)

  • Transparency and accountability (politicians, registers, public expenses, subsidies, fraud, court records, education)

  • Public participation and public services (budgets, anti-corruption, engagement, and e-governance).

However, all these have certain risks and privacy implications:

  1. Risk of identification of individuals: Any individual whose information is released faces the risk of identification, followed by issues like identity theft, discrimination, stigmatization or repression. Normally, the solution would be anonymization of the data; however, this is not an absolute solution. Privacy laws can generally cope with such risks, but with pseudonymous data it becomes difficult to prevent re-identification (see the sketch after this list).
  2. Profiling of social categories, which can lead to discrimination: In such a situation, policies and other legislation regulating the use of data and providing remedies for violations can help.
  3. Exploitation and unfair/unethical use of information: When assessing the potential exploitation of information, it is useful to consider who is going to benefit from its release. For example, in the UK, with respect to the release of health data, the main concern is that people and companies will benefit commercially from the information released, even if the result is potentially improved drugs and treatments.
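
As a concrete illustration of the point in item 1 (the data and identifiers below are entirely made up), a pseudonym that is just an unsalted hash of a low-entropy identifier can often be reversed by simply hashing every candidate identifier:

```python
# Hypothetical data: a "pseudonymised" record published as an unsalted hash of
# a low-entropy identifier can be reversed by hashing every candidate value.
import hashlib

def pseudonymise(identifier: str) -> str:
    return hashlib.sha256(identifier.encode()).hexdigest()

published_record = {"patient": pseudonymise("9876543210"), "diagnosis": "asthma"}

# An attacker who can enumerate the identifier space (phone numbers, voter IDs,
# small name lists) simply builds a lookup table of hashes.
candidates = ["9876543210", "9876543211", "9123456789"]
lookup = {pseudonymise(c): c for c in candidates}

print("re-identified as:", lookup.get(published_record["patient"]))  # 9876543210
```
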
  • What are the Solutions?

The webinar also discussed potential solutions to the questions and challenges posed. For example, when open government data commitments under the Open Government Partnership are considered, privacy legislation must also be proposed. Further, key stakeholders must commit to pro-active measures that reduce informational asymmetries between the state and citizens. To reduce the risks, measures must be taken to publish what information the state holds about citizens. For example, in the UK, it is being considered within the civil society network that the national plan should commit the government to publicizing how it will share data and to maintaining a centralized view of how information is handled and used.

The Open Government Guide provides for Illustrative Commitments like enactment of data protection legislation, establishing programmes for awareness and assessment of their impact, giving citizens control of their personal information and the right to redress when that information is misused, etc.

Surveillance

The issue of surveillance and the role of privacy in an open government context was also discussed. The need to strike a balance between the legitimate interest of national security and the privacy of individuals was emphasized. With the rise of digital technologies, many governmental surveillance measures intrude on individual privacy. There are many forms of surveillance, and these have serious privacy implications, especially in developing countries. For example:

  1. Communications surveillance
  2. Visual surveillance
  3. Travel surveillance

This raises the question: When is surveillance legitimate and when must it be allowed?

The International Principles on the Application of Human Rights to Communications Surveillance act as soft law and attempt to set out what a good surveillance system looks like, by ensuring that governments comply with international human rights law.

In essence, surveillance does not in itself violate privacy; however, there must be a clear and foreseeable legal framework laying out the circumstances in which the government has the power to collect data, so that individuals are able to foresee when they might be under surveillance.

Also, a competent judicial authority must be established to oversee surveillance and keep a check on executive power by placing restrictions on privacy invasions. The actions of the government must be proportionate, and the harm caused by surveillance must not outweigh its benefits.

Role of openness in a “mass surveillance” state

Surveillance measures undertaken by governments are increasingly secretive. The European Court of Human Rights has held that secret surveillance may undermine democracy under the cloak of protecting it. Hence, open government and openness will work towards protecting privacy, not undermining it.

To balance government surveillance with privacy, there is a need to publish the laws regulating such powers; publish transparency reports about surveillance, interception and access to communications data; reform legislation relating to surveillance by state agencies to ensure it complies with human rights; and establish safeguards to ensure that new technologies used for surveillance and interception respect the right to privacy.

Conclusion

The conclusion one can draw is that privacy concerns have gained importance in today's data-driven world. The main question that needs to be answered is whether governments should adopt surveillance measures or an open government.

Considering the equal importance of national security and the privacy of individuals, a balance must be crafted between the two. This could possibly be done by enacting clear and foreseeable laws outlining the scope of government surveillance on the one hand, and informing citizens about such measures on the other. The establishment of a competent judicial authority to keep a check on government actions is also suggested, to work out the delicate balance between surveillance and privacy.

The Legal Validity of Internet Bans: Part I

by Geetha Hariharan and Padmini Baruah — last modified Oct 08, 2015 11:18 AM
In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. The State governments have relied on Section 144, Code of Criminal Procedure 1973 to institute such bans. Despite a legal challenge, the Gujarat High Court found no infirmity in this exercise of power in a recent order. We argue that it is Section 69A of the Information Technology Act 2000, and the Website Blocking Rules, which set out the legal provision and procedure empowering the State to block access to the Internet (if at all it is necessary), and not Section 144, CrPC.


In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. In Gujarat, the State government banned access to mobile Internet (data services) citing breach of peace during the Hardik Patel agitation. In Godhra in Gujarat, mobile Internet was banned as a precautionary measure during Ganesh visarjan. In Kashmir, mobile Internet was banned for three days or more because the government feared that people would share pictures of the slaughter of animals during Eid on social media, which could spark unrest across the state.

Can State or Central governments impose a ban on Internet access? If the State or its officials anticipate disorder or a disturbance of ‘public tranquility’, can Internet access through mobiles be banned? According to a recent order of the Gujarat High Court: Yes; Section 144 of the Code of Criminal Procedure, 1973 (“CrPC”) empowers the State government machinery to impose a temporary ban.

But the Gujarat High Court’s order neglects the scope of Section 69A, IT Act, and wrongly finds that the State government can exercise blocking powers under Section 144, CrPC. In this post and the next, we argue that it is Section 69A of the Information Technology Act, 2000 (“IT Act”) which is the legal provision empowering the State to block access to the Internet (including data services), and not Section 144, CrPC. Section 69A covers blocks to Internet access, and since it is a special law dealing with the Internet, it prevails over the general Code of Criminal Procedure.

Moreover, the blocking powers must stay within constitutional boundaries prescribed in, inter alia, Article 19 of the Constitution. Blocking powers are, therefore, subject to the widely-accepted tests of legality (foresight and non-arbitrariness), legitimacy of the grounds for restriction of fundamental rights and proportionality, calling for narrowly tailored restrictions causing minimum disruptions and/or damage.

In Section I of this post, we set out a brief record of the events that preceded the blocking of access to data services (mobile Internet) in several parts of Gujarat. Then in Section II, we summarise the order of the Gujarat High Court, dismissing the petition challenging the State government’s Internet-blocking notification under Section 144, CrPC. In the next post, we examine the scope of Section 69A, IT Act to determine whether it empowers the State and Central government agencies to carry out blocks on Internet access through mobile phones (i.e., data services such as 2G, 3G and 4G) under certain circumstances. We submit that Section 69A does, and that Section 144, CrPC cannot be invoked for this purpose.

I. The Patidar Agitation in Gujarat:

This question arose in the wake of agitation in Gujarat in the Patel community. The Patels or Patidars are politically and economically influential in Gujarat, with several members of the community holding top political, bureaucratic and industrial positions. In the last couple of months, the Patidars have been agitating, demanding to be granted status as Other Backward Classes (OBC). OBC status would make the community eligible for reservations and quotas in educational institutions and for government jobs.

Towards this demand, the Patidars organised multiple rallies across Gujarat in August 2015. The largest rally, called the Kranti Rally, was held in Ahmedabad, Gujarat’s capital city, on August 25, 2015. Hardik Patel, a leader of the agitation, reportedly went on hunger strike seeking that the Patidars’ demands be met by the government, and was arrested as he did not have permission to stay on the rally grounds after the rally. While media reports vary, it is certain that violence and agitation broke out after the rally. Many were injured, some lost their lives, property was destroyed, businesses suffered; the army was deployed and curfew imposed for a few days across the State.

In addition to other security measures, the State government also imposed a ban on mobile Internet services across different parts of Gujarat. Reportedly, Hardik Patel had called for a state-wide bandh over Whatsapp. The police cited "concerns of rumour-mongering and crowd mobilisation through Whatsapp" as a reason for the ban, which was instituted under Section 144, Code of Criminal Procedure, 1973 (“CrPC”). In most of Gujarat, the ban lasted six days, from August 25 to 31, 2015, while it continued in Ahmedabad and Surat for longer.

II. The Public Interest Litigation:

A public interest petition was filed before the Gujarat High Court, challenging the mobile Internet ban. Though the petition was dismissed at the preliminary stage by Acting Chief Justice Jayant Patel and Justice Anjaria by an oral order delivered on September 15, 2015, the legal issues surrounding the ban are important and the order calls for some reflection.

In the PIL, the petitioner prayed that the Gujarat High Court declare that the notification under Section 144, CrPC, which blocked access to mobile Internet, is “void ab initio, ultra vires and unconstitutional” (para 1 of the order). The ban, argued the petitioner, violated Articles 14, 19 and 21 of the Constitution by being arbitrary and excessive, violating citizens’ right to free speech and causing businesses to suffer extensive economic damage. In any event, the power to block websites was specifically granted by Section 69A, IT Act, and so the government’s use of Section 144, CrPC to institute the mobile Internet block was legally impermissible. Not only this, but the government’s ban was excessive in that mobile Internet services were completely blocked; had the government’s concerns been about social media websites like Whatsapp or Facebook, the government could have suspended only those websites using Section 69A, IT Act. And so, the petitioner prayed that the Gujarat High Court issue a writ “permanently restraining the State government from imposing a complete or partial ban on access to mobile Internet/broadband services” in Gujarat.

The State Government saw things differently, of course. At the outset, the government argued that there was “sufficient valid ground for exercise of power” under Section 144, CrPC, to institute a mobile Internet block (para 4 of the order). Had the blocking notification not been issued, “peace could not have been restored with the other efforts made by the State for the maintenance of law and order”. The government stressed that Section 144, CrPC notifications were generally issued as a “last resort”, and in any case, the Internet had not been shut down in Gujarat; broadband and WiFi services continued to be active throughout. Since the government was the competent authority to evaluate law-and-order situations and appropriate actions, the Court ought to dismiss the petition, the State prayed.

The Court agreed with the State government, and dismissed the petition without issuing notice (para 9 of the order). The Court examined two issues in its order (very briefly):

  1. The scope and distinction between Section 144, CrPC and Section 69A, IT Act, and whether the invocation of Section 144, CrPC to block mobile Internet services constituted an arbitrary exercise of power;
  2. The proportionality of the blocking notification (though the Court doesn’t use the term ‘proportionality’).

We will examine the Court’s reading of Section 69A, IT Act and Section 144, CrPC, to see whether their fields of operation are in fact different.

 

Acknowledgements: We would like to thank Pranesh Prakash, Japreet Grewal, Sahana Manjesh and Sindhu Manjesh for their invaluable inputs in clarifying arguments and niggling details for these two posts.


Geetha Hariharan is a Programme Officer with Centre for Internet & Society. Padmini Baruah is in her final year of law at the National Law School of India University, Bangalore (NLSIU) and is an intern at CIS.

The Legal Validity of Internet Bans: Part II

by Geetha Hariharan and Padmini Baruah — last modified Oct 08, 2015 11:17 AM
In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. The State governments have relied on Section 144, Code of Criminal Procedure 1973 to institute such bans. Despite a legal challenge, the Gujarat High Court found no infirmity in this exercise of power in a recent order. We argue that it is Section 69A of the Information Technology Act 2000, and the Website Blocking Rules, which set out the legal provision and procedure empowering the State to block access to the Internet (if at all it is necessary), and not Section 144, CrPC.

As we saw earlier, the Gujarat High Court held that Section 144, CrPC empowers the State apparatus to order blocking of access to data services. According to the Court, Section 69A, IT Act can be used to block certain websites, while under Section 144, CrPC, the District Magistrate can direct telecom companies like Vodafone and Airtel, who extend the facility of Internet access. In effect, the High Court agreed with the State government’s argument that the scope of Section 69A, IT Act covers only blocking of certain websites, while Section 144, CrPC grants a wider power.

This is what the Court said (para 9 of the order):

If the comparison of both the sections in the field of operations is made, barring certain minor overlapping more particularly for public order [sic], one can say that the area of operation of Section 69A is not the same as that of Section 144 of the Code. Section 69A may in a given case also be exercised for blocking certain websites, whereas under Section 144 of the Code, directions may be issued to certain persons who may be the source for extending the facility of internet access. Under the circumstances, we do not find that the contention raised on behalf of the petitioner that the resort to only Section 69A was available and exercise of power under Section 144 of the Code was unavailable, can be accepted.” (emphases ours)

We submit that the High Court’s reasoning failed to examine the scope of Section 69A, IT Act thoroughly. Section 69A does, in fact, empower the government to order the blocking of access to data services, and it is a special law. Importantly, it sets forth a procedure that State governments, union territories and the Central Government must follow to order blocks on websites or data services.

I. Special Law Prevails Over General Law

The IT Act, 2000 is a special law dealing with matters relating to the Internet, including offences and security measures. The CrPC is a general law of criminal procedure.

When a special law and a general law cover the same subject, the special law supersedes the general law. This is a settled legal principle, and several decisions of the Supreme Court attest to it. To take an example, in Maya Mathew v. State of Kerala, (2010) 3 SCR 16 (18 February 2010), there was a conflict between the Special Rules for the Kerala State Homoeopathy Services and the general Rules governing state and subordinate services. The Supreme Court held that when a special law and a general law both govern a matter, the Court should try to interpret them harmoniously as far as possible. But if the intention of the legislature is that one law should prevail over the other, and this intention is made clear expressly or impliedly, then the Court should give effect to it.

On the basis of this principle, let’s take a look at the IT Act, 2000. Section 81, IT Act expressly states that the provisions of the IT Act shall have overriding effect, notwithstanding anything inconsistent with any other law in force. Moreover, in the Statement of Objects and Reasons of the IT (Amendment) Bill, 2006, the legislature clearly notes that amendments inserting offences and security measures into the IT Act are necessary given the proliferation of the Internet and e-transactions, and the rising number of offences. These indicate expressly the legislature’s intention for the IT Act to prevail over general laws like the CrPC in matters relating to the Internet.

Now, we will examine whether the IT Act empowers the Central and State governments to carry out complete blocks on access to the Internet or data services, in the event of emergencies. If the IT Act does cover such a situation, then the CrPC should not be used to block data services. Instead, the IT Act and its Rules should be invoked.

II. Section 69A, IT Act Allows Blocks on Internet Access

Section 69A(1), IT Act says:

“Where the Central Government or any of its officer specially authorised by it in this behalf is satisfied that it is necessary or expedient so to do, in the interest of sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above, it may subject to the provisions of sub-section (2) for reasons to be recorded in writing, by order, direct any agency of the Government or intermediary to block for access by the public or cause to be blocked for access by the public any information generated, transmitted, received, stored or hosted in any computer resource.” (emphasis ours)

Essentially, Section 69A says that the government can block (or cause to be blocked) for access by the public, any information generated, transmitted, etc. in any computer resource, if the government is satisfied that such a measure is in the interests of public order.

Does this section allow the government to institute bans on Internet access in Gujarat? To determine this, we will examine each of the key terms emphasised above.

Access: Section 2(1)(a), IT Act defines access as “...gaining entry into, instructing or communicating with… resources of a computer, computer system or computer network”.

Computer resource: Section 2(1)(k), IT Act defines computer resource as “computer, computer system, computer network...”

Information: Section 2(1)(v), IT Act defines information as “includes… data, message, text, images, sound, voice...”

So ‘blocking for access’ under Section 69A includes preventing gaining entry or communicating with the resources of a computer, computer system or computer network, and it includes blocking communication of data, message, text, images, sound, etc. Now two questions arise:

(1) Do 2G and 3G services, broadband and Wifi fall within the definition of ‘computer network’?

Computer network: Section 2(1)(j), IT Act defines computer network as “inter-connection of one or more computers or computer systems or communication device…” by “...use of satellite, microwave, terrestrial line, wire, wireless or other communication media”.

(2) Do mobile phones that can connect to the Internet (we say smartphones for simplicity) fall within the definition of ‘computer resource’?

Communication device: Section 2(1)(ha), IT Act defines communication device as “cell phones, personal digital assistance or combination of both or any other device used to communicate, send or transmit any text, video, audio or image”.

So a cell phone is a communication device. A computer network is an inter-connection of communication devices by wire or wireless connections, and a computer network is also a computer resource. Blocking of access under Section 69A, IT Act therefore includes preventing entry into, or communication with, the resources of a computer network, which is an inter-connection of communication devices, including smartphones. Add to this the fact that any information (data, message, text, images, sound, voice) can be blocked, and the conclusion seems clear.

The power to block access to Internet services (including data services) can be found within Section 69A, IT Act itself, the special law enacted to cover matters relating to the Internet. Not only this, the IT Act envisages emergency situations when blocking powers may need to be invoked.

III. Section 69A Permits Blocking in Emergency Situations

Section 69A, IT Act doesn’t act in isolation. The Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“Blocking Rules”) operate together with Section 69A(1).

Rule 9 of the Blocking Rules deals with blocking of information in cases of emergency. It says that in cases of emergency, when “no delay is acceptable”, the Designated Officer (DO) shall examine the request for blocking. If it is within the scope of Section 69A(1) (i.e., within the grounds of public order, etc.), then the DO can submit the request to the Secretary, Department of Electronics and Information Technology (DeitY). If the Secretary is satisfied of the need to block during the emergency, then he may issue a reasoned order for blocking, in writing as an interim measure. The intermediaries do not need to be heard in such a situation.

After a blocking order is issued during an urgent situation, the DO must bring the blocking request to the Committee for Examination of Request constituted under Rule 7, Blocking Rules. There is also a review process, by a Review Committee that meets every two months to evaluate whether blocking directions are in compliance with Section 69A(1) [Rule 14].

We submit, therefore, that the Gujarat High Court erred in holding that Section 144, CrPC is the correct legal provision to enable Internet bans. Not only does Section 69A, IT Act cover blocking of access to Internet services, but it also envisages blocking in emergency situations. As a special law for matters surrounding the Internet, Section 69A should prevail over the general law provision of Section 144, CrPC.

 

Acknowledgements: We would like to thank Pranesh Prakash, Japreet Grewal, Sahana Manjesh and Sindhu Manjesh for their invaluable inputs in clarifying arguments and niggling details for these two posts.


Geetha Hariharan is a Programme Officer with Centre for Internet & Society. Padmini Baruah is in her final year of law at the National Law School of India University, Bangalore (NLSIU) and is an intern at CIS.

GSMA Conference Invite

by Prasad Krishna last modified Oct 14, 2015 01:49 AM

PDF document icon Conference Invite.pdf — PDF document, 68 kB (70004 bytes)

Participants of I&J Meeting in Berlin

by Prasad Krishna last modified Oct 14, 2015 02:49 AM

PDF document icon PARTICIPANTS - I&J MEETING BERLIN 8.-9.10.2015.pdf — PDF document, 131 kB (134278 bytes)

Agenda of I&J Meeting in Berlin

by Prasad Krishna last modified Oct 14, 2015 02:52 AM

PDF document icon AGENDA - I&J MEETING BERLIN 8.-9.10.2015-2.pdf — PDF document, 96 kB (99176 bytes)

Contestations of Data, ECJ Safe Harbor Ruling and Lessons for India

by Jyoti Panday last modified Oct 14, 2015 02:40 PM
The European Court of Justice has invalidated a European Commission decision which had previously concluded that the 'Safe Harbour Privacy Principles' provide adequate protection for European citizens' privacy rights in the transfer of personal data between the European Union and the United States. The inadequacies of the framework are not news for the European Commission, and action by the ECJ has been a long time coming. The ruling raises important questions about how the claims of citizenship are being negotiated in the context of the internet, and how contestations over personal data are increasingly being employed in that discourse.

The European Court of Justice (ECJ) has invalidated a European Commission (EC) decision1 which had previously concluded that the 'Safe Harbor Privacy Principles'2 provide adequate protection for European citizens’ privacy rights3 in the transfer of personal data between the European Union and the United States. The challenge stems from the claim that public law enforcement authorities in America obtain personal data from organisations in safe harbour for incompatible and disproportionate purposes, in violation of the Safe Harbour Privacy Principles. The court's judgment follows the advice of the Advocate General of the Court of Justice of the European Union (CJEU), who recently opined4 that US practices allow for the large-scale collection and transfer of personal data belonging to EU citizens without them benefiting from, or having access to, judicial protection under US privacy laws. The inadequacies of the framework are not news for the Commission, and action by the ECJ has been a long time coming. The ruling raises important questions about how contestations over personal data are increasingly being employed in asserting claims of citizenship in the context of the internet.

As the highest court in Europe, the ECJ's decisions are binding on all member states. With this ruling the ECJ has effectively restrained US firms from indiscriminate collection and sharing of European citizens’ data on American soil. The implications of the decision are significant, because it shifts the onus of evaluating protections of personal data for EU citizens from the 4,400 companies5 subscribing to the system onto EU privacy watchdogs. Most significantly, in addressing the rights of a citizen against an established global brand, the judgement goes beyond political and legal opinion to challenge the power imbalance that exists with reference to US based firms.

Today, the free movement of data across borders is a critical factor in facilitating trade, financial services, governance, manufacturing, health and development. However, to consider the ruling merely a clarification of transatlantic mechanisms for data flows misstates the real issue. At the heart of the judgment is an assessment of whether US firms apply the tests of ‘necessity and proportionality’ in the collection and surveillance of data for national security purposes. The application of the necessity and proportionality tests to national security exceptions under safe harbor has been a sticking point that has stalled the renegotiation of the agreement underway between the Commission and the American data protection authorities.6

For EU citizens the stakes in the case are even higher: while their right to privacy is enshrined under EU law, they have no administrative or judicial means of redress if their data is used for reasons they did not intend. In the EU, citizens accessing and agreeing to use the services of US based firms are presented with a false choice between accessing benefits and giving up their fundamental right to privacy. In other words, by seeking that governments and private companies provide better data protection for EU citizens, and by restricting the collection of personal data on a generalised basis without objective criteria, the ruling is effectively an assertion of ‘data sovereignty’. The term ‘data sovereignty’, while lacking a firm definition, refers to a spectrum of approaches adopted by different states to control data generated in or passing through national internet infrastructure.7 Underlying the ruling is the growing policy divide between US and EU privacy and data protection standards, which may lead to what is referred to as the balkanization8 of the internet in the future.

US-EU Data Protection Regime

The safe harbor pact between the EU and US was negotiated in the late 1990s as an attempt to bridge the different approaches to online privacy. Privacy is addressed in the EU as a fundamental human right, while in the US it is defined in terms of consumer protection, which allows trade-offs and exceptions when national security seems to be under threat. In order to address the lower standards of data protection prevalent in the US, the pact facilitates data transfers from the EU to the US by establishing certain safeguards equivalent to the requirements of the EU data protection directive. The safe harbor provisions include firms undertaking not to pass personal information to third parties if the EU data protection standards are not met, and giving users the right to opt out of data collection.9

The agreement was due to be renewed by May 201510 and, while negotiations have been ongoing for two years, EU discontent with safe harbour came to the fore following the Edward Snowden revelations of collection and monitoring facilitated by large private companies for the PRISM program, and after the announcement of the TransAtlantic Trade and Investment Partnership (TTIP).11 EU member states have mostly stayed silent, as they run their own surveillance programs, oftentimes in cooperation with the NSA. EU institutions cannot intervene in matters of national security; however, they do have authority on data protection matters. European Union officials and Members of Parliament have expressed shock and outrage at the surveillance programs unveiled by Snowden's 2013 revelations. Most recently, following the CJEU Advocate General’s opinion, 50 Members of the European Parliament (MEPs) sent a strongly worded letter to the US Congress, hitting back at claims of ‘digital protectionism’ emanating from the US.12 In no uncertain terms, the letter clarified that the EU has different ideas on privacy, platforms, net neutrality, encryption, Bitcoin, zero-days and copyright, and will seek to improve and change any proposal from the EC in the interest of its citizens and of all people.

Towards Harmonization

In November 2013, in an attempt to minimize the loss of trust following the Snowden revelations, the European Commission (EC) published recommendations in its report on 'Rebuilding Trust in EU-US Data Flows'.13 The recommendations revealed two critical initiatives at the EU level: first, the revision of the EU-US safe harbor agreement,14 and second, the adoption of the 'EU-US Umbrella Agreement'15 – a framework for data transfer for the purpose of investigating, detecting or prosecuting a crime, including terrorism. The Umbrella Agreement was recently initialed by EU and US negotiators, and it addresses only the exchange of personal data between law enforcement agencies.16 The Agreement has gained momentum in the wake of recent cases around issues of the territorial duties of providers, enforcement jurisdictions and data localisation.17 However, the adoption of the Umbrella Agreement depends on the US Congress adopting the Judicial Redress Act (JRA) as law.18

Judicial Redress Act

The JRA is a key reform that the EC is pushing for in an attempt to address the gap between privacy rights and remedies available to US citizens and those extended to EU citizens, including allowing EU citizens to sue in American courts. The JRA seeks to extend certain protections under the Privacy Act to records shared by EU and other designated countries with US law enforcement agencies for the purpose of investigating, detecting, or prosecuting criminal offenses. The JRA protections would extend to records shared under the Umbrella Agreement and while it does include civil remedies for violation of data protection, as noted by the Center for Democracy and Technology, the present framework does not provide citizens of EU countries with redress that is at par with that which US persons enjoy under the Privacy Act.19

For example, the measures outlined under the JRA would only be applicable to countries that have appropriate privacy protection agreements in place for data sharing for investigations and that 'efficiently share' such information with the US. Countries that do not have agreements with the US cannot seek these protections, leaving the personal data of their citizens open to collection and misuse by US agencies. Further, the arrangement leaves the determination of 'efficiently sharing' in the hands of US authorities, and countries could lose protection if they do not comply with information sharing requests promptly. Finally, JRA protections do not extend to all non-US persons, nor to records shared for purposes other than law enforcement, such as intelligence gathering. The JRA is further weakened by allowing heads of agencies to exercise their discretion to seek exemption from the Act and opt out of compliance.

Taken together, the JRA, the Umbrella Agreement and the renegotiation of the Safe Harbor Agreement need considerable improvement. It is worth noting that the EU's acceptance of the redundancy of existing agreements, and its assertion of the independence of national data protection authorities in investigating and enforcing national laws, as demonstrated in the Schrems and Weltimmo20 cases, point to accelerated developments in the broader EU privacy landscape.

Consequences

The ECJ Safe Harbor ruling will have far-reaching consequences for the online industry. Often, costly government rulings solidify the market dominance of big companies: as high regulatory costs restrict the entrance of small and medium businesses into the market, competition is gradually wiped out. Further, complying with high standards of data protection means that US firms handling European data will need to consider alternative legal means of transferring personal data. This could include evolving 'model contracts' binding them to EU data protection standards. As Schrems points out, “Big companies don’t only rely on safe harbour: they also rely on binding corporate rules and standard contractual clauses.”21

The ruling is good news for European consumers, who can now approach a national regulator to investigate suspicions of data mishandling. EU data protection regulators may be inundated with requests from companies seeking authorization of new contracts and with consumer complaints. Some are concerned that the ruling puts a dent in the globalized flow of data22, effectively requiring data localization in Europe.23 Others have pointed out that it is unclear how this decision sits with other trade treaties, such as the TPP, that ban data localisation.24 While the implications of the decision will take some time to play out, what is certain is that US companies will have to restructure the management, storage and use of data. The ruling has created the impetus for India to push for reforms to protect its citizens from harms by US firms and to improve trade relations with the EU.

The Opportunity for India

Multiple data flows take place over the internet simultaneously, and this ubiquity of data transfers exposes individuals to privacy risks. There has also been an increase in the economic importance of data processing, as businesses collect and correlate data using analytic tools to create new demand, establish relationships and generate revenue for their services. The primary concern of the Schrems case may be the protection of the rights of EU citizens, but by seeking to extend these rights and ensure compliance in other jurisdictions, the case touches upon many underlying contestations around data and sovereignty.

Last year, Mr Ram Narain, India's Head of Delegation to the Working Group Plenary at the ITU, had stressed that “respecting the principle of sovereignty of information through network functionality and global norms will go a long way in increasing the trust and confidence in use of ICT.”25 In the absence of the recognition of privacy as a right, and of measures or avenues through which citizens can seek redressal against the misuse of data, the demand for data sovereignty rings hollow. The kind of framework that empowered an ordinary citizen in the EU to approach the highest court seeking redressal against the presumed overreach of a foreign government and harms abetted by private corporations simply does not exist in India. Securing citizens’ data in other jurisdictions and from other governments begins with establishing protection regimes within the country.

The Indian government has also stepped up efforts to restrict the transfer of data from India, including pushing for private companies to open data centers in India.26 Negotiating data localisation does not restrict private corporations from using data in broad ways, including tailoring ads and promoting products. Also, data transfers affect any organisation with international operations, for example global multinationals that need to coordinate employee data and information. Companies like Facebook, Google and Microsoft transfer and store data belonging to Indian citizens, and it is worth remembering that the National Security Agency (NSA) would have access to this data through the servers of such private companies. With no existing measures to restrict such indiscriminate access, the ruling points to the need for India to evolve strong protection mechanisms. Finally, the lack of such measures also has an economic impact: a recent Nasscom-Data Security Council of India (DSCI) survey27 pegs the revenue losses incurred by the Indian IT-BPO industry at $2-2.5 billion for a sample size of 15 companies. DSCI has further estimated that the outsourcing business could grow by $50 billion per annum once India is granted “data secure” status by the EU.28 The EU's refusal to grant such status is understandable given the high standard of privacy incorporated in the European Union Data Protection Directive, a standard to which India does not yet match up. The lack of this status prevents the flow of data that is vital for the Digital India vision, and also affects the services industry by restricting the flow of sensitive information, such as patient records, to India.

Data and information structures are controlled and owned by private corporations, and networks transcend national borders; therefore, the foremost emphasis needs to be on improving national frameworks. While enforcement mechanisms such as the Mutual Legal Assistance Treaty (MLAT) process or other methods of international cooperation may seem respectful of international borders and principles of sovereignty,29 for users who live in undemocratic or oppressive regimes such agreements are a considerable risk. Data is also increasingly stored across multiple jurisdictions, and therefore merely applying a data-location lens to protection measures may be too narrow. Further, it should be noted that when companies begin taking data storage decisions based on legal considerations, the speed and reliability of services will be affected.30 Any future regime must reflect the challenges of data transfers taking place in legal and economic spaces that are not identical and may be in opposition. Fundamentally, the protection of privacy will always act as a barrier to the free flow of information; even so, as the Schrems ruling points out, not having adequate privacy protections can also restrict the flow of data, as has been the case for India.

The time is right for India to appoint a data controller and put in place national frameworks based on a nuanced understanding of the issues involved in applying jurisdiction to govern users and their data. Establishing better protection measures will not only build trust and enhance the ability of users to control data about themselves; it is also essential for sustaining the economic and social value generated from data collection. Suggestions for such frameworks have been considered previously by the Group of Experts on Privacy constituted by the Planning Commission.31 By incorporating transparency into mechanisms for data and access requests, and by premising requests on established necessity and proportionality, the Indian government can lead the way in data protection standards. This will give the Indian government more teeth to challenge both the danger of theft of data stored on servers located outside India and the indiscriminate access arising from terms and conditions of businesses that grant such rights to third parties.

1 Commission Decision of 26 July 2000 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the safe harbour privacy principles and related frequently asked questions issued by the US Department of Commerce (notified under document number C(2000) 2441) (Text with EEA relevance.) Official Journal L 215 , 25/08/2000 P. 0007 -0047 2000/520/EC: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000D0520:EN:HTML

2 Safe Harbour Privacy Principles Issued by the U.S. Department of Commerce on July 21, 2000 http://www.export.gov/safeharbor/eu/eg_main_018475.asp

4 Advocate General’s Opinion in Case C-362/14 Maximillian Schrems v Data Protection Commissioner Court of Justice of the European Union, Press Release, No 106/15 Luxembourg, 23 September 2015 http://curia.europa.eu/jcms/upload/docs/application/pdf/2015-09/cp150106en.pdf

5 Jennifer Baker, ‘EU desperately pushes just-as-dodgy safe harbour alternatives’, The Register, October 7, 2015 http://www.theregister.co.uk/2015/10/07/eu_pushes_safe_harbour_alternatives/ 

6 Draft Report, General Data Protection Regulation, Committee on Civil Liberties, Justice and Home Affairs, European Parliament, 2009-2014 http://www.europarl.europa.eu/meetdocs/2009_2014/documents/libe/pr/922/922387/922387en.pdf

7 Dana Polatin-Reuben, Joss Wright, ‘An Internet with BRICS Characteristics: Data Sovereignty and the Balkanisation of the Internet’, University of Oxford, July 7, 2014 https://www.usenix.org/system/files/conference/foci14/foci14-polatin-reuben.pdf

8 Sasha Meinrath, The Future of the Internet: Balkanization and Borders, Time, October 2013 http://ideas.time.com/2013/10/11/the-future-of-the-internet-balkanization-and-borders/

9 Safe Harbour Privacy Principles, Issued by the U.S. Department of Commerce, July 2001 http://www.export.gov/safeharbor/eu/eg_main_018475.asp

10 Facebook case may force European firms to change data storage practices, The Guardian, September 23, 2015 http://www.theguardian.com/us-news/2015/sep/23/us-intelligence-services-surveillance-privacy

11 Privacy Tracker, US-EU Safe Harbor Under Pressure, August 2, 2013 https://iapp.org/news/a/us-eu-safe-harbor-under-pressure

12 Kieren McCarthy, Privacy, net neutrality, security, encryption ... Europe tells Obama, US Congress to back off, The Register, 23 September, 2015 http://www.theregister.co.uk/2015/09/23/european_politicians_to_congress_back_off/

13 Communication from the Commission to the European Parliament and the Council, Rebuilding Trust in EU-US Data Flows, European Commission, November 2013 http://ec.europa.eu/justice/data-protection/files/com_2013_846_en.pdf

14 Safe Harbor on trial in the European Union, Access Blog, September 2014 https://www.accessnow.org/blog/2014/11/13/safe-harbor-on-trial-in-the-european-union

15 European Commission - Fact Sheet Questions and Answers on the EU-US data protection "Umbrella agreement", September 8, 2015 http://europa.eu/rapid/press-release_MEMO-15-5612_en.htm 

16 McGuire Woods, ‘EU and U.S. reach “Umbrella Agreement” on data transfers’, Lexology, September 14, 2015 http://www.lexology.com/library/detail.aspx?g=422bca41-2d54-4648-ae57-00d678515e1f

17 Andrew Woods, Lowering the Temperature on the Microsoft-Ireland Case, Lawfare September, 2015 https://www.lawfareblog.com/lowering-temperature-microsoft-ireland-case

18 Jens-Henrik Jeppesen, Greg Nojeim, ‘The EU-US Umbrella Agreement and the Judicial Redress Act: Small Steps Forward for EU Citizens’ Privacy Rights’, October 5, 2015 https://cdt.org/blog/the-eu-us-umbrella-agreement-and-the-judicial-redress-act-small-steps-forward-for-eu-citizens-privacy-rights/

19 Ibid 18.

20 Landmark ECJ data protection ruling could impact Facebook and Google, The Guardian, 2 October, 2015 http://www.theguardian.com/technology/2015/oct/02/landmark-ecj-data-protection-ruling-facebook-google-weltimmo

21 Julia Powles, Tech companies like Facebook not above the law, says Max Schrems, The Guardian, October 9, 2015 http://www.theguardian.com/technology/2015/oct/09/facebook-data-privacy-max-schrems-european-court-of-justice

22 Adam Thierer, Unintended Consequences of the EU Safe Harbor Ruling, The Technology Liberation Front, October 6, 2015 http://techliberation.com/2015/10/06/unintended-consequenses-of-the-eu-safe-harbor-ruling/#more-75831

23 Anupam Chander, Tweeted ECJ #schrems ruling may effectively require data localization within Europe, https://twitter.com/AnupamChander/status/651369730754801665

24 Lokman Tsui, Tweeted, “If the TPP bans data localization, but the ECJ ruling effectively mandates it, what does that mean for the internet?” https://twitter.com/lokmantsui/status/651393867376275456

26 Sounak Mitra, Xiaomi bets big on India despite problems, Business Standard, December 2014 http://www.business-standard.com/article/companies/xiaomi-bets-big-on-india-despite-problems-114122201023_1.html

27 Neha Alawadi, Ruling on data flow between EU & US may impact India’s IT sector, Economic Times, October 7, 2015 http://economictimes.indiatimes.com/articleshow/49250738.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

28 Pranav Menon, Data Protection Laws in India and Data Security - Impact on India-EU Free Trade Agreement, CIS Access to Knowledge, 2011 http://cis-india.org/a2k/blogs/data-security-laws-india.pdf

29 Surendra Kumar Sinha, India wants Mutual Legal Assistance treaty with Bangladesh, Economic Times, October 7, 2015 http://economictimes.indiatimes.com/articleshow/49262294.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

30 Pablo Chavez, Director, Public Policy and Government Affairs, Testifying before the U.S. Senate on transparency legislation, November 3, 2013 http://googlepublicpolicy.blogspot.in/2013/11/testifying-before-us-senate-on.htm 

31 Report of the Group of Experts on Privacy (Chaired by Justice A P Shah, Former Chief Justice, Delhi High Court), Planning Commission, October 2012 http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf


Peering behind the veil of ICANN's DIDP (II)

by Padmini Baruah — last modified Oct 15, 2015 03:14 AM
In a previous blog post, I introduced ICANN’s Documentary Information Disclosure Policy (“DIDP”) and its extremely broad grounds for non-disclosure. In this short post, I analyse every DIDP request that ICANN has ever responded to, in order to point out the flaws in the policy that urgently need to be remedied.

Read the previous blog post here. Every DIDP request that ICANN has ever responded to can be accessed here.


The table here is a comprehensive breakdown of all the DIDP requests that ICANN has responded to. It is to be read with this document, which contains a numbered list of the non-disclosure exceptions outlined in ICANN’s policy. I sought to scrutinize the number of times ICANN has provided satisfactory information, the number of times it has denied information, and the grounds on which it has done so. What we found was alarming:

  1. Of a total of 91 requests (as of 13/10/2015), ICANN has fully and positively responded to only 11.
  2. It has responded partially to 47 of 91 requests, with some amount of information (usually that which is available as public records).
  3. It has not responded at all to 33 of 91 requests.
  4. The Non-Disclosure Clause (1)[1] has been invoked 17 times.
  5. The Non-Disclosure Clause (2)[2] has been invoked 39 times.
  6. The Non-Disclosure Clause (3)[3] has been invoked 31 times.
  7. The Non-Disclosure Clause (4)[4] has been invoked 5 times.
  8. The Non-Disclosure Clause (5)[5] has been invoked 34 times.
  9. The Non-Disclosure Clause (6)[6] has been invoked 35 times.
  10. The Non-Disclosure Clause (7)[7] has been invoked once.
  11. The Non-Disclosure Clause (8)[8] has been invoked 22 times.
  12. The Non-Disclosure Clause (9)[9] has been invoked 30 times.
  13. The Non-Disclosure Clause (10)[10] has been invoked 10 times.
  14. The Non-Disclosure Clause (11)[11] has been invoked 12 times.
  15. The Non-Disclosure Clause (12)[12] has been invoked 18 times.

This data is disturbing because it reveals that ICANN has, in practice, been able to deflect most requests for information. It has regularly invoked the clauses covering its internal processes and its deliberations with stakeholders, as well as the clauses protecting the financial interests of third parties (together over 50% of all the non-disclosure clauses ever invoked - see chart below), to avoid providing information on pertinent matters such as its compliance audits and reports of abuse to registrars. We believe that even if ICANN is legally a private entity, and not held to the same standards as a state, it nonetheless plays the role of regulating an enormous public good, namely the Internet. There is therefore a great onus on ICANN to be far more open about the information that it provides.

Finally, it is extremely disturbing that ICANN has extended full disclosure to only 12% of the requests it has received; an astonishing 88% of the requests have been denied, partly or wholly. It is clear that ICANN has failed to uphold the transparency it claims to stand for, and this needs to be remedied at the earliest.
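The arithmetic behind these percentages can be reproduced directly from the tallies listed above. The short Python sketch below is only a back-of-the-envelope check; in particular, the grouping of the 'internal process' and 'third-party financial interest' grounds as conditions 2, 3, 5 and 6 is our own reading of the DIDP conditions, not a classification ICANN provides.

# Tallies reported above (as of 13/10/2015).
full, partial, none_at_all = 11, 47, 33
total_requests = full + partial + none_at_all        # 91

# Number of times each non-disclosure condition (1-12) was invoked.
invocations = {1: 17, 2: 39, 3: 31, 4: 5, 5: 34, 6: 35,
               7: 1, 8: 22, 9: 30, 10: 10, 11: 12, 12: 18}
total_invocations = sum(invocations.values())         # 254

# Assumed grouping: conditions 2, 3, 5 and 6 cover internal deliberations
# and the commercial/financial interests of third parties.
grouped = sum(invocations[c] for c in (2, 3, 5, 6))

print(f"Full disclosure: {full / total_requests:.0%}")                              # ~12%
print(f"Denied partly or wholly: {(partial + none_at_all) / total_requests:.0%}")   # ~88%
print(f"Share of the grouped clauses: {grouped / total_invocations:.0%}")           # ~55%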

Pie Chart 1


 

Pie Chart 2


[1]Information provided by or to a government or international organization, or any form of recitation of such information, in the expectation that the information will be kept confidential and/or would or likely would materially prejudice ICANN's relationship with that party

[2]Internal information that, if disclosed, would or would be likely to compromise the integrity of ICANN's deliberative and decision-making process by inhibiting the candid exchange of ideas and communications, including internal documents, memoranda, and other similar communications to or from ICANN Directors, ICANN Directors' Advisors, ICANN staff, ICANN consultants, ICANN contractors, and ICANN agents

[3]Information exchanged, prepared for, or derived from the deliberative and decision-making process between ICANN, its constituents, and/or other entities with which ICANN cooperates that, if disclosed, would or would be likely to compromise the integrity of the deliberative and decision-making process between and among ICANN, its constituents, and/or other entities with which ICANN cooperates by inhibiting the candid exchange of ideas and communications

[4]Personnel, medical, contractual, remuneration, and similar records relating to an individual's personal information, when the disclosure of such information would or likely would constitute an invasion of personal privacy, as well as proceedings of internal appeal mechanisms and investigations

[5]Information provided to ICANN by a party that, if disclosed, would or would be likely to materially prejudice the commercial interests, financial interests, and/or competitive position of such party or was provided to ICANN pursuant to a nondisclosure agreement or nondisclosure provision within an agreement

[6]Confidential business information and/or internal policies and procedures

[7]Information that, if disclosed, would or would be likely to endanger the life, health, or safety of any individual or materially prejudice the administration of justice

[8]Information subject to the attorney–client, attorney work product privilege, or any other applicable privilege, or disclosure of which might prejudice any internal, governmental, or legal investigation

[9]Drafts of all correspondence, reports, documents, agreements, contracts, emails, or any other forms of communication

[10]Information that relates in any way to the security and stability of the Internet, including the operation of the L Root or any changes, modifications, or additions to the root zone

[11]Trade secrets and commercial and financial information not publicly disclosed by ICANN

[12]Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual

Comments on the Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10)

by Geetha Hariharan last modified Oct 16, 2015 02:44 AM
On 9 October 2015, the Zero Draft of the UN General Assembly's Overall Review of implementation of WSIS Outcomes was released. Comments were sought on the Zero Draft from diverse stakeholders. The Centre for Internet & Society's response to the call for comments is below.

These comments were prepared by Geetha Hariharan with inputs from Sumandro Chattapadhyay, Pranesh Prakash, Sunil Abraham, Japreet Grewal and Nehaa Chaudhari. Download the comments here.


  1. The Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (“Zero Draft”) is divided into three sections: (A) ICT for Development; (B) Internet Governance; (C) Implementation and Follow-up. CIS’ comments follow the same structure.
  2. The Zero Draft is a commendable document, covering crucial areas of growth and challenges surrounding the WSIS. The Zero Draft makes detailed references to development-related challenges, noting the persistent digital divide, the importance of universal access, innovation and investment, and of enabling legal and regulatory environments conducive to the same. It also takes note of financial mechanisms, without which principles would remain toothless. Issues surrounding Internet governance, particularly net neutrality, privacy and the continuation of the IGF are included in the Zero Draft.
  3. However, we believe that references to these issues are inadequate to make progress on existing challenges. Issues surrounding ICT for Development and Internet Governance have scarcely changed in the past ten years. Though we may laud the progress so far achieved, the challenges of universal access and connectivity, the digital divide, insufficient funding, diverse and conflicting legal systems surrounding the Internet, the gender divide and online harassment persist. Moreover, the working of the IGF and the process of Enhanced Cooperation, both laid down with great anticipation in the Tunis Agenda, have been found wanting.
  4. These need to be addressed more clearly and strongly in the Zero Draft. In light of these shortcomings, we suggest the following changes to the Zero Draft, in the hope that they are accepted.
    A. ICT for Development
  5. Paragraphs 16-21 elaborate upon the digital divide – both the progress made and the challenges that remain. While the Zero Draft recognizes the disparities in access to the Internet among countries, between men and women, and in the languages of Internet content, it fails to attend to two issues.
  6. First, accessibility for persons with disabilities continues to be an immense challenge. Since the mandate of the WSIS involves universal access and the bridging of the digital divide, it is necessary that the Zero Draft take note of this continuing challenge.
  7. We suggest the insertion of Para 20A after Para 20:
    “20A. We draw attention also to the digital divide adversely affecting the accessibility of persons with disabilities. We call on all stakeholders to take immediate measures to ensure accessibility for persons with disabilities by 2020, and to enhance their capacity and access to ICTs.”
  8. Second, while the digital divide among the consumers of ICTs has decreased since 2003-2005, the digital production divide goes unmentioned. The developing world continues to have fewer producers of technology compared to their sheer concentration in the developed world – so much so that countries like India are currently pushing for foreign investment through missions like ‘Digital India’. Of course, the Zero Draft refers to the importance of private sector investment (Para 31). But it fails to point out that currently, such investment originates from corporations in the developed world. For this digital production divide to disappear, restrictions on innovation – restrictive patent or copyright regimes, for instance – should be removed, among other measures. Equitable development is the key.
  9. Ongoing negotiations of plurilateral agreements such as the Trans-Pacific Partnership (TPP) go unmentioned in the Zero Draft. This is shocking. The TPP has been criticized for its excessive leeway and support for IP rightsholders, while incorporating non-binding commitments involving the rights of users (see Clause QQ.G.17 on copyright exceptions and limitations, QQ.H.4 on damages and QQ.C.12 on ccTLD WHOIS, https://wikileaks.org/tpp-ip3/WikiLeaks-TPP-IP-Chapter/WikiLeaks-TPP-IP-Chapter-051015.pdf). Plaudits for progress made on the digital divide would be lip service if such agreements were not denounced.
  10. Therefore, we propose the addition of Para 20B after Para 20:
    “20B. We draw attention also to the digital production divide among countries, recognizing that domestic innovation and production are instrumental in achieving universal connectivity. Taking note of recent negotiations surrounding restrictive and unbalanced plurilateral trade agreements, we call on stakeholders to adopt policies to ensure globally equitable development, removing restrictions on innovation and conducive to fostering domestic and local production.”
  11. Paragraph 22 of the Zero Draft acknowledges that “school curriculum requirements for ICT, open access to data and free flow of information, fostering of competition, access to finance”, etc. have “in many countries, facilitated significant gains in connectivity and sustainable development”.
  12. This is, of course, true. However, as Para 23 also recognises, access to knowledge, data and innovation have come with large costs, particularly for developing countries like India. These costs are heightened by a lack of promotion and adoption of open standards, open access, open educational resources, open data (including open government data), and other free and open source practices. These can help alleviate costs, reduce duplication of efforts, and provide an impetus to innovation and connectivity globally.
  13. Not only this, but the implications of open access to data and knowledge (including open government data), and responsible collection and dissemination of data are much larger in light of the importance of ICTs in today’s world. As Para 7 of the Zero Draft indicates, ICTs are now becoming an indicator of development itself, as well as being a key facilitator for achieving other developmental goals. As Para 56 of the Zero Draft recognizes, in order to measure the impact of ICTs on the ground – undoubtedly within the mandate of WSIS – it is necessary that there be an enabling environment to collect and analyse reliable data. Efforts towards the same have already been undertaken by the United Nations in the form of “Data Revolution for Sustainable Development”. In this light, the Zero Draft rightly calls for enhancement of regional, national and local capacity to collect and conduct analyses of development and ICT statistics (Para 56). Achieving the central goals of the WSIS process requires that such data is collected and disseminated under open standards and open licenses, leading to creation of global open data on the ICT indicators concerned.
  14. As such, we suggest that the following clause be inserted as Para 23A in the Zero Draft:

“23A. We recognize the importance of access to open, affordable, and reliable technologies and services, open access to knowledge, and open data, including open government data, and encourage all stakeholders to explore concrete options to facilitate the same.”

15. Paragraph 30 of the Zero Draft laments “the lack of progress on the Digital Solidarity Fund”, and calls “for a review of options for its future”.

16. The Digital Solidarity Fund was established with the objective of “transforming the digital divide into digital opportunities for the developing world” through voluntary contributions [Para 28, Tunis Agenda]. It was an innovative financial mechanism to help bridge the digital divide between developed and developing countries. This divide continues to exist, as the Zero Draft itself recognizes in Paragraphs 16-21.

17. Given the persistent digital divide, a “call for review of options” as to the future of the Digital Solidarity Fund is inadequate to enable developing countries to achieve parity with developed countries. A stronger and more definite commitment is required.

18. As such, we suggest the following language in place of the current Para 30:

“30. We express concern at the lack of progress on the Digital Solidarity Fund, welcomed in Tunis as an innovative financial mechanism of a voluntary nature, and we call for voluntary commitments from States to revive and sustain the Digital Solidarity Fund.”

19. Paragraph 31 of the Zero Draft recognizes the importance of “legal and regulatory frameworks conducive to investment and innovation”. This is eminently laudable. However, a broader vision is more compatible with paving the way for affordable and widespread access to devices and technology necessary for universal connectivity.

20. We suggest the following additions to Para 31:

“31. We recognise the critical importance of private sector investment in ICT access, content and services, and of legal and regulatory frameworks conducive to local investment and expansive, permissionless innovation.”

B. Internet Governance

21. Paragraph 32 of the Zero Draft recognizes the “general agreement that the governance of the Internet should be open, inclusive, and transparent”. Para 37 takes into account “the report of the CSTD Working Group on improvements to the IGF”. Para 37 also affirms the intention of the General Assembly to extend the life of the IGF by (at least) another 5 years, and acknowledges the “unique role of the IGF”.

22. The IGF is, of course, unique and crucial to global Internet governance. In the last 10 years, major strides have been made among diverse stakeholders in beginning and sustaining conversations on issues critical to Internet governance. These include issues such as human rights, inclusiveness and diversity, universal access to connectivity, emerging issues such as net neutrality, the right to be forgotten, and several others. Through its many arms like the Dynamic Coalitions, the Best Practices Forums, Birds-of-a-Feather meetings and Workshops, the IGF has made it possible for stakeholders to connect.

23. However, the constitution and functioning of the IGF have not been without lament and controversy. Foremost among the laments was the IGF’s evident lack of outcome-orientation; this continues to be debatable. Second, the composition and functioning of the MAG, particularly its transparency, have come under the microscope several times. One of the suggestions of the CSTD Working Group on Improvements to the IGF concerned the structure and working methods of the Multistakeholder Advisory Group (MAG). The Working Group recommended that the “process of selection of MAG members should be inclusive, predictable, transparent and fully documented” (Section II.2, Clause 21(a), Page 5 of the Report).

24. Transparency in the structure and working methods of the MAG are critical to the credibility and impact of the IGF. The functioning of the IGF depends, in a large part, on the MAG. The UN Secretary General established the MAG, and it advises the Secretary General on the programme and schedule of the IGF meetings each year (see <http://www.intgovforum.org/cms/mag/44-about-the-mag>). Under its Terms of Reference, the MAG decides the main themes and sub-themes for each IGF, sets or modifies the rules of engagement, organizes the main plenary sessions, coordinates workshop panels and speakers, and crucially, evaluates the many submissions it receives to choose from amongst them the workshops for each IGF meeting. The content of each IGF, then, is in the hands of the MAG.

25. But the MAG is not inclusive or transparent. The MAG itself has lamented its opaque ‘black box approach’ to nomination and selection. Also, CIS’ research has shown that the process of nomination and selection of the MAG continues to be opaque. When CIS sought information on the nominators of the MAG, the IGF Secretariat responded that this information would not be made public (see <http://cis-india.org/internet-governance/blog/mag-analysis>).

26. Further, our analysis of MAG membership shows that since 2006, 26 persons have served for 6 years or more on the MAG. This is astounding, since under the MAG Terms of Reference, MAG members are nominated for a term of 1 year. This one-year term is “automatically renewable for 2 more consecutive years”, but such renewal is contingent on an evaluation of the engagement of MAG members in their activities (see <http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference>). MAG members ought not to serve for more than 3 consecutive years, in accordance with their Terms of Reference. Yet, out of 182 MAG members, around 62 have served more than the 3-year term designated by the Terms of Reference (see <http://cis-india.org/internet-governance/blog/mag-analysis>).

27. Not only this, but our research showed that 36% of all MAG members since 2006 have hailed from the Western European and Others Group (see <http://cis-india.org/internet-governance/blog/mag-analysis>). This indicates a lack of inclusiveness, though the MAG is certainly more inclusive in its composition and functioning than other I-Star organisations such as ICANN.
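As an illustration, the sketch below (in Python) shows how such figures can be derived from yearly MAG rosters. The rosters and the regional mapping are hypothetical placeholders, and total years of service, rather than consecutive years, is assumed as the measure:

from collections import Counter

# Hypothetical yearly rosters: {year: [member names]}.
rosters = {
    2006: ["A. Member", "B. Member", "C. Member"],
    2007: ["A. Member", "B. Member", "D. Member"],
    2008: ["A. Member", "D. Member"],
    2009: ["A. Member", "D. Member", "E. Member"],
}

# Hypothetical mapping of members to UN regional groups.
region = {"A. Member": "WEOG", "B. Member": "Asia-Pacific",
          "C. Member": "WEOG", "D. Member": "Africa", "E. Member": "GRULAC"}

# Total years served per member (read here as total, not consecutive, service).
years_served = Counter()
for members in rosters.values():
    years_served.update(members)

over_three = [m for m, years in years_served.items() if years > 3]
six_or_more = [m for m, years in years_served.items() if years >= 6]
weog_share = sum(1 for m in years_served if region[m] == "WEOG") / len(years_served)

print(f"Members serving more than 3 years: {len(over_three)}")
print(f"Members serving 6 years or more: {len(six_or_more)}")
print(f"Share of members from WEOG: {weog_share:.0%}")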

28. Tackling these infirmities within the MAG would go a long way in ensuring that the IGF lives up to its purpose. Therefore, we suggest the following additions to Para 37:

“37. We acknowledge the unique role of the Internet Governance Forum (IGF) as a multistakeholder platform for discussion of Internet governance issues, and take note of the report and recommendations of the CSTD Working Group on improvements to the IGF, which was approved by the General Assembly in its resolution, and ongoing work to implement the findings of that report. We reaffirm the principles of openness, inclusiveness and transparency in the constitution, organisation and functioning of the IGF, and in particular, in the nomination and selection of the Multistakeholder Advisory Group (MAG). We extend the IGF mandate for another five years with its current mandate as set out in paragraph 72 of the Tunis Agenda for the Information Society. We recognize that, at the end of this period, progress must be made on Forum outcomes and participation of relevant stakeholders from developing countries.”

29. Paragraphs 32-37 of the Zero Draft make mention of "open, inclusive, and transparent" governance of the Internet. They fail, however, to take note of the lack of inclusiveness and diversity in Internet governance organisations – a lack extending across the representation, participation and operations of these organisations. In many cases, mention of inclusiveness and diversity becomes tokenism or a formal (but not operational) principle. In substantive terms, the developing world is pitifully represented in standards organisations and in ICANN, and policy discussions in organisations like ISOC occur largely in cities like Geneva and New York. For example, the ‘diversity’ mailing list of the IETF has very low traffic. Within ICANN, 307 out of 672 registries listed in ICANN's registry directory are based in the United States, while 624 of the 1010 ICANN-accredited registrars are US-based. Not only this, but 80% of the responses received by ICANN during the ICG's call for proposals came from men. A truly global, open, inclusive and transparent governance of the Internet must not be so skewed.

30. We propose, therefore, the addition of a Para 37A after Para 37:

“37A. We draw attention to the challenges surrounding diversity and inclusiveness in organisations involved in Internet governance, and call upon these organisations to take immediate measures to ensure diversity and inclusiveness in a substantive manner.”

31. Paragraph 36 of the Zero Draft notes that "a number of member states have called for an international legal framework for Internet governance." But it makes no reference to ICANN or to the importance of the ongoing IANA transition to global Internet governance. ICANN and its monopoly over several critical Internet resources was one of the key drivers of the WSIS in 2003-2005. Unfortunately, this focus seems to have shifted entirely. An open, inclusive, transparent and global Internet is a misnomer so long as ICANN – and in effect, the United States – continues to hold a monopoly over critical Internet resources. The allocation and administration of these resources should be decentralized and distributed, and should not be within the disproportionate control of any one jurisdiction.

32. Therefore, we suggest the following Para 37A after Para 37:

“37A. We affirm that the allocation, administration and policy involving critical Internet resources must be inclusive and decentralized, and call upon all stakeholders and in particular, states and organizations responsible for essential tasks associated with the Internet, to take immediate measures to create an environment that facilitates this development.”

33. Paragraph 43 of the Zero Draft encourages “all stakeholders to ensure respect for privacy and the protection of personal information and data”. But the Zero Draft inadvertently leaves out the report of the Office of the UN High Commissioner for Human Rights on digital privacy, ‘The right to privacy in the digital age’ (A/HRC/27/37). This report, adopted by the Human Rights Council in June 2014, affirms the importance of the right to privacy in our increasingly digital age, and offers crucial insight into recent erosions of privacy. It is both fitting and necessary that the General Assembly take note of and affirm the said report in the context of digital privacy.

34. We offer the following suggestion as an addition to Para 43:

“43. We emphasise that no person shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home, or correspondence, consistent with countries’ applicable obligations under international human rights law. In this regard, we acknowledge the report of the Office of the UN High Commissioner for Human Rights, ‘The right to privacy in the digital age’ (A/HRC/27/37, 30 June 2014), and take note of its findings. We encourage all stakeholders to ensure respect for privacy and the protection of personal information and data.”

35. Paragraphs 40-44 of the Zero Draft state that communication is a fundamental human need, reaffirming Article 19 of the International Covenant on Civil and Political Rights, with its attendant narrow limitations. The Zero Draft also underscores the need to respect the independence of the press. In particular, it reaffirms the principle that the same rights that people enjoy offline must also be protected online.

36. Further, in Para 31, the Zero Draft recognizes the “critical importance of private sector investment in ICT access, content, and services”. This is true, of course, but corporations also play a crucial role in facilitating the freedom of speech and expression (and all other related rights) on the Internet. As the Internet is led largely by the private sector in the development and distribution of devices, protocols and content-platforms, corporations play a major role in facilitating – and sometimes, in restricting – human rights online. They are, in sum, intermediaries without whom the Internet cannot function.

37. Given this, it is essential that the outcome document of the WSIS+10 Overall Review recognize and affirm the role of the private sector, and crucially, its responsibilities to respect and protect human rights online.

38. We suggest, therefore, the insertion of the following paragraph Para 42A, after Para 42:

“42A. We recognize the critical role played by corporations and the private sector in facilitating human rights online. We affirm, in this regard, the responsibilities of the private sector set out in the Report of the Special Representative of the Secretary General on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31 (21 March 2011), and encourage policies and commitments towards respect and remedies for human rights.”

C. Implementation and Follow-up

39. Para 57 of the Zero Draft calls for a review of the WSIS Outcomes, and leaves a blank space inviting suggestions for the year of the review. How often, then, should the review of implementation of WSIS+10 Outcomes take place?

40. It is true, of course, that reviews of the implementation of WSIS Outcomes are necessary to take stock of progress and challenges. However, we caution against annual, biennial or other closely-spaced reviews, owing to concerns surrounding budgetary allocations.

41. Reviews of implementation of outcomes (typically followed by an Outcome Document) come at considerable cost, which is budgeted for and met through contributions (sometimes voluntary) from states. Were reviews too closely spaced, budgets that ideally ought to be utilized to bridge digital divides and ensure universal connectivity, particularly for developing states, would be misspent on reviews. Moreover, closely-spaced reviews would provide only superficial quantitative assessments of progress, and would not throw light on longer-term or qualitative impacts.

Comments on the Zero Draft of the UN General Assembly

by Prasad Krishna last modified Oct 16, 2015 02:41 AM

PDF document icon Final_CIS_Comments_UNGA_WSIS_Zero_Draft.pdf — PDF document, 478 kB (490106 bytes)

CyFy Agenda

by Prasad Krishna last modified Oct 16, 2015 03:01 AM

PDF document icon CyFyAgendaFinal.pdf — PDF document, 190 kB (195156 bytes)

The 'Global Multistakeholder Community' is Neither Global Nor Multistakeholder

by Pranesh Prakash last modified Nov 03, 2016 10:42 AM
CIS research shows how Western, male, and industry-driven the IANA transition process actually is.

 

In March 2014, the US government announced that they were going to end the contract they have with ICANN to run something called the Internet Assigned Numbers Authority (IANA), and hand over control to the “global multistakeholder community”. They insisted that the plan for transition had to come through a multistakeholder process and have stakeholders “across the global Internet community”.

Analysis of the process since then shows that the “global multistakeholder community” that converges at ICANN has not actually represented the disparate interests and concerns of different stakeholders. CIS research has found that the discussions around the IANA transition have not been driven by the “global multistakeholder community”, but mostly by males from industry in North America and Western Europe.

CIS analysed the five main mailing lists where the IANA transition plan was formulated: ICANN’s ICG Stewardship and CCWG Accountability lists; IETF’s IANAPLAN list; and the NRO’s IANAXFER list and CRISP lists. What we found was quite disheartening.

  • A total of 239 individuals participated cumulatively, across all five lists.
  • Only 98 substantively contributed to the final shape of the ICG proposal, if one takes a count of 20 mails (admittedly, an arbitrary cut-off) as a substantive contribution; 12 of these 98 were ICANN staff, some of whom were largely performing an administrative function.

We decided to look at the diversity within these substantive contributors using gender, stakeholder grouping, and region. We relied on public records, including GNSO SOI statements, and extensive searches on the Web. Given that, there may be inadvertent errors, but the findings are so stark that even a few errors wouldn’t affect them much.
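Below is a minimal Python sketch of how such a tally can be computed. The mail counts and per-person attributes are hypothetical placeholders (in practice they come from the list archives and from public records such as SOI statements), and the 20-mail cut-off mirrors the one described above.

from collections import Counter

# Hypothetical per-list mail counts: {list_name: {sender: number_of_mails}}.
mail_counts = {
    "icg-forum": {"alice@example.org": 45, "bob@example.org": 3},
    "ccwg-accountability": {"alice@example.org": 12, "carol@example.org": 28},
    "ianaplan": {"dave@example.org": 21},
    "ianaxfer": {},
    "crisp": {"carol@example.org": 7},
}

# Hypothetical attributes (gender, stakeholder group, UN regional group)
# gathered from public records for each sender.
attributes = {
    "alice@example.org": ("female", "industry", "WEOG"),
    "bob@example.org": ("male", "government", "GRULAC"),
    "carol@example.org": ("female", "technical", "WEOG"),
    "dave@example.org": ("male", "industry", "Asia-Pacific"),
}

# Total mails per person across all five lists.
totals = Counter()
for per_list in mail_counts.values():
    totals.update(per_list)

# Treat 20 or more mails as a "substantive" contribution (arbitrary cut-off).
substantive = [person for person, count in totals.items() if count >= 20]

# Breakdown of the substantive contributors along each dimension.
for dimension, index in (("gender", 0), ("stakeholder", 1), ("region", 2)):
    counts = Counter(attributes[person][index] for person in substantive)
    shares = {group: f"{n / len(substantive):.0%}" for group, n in counts.items()}
    print(dimension, shares)

From the actual five lists, the breakdown was: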

  • 2 in 5 (39 of 98, or 40%) were from a single country: the United States of America.
  • 4 in 5 (77 of 98) were from countries that are part of the WEOG UN grouping (which includes Western Europe, the US, Canada, Israel, Australia, and New Zealand, and consists only of developed countries).
  • None were from the Eastern European Group (which includes Russia), and only 5 of 98 were from all of GRULAC (the Latin American and Caribbean Group).
  • 4 in 5 (77 of 98) were male and 21 were female.
  • 4 in 5 (76 of 98) were from industry or the technical community, and only 4 (or 1 in 25) were identifiable as primarily speaking on behalf of governments.

This also shows that the process has utterly failed to be globally representative. The same skew extends to ICANN's wider structures:

  • 3 in 5 registrars are from the United States of America (624 out of 1010, as of March 2014, according to ICANN's accredited registrars list), with only 0.6% being from the 54 countries in Africa (7 out of 1010).

  • 45% of all the registries are from the United States of America! (307 out of 672 registries listed in ICANN’s registry directory in August 2015.)
  • 66% (34 of 51) of the Business Constituency at ICANN are from a single country: the United States of America. (N.B.: This page doesn’t seem to be up-to-date.)
  • This shows that businesses from the United States of America continue to dominate ICANN to a very significant degree, and this is also reflected in the nature of the dialogue within ICANN, including the fact that the proposal that came out of the ICANN ‘global multistakeholder community’ on the IANA transition proposes a clause that requires the ‘IANA Functions Operator’ to be a US-based entity. For more on that issue, see this post on the jurisdiction issue at ICANN (or rather, on the lack of a jurisdiction issue at ICANN).

    Policy Brief: Oversight Mechanisms for Surveillance

    by Elonnai Hickok last modified Nov 24, 2015 06:09 AM

    Download the PDF


    Introduction

    Across jurisdictions, the need for effective and relevant oversight mechanisms (coupled with legislative safeguards) for state surveillance has been highlighted by civil society, academia, citizens and other key stakeholders.[1] A key part of oversight of state surveillance is the accountability of intelligence agencies. This has been recognized at the international level: the Organization for Economic Co-operation and Development, the United Nations, the Organization for Security and Cooperation in Europe, the Parliamentary Assembly of the Council of Europe, and the Inter-Parliamentary Union have all recognized that intelligence agencies need to be subject to democratic accountability.[2] Since 2013, the need for oversight has received particular attention in light of the information disclosed through the 'Snowden Revelations'.[3] Some countries such as the US, Canada, and the UK have regulatory mechanisms for the oversight of state surveillance and the intelligence community, while many other countries, India included, have piecemeal oversight mechanisms in place. The existence of regulatory mechanisms for state surveillance does not necessarily equate to effective oversight, and piecemeal mechanisms, depending on how they are implemented, could be more effective than comprehensive ones. This policy brief explores the purpose of oversight mechanisms for state surveillance, the different forms such mechanisms take, and what makes a mechanism effective and comprehensive. The brief also reviews oversight mechanisms from the US, UK, and Canada and provides recommendations for ways in which India can strengthen its present oversight mechanisms for state surveillance and the intelligence community.

    What is the purpose and what are the different components of an oversight mechanism for State Surveillance?

    The International Principles on the Application of Human Rights to Communications Surveillance, developed through a global consultation with civil society groups, industry, and international experts, recommend that public oversight mechanisms for state surveillance be established to ensure the transparency and accountability of Communications Surveillance. To achieve this, such mechanisms should have the authority to:

    • Access all potentially relevant information about State actions, including, where appropriate, access to secret or classified information;
    • Assess whether the State is making legitimate use of its lawful capabilities;
    • Evaluate whether the State has been comprehensively and accurately publishing information about the use and scope of Communications Surveillance techniques and powers in accordance with its Transparency obligations, and publish periodic reports and other information relevant to Communications Surveillance;
    • Make public determinations as to the lawfulness of those actions, including the extent to which they comply with these Principles.[4]

    What can inform oversight mechanisms for state surveillance?

    The development of effective oversight mechanisms for state surveillance can be informed by a number of factors including:

    • Rapidly changing technology – how can mechanisms adapt, account for, and evaluate perpetually changing intelligence capabilities?
    • Expanding surveillance powers – how can mechanisms evaluate and rationalize the use of expanding agency powers?
    • Tensions around secrecy, national interest, and individual rights – how can mechanisms respect, recognize, and uphold multiple competing interests and needs, including an agency's need for secrecy, the government's need to protect national security, and citizens' need to have their constitutional and fundamental rights upheld?
    • The structure, purpose, and goals of specific intelligence agencies and circumstances– how can mechanisms be sensitive and attuned to the structure, purpose, and functions of differing intelligence agencies and circumstances?

    These factors lead to further questions around:

    • The purpose of an oversight mechanism: Is an oversight mechanism meant to ensure effectiveness of an agency? Perform general reviews of agency performance? Supervise the actions of an agency? Hold an agency accountable for misconduct?
    • The structure of an oversight mechanism: Is it internal? External? A combination of both? How many oversight mechanisms should agencies be held accountable to?
    • The functions of an oversight mechanism: Is an oversight mechanism meant to inspect? Evaluate? Investigate? Report?
    • The powers of an oversight mechanism: What extent of access to the internal workings of security agencies and law enforcement does an oversight mechanism need, and should it have, in order to carry out due diligence? What extent of legal backing should an oversight mechanism have to hold agencies legally accountable?

    What oversight mechanisms for State Surveillance exist in India?

    In India the oversight 'ecosystem' for state surveillance is comprised of:

    1. Review committee: Under the Indian Telegraph Act 1885 and the Rules issued thereunder (Rule 419A), a Central Review Committee, consisting of the Cabinet Secretary, the Secretary of Legal Affairs to the Government of India, and the Secretary of the Department of Telecommunications to the Government of India, is responsible for meeting on a bi-monthly basis and reviewing the legality of interception directions. The review committee has the power to revoke the directions and order the destruction of intercepted material.[5] This review committee is also responsible for evaluating interception, monitoring, and decryption orders issued under section 69 of the Information Technology Act 2000,[6] and orders for the monitoring and collection of traffic data under section 69B of the Information Technology Act 2000.[7]
    2. Authorizing Authorities: The Secretary in the Ministry of Home Affairs of the Central Government is responsible for authorizing requests for the interception, monitoring, and decryption of communications issued by central agencies.[8] The Secretary in charge of the Home Department is responsible for authorizing requests for the interception, monitoring, and decryption of communications from state level agencies and law enforcement.[9] The Secretary to the Government of India in the Department of Information Technology under the Ministry of Communications and Information Technology is responsible for authorizing requests for the monitoring and collection of traffic data.[10] Any officer not below the rank of Joint Secretary to the Government of India, who has been authorised by the Union Home Secretary or the State Home Secretary in this behalf, may authorize the interception of communications in case of an emergency.[11] A Commissioner of Police, District Superintendent of Police or Magistrate may issue requests for stored data to any postal or telegraph authority.[12]
    3. Administrative authorities: India does not have an oversight mechanism for intelligence agencies, but agencies do report to different authorities. For example: the Intelligence Bureau reports to the Home Minister; the Research and Analysis Wing is under the Cabinet Secretariat and reports to the Prime Minister; the Joint Intelligence Committee (JIC), the National Technical Research Organisation (NTRO) and the Aviation Research Centre (ARC) report to the National Security Adviser; and the National Security Council Secretariat, under the NSA, serves the National Security Council.[13]

    It is important to note that though India has a Right to Information Act, most security agencies are exempt from the purview of the Act,[14] as is the disclosure of any information that falls under the purview of the Official Secrets Act 1923.[15] The Official Secrets Act does not provide a definition of an 'official secret' and instead protects information pertaining to national security, the defence of the country, information affecting friendly relations with foreign states, etc.[16] Information in India is designated as classified in accordance with the Manual of Departmental Security Instructions, which is circulated by the Ministry of Home Affairs. According to the Public Records Rules 1997, "classified records" means "the files relating to the public records classified as top-secret, confidential and restricted in accordance with the procedure laid down in the Manual of Departmental Security Instruction circulated by the Ministry of Home affairs from time to time".[17] Bi-annually, officers evaluate and de-classify classified information and share the same with the national archives.[18] In response to questions raised in the Lok Sabha on 5 May 2015 regarding whether the Official Secrets Act, 1923 will be reviewed, the number of classified files stored with the Government under the Act, and whether the Government has any plans to declassify some of those files, the Ministry of Home Affairs clarified that a committee consisting of Secretaries of the Ministry of Home Affairs, the Department of Personnel and Training, and the Department of Legal Affairs has been established to examine the provisions of the Official Secrets Act, 1923, particularly in light of the Right to Information Act, 2005. The Ministry of Home Affairs also clarified that the classification and declassification of files is done by each Government Department as per the Manual of Departmental Security Instructions, 1994, and that there is thus no 'central database of the total number of classified files'.[19]

    How can India's oversight mechanism for state surveillance be clarified?

    Though these mechanisms establish a basic framework for an oversight mechanism for state surveillance in India, there are aspects of this framework that could be clarified and there are ways in which the framework could be strengthened.

    Aspects of the present review committee that could be clarified:

    1. Powers of the review committee: Beyond having the authority to declare that orders for interception, monitoring, decryption, and collection of traffic data are not within the scope of the law, and to order the destruction of any collected information, what powers does the review committee have? Does the committee have the power to compel agencies to produce additional or supporting evidence? Does the committee have the power to compel information from the authorizing authority?
    2. Obligations of the review committee: The review committee is required to 'record its findings' as to whether the interception orders issued are in accordance with the law. Is there a standard set of questions/information that must be addressed by the committee when reviewing an order? Does the committee only review the content of the order, or does it also review the implementation of the order? Beyond recording its findings, are there any additional reporting obligations that the review committee must fulfill?
    3. Accountability of the review committee: Does the review committee answer to a higher authority? Does it have to submit its findings to other branches of the government, such as Parliament? Is there a mechanism to ensure that the review committee does indeed meet every two months and review all orders issued under the relevant sections of the Indian Telegraph Act 1885 and the Information Technology Act 2000?

    Proposed oversight mechanisms in India

    Oversight mechanisms can help avoid breaches of national security by ensuring efficiency and effectiveness in the functioning of security agencies. The need for the oversight of state surveillance is not new in India. In 1999 the Union Government constituted a Committee with the mandate of reviewing the events leading up to the Pakistani aggression in Kargil and recommending measures towards ensuring national security. Though the Kargil Committee was addressing surveillance from the perspective of gathering information on external forces, there are parallels in the lessons learned for state surveillance. Among other findings, in its Report the Committee found a number of limitations in the system for the collection, reporting, collation, and assessment of intelligence. The Committee also found that there was a lack of oversight of the intelligence community in India, resulting in no mechanisms for tasking the agencies, monitoring their performance and overall functioning, and evaluating the quality of their work.

    The Committee also noted that such a mechanism is a standard feature in jurisdictions across the world. The Committee emphasized this need from an economic perspective: without oversight, the Government and the nation have no way of evaluating whether or not they are receiving value for their money. The Committee recommended a review of the intelligence system with the objective of remedying such deficiencies.[20]

    In 2000 a Group of Ministers was established to review the security and intelligence apparatus of the country. In their report issued to the Prime Minister, the Group of Ministers recommended the establishment of an Intelligence Coordination Group for the purpose of providing oversight of intelligence agencies at the Central level. Specifically the Intelligence Coordination Group would be responsible for:

    • Allocating resources to the intelligence agencies
    • Considering annual reviews on the quality of inputs
    • Approving the annual tasking for intelligence collection
    • Overseeing the functions of intelligence agencies
    • Examining national estimates and forecasts[21]

    Past critiques of the Indian surveillance regime have included the fact that intelligence agencies do not come under the purview of any oversight mechanism, including Parliament, the Right to Information Act 2005, or the Comptroller and Auditor General of India.

    In 2011, Manish Tewari, who at the time was a Member of Parliament from Ludhiana, put forward a Private Member's Bill, “The Intelligence Services (Powers and Regulation) Bill”, which proposed stand-alone statutory regulation of intelligence agencies. In doing so, it sought to establish an oversight mechanism for intelligence agencies operating within and outside of India. The Bill was never introduced into Parliament.[22] Broadly, the Bill sought to establish: a National Intelligence and Security Oversight Committee, which would oversee the functioning of intelligence agencies and submit an annual report to the Prime Minister; a National Intelligence Tribunal for the purpose of investigating complaints against intelligence agencies; an Intelligence Ombudsman for overseeing and ensuring the efficient functioning of agencies; and a legislative framework regulating intelligence agencies.[23]

    Proposed policy in India has also explored the possibility of coupling surveillance regulation and oversight with privacy regulation and oversight. In 2011 the Right to Privacy Bill was drafted by the Department of Personnel and Training. The Bill proposed to establish a “Central Communication Interception Review Committee” for the purpose of reviewing orders for interception issued under the Telegraph Act. The Bill also sought to establish an authorization process for surveillance undertaken by following a person, through CCTVs, or through other electronic means.[24] In contrast, the 2012 Report of the Group of Experts on Privacy, which provided recommendations for a privacy framework for India, recommended that the Privacy Commissioner should exercise broad oversight functions with respect to interception/access, audio & video recordings, the use of personal identifiers, and the use of bodily or genetic material.[25]

    A 2012 report by the Institute for Defence Studies and Analyses titled “A Case for Intelligence Reforms in India” highlights at least four ‘gaps’ in intelligence that have resulted in breaches of national security: zero intelligence, inadequate intelligence, inaccurate intelligence, and excessive intelligence, particularly in light of additional technical inputs and open source inputs.[26] In some cases, an oversight mechanism could help in remedying some of these gaps. The Report recommends the following steps towards an oversight mechanism for Indian intelligence:

    • Establishing an Intelligence Coordination Group (ICG) that will exercise oversight functions for the intelligence community at the Central level. This could include overseeing functions of the agencies, quality of work, and finances.
    • Enacting legislation defining the mandates, functions, and duties of intelligence agencies.
    • Holding intelligence agencies accountable to the Comptroller & Auditor General to ensure financial accountability.
    • Establishing a Minister for National Security & Intelligence for exercising administrative authority over intelligence agencies.
    • Establishing a Parliamentary Accountability Committee for oversight of intelligence agencies through parliament.
    • Defining the extent to which intelligence agencies can be held accountable to reply to requests pertaining to violations of privacy and other human rights issued under the Right to Information Act.

    Highlighting the importance of accountable surveillance frameworks, in 2015 Santosh Jha, Director General in India's Ministry of External Affairs, stated at the UN General Assembly that the global community needs “to create frameworks so that Internet surveillance practices motivated by security concerns are conducted within a truly transparent and accountable framework.”[27]

    In what ways can India's mechanisms for state surveillance be strengthened?

    Building upon the recommendations from the Kargil Committee, the Report from the Group of Ministers, the Report of the Group of Experts on Privacy, the Draft Privacy Bill 2011, and the IDSA report, ways in which the framework for oversight of state surveillance in India could be strengthened include:

    • Oversight to enhance public understanding, debate, accountability, and democratic governance: State surveillance is unique in that it is enabled with the objective of protecting a nation's security. Yet, to do so, it requires the citizens of a nation to trust the actions taken by intelligence agencies and to allow for possible access into their personal lives, and possible activities that might infringe on their constitutional rights (such as freedom of expression), for a larger outcome of security. Because of this, oversight mechanisms for state surveillance must balance securing national security with some form of accountability to the public.
    • Independence of oversight mechanisms: Given the Indian context, it is particularly important that an oversight mechanism for surveillance powers and the intelligence community is capable of withstanding, and is independent from, political interference. Indeed, the majority of cases regarding illegal interceptions that have reached the public sphere pertain to the surveillance of political figures and political turf wars.[28] Furthermore, though the current Review Committee established under the Indian Telegraph Act does not have a member from the Ministry of Home Affairs (the Ministry responsible for authorizing interception requests), it is unclear how independent this committee is from the authorizing Ministry. To ensure unbiased oversight, it is important that oversight mechanisms are independent.
    • Legislative regulation of intelligence agencies: Currently, intelligence agencies are provided surveillance powers through the Information Technology Act and the Telegraph Act, but, beyond the National Investigation Agency Act which establishes the National Investigation Agency, there is no legal mechanism creating, regulating and overseeing the intelligence agencies using these powers. In the 'surveillance ecosystem' this creates a policy vacuum, where an agency is enabled through law with a surveillance power and provided a procedure to follow, but is not held legally accountable for the effective, ethical, and legal use of that power. To ensure legal accountability for the use of surveillance techniques, it is important that intelligence agencies are created through legislation that includes oversight provisions.
    • Comprehensive oversight of all intrusive measures: Currently the Review Committee established under the Telegraph Act is responsible for the evaluation of orders for the interception, monitoring, decryption, and collection of traffic data. The Review Committee is not responsible for reviewing the implementation or effectiveness of such orders, and is not responsible for reviewing orders for access to stored information or other forms of electronic surveillance. This situation is a result of: (1) present oversight mechanisms not having comprehensive mandates; (2) different laws in India enabling different levels of access without providing a harmonized oversight mechanism; and (3) Indian law not formally addressing and regulating emerging surveillance technologies and techniques. To ensure effectiveness, it is important for oversight mechanisms to be comprehensive in mandate and scope.
    • Establishment of a tribunal or redress mechanism: India currently does not have a specified means for individuals to seek redress for unlawful surveillance or surveillance that they feel has violated their rights. Thus, individuals must take any complaint to the courts. The downsides of such a system include the fact that the judiciary might not be able to make determinations regarding the violation, that the court system in India is overwhelmed and due process is therefore slow, and that, given the sensitive nature of the topic, courts might not be able to immediately access relevant documentation. To ensure redress, it is important that a tribunal or a redress mechanism with appropriate powers is established to address complaints or violations pertaining to surveillance.
    • Annual reporting by security agencies, law enforcement, and service providers: Information regarding orders for surveillance and the implementation of the same is not disclosed by the government or by service providers in India.[29] Indeed, service providers by law are required to maintain the confidentiality of orders for the interception, monitoring, or decryption of communications and monitoring or collection of traffic data. At the minimum, an oversight mechanism should receive annual reports from security agencies, law enforcement, and service providers with respect to the surveillance undertaken. Edited versions of these Reports could be shared with Parliament and the public.
    • Consistent and mandatory reviews of relevant legislation: Though committees have been established to review various legislation and policy pertaining to state surveillance, the time frame for these reviews is not clearly defined by law. These reviews should take place on a consistent and publicly stated time frame. Furthermore, legislation enabling surveillance in India does not require review and assessment for relevance, adequacy, necessity, and proportionality after a certain period of time. Mandating that legislation regulating surveillance is subject to review on a consistent basis is important in ensuring that its provisions are relevant, proportionate, adequate, and necessary.
    • Transparency of classification and declassification process and centralization of de-classified records: Currently, the Ministry of Home Affairs establishes the process that government departments must follow for classifying and de-classifying information. This process is not publicly available and de-classified information is stored only with the respective department. For transparency purposes, it is important that the process for classification of records be made public and the practice of classification of information take place in exceptional cases. Furthermore, de-classified records should be stored centrally and made easily accessible to the public.
    • Executive and administrative orders establishing agencies and surveillance projects should be in the public domain: Intelligence agencies and surveillance projects in India are typically enabled through executive orders. For example, NATGRID was established via an executive order, but this order is not publicly available. As a form of transparency and accountability to the public, it is important that executive orders establishing an agency or a surveillance project are made available to the public to the extent possible.
    • Oversight of surveillance should incorporate privacy and cyber/national security: Increasingly, issues of surveillance, privacy, and cyber security are interlinked. Any move to establish an oversight mechanism for surveillance and the intelligence community must incorporate and take into consideration privacy and cyber security. This could mean that an oversight mechanism for surveillance in India works closely with CERT-In and a potential privacy commissioner, or that the oversight mechanism contains internal expertise in these areas to ensure that they are adequately considered.
    • Oversight by design: Just as the concept of privacy by design promotes the ideal that principles of privacy are built into devices, processes, services, organizations, and regulation from the outset, oversight mechanisms for state surveillance should also be built in from the outset of surveillance projects and enabling legislation. In the past, this has not been the practice in India. The National Intelligence Grid is an intelligence system that sought to link twenty-one databases together, making such information easily and readily accessible to security agencies, but oversight of the system was never defined.[30] Similarly, the Centralized Monitoring System was conceptualized to automate and internalize the process of intercepting communications by allowing security agencies to intercept communications directly and bypass the service provider.[31] Despite amendments to the Telecom Licenses providing for the technical components of this project, oversight of the project and of security agencies directly accessing information has yet to be defined.[32]

    Examples of oversight mechanisms for State Surveillance: the United States, the United Kingdom, and Canada

    United States

    In the United States the oversight 'ecosystem' for state surveillance is made up of:

    The Foreign Intelligence Surveillance Court

    The U.S. Foreign Intelligence Surveillance Court (FISC) is the predominant oversight mechanism for state surveillance and oversees and authorizes the actions of the Federal Bureau of Investigation and the National Security Agency.[33] The Court was established by the enactment of the Foreign Intelligence Surveillance Act (FISA) 1978 and is governed by Rules of Procedure, the current Rules having been formulated in 2010.[34] The Court is empowered to ensure compliance with the orders that it issues, and the government is obligated to inform the Court if orders are breached.[35] FISA allows individuals who receive an order from the Court to challenge it,[36] and public filings are available on the Court's website.[37] Additionally, organizations including the American Civil Liberties Union[38] and the Electronic Frontier Foundation have filed motions with the Court for the release of records.[39] Similarly, Google has approached the Court for the ability to publish aggregate information regarding the FISA orders that the company receives.[40]

    Government Accountability Office

    The U.S. Government Accountability Office (GAO) is an independent office that works for Congress and conducts audits and investigations, provides recommendations, and issues legal decisions and opinions with regard to federal government spending of taxpayers' money by the government and associated agencies, including the Defense Department, the FBI, and Homeland Security.[41] The head of the GAO is the Comptroller General of the United States, who is appointed by the President. The GAO will initiate an investigation if requested by congressional committees or subcommittees, or if required under public law or committee reports. The GAO has reviewed topics relating to Homeland Security, Information Security, Justice and Law Enforcement, National Defense, and Telecommunications.[42] For example, in June 2015 the GAO completed an investigation and report on "Foreign Terrorist Organization Designation Process and U.S. Agency Enforcement Actions"[43] and an investigation on "Cybersecurity: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies".[44]

    Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence

    The U.S. Senate Select Committee on Intelligence is a standing committee of the U.S. Senate with the mandate to review intelligence activities and programs and ensure that these are in line with the Constitution and other relevant laws. The Committee is also responsible for submitting appropriate legislative proposals to the Senate, and for reporting to the Senate on intelligence activities and programs.[45] The House Permanent Select Committee on Intelligence holds similar jurisdiction. The House Permanent Select Committee is bound to secrecy and cannot disclose classified information except when authorized to do so. Such an obligation does not exist for the Senate Select Committee on Intelligence, which can disclose classified information publicly on its own.[46]

    Privacy and Civil Liberties Oversight Board (PCLOB)

    The Privacy and Civil Liberties Oversight Board was established by the Implementing Recommendations of the 9/11 Commission Act of 2007 and is located within the executive branch.[47] The objective of the PCLOB is to ensure that the Federal Government's actions to combat terrorism are balanced against privacy and civil liberties. Towards this, the Board has the mandate to review and analyse anti-terrorism measures taken by the executive and ensure that such actions are balanced with privacy and civil liberties, and to ensure that privacy and civil liberties are adequately considered in the development and implementation of anti-terrorism laws, regulations and policies.[48] The Board is responsible for developing principles to guide why, whether, when, and how the United States conducts surveillance for authorized purposes. Additionally, officers of eight federal agencies must submit reports to the PCLOB regarding the reviews that they have undertaken, the number and content of complaints received, and a summary of how each complaint was handled. In order to fulfil its mandate, the Board is authorized to access all relevant records, reports, audits, reviews, documents, papers, recommendations, and classified information. The Board may also interview and take statements from necessary personnel. The Board may request the Attorney General to subpoena, on the Board's behalf, individuals outside of the executive branch.[49]

    To the extent possible, the reports of the Board are made public. Recommendations made by the Board in its 2015 report include: ending the NSA's bulk telephone records program; adding additional privacy safeguards to the bulk telephone records program; enabling the FISC to hear independent views on novel and significant matters; expanding opportunities for appellate review of FISC decisions; taking advantage of existing opportunities for outside legal and technical input in FISC matters; publicly releasing new and past FISC and FISCR decisions that involve novel legal, technical, or compliance questions; publicly reporting on the operation of the FISC Special Advocate Program; permitting companies to disclose information about their receipt of FISA production orders and disclosing more detailed statistics on surveillance; informing the PCLOB of FISA activities and providing relevant congressional reports and FISC decisions; beginning to develop principles for transparency; and disclosing the scope of surveillance authorities affecting US citizens.[50]

    The Wiretap Report

    The Wiretap Report is an annual compilation of information provided by federal and state officials regarding applications for orders authorizing the interception of wire, oral, or electronic communications, including data on the offenses under investigation, the types and locations of interception devices, and the costs and duration of authorized intercepts.[51] When submitting information for the report, a judge will include the name and jurisdiction of the prosecuting official who applied for the order, the criminal offense under investigation, the type of intercept device used, the physical location of the device, and the duration of the intercept. Prosecutors provide information related to the cost of the intercept, the number of days the intercept device was in operation, the number of persons whose communications were intercepted, the number of intercepts, and the number of incriminating intercepts recorded. Results of the interception orders, such as arrests, trials, convictions, and the number of motions to suppress evidence, are also noted in the prosecutor reports. The Report is submitted to Congress and is legally required under Title III of the Omnibus Crime Control and Safe Streets Act of 1968. The report is issued by the Administrative Office of the United States Courts.[52]

    United Kingdom

    The Intelligence and Security Committee (ISC) of Parliament

    The Intelligence and Security Committee was established by the Intelligence Services Act 1994. Members are appointed by the Prime Minister, and the Committee reports directly to the Prime Minister. Additionally, the Committee submits annual reports to Parliament. To this end, the Committee can take evidence from cabinet ministers, senior officials, and the public.[53] The most recent report of the Committee is the 2015 "Report on Privacy and Security".[54] Members of the Committee are subject to the Official Secrets Act 1989 and have access to classified material when carrying out investigations.[55]

    Joint Intelligence Committee (JIC)

    The Joint Intelligence Committee is located in the Cabinet Office and is broadly responsible for overseeing national intelligence organizations and providing advice to the Cabinet on issues related to security, defence, and foreign affairs. The JIC is overseen by the Intelligence and Security Committee.[56]

    The Interception of Communications Commissioner

    The Interception of Communications Commissioner is appointed by the Prime Minister under the Regulation of Investigatory Powers Act 2000 for the purpose of reviewing surveillance conducted by intelligence agencies, police forces, and other public authorities. Specifically, the Commissioner inspects the interception of communications, the acquisition and disclosure of communications data, the interception of communications in prisons, and unintentional electronic interception.[57] The Commissioner submits an annual report to the Prime Minister. The reports of the Commissioner are publicly available.[58]

    The Intelligence Services Commissioner

    The Intelligence Services Commissioner is an independent body appointed by the Prime Minister and legally empowered through the Regulation of Investigatory Powers Act (RIPA) 2000. The Commissioner provides independent oversight of the use of surveillance by UK intelligence services.[59] Specifically, the Commissioner is responsible for reviewing authorized interception orders and the actions and performance of the intelligence services.[60] The Commissioner is also responsible for providing assistance to the Investigatory Powers Tribunal, submitting annual reports to the Prime Minister on the discharge of the Commissioner's functions, and advising the Home Office on the need to extend the Terrorism Prevention and Investigation Measures regime.[61] To these ends, the Commissioner conducts in-depth audits of interception orders to ensure that the surveillance is within the scope of the law, that the surveillance was necessary for a legally established reason, that the surveillance was proportionate, that the information accessed was justified by the privacy invaded, and that the surveillance was authorized by the appropriate official. The Commissioner also conducts 'site visits' to ensure that orders are being implemented as per the law.[62] As a note, the Intelligence Services Commissioner does not review matters that fall within the remit of the Interception of Communications Commissioner. The Commissioner has access to any information that he feels is necessary to carry out his investigations. The reports of the Intelligence Services Commissioner are publicly available.[63]

    Investigatory Powers Tribunal

    The Investigatory Powers Tribunal is a court which investigates complaints of unlawful surveillance by public authorities or intelligence/law enforcement agencies.[64] The Tribunal was established under the Regulation of Investigatory Powers Act 2000 and has a range of oversight functions to ensure that public authorities and agencies act in compliance with the Human Rights Act 1998.[65] The Tribunal is specifically an avenue of redress for anyone who believes that they have been a victim of unlawful surveillance under RIPA or of wider human rights infringements under the Human Rights Act 1998. The Tribunal can provide seven possible outcomes for any application: 'found in favour of complainant', 'no determination in favour of complainant', 'frivolous or vexatious', 'out of time', 'out of jurisdiction', 'withdrawn', or 'no valid complaint'.[66] The Tribunal has the authority to receive and consider evidence in any form, even if inadmissible in an ordinary court.[67] Where possible, cases are made available on the Tribunal's website. Decisions by the Tribunal cannot be appealed, but can be challenged in the European Court of Human Rights.[68]

    Canada

    In Canada the oversight 'ecosystem' for state surveillance includes:

    Security Intelligence Review Committee

    The Security Intelligence Review Committee is an independent body that is accountable to the Parliament of Canada and reports on the Canadian Security Intelligence Service (CSIS).[69] Members of the Security Intelligence Review Committee are appointed by the Prime Minister of Canada. The Committee conducts reviews on a proactive basis and investigates complaints. Committee members have access to classified information to conduct reviews. The Committee submits an annual report to Parliament, an edited version of which is publicly available. The 2014 report was titled "Lifting the Shroud of Secrecy"[70] and includes reviews of CSIS's activities, reports on complaints and subsequent investigations, and recommendations.

    Office of the Communications Security Establishment Commissioner

    The Communications Security Establishment Commissioner conducts independent reviews of Communications Security Establishment (CSE) activities to evaluate whether they are within the scope of Canadian law.[71] The Commissioner submits a report to Parliament on an annual basis and has a number of powers, including the power to subpoena documents and personnel.[72] If the Commissioner believes that the CSE has not complied with the law, the Commissioner must report this to the Attorney General of Canada and to the Minister of National Defence. The Commissioner may also receive information from persons bound to secrecy if they deem it to be in the public interest to disclose such information.[73] The Commissioner is also responsible for verifying that the CSE does not surveil Canadians and for promoting measures to protect the privacy of Canadians.[74] When conducting a review, the Commissioner has the ability to examine records, receive briefings, interview relevant personnel, assess the veracity of information, listen to intercepted voice recordings, observe CSE operators and analysts to verify their work, and examine CSE electronic tools, systems and databases to ensure compliance with the law.[75]

    Office of the Privacy Commissioner

    The Office of the Privacy Commissioner of Canada (OPC) oversees the implementation of and compliance with the Privacy Act and the Personal Information Protection and Electronic Documents Act.[76]

    The OPC is an independent body that has the authority to investigate complaints regarding the handling of personal information by government and private companies, but it can only comment on the activities of security and intelligence agencies. For example, in 2014 the OPC issued the report "Checks and Controls: Reinforcing Privacy Protection and Oversight for the Canadian Intelligence Community in an Era of Cyber Surveillance".[77] The OPC can also provide testimony to Parliament and other government bodies.[78] For example, the OPC has appeared before the Senate Standing Committee on National Security and Defence on Bill C-51.[79] The OPC cannot conduct joint audits or investigations with other bodies.[80]

    Annual Interception Reports

    Under the Criminal Code of Canada, regional governments must issue annual interception reports. The reports must include the number of individuals affected by interceptions, the average duration of interceptions, the types of crimes investigated, the number of cases brought to court, and the number of individuals notified that interception had taken place.[81]

    Conclusion

    The presence of multiple and robust oversight mechanisms for state surveillance does not necessarily correlate with effective oversight. The oversight mechanisms in the UK, Canada, and the U.S have been criticised. For example, the Canadian regime has been characterized as becoming weaker since it removed one of its key oversight mechanisms – the Inspector General of the Canadian Security Intelligence Service, which was responsible for certifying that the Service was in compliance with the law.[82]

    Other weaknesses in the Canadian regime that have been highlighted include the fact that different oversight bodies do not have the authority to share information with each other, and that transparency reports do not cover many new forms of surveillance.[83] Oversight mechanisms in the U.S., on the other hand, have been criticized as being opaque[84] or as lacking the political support needed to be effective.[85] The UK oversight mechanism has been criticized for not requiring judicial authorization of surveillance requests, for opaque laws, and for not providing a strong right of redress for affected individuals.[86] These critiques demonstrate that a number of factors must come together for an oversight mechanism to be effective. Public transparency and accountability to decision-making bodies such as Parliament or Congress can ensure the effectiveness of oversight mechanisms; they are also steps towards providing the public with the means to debate issues related to state surveillance in an informed manner, and towards giving different bodies within the government the ability to hold the state accountable for its actions.


      [1]. For example, "Public Oversight" is one of the thirteen Necessary and Proportionate principles on state communications surveillance developed by civil society and academia globally, that should be incorporated by states into communication surveillance regimes. The principles can be accessed here: https://en.necessaryandproportionate.org/

      [2]. Hans Born and Ian Leigh, “Making Intelligence Accountable. Legal Standards and Best Practice for Oversight of Intelligence Agencies.” Pg. 13. 2005. Available at: http://www.prsindia.org/theprsblog/wp-content/uploads/2010/07/making-intelligence.pdf. Last accessed: August 6, 2015.

      [3]. For example, this point was made in the context of the UK. For more information see: Nick Clegg, 'Edward Snowden's revelations made it clear: security oversight must be fit for the internet age,”. The Guardian. March 3rd 2014. Available at: http://www.theguardian.com/commentisfree/2014/mar/03/nick-clegg-snowden-security-oversight-internet-age. Accessed: July 27, 2015.

      [4]. International Principles on the Application of Human Rights to Communications Surveillance. Available at: https://en.necessaryandproportionate.org/

      [5]. Sub Rules (16) and (17) of Rule 419A, Indian Telegraph Rules, 1951. Available at:http://www.dot.gov.in/sites/default/files/march2007.pdf Note: This review committee is responsible for overseeing interception orders issued under the Indian Telegraph Act and the Information Technology Act.

      [6]. Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information Rules 2009. Definition q. Available at: http://dispur.nic.in/itact/it-procedure-interception-monitoring-decryption-rules-2009.pdf

      [7]. Information Technology (Procedure and safeguard for Monitoring and Collecting Traffic Data or Information Rules, 2009). Definition (n). Available at: http://cis-india.org/internet-governance/resources/it-procedure-and-safeguard-for-monitoring-and-collecting-traffic-data-or-information-rules-2009

      [8]. This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act. Section 2, Indian Telegraph Act 1885 and Section 4, Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information) Rules, 2009

      [9]. This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act. Section 2, Indian Telegraph Act 1885 and Section 4, Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information) Rules, 2009

      [10]. Definition (d) and section 3 of the Information Technology (Procedure and safeguard for Monitoring and Collecting Traffic Data or Information Rules, 2009). Available at: http://cis-india.org/internet-governance/resources/it-procedure-and-safeguard-for-monitoring-and-collecting-traffic-data-or-information-rules-2009

      [11]. Rule 1, of the 419A Rules, Indian Telegraph Act 1885. Available at:http://www.dot.gov.in/sites/default/files/march2007.pdf This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act.

      [12]. Section 92, CrPc. Available at: http://www.icf.indianrailways.gov.in/uploads/files/CrPC.pdf

      [13]. Press Information Bureau GOI. Reconstitution of Cabinet Committees. June 19th 2014. Available at: http://pib.nic.in/newsite/PrintRelease.aspx?relid=105747. Accessed August 6, 2015.

      [14]. Press Information Bureau, Government of India. Home minister proposes radical restructuring of security architecture. Available at: http://www.pib.nic.in/newsite/erelease.aspx?relid=56395. Accessed August 6, 2015.

      [15]. Section 24 read with Schedule II of the Right to Information Act 2005. Available at: http://rti.gov.in/rti-act.pdf

      [16]. Section 8 of the Right to Information Act 2005. Available at: http://rti.gov.in/rti-act.pdf

      [17]. Abhimanyu Ghosh. “Open Government and the Right to Information”. Legal Services India. Available at: http://www.legalservicesindia.com/articles/og.htm. Accessed: August 8, 2015

      [18]. Public Record Rules 1997. Section 2. Definition c. Available at: http://nationalarchives.nic.in/writereaddata/html_en_files/html/public_records97.html. Accessed: August 8, 2015

      [19]. Times of India. Classified information is reviewed after 25-30 years. April 13th 2015. Available at: http://timesofindia.indiatimes.com/india/Classified-information-is-reviewed-after-25-30-years/articleshow/46901878.cms. Accessed: August 8, 2015.

      [20]. Government of India. Ministry of Home Affairs. Lok Sabha Starred Question No 557. Available at: http://mha1.nic.in/par2013/par2015-pdfs/ls-050515/557.pdf.

      [21]. The Kargil Committee Report Executive Summary. Available at: http://fas.org/news/india/2000/25indi1.htm. Accessed: August 6, 2015.

      [22]. PIB Releases. Group of Ministers Report on Reforming the National Security System”. Available at: http://pib.nic.in/archieve/lreleng/lyr2001/rmay2001/23052001/r2305200110.html. Last accessed: August 6, 2015

      [23]. The Observer Research Foundation. “Manish Tewari introduces Bill on Intelligence Agencies Reform. August 5th 2011. Available at: http://www.observerindia.com/cms/sites/orfonline/modules/report/ReportDetail.html?cmaid=25156&mmacmaid=20327. Last accessed: August 6, 2015.

      [24]. The Intelligence Services (Powers and Regulation) Bill, 2011. Available at: http://www.observerindia.com/cms/export/orfonline/documents/Int_Bill.pdf. Accessed: August 6, 2015.

      [25]. The Privacy Bill 2011. Available at: https://bourgeoisinspirations.files.wordpress.com/2010/03/draft_right-to-privacy.pdf

      [26]. The Report of Group of Experts on Privacy. Available at: http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf

      [27]. Institute for Defence Studies and Analyses. “A Case for Intelligence Reforms in India”. Available at: http://www.idsa.in/book/AcaseforIntelligenceReformsinIndia.html. Accessed: August 6, 2015.

      [28]. India Calls for Transparency in internet Surveillance. NDTV. July 3rd 2015. Available at: http://gadgets.ndtv.com/internet/news/india-calls-for-transparency-in-internet-surveillance-710945. Accessed: July 6, 2015.

      [29]. Lovisha Aggarwal. “Analysis of News Items and Cases on Surveillance and Digital Evidence in India”. Available at: http://cis-india.org/internet-governance/blog/analysis-of-news-items-and-cases-on-surveillance-and-digital-evidence-in-india.pdf

      [30]. Rule 25 (4) of the Information Technology (Procedures and Safeguards for the Interception, Monitoring, and Decryption of Information Rules) 2011. Available at: http://dispur.nic.in/itact/it-procedure-interception-monitoring-decryption-rules-2009.pdf

      [31]. Ministry of Home Affairs, GOI. National Intelligence Grid. Available at: http://www.davp.nic.in/WriteReadData/ADS/eng_19138_1_1314b.pdf. Last accessed: August 6, 2015

      [32]. Press Information Bureau, Government of India. Centralised System to Monitor Communications Rajya Sabha. Available at: http://pib.nic.in/newsite/erelease.aspx?relid=54679. Last accessed: August 6, 2015.

      [33]. Department of Telecommunications. Amendment to the UAS License agreement regarding Central Monitoring System. June 2013. Available at: http://cis-india.org/internet-governance/blog/uas-license-agreement-amendment

      [34]. United States Foreign Intelligence Surveillance Court. July 29th 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf. Last accessed: August 8, 2015

      [35]. United States Foreign Intelligence Surveillance Court. Rules of Procedure 2010. Available at: http://www.fisc.uscourts.gov/sites/default/files/FISC%20Rules%20of%20Procedure.pdf

      [36]. United States Foreign Intelligence Court. Honorable Patrick J. Leahy. 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf

      [37]. United States Foreign Intelligence Surveillance Court. July 29th 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf. Last accessed: August 8, 2015

      [38]. Public Filings – U.S Foreign Intelligence Surveillance Court. Available at: http://www.fisc.uscourts.gov/public-filings

      [39]. ACLU. FISC Public Access Motion – ACLU Motion for Release of Court Records Interpreting Section 215 of the Patriot Act. Available at: https://www.aclu.org/legal-document/fisc-public-access-motion-aclu-motion-release-court-records-interpreting-section-215

      [40]. United States Foreign Intelligence Surveillance Court Washington DC. In Re motion for consent to disclosure of court records or, in the alternative a determination of the effect of the Court's rules on statutory access rights. Available at: https://www.eff.org/files/filenode/misc-13-01-opinion-order.pdf

      [41]. Google Official Blog. Shedding some light on Foreign Intelligence Surveillance Act (FISA) requests. February 3rd 2014. Available at: http://googleblog.blogspot.in/2014/02/shedding-some-light-on-foreign.html

      [42]. U.S Government Accountability Office. Available at: http://www.gao.gov/key_issues/overview#t=1. Last accessed: August 8, 2015.

      [43]. Report to Congressional Requesters. Combating Terrorism: Foreign Terrorist Organization Designation Process and U.S Agency Enforcement Actions. Available at: http://www.gao.gov/assets/680/671028.pdf. Accessed: August 8, 2015

      [44]. United States Government Accountability Office. Cybersecurity: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies. Available: http://www.gao.gov/assets/680/670935.pdf. Last accessed: August 6, 2015.

      [45]. Committee Legislation. Available at: http://ballotpedia.org/United_States_Senate_Committee_on_Intelligence_(Select)#Committee_legislation

      [46]. Congressional Research Service. Congressional Oversight of Intelligence: Current Structure and Alternatives. May 14th 2012. Available at: https://fas.org/sgp/crs/intel/RL32525.pdf. Last Accessed: August 8, 2015

      [47]. The Privacy and Civil Liberties Oversight Board: About the Board. Available at: https://www.pclob.gov/aboutus.html

      [48]. The Privacy and Civil Liberties Oversight Board: About the Board. Available at: https://www.pclob.gov/aboutus.html

      [49]. Congressional Research Service. Congressional Oversight of Intelligence: Current Structure and Alternatives. May 14th 2012. Available at: https://fas.org/sgp/crs/intel/RL32525.pdf. Last Accessed: August 8th 2015

      [50]. United States Courts. Wiretap Reports. Available at: http://www.uscourts.gov/statistics-reports/analysisreports/wiretap-reports

      [51]. United States Courts. Wiretap Reports. Available at: http://www.uscourts.gov/statistics-reports/analysis-reports/wiretap-reports/faqs-wiretap-reports#faq-What-information-does-the-AO-receive-from-prosecutors?. Last Accessed: August 8th 2015

      [52]. Intelligence and Security Committee of Parliament. Transcripts and Public Evidence. Available at: http://isc.independent.gov.uk/public-evidence. Last accessed: August 8th 2015.

      [53]. Intelligence and Security Committee of Parliament. Special Reports. Available at http://isc.independent.gov.uk/committee-reports/special-reports. Last accessed: August 8th 2015.

      [54]. Hugh Segal. The U.K. has legislative oversight of surveillance. Why not Canada? The Globe and Mail. June 12th 2013. Available at: http://www.theglobeandmail.com/globe-debate/uk-has-legislative-oversight-of-surveillance-why-not-canada/article12489071/. Last accessed: August 8th 2015

      [55]. The Joint Intelligence Committee home page. For more information see: https://www.gov.uk/government/organisations/national-security/groups/joint-intelligence-committee

      [56]. Interception of Communications Commissioner's Office. RIPA. Available at: http://www.iocco-uk.info/sections.asp?sectionID=2&type=top. Last accessed: August 8th 2015

      [57]. Interception of Communications Commissioner's Office. Reports. Available at: http://www.iocco-uk.info/sections.asp?sectionID=1&type=top. Last accessed: August 8th 2015

      [58]. The Intelligence Services Commissioner's Office Homepage. For more information see: http://intelligencecommissioner.com/

      [59]. The Intelligence Services Commissioner's Office – The Commissioner's Statutory Functions. Available at: http://intelligencecommissioner.com/content.asp?id=4

      [60]. The Intelligence Services Commissioner's Office – The Commissioner's Statutory Functions. Available at: http://intelligencecommissioner.com/content.asp?id=4

      [61]. The Intelligence Services Commissioner's Office. What we do. Available at: http://intelligencecommissioner.com/content.asp?id=5. Last Accessed: August 8th 2015.

      [62]. The Intelligence Services Commissioner's Office. Intelligence Services Commissioner's Annual Reports. Available at: http://intelligencecommissioner.com/content.asp?id=19. Last accessed: August 8th 2015

      [63]. The Investigatory Powers Tribunal Homepage. Available at: http://www.ipt-uk.com/

      [64]. The Investigatory Powers Tribunal – Functions – Key role. Available at: http://www.ipt-uk.com/section.aspx?pageid=1

      [65]. Investigatory Powers Tribunal. Functions – Decisions available to the Tribunal. Available at: http://www.ipt-uk.com/section.aspx?pageid=4. Last accessed: August 8th 2015

      [66]. Investigator Powers Tribunal. Operation - Available at: http://www.ipt-uk.com/section.aspx?pageid=7

      [67]. Investigatory Powers Tribunal. Operation- Differences to the ordinary court system. Available at: http://www.ipt-uk.com/section.aspx?pageid=7. Last accessed: August 8th 2015

      [68]. Security Intelligence Review Committee – Homepage. Available at: http://www.sirc-csars.gc.ca/index-eng.html

      [69]. SIRC Annual Report 2013-2014: Lifting the Shroud of Secrecy. Available at: http://www.sirc-csars.gc.ca/anrran/2013-2014/index-eng.html. Last accessed: August 6th 2015.

      [70]. The Office of the Communications Security Establishment – Homepage. Available at: http://www.ocsecbccst.gc.ca/index_e.php

      [71]. The Office of the Communications Security Establishment – Homepage. Available at: http://www.ocsecbccst.gc.ca/index_e.php

      [72]. The Office of the Communications Security Establishment – Mandate. Available at: http://www.ocsecbccst.gc.ca/mandate/index_e.php

      [73]. The Office of the Communications Security Establishment – Functions. Available at: http://www.ocsecbccst.gc.ca/functions/review_e.php

      [74]. The Office of the Communications Security Establishment – Functions. Available at: http://www.ocsecbccst.gc.ca/functions/review_e.php

      [75]. Office of the Privacy Commissioner of Canada. Homepage. Available at: https://www.priv.gc.ca/index_e.ASP

      [76]. Office of the Privacy Commissioner of Canada. Reports and Publications. Special Report to Parliament “Checks and Controls: Reinforcing Privacy Protection and Oversight for the Canadian Intelligence Community in an Era of Cyber-Surveillance. January 28th 2014. Available at: https://www.priv.gc.ca/information/srrs/201314/sr_cic_e.asp

      [77]. Office of the Privacy Commissioner of Canada. Available at: https://www.priv.gc.ca/index_e.asp. Last accessed: August 6th 2015.

      [78]. Office of the Privacy Commissioner of Canada. Appearance before the Senate Standing Committee on National Security and Defence on Bill C-51, the Anti-Terrorism Act, 2015. Available at: https://www.priv.gc.ca/parl/2015/parl_20150423_e.asp. Last accessed: August 6th 2015.

      [79]. Office of the Privacy Commissioner of Canada. Special Report to Parliament. January 8th 2014. Available at: https://www.priv.gc.ca/information/sr-rs/201314/sr_cic_e.asp. Last accessed: August 6th 2015.

      [80]. Telecom Transparency Project. The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians. Available at: http://www.telecomtransparency.org/wp-content/uploads/2015/05/Governance-of-Telecommunications-Surveillance-Final.pdf. Last accessed: August 6th 2015.

      [81]. Patrick Baud. The Elimination of the Inspector General of the Canadian Security Intelligence Service. May 2013. Ryerson University. Available at: http://www.academia.edu/4731993/The_Elimination_of_the_Inspector_General_of_the_Canadian_Security_Intelligence_Service

      [82]. Telecom Transparency Project. The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians. Available at: http://www.telecomtransparency.org/wp-content/uploads/2015/05/Governance-of-Telecommunications-Surveillance-Final.pdf. Last accessed: August 6th 2015.

      [83]. Glenn Greenwald. Fisa court oversight: a look inside a secret and empty process. The Guardian. June 19th 2013. Available at: http://www.theguardian.com/commentisfree/2013/jun/19/fisa-court-oversight-process-secrecy; Nadia Kayyali. Privacy and Civil Liberties Oversight Board to NSA: Why is Bulk Collection of Telephone Records Still Happening? February 2015. Available at: https://www.eff.org/deeplinks/2015/02/privacy-and-civil-liberties-oversight-board-nsa-whybulk-collection-telephone. Last accessed: August 8th 2015.

      [84]. Scott Shane. The Troubled Life of the Privacy and Civil Liberties Oversight Board. August 9th 2012. The Caucus. Available at: http://thecaucus.blogs.nytimes.com/2012/08/09/the-troubled-life-of-the-privacy-and-civil-liberties-oversight-board/?_r=0. Last accessed: August 8th 2015

      [85]. The Open Rights Group. Don't Spy on Us. Reforming Surveillance in the UK. September 2014. Available at: https://www.openrightsgroup.org/assets/files/pdfs/reports/DSOU_Reforming_surveillance_old.pdf

      [86].

    Do we need a Unified Post Transition IANA?

    by Pranesh Prakash, Padmini Baruah and Jyoti Panday — last modified Oct 27, 2015 12:46 AM
    As we stand at the threshold of the IANA Transition, we at CIS find that there has been little discussion on the question of how the transition will manifest. The question we wanted to raise was whether there is any merit in dividing the three IANA functions – names, numbers and protocols – given that there is no real technical stability to be gained from a unified Post Transition IANA. The analysis of this idea has been detailed below.

    The Internet Architecture Board, in a 2011 submission to the NTIA, claimed that splitting the IANA functions would not be desirable.[1] The IAB notes, "There exists synergy and interdependencies between the functions, and having them performed by a single operator facilitates coordination among registries, even those that are not obviously related," and also that the IETF makes certain policy decisions relating to names and numbers as well, and so it is useful to have a single body. But the IAB doesn't say why having a single email address for all these correspondences, rather than three, makes any difference: surely, what's important is cooperation and coordination. Just as the IETF, ICANN, and the NRO being different entities doesn't harm the Internet, splitting the IANA function relating to each entity won't harm the Internet either. Instead, it will help stability by making each community responsible for the running of its own registries, rather than relying on a single point of failure: ICANN and/or the "PTI".

    A number of commentators have supported this viewpoint in the past: Bill Manning of University of Southern California’s ISI (who has been involved in DNS operations since DNS started), Paul M. Kane (former Chairman of CENTR's Board of Directors), Jean-Jacques Subrenat (who is currently an ICG member), Association française pour le nommage Internet en coopération (AFNIC), the Internet Governance Project, InternetNZ, and the Coalition Against Domain Name Abuse (CADNA).

    The Internet Governance Project stated: “IGP supports the comments of Internet NZ and Bill Manning regarding the feasibility and desirability of separating the distinct IANA functions. Structural separation is not only technically feasible, it has good governance and accountability implications. By decentralizing the functions we undermine the possibility of capture by governmental or private interests and make it more likely that policy implementations are based on consensus and cooperation.”[2]

    Similarly, CADNA in its 2011 submission to the NTIA notes that in the current climate of technical innovation and the exponential expansion of the Internet community, specialisation of the IANA functions would result in them being better executed. The argument is also that delegating the technical and administrative functions among other capable entities (such as the IETF and IAB for protocol parameters, or an international, neutral organization with an understanding of address space, as opposed to the RIRs) would ensure accountability in Internet operation. Given that the IANA functions are mainly registry-maintenance functions, they can to a large extent be automated. However, a single system of automation would not fit all three.
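
    To make the automation point concrete, here is a minimal, purely hypothetical sketch in Python: the registry layout, field names, and policy check are invented for illustration and do not reflect the actual IANA tooling, data formats, or registration policies. It is only meant to show that the core registry-maintenance operation is mechanical, so three separately operated registries could each run their own version of it without sharing an operator.

```python
# Illustrative only: a toy registry with a mechanical, policy-checked update step.
# Nothing here corresponds to real IANA software or data formats.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Registry:
    name: str
    entries: Dict[str, str] = field(default_factory=dict)

    def register(self, key: str, value: str) -> bool:
        """Apply a registration request if it passes a simple policy check."""
        if not key.strip():         # reject empty identifiers
            return False
        if key in self.entries:     # no duplicate assignments
            return False
        self.entries[key] = value   # append-only style update
        return True

# Each community could operate its own registry with the same mechanical logic.
names = Registry("names")
numbers = Registry("numbers")
protocol_parameters = Registry("protocol-parameters")

protocol_parameters.register("example-header-field", "reserved for documentation")
```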

    Instead of a single institution having three masters, it is better for the functions to be separated. Most importantly, if one of the current customers wishes to shift the contract to another IANA functions operator, even if it isn't limited by contract, it is limited by the institutional design, since iana.org serves as a central repository. This limitation didn't exist, for instance, when the IETF decided to enter into a new contract for the RFC Editor role. This transition presents the best opportunity to cleave the functions logically and make each community responsible for the functioning of its own registries, with the IETF, which is mostly funded by ISOC, taking on the responsibility of handling the residual registries, along with a discussion about the .ARPA and .INT TLDs.

    From the above discussion, three main points emerge:

    • Splitting of the IANA functions allows for technical specialisation leading to greater efficiency of the IANA functions.
    • Splitting of the IANA functions allows for more direct accountability, and no concentration of power.
    • Splitting of the IANA functions allows for ease of shifting the {names, numbers, protocol parameters} IANA functions operator without affecting the legal structure of any of the other IANA function operators.

    [1]. IAB response to the IANA FNOI, July 28, 2011. See: https://www.iab.org/wp-content/IAB-uploads/2011/07/IANA-IAB-FNOI-2011.pdf

    [2]. Internet Governance Project, Comments of the Internet Governance Project on the NTIA's "Request for Comments on the Internet Assigned Numbers Authority (IANA) Functions" (Docket # 110207099-1099-01) February 25, 2011 See: http://www.ntia.doc.gov/federal-register-notices/2011/request-comments-internet-assigned-numbers-authority-iana-functions

    Connected Trouble

    by Sunil Abraham last modified Oct 28, 2015 04:47 PM
    The internet of things phenomenon is based on a paradigm shift from thinking of the internet merely as a means to connect individuals, corporations and other institutions, to an internet where all devices in (insulin pumps and pacemakers), on (wearable technology) and around (domestic appliances and vehicles) human beings are connected.

    The guest column was published in The Week, issue dated November 1, 2015.


    Proponents of IoT are clear that the network effects, efficiency gains, and scientific and technological progress unlocked would be unprecedented, much like the internet itself.

    Privacy and security are two sides of the same coin: you cannot have one without the other. The age of IoT is going to be less secure thanks to big data. Globally accepted privacy principles, articulated in privacy and data protection laws across the world, are in conflict with the big data ideology. As a consequence, the age of the internet of things is going to be less stable, secure and resilient. Three privacy principles are violated by most IoT products and services.

    Data minimisation

    According to this privacy principle, the less personal information about the data subject is collected and stored by the data controller, the more the data subject's right to privacy is protected. But big data, by definition, requires more volume, more variety and more velocity, and IoT products usually collect a lot of data, thereby multiplying risk.
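
    As a concrete (and deliberately simplified) illustration of what this principle asks of an IoT vendor, the sketch below keeps only the fields a service strictly needs and drops everything else at the point of collection. The field names and payload are entirely hypothetical and do not describe any real product.

```python
# Illustrative only: data minimisation applied at the point of collection.
# Field names and payload are hypothetical, not taken from any real IoT device.

ALLOWED_FIELDS = {"device_id", "step_count", "timestamp"}  # only what the service needs

def minimise(payload: dict) -> dict:
    """Drop every field that is not strictly required before storing or transmitting."""
    return {key: value for key, value in payload.items() if key in ALLOWED_FIELDS}

raw = {
    "device_id": "tracker-42",
    "step_count": 8012,
    "timestamp": "2015-11-01T07:30:00Z",
    "gps_trace": [(12.97, 77.59), (12.98, 77.60)],  # sensitive, not needed for step totals
    "heart_rate": 74,                               # likewise collected "just in case"
}

stored = minimise(raw)  # keeps only device_id, step_count and timestamp
```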

    Purpose limitation

    This privacy principle is a consequence of the data minimisation principle. If only the bare minimum of personal information is collected, then it can only be put to a limited number of uses; going beyond those uses would harm the data subject. IoT innovators and entrepreneurs are trying to rapidly increase features, efficiency gains and convenience. They therefore do not know what future purposes their technology will be put to tomorrow and, again by definition, resist the principle of purpose limitation.

    Privacy by design

    Data protection regulation requires that products and services be secure and protect privacy by design, not as a superficial afterthought. IoT products are increasingly being built by startups that are disrupting markets and taking down large technology incumbents. The trouble, however, is that most of these startups do not have sufficient internal security expertise and, in their tearing hurry to take products to market, may not comprehensively test or audit their products from a privacy perspective.

    There are other cyber security principles and internet design principles that are disregarded by the IoT phenomenon, further compromising security and privacy of users.

    Centralisation

    Most of the network effects that IoT products contribute to require centralisation of data collected from users and their devices. For instance, if users of a wearable physical activity tracker would like to use gamification to keep each other motivated during exercise, the vendor of that device has to collect and store information about all its users. Since some users wear such devices at all times, these centralised stores hold highly granular data that can also be used to inflict privacy harms.

    Decentralisation was a key design principle when the internet was first built. The argument was that you can never take down a decentralised network by bombing any of its nodes. Unfortunately, because of the rise of internet monopolies like Google, the age of cloud computing, and the success of social media giants, the internet is increasingly becoming centralised and is, therefore, much more fragile than it used to be. IoT is going to make this worse.

    Complexity

    The more complex a particular technology is, the more fragile and vulnerable it tends to be. This is not necessarily true, but it is usually the case, given that more complex technology needs more quality control, more testing and more fixes. IoT technology raises complexity exponentially because the devices being connected are complex themselves and were not originally engineered to be connected to the internet. The networks they constitute are nothing like the internet, which till now consisted of clients, web servers, chat servers, file servers and database servers, usually quite removed from the physical world. Compromised IoT devices, on the other hand, could be used to inflict direct harm on life and property.

    Death of the air gap

    The things that will be connected to the internet were previously separated from it by means of an air gap. This kept them secure, but also less useful and usable. In other words, the very act of connecting devices that were previously unconnected will expose them to a range of attacks. Security and privacy related laws, standards, audits and enforcement measures are the best way to address these potential pitfalls. Governments, privacy commissioners and data protection authorities across the world need to act so that the privacy of people and the security of our information society are protected.

    Breaking Down ICANN Accountability: What It Is and What the Internet Community Wants

    by Ramya Chandrasekhar last modified Nov 05, 2015 03:29 PM
    At the recent ICANN conference held in Dublin (ICANN54), one issue that was rehashed and extensively deliberated was ICANN's accountability and means to enhance the same. In light of the impending IANA stewardship transition from the NTIA to the internet's multi-stakeholder community, accountability of ICANN to the internet community becomes that much more important. In this blog post, some aspects of the various proposals to enhance ICANN's accountability have been deconstructed and explained.

    The Internet Corporation for Assigned Names and Numbers, known as ICANN, is a private not-for-profit organization registered in California. Among other functions, it is tasked with carrying out the IANA functions[1], pursuant to a contract between the US Government (through the National Telecommunications and Information Administration – NTIA) and itself. This means that, as of now, there exists legal oversight by the USG over ICANN with regard to the discharge of these IANA functions.[2]

    However, in 2014 the NTIA decided to completely hand over stewardship of the IANA functions to the internet's 'global multistakeholder community'. But the USG put down certain conditions before this transition could be effected, one of which was to ensure that proper accountability exists within ICANN.[3]

    The reason for this was that the internet community feared that, if these accountability concerns weren't addressed, ICANN would turn into a FIFA-esque organization with no one to keep it in check post the IANA transition.[4]

    And thus, to answer these concerns, the Cross Community Working Group (CCWG-Accountability) has come up with reports that propose certain changes to the structure and functioning of ICANN.

    In light of the discussions that took place at ICANN54 in Dublin, this blog post is directed towards summarizing some of these proposals - those pertaining to the Independent Review Process or IRP (explained below) as well as the various accountability models that are the subject of extensive debate both on and off the internet.

    Building Blocks Identified by the CCWG-Accountability

    The CCWG-Accountability put down four "building blocks", as they call them, on which all their work is based. One of these is what is known as the Independent Review Process (or IRP). This is a mechanism by which internal complaints, either by individuals or by SOs/ACs[5], are addressed. However, the current version of the IRP is criticized for being an inefficient mechanism of dispute resolution.[6]

    And thus the CCWG-Accountability proposed a variety of amendments to the same.

    Another building block that the CCWG-Accountability identified is the need for an "empowered internet community", which means more engagement between the ICANN Board and the internet community, as well as increased oversight of the Board by the community. As of now, the USG acts as the oversight entity. Post the IANA transition, however, the community feels it should step in and have an increased say with regard to decisions taken by the ICANN Board.

    As part of empowering the community, the CCWG-Accountability identified five core areas in which the community needs to possess some kind of powers or rights. These areas are: review and rejection of the ICANN budget, strategic plans and operating plans; review, rejection and/or approval of standard bylaws as well as fundamental bylaws; review and rejection of Board decisions pertaining to the IANA functions; appointment and removal of individual directors on the Board; and recall of the entire Board itself. It is with regard to the kind of powers and rights to be vested with the community that a variety of accountability models have been proposed, both by the CCWG-Accountability and by the ICANN Board. Of all these models, however, discussion is now primarily centered on three of them – the Sole Member Model (SMM), the Sole Designator Model (SDM) and the Multistakeholder Empowerment Model (MEM).

    What is the IRP?

    The Independent Review Process or IRP is the dispute resolution mechanism by which complaints and/or objections by individuals with regard to Board resolutions are addressed. Article 4 of the ICANN bylaws lays down the specifics of the IRP. As of now, a standing panel of six to nine arbitrators is constituted, from which a panel is selected to hear each complaint. However, the primary criticism of the current version of the IRP is the restricted scope of issues on which the panel passes decisions.[7]

    The bylaws explicitly state that the panel needs to focus on a set of procedural questions while hearing a complaint – such as whether the Board acted in good faith or exercised due diligence in passing the disputed resolution.

    Changes Proposed by the Internet Community to Enhance the IRP

    To tackle this and other concerns with the existing version of the IRP, the CCWG-Accountability proposed a slew of changes in the second draft proposal that they released in August this year. What they propose is to have the IRP arbitral panel hear complaints and decide matters on both procedural (as it does now) and substantive grounds. In addition, they propose broadening who has standing to initiate an IRP, to include individuals, groups and other entities. Further, they propose a more precedent-based method of dispute resolution, wherein a panel refers to and uses decisions passed by past panels in arriving at its decision.

    At the 19th October “Enhancing ICANN-Accountability Engagement Session” that took place in Dublin as part of ICANN54, the mechanism to initiate an IRP was explained by Thomas Rickert, CCWG Co-Chair.[8]

    Briefly, the modified process is as follows -

    • An objection may be raised by any individual, even a non-member.
    • This individual needs to find an SO or an AC that shares the objection.
    • A "pre-call" or remote meeting between all the SOs and ACs is scheduled, to see if the objection receives the prescribed threshold of approval from the community.
    • If this threshold is met, dialogue is undertaken with the Board, to see if the objection is sustained by the Board.
    • If this dialogue also fails, then an IRP can be initiated.

    The question of which “enforcement model” empowers the community arises post the initiation of this IRP, and in the event that the community receives an unfavourable decision through the IRP or that the ICANN Board refuses to implement the IRP decision. Thus, all the “enforcement models” retain the IRP as the primary method of internal dispute resolution.

    The direction that the CCWG-Accountability has taken with regard to enhancement of the IRP is heartening. And these proposals have received large support from the community. What is to be seen now is whether these proposals will be fully implemented by the Board or not, in addition to all the other proposals made by the CCWG.

    Enforcement  – An Overview of the Different Models

    In addition to trying to enhance the existing dispute resolution mechanism, the CCWG-Accountability also came up with a variety of “enforcement models”, by which the internet community would be vested with certain powers. And in response to the models proposed by the CCWG-Accountability, the ICANN Board came up with a counter proposal, called the MEM.

    Below is a comparison of the powers vested with the community under the SMM, the SDM and the MEM, organised by the core areas identified above.

    Reject/review the budget, strategic plans and operating plans; review/reject Board decisions with regard to the IANA functions

    • SMM: The Sole Member has the reserved power to reject the budget up to two times. The Member also has standing to enforce bylaw restrictions on the budget, etc.
    • SDM: The Sole Designator can only trigger Board consultations if opposition to the budget, etc. exists; the bylaws specify how many times such a consultation can be triggered. The Designator only possesses standing to enforce this consultation.
    • MEM: The community can reject the budget up to two times. The Board is required by the bylaws to reconsider the budget after such a rejection, by consulting with the community. If still no change is made, the community can initiate a process to recall the Board.

    Reject/review amendments to standard bylaws and fundamental bylaws

    • SMM: The Sole Member has the right to veto these changes, and also has standing to enforce this right under the relevant Californian law.
    • SDM: The Sole Designator can also veto these changes; however, there is ambiguity regarding the Designator's standing to enforce this right.
    • MEM: No veto power is granted to any SO or AC. Each SO and AC evaluates whether it wants to voice the objection; if a certain threshold of agreement is reached, then, as per the bylaws, the Board cannot go ahead with the amendment.

    Appointment and removal of individual ICANN directors

    • SMM: The Sole Member can appoint and remove individual directors based on direction from the applicable Nominating Committee.
    • SDM: The Sole Designator can likewise appoint and remove individual directors based on direction from the applicable Nominating Committee.
    • MEM: The SOs/ACs cannot appoint individual directors, but they can initiate a process for their removal. However, directors can only be removed for breach of, or on the basis of, certain clauses in a "pre-service letter" that they sign.

    Recall of the ICANN Board

    • SMM: The Sole Member has the power to recall the Board, and has standing to enforce this right in Californian courts.
    • SDM: The Sole Designator also has the power to recall the Board; however, there is ambiguity regarding standing to enforce this right.
    • MEM: The community is not vested with the power to recall the Board. However, in some scenarios, if a simultaneous trigger of pre-service letters occurs, something similar to a recall of the Board can occur.

    A Critique of these Models

    SMM:

    The Sole Member Model (or SMM) was discussed and adopted in the second draft proposal, released in August 2015. This model is in fact the simplest and most feasible of all the membership-based models, and has received substantial support from the internet community. The SMM proposes only one amendment to the ICANN bylaws - a move from having no members to having one member - while ICANN itself retains its character as a non-profit public benefit corporation under Californian law.

    This “sole member” will be the community as a whole, represented by the various SOs and ACs. The SOs and ACs require no separate legal personhood to be part of this “sole member”; they can participate directly. This participation is to be effected through a voting system, explained in the second draft, which allocates the maximum number of votes each SO and AC can cast. This ensures that an SO/AC does not have to cast a unanimous bloc vote; instead, each differing opinion within an SO/AC is given equal weight, as the sketch below illustrates.
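    The mechanics of this kind of capped, fractional voting can be illustrated with a short sketch in Python. This is only an illustration of the general idea, not the CCWG-Accountability's actual allocation scheme: the SO/AC names, the vote caps and the allocate_votes helper below are assumptions made purely for the example.

        # A minimal illustrative sketch, assuming each SO/AC has a hypothetical cap
        # of votes and splits that cap across the positions its participants take,
        # so that minority opinions inside an SO/AC still carry fractional weight.

        from collections import Counter


        def allocate_votes(positions_by_group, vote_caps):
            """Return the community-wide tally of fractional votes.

            positions_by_group maps an SO/AC name to the list of positions
            ("yes", "no" or "abstain") expressed by its participants; vote_caps
            maps the same names to the maximum votes that SO/AC may cast.
            """
            tally = Counter()
            for group, positions in positions_by_group.items():
                cap = vote_caps[group]
                counts = Counter(positions)
                total = sum(counts.values())
                for position, n in counts.items():
                    # Each group's capped votes are split in proportion to internal
                    # support, rather than being cast as a single bloc.
                    tally[position] += cap * n / total
            return tally


        if __name__ == "__main__":
            # Hypothetical caps and internal positions, purely for illustration.
            caps = {"GNSO": 5, "ccNSO": 5, "ALAC": 5}
            positions = {
                "GNSO": ["yes", "yes", "no", "no", "abstain"],
                "ccNSO": ["yes", "yes", "yes", "no"],
                "ALAC": ["no", "no", "yes"],
            }
            print(allocate_votes(positions, caps))

    On these assumed inputs, each group's five votes are divided in proportion to internal support, so a dissenting minority inside an SO/AC still contributes to the overall community tally instead of being erased by a bloc vote.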

    SDM:

    The “Sole Designator Model” or SDM is a slightly modified and watered-down version of the SMM, proposed by the CCWG-Accountability as an alternative to it. This model requires an amendment to the ICANN bylaws by which certain SOs/ACs are assigned “designator” status. By virtue of this status, they may then exercise certain rights - the right to recall the Board in certain scenarios and the right to veto budgets and strategic plans.

    However, there is some uncertainty in Californian law regarding who can be a designator - only an individual, or an entity as well. Whether unincorporated associations, such as the SOs and ACs, can be “designators” under the law is therefore a question that does not yet have a clear answer.

    Most discussion of the SDM has centred on the designator being vested with the power to “spill”, or remove, all the members of the ICANN Board. The designator is vested with this power as a last-resort mechanism for the community’s voice to be heard. However, an interesting point raised in one of the Accountability sessions at ICANN54 was the almost negligible probability of this course of action ever being taken, i.e. the Board being “spilled”. So while in theory this model seems to vest the community with massive power, in reality, because the right to “spill” the Board may never be invoked, the SDM is actually a weak enforceability model.

    Other Variants of the Designator Model:

    The CCWG-Accountability, in both its first and second reports, discussed variants of the designator model as well. A generic SO/AC Designator model was discussed in the first draft. The Enhanced SO/AC Designator model, discussed in the second draft, functions along similar lines; however, only those SOs and ACs that want to be made designators apply to become so, as opposed to the mandatory sole designator under the SDM.

    After the CCWG-Accountability released its second draft and the ICANN Board released its counter-proposal (see below for the Board’s proposal), discussion was mostly directed towards the SMM and the MEM. However, the discussion of the designator model has recently been revived by members of the ALAC at ICANN54 in Dublin, who unanimously issued a statement supporting the SDM.[9] Following this, many more in the community have expressed their support for adopting the designator model.[10]

    MEM:

    The Multi-stakeholder Enforcement Model or MEM was the ICANN Board’s counter-model to the models put forth by the CCWG-Accountability, specifically the SMM. However, there is no clarity with regard to the specifics of this model; in fact, the vagueness surrounding it is one of the biggest criticisms of the model itself.

    The CCWG-Accountability accounts for the possible consequences of implementing every model through a mechanism known as “stress tests”. The Board’s proposal, on the other hand, rejects the SMM due to its “unintended consequences”, but does not provide any clarity on what these consequences are or what the problems with the SMM itself are.[11]

    In addition, many are opposed to the Board proposal in general because it was not created by the community and is therefore not reflective of the community’s views, unlike the SMM.[12]

    Instead, the Board’s solution is to propose a counter-model that doesn’t in fact fix the existing problems of accountability.

    What is known of the MEM, gathered primarily from an FAQ published on the ICANN community forum, is this: the community, through the various SOs and ACs, can challenge only those actions of the Board that are contradictory to the Fundamental Bylaws, through binding arbitration. The arbitration panel will be decided by the Board and the arbitration itself will be financed by ICANN. Further, this process will not replace the existing Independent Review Process or IRP, but will run in parallel.

    Even this small snippet of the MEM is filled with problems. Concerns have been raised about the neutrality of the arbitral panel and about challenges to the award itself.[13]

    Further, the MEM seems to be in direct opposition to ICANN’s ‘gold standard’ multi-stakeholder model. Essentially, there is no increased accountability of ICANN under the MEM, which has elicited severe opposition from the community.

    What is interesting to note about all these models is that they are all premised on ICANN continuing to remain within the jurisdiction of the United States. Even more surprising is that hardly anyone questions this premise. At ICANN54, however, this issue received a small amount of traction - enough for the setting up of an ad-hoc committee to address these jurisdictional concerns - but even this is not enough. The only option now is to wait and see what this ad-hoc committee, as well as the CCWG-Accountability through its third draft proposal to be released later this year, comes up with.


    [1]. The IANA functions or the technical functions are the name, number and protocol functions with regard to the administration of the Domain Name System or the DNS.

    [2]. http://www.theguardian.com/technology/2015/sep/21/icann-internet-us-government

    [3]. http://www.theregister.co.uk/2015/10/19/congress_tells_icann_quit_escaping_accountability/?page=1

    [4]. http://www.theguardian.com/technology/2015/sep/21/icann-internet-us-government

    [5]. SOs are Supporting Organizations and ACs are Advisory Committees. They form part of ICANN’s operational structure.

    [6]. Leon Sanchez (ALAC member from the Latin American and Caribbean Region) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see page 5) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [7]. Leon Sanchez (ALAC member from the Latin American and Caribbean Region) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see page 5) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [8]. Thomas Rickert (GNSO-appointed CCWG co-chair) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see pages 15-16) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [9]. http://www.brandregistrygroup.org/alac-throws-spanner-in-icann-accountability-discussions

    [10]. http://www.theregister.co.uk/2015/10/22/internet_community_icann_accountability/

    [11]. http://www.theregister.co.uk/2015/09/07/icann_accountability_latest/

    [12]. http://www.circleid.com/posts/20150923_empire_strikes_back_icann_accountability_at_the_inflection_point/

    [13]. http://www.internetgovernance.org/2015/09/06/icann-accountability-a-three-hour-call-trashes-a-year-of-work/

    Bios and Photos of Speakers for Big Data in the Global South International Workshop

    by Prasad Krishna last modified Nov 06, 2015 02:01 AM

    Bios&Photos_BigDataWorkshop.pdf — PDF document, 1825 kB (1869456 bytes)

    Comments on the Draft Outcome Document of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10)

    by Geetha Hariharan last modified Nov 18, 2015 06:33 AM
    Following the comment-period on the Zero Draft, the Draft Outcome Document of the UN General Assembly's Overall Review of implementation of WSIS Outcomes was released on 4 November 2015. Comments were sought on the Draft Outcome Document from diverse stakeholders. The Centre for Internet & Society's response to the call for comments is below.

     

    The WSIS+10 Overall Review of the Implementation of WSIS Outcomes, scheduled for December 2015, comes as a review of the WSIS process initiated in 2003-05. At the December summit of the UN General Assembly, the WSIS vision and the mandate of the IGF are to be discussed. The Draft Outcome Document, released on 4 November 2015, works towards an outcome document for the summit. Comments were sought on the Draft Outcome Document; our comments are below.

    1. The Draft Outcome Document of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (“the current Draft”) stands considerably altered from the Zero Draft. With references to development-related challenges, the Zero Draft covered areas of growth and challenges of the WSIS. It noted the persisting digital divide, the importance of innovation and investment, and of conducive legal and regulatory environments, and the inadequacy of financial mechanisms. Issues crucial to Internet governance such as net neutrality, privacy and the mandate of the IGF found mention in the Zero Draft.
    2. The current Draft retains these, and adds to them. Some previously-omitted issues such as surveillance, the centrality of human rights and the intricate relationship of ICTs to the Sustainable Development Goals, now stand incorporated in the current Draft. This is most commendable. However, the current Draft still lacks teeth with regard to some of these issues, and fails to address several others.
    3. In our comments to the Zero Draft, CIS had called for these issues to be addressed. We reiterate our call in the following paragraphs.

    (1) ICT for Development

    4. In the current Draft, paragraphs 14-36 deal with ICTs for development. While the draft contains rubrics like ‘Bridging the digital divide’, ‘Enabling environment’, and ‘Financial mechanisms’, the following issues are unaddressed:
    • Equitable development for all;
    • Accessibility to ICTs for persons with disabilities;
    • Access to knowledge and open data.

    Equitable development

    5. In the Geneva Declaration of Principles (2003), two goals are set forth as the Declaration’s “ambitious goal”: (a) the bridging of the digital divide; and (b) equitable development for all (¶ 17). The current Draft speaks in detail about bridging the digital divide, but the goal of equitable development is conspicuously absent. At WSIS+10, when the WSIS vision evolves towards the creation of inclusive ‘knowledge societies’, equitable development should be both a key principle and a goal to stand by.
    6. Indeed, inequitable development underscores the persistence of the digital divide. The current Draft itself refers to several instances of inequitable development: for example, the uneven production capabilities and deployment of ICT infrastructure and technology in developing countries, landlocked countries, small island developing states, countries under occupation or suffering natural disasters, and other vulnerable states; the lack of adequate financial mechanisms in vulnerable parts of the world; and the variably affordable (or in many cases, unaffordable) spread of ICT devices, technology and connectivity.
    7. What underlies these challenges is the inequitable and uneven spread of ICTs across states and communities, including in their production, capacity-building, technology transfers, gender-concentrated adoption of technology, and inclusiveness.
    8. As such, it is essential that the WSIS+10 Draft Outcome Document reaffirm our commitment to equitable development for all peoples, communities and states.
    9. We suggest the following inclusion in paragraph 5 of the current Draft:
    “5. We reaffirm our common desire and commitment to the WSIS vision to build an equitable, people-centred, inclusive, and development-oriented Information Society…”

    Accessibility for persons with disabilities

    10. Paragraph 13 of the Geneva Declaration of Principles (2003) pledges to “pay particular attention to the special needs of marginalized and vulnerable groups of society” in the forging of an Information Society. Particularly, ¶ 13 recognises the special needs of older persons and persons with disabilities.

    11. Moreover, ¶ 31 of the Geneva Declaration of Principles calls for the special needs of persons with disabilities, and also of disadvantaged and vulnerable groups, to be taken into account while promoting the use of ICTs for capacity-building. Accessibility for persons with disabilities is thus core to bridging the digital divide – as important as bridging the gender divide in access to ICTs.

    12. Not only this, but the WSIS+10 Statement on the Implementation of WSIS Outcomes (June 2014) also reaffirms the commitment to “provide equitable access to information and knowledge for all… including… people with disabilities”, recognizing that it is “crucial to increase the participation of vulnerable people in the building process of Information Society…” (¶8).

    13. In our previous submission, CIS had suggested language drawing attention to this. Now, the current Draft only acknowledges that “particular attention should be paid to the specific ICT challenges facing… persons with disabilities…” (paragraph 11). It acknowledges also that now, accessibility for persons with disabilities constitutes one of the core elements of quality (paragraph 22). However, there is a glaring omission of a call to action, or a reaffirmation of our commitment to bridging the divide experienced by persons with disabilities.

    14. We suggest, therefore, the addition of the following language as paragraph 24A to the current Draft. Sections of this suggestion are drawn from ¶8 of the WSIS+10 Statement on the Implementation of WSIS Outcomes.

    "24A. Recalling the UN Convention on the rights of people with disabilities, the Geneva principles paragraph 11, 13, 14 and 15, Tunis Commitment paras 20, 22 and 24, and reaffirming the commitment to providing equitable access to information and knowledge for all, building ICT capacity for all and confidence in the use of ICTs by all, including youth, older persons, women, indigenous and nomadic peoples, people with disabilities, the unemployed, the poor, migrants, refugees and internally displaced people and remote and rural communities, it is crucial to increase the participation of vulnerable people in the building process of information Society and to make their voice heard by stakeholders and policy-makers at different levels. It can allow the most fragile groups of citizens worldwide to become an integrated part of their economies and also raise awareness of the target actors on the existing ICTs solution (such as tolls as e- participation, e-government, e-learning applications, etc.) designed to make their everyday life better. We recognise need for continued extension of access for people with disabilities and vulnerable people to ICTs, especially in developing countries and among marginalized communities, and reaffirm our commitment to promoting and ensuring accessibility for persons with disabilities. In particular, we call upon all stakeholders to honour and meet the targets set out in Target 2.5.B of the Connect 2020 Agenda that enabling environments ensuring accessible telecommunication/ICT for persons with disabilities should be established in all countries by 2020.”

    Access to knowledge and open data

    15. The Geneva Declaration of Principles dedicates a section to access to information and knowledge (B.3). It notes, in ¶26, that a “rich public domain” is essential to the growth of Information Society. It urges that public institutions be strengthened to ensure free and equitable access to information (¶26), and also that assistive technologies and universal design can remove barriers to access to information and knowledge (¶25). Particularly, the Geneva Declaration advocates the use of free and open source software, in addition to proprietary software, to meet these ends (¶27).

    16. It was also recognized in the WSIS+10 Statement on the Implementation of WSIS Outcomes (‘Challenges during implementation of Action Lines and new challenges that have emerged’) that there is a need to promote access to all information and knowledge, and to encourage open access to publications and information (C, ¶¶9 and 12).

    17. In our previous submission, CIS had highlighted the importance of open access to knowledge thus: “…the implications of open access to data and knowledge (including open government data), and responsible collection and dissemination of data are much larger in light of the importance of ICTs in today’s world. As Para 7 of the Zero Draft indicates, ICTs are now becoming an indicator of development itself, as well as being a key facilitator for achieving other developmental goals. As Para 56 of the Zero Draft recognizes, in order to measure the impact of ICTs on the ground – undoubtedly within the mandate of WSIS – it is necessary that there be an enabling environment to collect and analyse reliable data. Efforts towards the same have already been undertaken by the United Nations in the form of ‘Data Revolution for Sustainable Development’. In this light, the Zero Draft rightly calls for enhancement of regional, national and local capacity to collect and conduct analyses of development and ICT statistics (Para 56). Achieving the central goals of the WSIS process requires that such data is collected and disseminated under open standards and open licenses, leading to creation of global open data on the ICT indicators concerned.”

    18. This crucial element is missing from the current Draft of the WSIS+10 Outcome Document. Of course, the current Draft notes the importance of access to information and free flow of data. But it stops short of endorsing and advocating the importance of access to knowledge and free and open source software, which are essential to fostering competition and innovation, diversity of consumer/ user choice and ensuring universal access.

    19. We suggest the following addition – of paragraph 23A to the current Draft:

    "23A. We recognize the need to promote access for all to information and knowledge, open data, and open, affordable, and reliable technologies and services, while respecting individual privacy, and to encourage open access to publications and information, including scientific information and in the research sector, and particularly in developing and least developed countries.”

    (2) Human Rights in Information Society

    20. The current Draft recognizes that human rights have been central to the WSIS vision, and reaffirms that rights offline must be protected online as well. However, the current Draft omits to recognise the role played by corporations and intermediaries in facilitating access to and use of the Internet.

    21. In our previous submission, CIS had noted that “[as] the Internet is led largely by the private sector in the development and distribution of devices, protocols and content-platforms, corporations play a major role in facilitating – and sometimes, in restricting – human rights online”.

    22. We reiterate our suggestion for the inclusion of paragraph 43A to the current Draft:

    "43A. We recognize the critical role played by corporations and the private sector in facilitating human rights online. We affirm, in this regard, the responsibilities of the private sector set out in the Report of the Special Representative of the Secretary General on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31 (21 March 2011), and encourage policies and commitments towards respect and remedies for human rights.”

    (3) Internet Governance

    The support for multilateral governance of the Internet

    23. While the section on Internet governance is not considerably altered from the zero draft, there is a large substantive change in the current Draft. The current Draft states that the governance of the Internet should be “multilateral, transparent and democratic, with full involvement of all stakeholders” (¶50). Previously, the zero draft recognized “the general agreement that the governance of the Internet should be open, inclusive, and transparent”.

    24. A return to purely ‘multilateral’ Internet governance would be regressive. Governments are, without doubt, crucial in Internet governance. As scholarship and experience have both shown, governments have played a substantial role in shaping the Internet as it is today: whether this concerns the availability of content, spread of infrastructure, licensing and regulation, etc. However, these were and continue to remain contentious spaces.

    25. As such, it is essential to recognize that a plurality of governance models serve the Internet, in which the private sector, civil society, the technical community and academia play important roles. We recommend returning to the language of the zero draft in ¶32: “open, inclusive and transparent governance of the Internet”.

    Governance of Critical Internet Resources

    26. It is curious that the section on Internet governance in both the zero and the current Draft makes no reference to ICANN, and in particular, to the ongoing transition of IANA stewardship and the discussions surrounding the accountability of ICANN and the IANA operator. The stewardship of critical Internet resources, such as the root, is crucial to the evolution and functioning of the Internet. Today, ICANN and a few other institutions have a monopoly over the management and policy-formulation of several critical Internet resources.

    27. While the WSIS in 2003-05 considered this a troubling issue, the focus seems to have shifted entirely. ‘Open, inclusive, transparent and global’ become misnomer-principles when ICANN – and in effect, the United States – continues to have a monopoly over critical Internet resources. The allocation and administration of these resources should be decentralized and distributed, and should not be within the disproportionate control of any one jurisdiction.

    28. Therefore, we reiterate our suggestion to add paragraph 53A after Para 53:

    "53A. We affirm that the allocation, administration and policy involving critical Internet resources must be inclusive and decentralized, and call upon all stakeholders and in particular, states and organizations responsible for essential tasks associated with the Internet, to take immediate measures to create an environment that facilitates this development.”

    Inclusiveness and Diversity in Internet Governance

    29. The current Draft, in ¶52, recognizes that there is a need to “promote greater participation and engagement in Internet governance of all stakeholders…”, and calls for “stable, transparent and voluntary funding mechanisms to this end.” This is most commendable.

    30. The issue of inclusiveness and diversity in Internet governance is crucial: today, Internet governance organisations and platforms suffer from a lack of inclusiveness and diversity, extending across the representation, participation and operations of these organisations. As CIS submitted previously, the mention of inclusiveness and diversity becomes, in many cases, tokenism or a formal (but not operational) principle.

    31. As we submitted before, the developing world is pitifully represented in standards organisations and in ICANN, and policy discussions in organisations like ISOC occur largely in cities like Geneva and New York. For example, 307 out of 672 registries listed in ICANN’s registry directory are based in the United States, while 624 of the 1010 ICANN-accredited registrars are US-based.

    32. Not only this, but 80% of the responses received by ICANN during the ICG’s call for proposals came from men. A truly global and open, inclusive and transparent governance of the Internet must not be so skewed. Representation must include not only those from developing countries, but must also extend across genders and communities.

    33. We propose, therefore, the addition of a paragraph 51A after Para 51:

    "51A. We draw attention to the challenges surrounding diversity and inclusiveness in organisations involved in Internet governance, including in their representation, participation and operations. We note with concern that the representation of developing countries, of women, persons with disabilities and other vulnerable groups, is far from equitable and adequate. We call upon organisations involved in Internet governance to take immediate measures to ensure diversity and inclusiveness in a substantive manner.”

     


    Prepared by Geetha Hariharan, with inputs from Sunil Abraham and Japreet Grewal. All comments submitted towards the Draft Outcome Document may be found at this link.

    Summary Report Internet Governance Forum 2015

    by Jyoti Panday last modified Nov 30, 2015 10:47 AM
    Centre for Internet and Society (CIS), India participated in the Internet Governance Forum (IGF) held at Poeta Ronaldo Cunha Lima Conference Center, Joao Pessoa in Brazil from 10 November 2015 to 13 November 2015. The theme of IGF 2015 was ‘Evolution of Internet Governance: Empowering Sustainable Development’. Sunil Abraham, Pranesh Prakash & Jyoti Panday from CIS actively engaged and made substantive contributions to several key issues affecting internet governance at the IGF 2015. The issue-wise detail of their engagement is set out below.

    INTERNET GOVERNANCE

    I. The Multi-stakeholder Advisory Group to the IGF organised a discussion on Sustainable Development Goals (SDGs) and Internet Economy at the Main Meeting Hall from 9:00 am to 12:30 pm on 11 November, 2015. The discussions at this session focused on the importance of Internet Economy enabling policies and ecosystems for the fulfilment of different SDGs. Several concerns were addressed, relating to internet entrepreneurship, effective ICT capacity building, protection of intellectual property within and across borders, and the availability of local applications and content. The panel also discussed the need to identify SDGs where internet-based technologies could make the most effective contribution. Sunil Abraham contributed to the panel discussions by addressing the issue of development and promotion of local content and applications. The list of speakers included:

    1. Lenni Montiel, Assistant-Secretary-General for Development, United Nations

    2. Helani Galpaya, CEO LIRNEasia

    3. Sergio Quiroga da Cunha, Head of Latin America, Ericsson

    4. Raúl L. Katz, Adjunct Professor, Division of Finance and Economics, Columbia Institute of Tele-information

    5. Jimson Olufuye, Chairman, Africa ICT Alliance (AfICTA)

    6. Lydia Brito, Director of the Office in Montevideo, UNESCO

    7. H.E. Rudiantara, Minister of Communication & Information Technology, Indonesia

    8. Daniel Sepulveda, Deputy Assistant Secretary, U.S. Coordinator for International and Communications Policy at the U.S. Department of State  

    9. Deputy Minister, Department of Telecommunications and Postal Services, Republic of South Africa

    10. Sunil Abraham, Executive Director, Centre for Internet and Society, India

    11. H.E. Junaid Ahmed Palak, Information and Communication Technology Minister of Bangladesh

    12. Jari Arkko, Chairman, IETF

    13. Silvia Rabello, President, Rio Film Trade Association

    14. Gary Fowlie, Head of Member State Relations & Intergovernmental Organizations, ITU

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/igf2015-main-sessions

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2327-2015-11-11-internet-economy-and-sustainable-development-main-meeting-room

    Video link Internet economy and Sustainable Development here https://www.youtube.com/watch?v=D6obkLehVE8

     II. Public Knowledge organised a workshop on The Benefits and Challenges of the Free Flow of Data at Workshop Room 5 from 11:00 am to 12:00 pm on 12 November, 2015. The discussions in the workshop focused on the benefits and challenges of the free flow of data and also the concerns relating to data flow restrictions including ways to address them. Sunil Abraham contributed to the panel discussions by addressing the issue of jurisdiction of data on the internet. The panel for the workshop included the following.

    1. Vint Cerf, Google

    2. Lawrence Strickling, U.S. Department of Commerce, NTIA

    3. Richard Leaning, European Cyber Crime Centre (EC3), Europol

    4. Marietje Schaake, European Parliament

    5. Nasser Kettani, Microsoft

    6. Sunil Abraham, CIS India

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2467-2015-11-12-ws65-the-benefits-and-challenges-of-the-free-flow-of-data-workshop-room-5

    Video link https://www.youtube.com/watch?v=KtjnHkOn7EQ

     III. Article 19 and Privacy International organised a workshop on Encryption and Anonymity: Rights and Risks at Workshop Room 1 from 11:00 am to 12:30 pm on 12 November, 2015. The workshop fostered a discussion about the latest challenges to protection of anonymity and encryption and ways in which law enforcement demands could be met while ensuring that individuals still enjoyed strong encryption and unfettered access to anonymity tools. Pranesh Prakash contributed to the panel discussions by addressing concerns about existing south Asian regulatory framework on encryption and anonymity and emphasizing the need for pervasive encryption. The panel for this workshop included the following.

    1. David Kaye, UN Special Rapporteur on Freedom of Expression

    2. Juan Diego Castañeda, Fundación Karisma, Colombia

    3. Edison Lanza, Organisation of American States Special Rapporteur

    4. Pranesh Prakash, CIS India

    5. Ted Hardie, Google

    6. Elvana Thaci, Council of Europe

    7. Professor Chris Marsden, Oxford Internet Institute

    8. Alexandrine Pirlot de Corbion, Privacy International

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2407-2015-11-12-ws-155-encryption-and-anonymity-rights-and-risks-workshop-room-1

    Video link available here https://www.youtube.com/watch?v=hUrBP4PsfJo

     IV. Chalmers & Associates organised a session on A Dialogue on Zero Rating and Network Neutrality at the Main Meeting Hall from 2:00 pm to 4:00 pm on 12 November, 2015. The Dialogue provided access to expert insight on zero-rating and a full spectrum of diverse views on this issue. The Dialogue also explored alternative approaches to zero rating such as use of community networks. Pranesh Prakash provided a detailed explanation of harms and benefits related to different approaches to zero-rating. The panellists for this session were the following.

    1. Jochai Ben-Avie, Senior Global Policy Manager, Mozilla, USA

    2. Igor Vilas Boas de Freitas, Commissioner, ANATEL, Brazil

    3. Dušan Caf, Chairman, Electronic Communications Council, Republic of Slovenia

    4. Silvia Elaluf-Calderwood, Research Fellow, London School of Economics, UK/Peru

    5. Belinda Exelby, Director, Institutional Relations, GSMA, UK

    6. Helani Galpaya, CEO, LIRNEasia, Sri Lanka

    7. Anja Kovacs, Director, Internet Democracy Project, India

    8. Kevin Martin, VP, Mobile and Global Access Policy, Facebook, USA

    9. Pranesh Prakash, Policy Director, CIS India

    10. Steve Song, Founder, Village Telco, South Africa/Canada

    11. Dhanaraj Thakur, Research Manager, Alliance for Affordable Internet, USA/West Indies

    12. Christopher Yoo, Professor of Law, Communication, and Computer & Information Science, University of Pennsylvania, USA

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/igf2015-main-sessions

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2457-2015-11-12-a-dialogue-on-zero-rating-and-network-neutrality-main-meeting-hall-2

     V. The Internet & Jurisdiction Project organised a workshop on Transnational Due Process: A Case Study in MS Cooperation at Workshop Room 4 from 11:00 am to 12:00 pm on 13 November, 2015. The workshop discussion focused on the challenges in developing an enforcement framework for the internet that guarantees transnational due process and legal interoperability. The discussion also focused on innovative approaches to multi-stakeholder cooperation such as issue-based networks, inter-sessional work methods and transnational policy standards. The panellists for this discussion were the following.

    1. Anne Carblanc, Head of Division, Directorate for Science, Technology and Industry, OECD

    2. Eileen Donahoe, Director, Global Affairs, Human Rights Watch

    3. Byron Holland, President and CEO, CIRA (Canadian ccTLD)

    4. Christopher Painter, Coordinator for Cyber Issues, US Department of State

    5. Sunil Abraham, Executive Director, CIS India

    6. Alice Munyua, Lead, dotAfrica Initiative and GAC representative, African Union Commission

    7. Will Hudson, Senior Advisor for International Policy, Google

    8. Dunja Mijatovic, Representative on Freedom of the Media, OSCE

    9. Thomas Fitschen, Director for the United Nations, for International Cooperation against Terrorism and for Cyber Foreign Policy, German Federal Foreign Office

    10. Hartmut Glaser, Executive Secretary, Brazilian Internet Steering Committee

    11. Matt Perault, Head of Policy Development, Facebook

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2475-2015-11-13-ws-132-transnational-due-process-a-case-study-in-ms-cooperation-workshop-room-4

    Video link Transnational Due Process: A Case Study in MS Cooperation available here https://www.youtube.com/watch?v=M9jVovhQhd0

    VI. The Internet Governance Project organised a meeting of the Dynamic Coalition on Accountability of Internet Governance Venues at Workshop Room 2 from 14:00 – 15:30 on 12 November, 2015. The coalition brought together panelists to highlight the challenges in developing an accountability framework for internet governance venues, including setting up standards and developing a set of concrete criteria. Jyoti Panday provided the perspective of civil society on why accountability is necessary in internet governance processes and organizations. The panelists for this workshop included the following.

    1. Robin Gross, IP Justice

    2. Jeanette Hofmann, Director Alexander von Humboldt Institute for Internet and Society

    3. Farzaneh Badiei, Internet Governance Project

    4. Erika Mann, Managing Director, Public Policy, Facebook, and Board of Directors, ICANN

    5. Paul Wilson, APNIC

    6. Izumi Okutani, Japan Network Information Center (JPNIC)

    7. Keith Drazek, Verisign

    8. Jyoti Panday, CIS

    9. Jorge Cancio, GAC representative

    Detailed description of the workshop is available here http://igf2015.sched.org/event/4c23/dynamic-coalition-on-accountability-of-internet-governance-venues?iframe=no&w=&sidebar=yes&bg=no

    Video link https://www.youtube.com/watch?v=UIxyGhnch7w

    VII. Digital Infrastructure Netherlands Foundation organized an open forum at Workshop Room 3 from 11:00 – 12:00 on 10 November, 2015. The open forum discussed the increase in government engagement with “the internet” to protect their citizens against crime and abuse and to protect economic interests and critical infrastructures. It brought together panelists to present ideas about an agenda for the international protection of ‘the public core of the internet’ and to collect and discuss ideas for the formulation of norms and principles and for the identification of practical steps towards that goal. Pranesh Prakash participated in the open forum. Other speakers included:

    1. Bastiaan Goslings, AMS-IX, NL

    2. Pranesh Prakash, CIS, India

    3. Marilia Maciel, FGV, Brazil

    4. Dennis Broeders, NL Scientific Council for Government Policy

    Detailed description of the open forum is available here http://schd.ws/hosted_files/igf2015/3d/DINL_IGF_Open%20Forum_The_public_core_of_the_internet.pdf

    Video link available here https://www.youtube.com/watch?v=joPQaMQasDQ

    VIII. UNESCO, the Council of Europe, Oxford University, the Office of the High Commissioner on Human Rights, Google and the Internet Society organised a workshop on hate speech and youth radicalisation at Room 9 on Thursday, November 12. UNESCO shared the initial outcomes from its commissioned research on online hate speech, including practical recommendations on combating online hate speech through understanding the challenges, mobilizing civil society, lobbying private sectors and intermediaries, and educating individuals in media and information literacy. The workshop also discussed how to help empower youth to address online radicalization and extremism, and realize their aspirations to contribute to a more peaceful and sustainable world. Sunil Abraham provided his inputs. Other speakers included:

    1. Chaired by Ms Lidia Brito, Director for UNESCO Office in Montevideo

    2. Frank La Rue, Former Special Rapporteur on Freedom of Expression

    3. Lillian Nalwoga, President ISOC Uganda and rep CIPESA, Technical community

    4. Bridget O’Loughlin, CoE, IGO

    5. Gabrielle Guillemin, Article 19

    6. Iyad Kallas, Radio Souriali

    7. Sunil Abraham, Executive Director, Centre for Internet and Society, Bangalore, India

    8. Eve Salomon, global Chairman of the Regulatory Board of RICS

    9. Javier Lesaca Esquiroz, University of Navarra

    10. Representative GNI

    11. Remote Moderator: Xianhong Hu, UNESCO

    12. Rapporteur: Guilherme Canela De Souza Godoi, UNESCO

    Detailed description of the workshop is available here http://igf2015.sched.org/event/4c1X/ws-128-mitigate-online-hate-speech-and-youth-radicalisation?iframe=no&w=&sidebar=yes&bg=no

    Video link to the panel is available here https://www.youtube.com/watch?v=eIO1z4EjRG0

     INTERMEDIARY LIABILITY

    IX. The Electronic Frontier Foundation, the Centre for Internet & Society, India, Open Net Korea and Article 19 collaborated to organize a workshop on the Manila Principles on Intermediary Liability at Workshop Room 9 from 11:00 am to 12:00 pm on 13 November 2015. The workshop elaborated on the Manila Principles, a high-level framework of principles, best practices and safeguards for content restriction and for addressing intermediary liability for third-party content. The workshop saw participants engaged in overlapping projects on restriction practices come together to give feedback and highlight recent developments across liability regimes. Jyoti Panday laid down the key details of the Manila Principles framework in this session. The panelists for this workshop included the following.

    1. Kelly Kim, Open Net Korea

    2. Jyoti Panday, CIS India

    3. Gabrielle Guillemin, Article 19

    4. Rebecca MacKinnon, on behalf of UNESCO

    5. Giancarlo Frosio, Center for Internet and Society, Stanford Law School

    6. Nicolo Zingales, Tilburg University

    7. Will Hudson, Google

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2423-2015-11-13-ws-242-the-manila-principles-on-intermediary-liability-workshop-room-9

    Video link available here https://www.youtube.com/watch?v=kFLmzxXodjs

     ACCESSIBILITY

    X. The Dynamic Coalition on Accessibility and Disability and the Global Initiative for Inclusive ICTs organised a workshop on Empowering the Next Billion by Improving Accessibility at Workshop Room 6 from 9:00 am to 10:30 am on 13 November, 2015. The discussion focused on the need for, and ways of, removing accessibility barriers which prevent over one billion potential users from benefiting from the Internet, including for essential services. Sunil Abraham spoke specifically about the lack of compliance of existing ICT infrastructure with well-established accessibility standards, particularly in relation to accessibility barriers in the disaster management process. He discussed the barriers faced by persons with physical or psychosocial disabilities. The panelists for this discussion were the following.

    1. Francesca Cesa Bianchi, G3ICT

    2. Cid Torquato, Government of Brazil

    3. Carlos Lauria, Microsoft Brazil

    4. Sunil Abraham, CIS India

    5. Derrick L. Cogburn, Institute on Disability and Public Policy (IDPP) for the ASEAN (Association of Southeast Asian Nations) Region

    6. Fernando H. F. Botelho, F123 Consulting

    7. Gunela Astbrink, GSA InfoComm

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2438-2015-11-13-ws-253-empowering-the-next-billion-by-improving-accessibility-workshop-room-3

    Video Link Empowering the next billion by improving accessibility https://www.youtube.com/watch?v=7RZlWvJAXxs

     OPENNESS

    XI. A workshop on FOSS & a Free, Open Internet: Synergies for Development was organized at Workshop Room 7 from 2:00 pm to 3:30 pm on 13 November, 2015. The discussion focused on the increasing risk to the openness of the internet and to the ability of present and future generations to use technology to improve their lives. The panel shared different perspectives on the future co-development of FOSS and a free, open Internet; the threats that are emerging; and ways for communities to surmount them. Sunil Abraham emphasised the importance of free software, open standards, open access and access to knowledge, noted the lack of this mandate in the draft outcome document for the upcoming WSIS+10 review, and called for its inclusion. Pranesh Prakash further contributed to the discussion by emphasizing the need for free and open source software with end-to-end encryption and traffic-level encryption based on open standards, which are decentralized and work through federated networks. The panellists for this discussion were the following.

    1. Satish Babu, Technical Community, Chair, ISOC-TRV, Kerala, India

    2. Judy Okite, Civil Society, FOSS Foundation for Africa

    3. Mishi Choudhary, Private Sector, Software Freedom Law Centre, New York

    4. Fernando Botelho, Private Sector, heads F123 Systems, Brazil

    5. Sunil Abraham, CIS India

    6. Pranesh Prakash, CIS India

    7. Nnenna Nwakanma, World Wide Web Foundation

    8. Yves MIEZAN EZO, Open Source strategy consultant

    9. Corinto Meffe, Advisor to the President and Directors, SERPRO, Brazil

    10. Frank Coelho de Alcantara, Professor, Universidade Positivo, Brazil

    11. Caroline Burle, Institutional and International Relations, W3C Brazil Office and Center of Studies on Web Technologies

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2468-2015-11-13-ws10-foss-and-a-free-open-internet-synergies-for-development-workshop-room-7

    Video link available here https://www.youtube.com/watch?v=lwUq0LTLnDs



    WhatsApps with fireworks, apps with diyas: Why Diwali needs to go beyond digital

    by Nishant Shah last modified Nov 23, 2015 01:27 PM
    The idea of a 'digital' Diwali reduces our social relationships to a ledger of give and take. Over the last fortnight, I have been bombarded with advertisements selling the idea of a “Digital Diwali”. We have become so used to the idea that everything that is digital is modern, better and more efficient.
    WhatsApps with fireworks, apps with diyas: Why Diwali needs to go beyond digital

    For me, the digitality of Diwali is beyond the surface level of seductive screens and one-click shopping, or messages of love and apps of light. (Source: Reuters)

    The article was published in the Indian Express on November 22, 2015.


    I have WhatsApp messages with exploding fireworks, singing greeting cards that chant mystic sounding messages, an app that turns my smartphone into a flickering diya, another app that remotely controls the imitation LED candles on my windows, an invitation to Skype in for a puja at a friend’s house 3,000 km away, and the surfeit of last minute shopping deals, each one offering a dhamaka of discounts.

    However, to me, the digitality of Diwali is beyond the surface level of seductive screens and one-click shopping, or messages of love and apps of light. Think of Diwali as sharing the fundamental logic that governs the digital — the logic of counting. As we explode with joy this festive season, we count our blessings, our loved ones, the gifts and presents that we exchange. If we are on the new Fitbit trend, we count the calories we consume and burn as we make our way through parties where it is important to see and be seen, compare and contrast, connect with all the people who could be thought of as friends, followers, connectors, or connections.

    While there is no denying that there is a sociality that the festival brings in, there is also a cruel algebra of counting that comes along with it. It is no surprise that as we celebrate the victory of good over evil and right over wrong, we also simultaneously bow our heads to the goddess of wealth in this season.

    Look beyond the glossy surface of Diwali festivities, and you realise that it is exactly like the digital. Digital is about counting. It is right there in the name — digits refers to numbers. Or digits refer to fingers — these counting appendages which we can manipulate and flex in order to achieve desired results. At the core of digital systems is the logic of counting, and counting, as anybody will tell us, is not a benign process. What gets counted, gets accounted for, thus producing a ledger of give and take which often becomes the measure of our social relationships.

    I remember, as a child, my mother meticulously making a note of every gift or envelope filled with money that ever came our way from the relatives, so that there would be precise and exact reciprocation. I am certain that there is now an app which can keep a track of these exchanges. I am not suggesting that these occasions of gifting are merely mercenary, but they are embodiments of finely calibrated values and worth of relationships defined by proximity, intimacy, hierarchy and distance. The digital produces and works on a similar algorithm, which is often as inscrutable and opaque as the unspoken codes of the Diwali ledger.

    There is something else that happens with counting. The only things that can have value are things that have value. I don’t know which ledger counts the coming together of my very distributed family for an evening of chatting, talking, sharing lives and laughter. I don’t know how anybody would reciprocate that one late night when a cousin came to our home and spent hours with my younger brother making a rangoli to surprise the rest of us. I have no idea how they will ever reciprocate gifts that one of the younger kids made at school for all the members of the family.

    Diwali is about the things, but like the digital system, these are things that cannot be counted. And within the digital system, things that cannot be counted are things that get discounted. They become unimportant. They become noise, or rubbish. Our social networks are counting systems that might notice the low frequency of my connections with my extended family but they cannot quantify the joy I hear in the voice of my grandmother when I call her from a different time-zone to catch up with her. Digital systems can only deal with things with value and not their worth.

    I do want to remind myself that there is more to this occasion than merely counting. And for once, I want to go beyond the digital, where my memories of the past and the expectations of the future are not shaped by the digital systems of counting and quantifying. Instead, I want Diwali to be analogue. I shall still be mediating my collectivity with the promises of connectivity, but I want to think of this moment as beyond the logics and logistics of counting that codify our social transactions and take such a central location in our personal functioning. This Diwali, I am rooting for a post-digital Diwali, that accounts for all those things that cannot be counted, but are sometimes the only things that really count.

    CIS Submission on CCWG-Accountability 2nd Draft Proposal on Work Stream 1 Recommendations

    by Pranesh Prakash last modified Nov 23, 2015 02:58 PM
    The Centre for Internet & Society (CIS) submitted the below to ICANN's CCWG-Accountability.

    The CCWG Accountability proposal is longer than many countries' constitutions.  Given that, we will keep our comments brief, addressing a very limited set of the issues in very broad terms.

    Human Rights

    ICANN is unique in many ways.  It is a global regulator that has powers of taxation to fund its own operation.  ICANN is not a mere corporation. For such a regulator, ensuring fair process (what is often referred to as "natural justice") as well as substantive human rights (such as the freedom of expression, the right against discrimination, the right to privacy, and cultural diversity) is important.  Given this, and given the narrow framing of "free expression and the free flow of information" in Option 1, we believe Option 2 is preferable.

    Diversity

    We are glad that diversity is being recognized as an important principle.  As we noted during the open floor session at ICANN49: [We are] extremely concerned about the accountability of ICANN to the global community.  Due to various decisions made by the US government relating to ICANN's birth, ICANN has had a troubled history with legitimacy.  While it has managed to gain and retain the confidence of the technical community, it still lacks political legitimacy due to its history.  The NTIA's decision has presented us an opportunity to correct this.

    However, ICANN can't hope to do so without going beyond the current ICANN community, which while nominally being 'multistakeholder' and open to all, grossly under-represents those parts of the world that aren't North America and Western Europe.

    Of the 1010 ICANN-accredited registrars, 624 are from the United States, and 7 from the 54 countries of Africa.  In a session yesterday, a large number of the policies that favour entrenched incumbents from richer countries were discussed.  But without adequate representation from poorer countries, and adequate representation from the rest of the world's Internet population, there is no hope of changing these policies.

    This is true not just of the business sector, but of all the 'stakeholders' that are part of global Internet policymaking, whether they follow the ICANN multistakeholder model or another.  A look at the board members of the Internet Architecture Board, for instance, would reveal how skewed the technical community can be, whether in terms of geographic or gender diversity.

    Without greater diversity within the global Internet policymaking communities, there is no hope of equity, respect for human rights — civil, political, cultural, social and economic — and democratic functioning, no matter how 'open' the processes seem to be, and no hope of ICANN accountability either.

    Meanwhile, there are those who are concerned that diversity should not prevail over skill and experience.  Those who have the greatest skill and experience will be those who are insiders in the ICANN system.  To believe that being an insider in the ICANN system ought to be privileged over diversity is wrong.  A call for diversity isn't just political correctness.  It is essential for legitimacy of ICANN as a globally-representative body, and not just one where the developed world (primarily US-based persons) makes policies for the whole globe, which is what it has so far been.  Of course, this cannot be corrected overnight, but it is crucial that this be a central focus of the accountability initiative.

    Jurisdiction, Membership Models and Voting Rights

    The Sole-Member Community Mechanism (SMCM) that has been proposed seems in large part the best manner of dealing with accountability issues available under Californian law relating to public benefit corporations, and is the lynchpin of the whole accountability mechanism under Work Stream 1.

    However, the jurisdictional analysis laid down in 11.3 will only be completed post-transition, as part of Work Stream 2. Thus the SMCM may not necessarily be the best model under a different legal jurisdiction, and it would be useful to discuss the dependency between these more clearly.  In this vein, it is essential that Article XVIII, Section 1 not be designated a fundamental bylaw.  Further, it would be useful to add that for some limited aspects of the transition (such as IANA functioning), ICANN should seek to enter into a host country agreement providing legal immunity, thus qualifying para 125 ("ICANN accountability requires compliance with applicable legislation, in jurisdictions where it operates."), since the IANA functions operator ought not to be forced by a country to refuse to honour requests made by, for example, North Korea.

    It should also be noted that accountability needs independence, which may be of two kinds: independence of financial source, and independence of appointment.  From what one could gather from the CCWG proposal, the Independent Review Panel will be funded by the budget the ICANN Board prepares, while the appointment process is still unclear.

    One of the most important accountability mechanisms with regard to the IANA functions is that of changing the IANA Functions Operator.  As per the CWG Stewardship's current proposal, the "Post-Transition IANA" won't be an entity that is independent of ICANN.  If the PTI's governance is permanently made part of ICANN's fundamental bylaws (as an affiliate controlled by ICANN), how is it proposed that the IFO be moved from PTI to some other entity if the IANA Functions Review Team so decides? Additionally, for such an important function, the composition of the IFRT should not be left unspecified.

    While it is welcome that a separation is proposed between the IANA budget and budget for rest of ICANN's functioning, the current discussion around budgets seems to be based on the assumption that all IANA functions will be funded by ICANN, whereas if the IANA functions are separated, each community might fund it separately.  That provides two levels of insulation to IANA functions operator(s): separate sources of operational revenue, as well as separate budgets within ICANN.

    It should be noted that some responses express concern about the shifting of existing power structures within ICANN through some of the proposed alternative voting allocations in the SMCM. However, rather than presenting arguments as to why these shifts would be beneficial or harmful for ICANN's overall accountability, these responses seem to assume that any shift from the current power structures is harmful.  This is an unfounded assumption and cannot be a valid reason, nor can speculation about how the United States Congress will behave be a valid reason for rejecting an otherwise valid proposal.  If there are harms, they ought to be clearly articulated: shifts from the status quo and fear of the US Congress are not valid harms.  Thus, while it is important, in arriving at any judgment, to consider how different voting rights models might change the status quo, that cannot be the sole criterion for judging their merits.  Further, as the French government notes:

    [T]he French Government still considers that linking Stress Test 18 to a risk of capture of ICANN by governments and NTIA’s requirement that no “government-led or intergovernmental organization solution would be acceptable”, makes no sense. . . . Logically, the risk of capture of ICANN by governments in the future is as low as it is now and in any case, it cannot lead to a “government-led or intergovernmental organization solution”.

    While dealing with the question of relative voting proportions, the community must remember that not all parts of the world are as developed, with regard to the domain name industry and civil society, as North America, Western Europe and other developed regions, and thus may not find adequate representation via the SOs.  In many parts of the world, civil society organizations — especially those focussed on Internet governance and domain name policies — are non-existent.  Thus a system that privileges the SOs to the exclusion of other components of a multistakeholder governance model would not be representative or diverse.  A multistakeholder model cannot disproportionately represent business interests over all other interests.

    In this regard, the comments of former ICANN Chairperson, Rod Beckstrom, at ICANN43 ought to be recalled:

    ICANN must be able to act for the public good while placing commercial and financial interests in the appropriate context . . . How can it do this if all top leadership is from the very domain name industry it is supposed to coordinate independently?

    As Kieren McCarthy points out about ICANN:

    The Board does have too many conflicted members
    The NomCom is full of conflicts
    There are not enough independent voices within the organization

    Reforms in these areas ought to be as crucial to accountability as the membership model.

    The current mechanisms for ensuring transparency, such as the DIDP process, are wholly inadequate.  We have summarized our experience with the DIDP process, and how often we were denied information on baseless grounds, in this table.

    Predictive Policing: What is it, How it works, and its Legal Implications

    by Rohan George — last modified Nov 24, 2015 04:31 PM
    This article reviews literature surrounding big data and predictive policing and provides an analysis of the legal implications of using predictive policing techniques in the Indian context.

    Introduction

    For the longest time, humans have been obsessed with prediction. Perhaps the most well-known oracle in history, Pythia, the infallible Oracle of Delphi, was said to predict future events in hysterical outbursts on the seventh day of the month, inspired by the god Apollo himself. This fascination with informing ourselves about future events has hardly subsided. What has changed, however, are the methods we employ to do so. The development of Big data technologies, for one, has seen radical applications in many parts of life as we know it, including enhancing our ability to make accurate predictions about the future.

    One notable application of Big data to prediction caters to another basic need that has existed since the dawn of human civilisation: the need to protect our communities and cities. The word 'police' itself originates from the Greek word 'polis', which means city. These two concepts, prediction and policing, come together in the practice of predictive policing, which is the application of computer modelling to historical crime data and metadata to predict future criminal activity[1]. In the subsequent sections, I will introduce predictive policing and explain some of the main methods within the domain. Because of the disruptive nature of these technologies, it will also be prudent to expand on the implications predictive technologies have for justice, privacy protections and protections against discrimination, among others.

    In introducing the concept of predictive policing, my first step is to give a short explanation of current predictive analytics techniques, because these are the techniques that are applied in a law enforcement context as predictive policing.

    What is predictive analytics?

    Facilitated by the availability of big data, predictive analytics uses algorithms to recognise data patterns and predict future outcomes[2]. Predictive analytics encompasses data mining, predictive modeling, machine learning, and forecasting[3]. It also relies heavily on machine learning and artificial intelligence approaches [4]. The aim of such analysis is to identify relationships among variables that may not be immediately apparent using hypothesis-driven methods.[5] In the mainstream media, one of the most infamous stories about the use of predictive analytics comes from the USA, regarding the department store Target and its data analytics practices [6]. Target mined data from the purchasing patterns of people who signed up for its baby registry. From this it was able to predict approximately when customers might be due and target advertisements accordingly. In the noted story, the predictions were so accurate that Target identified a customer's pregnancy before her father knew she was pregnant. [7]

    Examples of predictive analytics

    • Predicting the success of a movie based on its online ratings[8]
    • Many universities, sometimes in partnership with other firms, use predictive analytics to provide course recommendations to students, track student performance, personalize curricula for individual students and foster networking between students.[9]
    • Predictive Analysis of Corporate Bond Indices Returns[10]

    Relationship between predictive analytics and predictive policing

    The same techniques used in many of the predictive methods mentioned above find application in some predictive policing methods. However, two important points need to be raised:

    First, predictive analytics-based methods form only a subset of predictive policing. While the steps in creating a predictive model (defining a target variable, exposing the model to training data, selecting appropriate features and finally running the predictive analysis [11]) may be the same in a policing context, there are other methods which may be used to predict crime but which do not rely on data mining. These techniques may instead combine other approaches, such as some of those detailed below, with data about historical crime to generate predictions.
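
    To make those generic modelling steps concrete, the following is a minimal sketch in Python (using numpy and scikit-learn as one possible toolset) of defining a target variable, training on historical data and generating predictions. The features, figures and thresholds are entirely hypothetical and the data is synthetic; this illustrates the workflow only, and is not a description of any actual police system.

        # Minimal sketch of the generic predictive-modelling steps: define a
        # target variable, train on historical data, select features, predict.
        # All data here is synthetic and illustrative only.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(0)

        # Hypothetical features for city blocks: prior incident count, distance
        # to a highway (km), hour of day. Target: whether an incident occurred
        # in the following week.
        n = 1000
        features = np.column_stack([
            rng.poisson(3, n),          # prior incidents
            rng.uniform(0, 10, n),      # distance to highway
            rng.integers(0, 24, n),     # hour of day
        ])
        target = (features[:, 0] + rng.normal(0, 1, n) > 4).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(
            features, target, test_size=0.25, random_state=0)

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)                 # the "training" step
        print(classification_report(y_test, model.predict(X_test)))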

    Second, in her article "Policing by Numbers: Big Data and the Fourth Amendment"[12], Joh categorises three main applications of Big data in policing: Predictive Policing, Domain Awareness systems and Genetic Data Banks. Genetic data banks refer to large databases of DNA collected as part of the justice system. Issues arise when the DNA collected is repurposed to conduct familial searches, instead of being used for corroborating identity; familial searches may have disproportionate impacts on minority races. Domain Awareness systems use various computer software and other digital surveillance tools, such as Geographical Information Systems [13] or more illicit ones such as Black Rooms[14], to "help police create a software-enhanced picture of the present, using thousands of data points from multiple sources within a city" [15]. I believe Joh was very accurate in separating Predictive Policing from Domain Awareness systems, especially when it comes to analysing the implications of the various applications of Big data in policing.

    In such an analysis of the implications of using predictive policing methods, the issues surrounding predictive technologies often get conflated with larger issues about the application of big data in law enforcement. That opens the debate up to questions about overly intrusive evidence gathering and mass surveillance systems, which, though used along with predictive technology, are not themselves predictive in nature. In this article, I aim to concentrate on the specific implications that arise due to predictive methods.

    One important point regarding the impact of predictive policing is how the insights that predictive policing methods offer are used. There is much support for the idea that predictive policing does not replace existing policing methods but augments them. The RAND report specifically cites one myth about predictive policing: that "the computer will do everything for you"[16]. In reality, police officers need to act on the recommendations provided by the technologies.

    What is Predictive policing?

    Predictive policing is the "application of analytical techniques - particularly quantitative techniques - to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions".[17] It is important to note that the use of data and statistics to inform policing is not new. Indeed, even twenty years ago, before the deluge of big data we have today, law enforcement agencies such as the New York Police Department (NYPD) were already using crime data in a major way. In order to keep track of crime trends, the NYPD used the software CompStat[18] to map "crime statistics along with other indicators of problems, such as the locations of crime victims and gun arrests"[19]. Senior officers used the information provided by CompStat to monitor crime trends on a daily basis, and such monitoring became an instrumental way to track the performance of police agencies[20]. CompStat has since seen application in many other jurisdictions [21].

    But what is new is the amount of data available for collection, as well as the ease with which organisations can analyse and draw insightful results from that data. Specifically, new technologies allow for far more rigorous interrogation of data and wide-ranging applications, including adding greater accuracy to the prediction of future incidence of crime.

    Predictive Policing methods

    Some methods of predictive policing involve the application of known standard statistical methods, while others involve modifying these standard techniques. Predictive techniques that forecast future criminal activity can be framed around six analytic categories. These categories overlap, in the sense that multiple techniques are used together to create actual predictive policing software, and similar theories of criminology undergird many of these methods, but categorising them in this way helps clarify the concept of predictive policing. The basis for the categorisation below comes from a RAND Corporation report entitled 'Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations' [22], which is a comprehensive and detailed contribution to scholarship in this nascent area.

    Hot spot analysis: Methods involving hot spot analysis attempt to "predict areas of increased crime risk based on historical crime data"[23]. The premise behind such methods lies in the adage that "crime tends to be lumpy" [24]. Hot spot analysis seeks to map out previous incidences of crime in order to anticipate where future crime may occur.
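
    As a rough illustration of hot spot mapping, the sketch below estimates a density surface over synthetic incident coordinates and ranks grid cells by that estimated density. Kernel density estimation is used here as one common smoothing choice; the coordinates, grid and parameters are invented, and deployed systems use more sophisticated models.

        # A minimal hot spot sketch over synthetic incident coordinates,
        # using kernel density estimation as the smoothing method.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        # Synthetic historical incidents clustered around two points on a city grid.
        incidents = np.vstack([
            rng.normal(loc=[2.0, 3.0], scale=0.3, size=(150, 2)),
            rng.normal(loc=[7.0, 7.5], scale=0.5, size=(100, 2)),
        ])

        kde = gaussian_kde(incidents.T)             # density estimate over (x, y)

        # Evaluate the density on a coarse grid and rank cells by estimated density.
        xs, ys = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
        grid = np.vstack([xs.ravel(), ys.ravel()])
        density = kde(grid)
        top = np.argsort(density)[-5:][::-1]        # five highest-density cells
        for i in top:
            print(f"hot spot near ({grid[0, i]:.1f}, {grid[1, i]:.1f})")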

    Regression methods: A regression aims to find relationships between independent variables (factors that may influence criminal activity) and a dependent variable that one aims to predict, such as a future crime count. Hence, this method tracks more variables than just crime history.
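
    A minimal regression sketch follows, relating hypothetical area-level variables to a crime count. The variable names, coefficients and data are all synthetic, chosen only to illustrate the method rather than any real deployment.

        # Minimal regression sketch: hypothetical independent variables
        # (unemployment rate, prior-year incidents, streetlight coverage)
        # related to an area's crime count. Data is synthetic.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        n = 200
        unemployment = rng.uniform(2, 15, n)        # per cent
        prior_incidents = rng.poisson(20, n)
        streetlights = rng.uniform(0, 1, n)         # fraction of streets lit

        # A made-up "true" relationship plus noise, just to have something to fit.
        crime_count = (5 + 1.5 * unemployment + 0.8 * prior_incidents
                       - 10 * streetlights + rng.normal(0, 3, n))

        X = np.column_stack([unemployment, prior_incidents, streetlights])
        model = LinearRegression().fit(X, crime_count)
        print("coefficients:", model.coef_)         # influence of each variable
        print("predicted count:", model.predict([[9.0, 25, 0.4]])[0])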

    Data mining techniques: Data mining attempts to recognise patterns in data and use them to make predictions about the future. An important source of variation among the data mining methods used in policing is the type of algorithm used to mine the data, which depends on the nature of the data the predictive model was trained on and will be asked to interrogate in the future. Two broad categories of algorithms commonly used are clustering algorithms and classification algorithms:

    · Clustering algorithms "form a class of data mining approaches that seek to group data into clusters with similar attributes" [25]. One example is spatial clustering algorithms, which use geospatial crime incident data to predict future hot spots for crime[26] (a brief sketch follows this list).

    · Classification algorithms "seek to establish rules assigning a class or label to events"[27]. These algorithms use training data sets "to learn the patterns that determine the class of an observation"[28]. The patterns identified by the algorithm are then applied to future data and, where applicable, the algorithm will recognise similar patterns in that data. This can be used, for example, to make predictions about future criminal activity.
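
    The sketch referred to above is given here: spatial clustering of synthetic incident coordinates, using DBSCAN as one common choice of clustering algorithm. The data, parameters and thresholds are invented for illustration. DBSCAN is used because it does not require the number of clusters to be fixed in advance and labels scattered points as noise, which suits irregular incident data.

        # Minimal spatial clustering sketch: group synthetic incident
        # coordinates into clusters; points labelled -1 are treated as noise.
        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(3)
        incidents = np.vstack([
            rng.normal([1.0, 1.0], 0.1, (60, 2)),   # dense cluster A
            rng.normal([4.0, 4.5], 0.1, (40, 2)),   # dense cluster B
            rng.uniform(0, 5, (30, 2)),             # scattered background incidents
        ])

        labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(incidents)
        for label in sorted(set(labels)):
            if label == -1:
                continue                            # skip noise points
            centre = incidents[labels == label].mean(axis=0)
            print(f"cluster {label}: {np.sum(labels == label)} incidents "
                  f"around ({centre[0]:.2f}, {centre[1]:.2f})")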

    Near-repeat methods: Near-repeat methods work off the assumption that future crimes will take place close in time and location to current crimes. Hence, it could be postulated that areas of high crime will experience more crime in the near future[29]. This involves the use of a 'self-exciting' algorithm, very similar to the algorithms used to model earthquake aftershocks [30]. The premise undergirding such methods is very similar to that of hot spot analysis.
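
    The following toy sketch conveys the near-repeat intuition: each past incident raises the estimated risk nearby, decaying with time and distance, loosely in the spirit of aftershock-style models. The kernel, decay parameters and incident data are hypothetical and not drawn from any real system.

        # Simplified self-exciting score: each past incident contributes a
        # term that decays exponentially with its age and its distance from
        # the location being scored. Parameters and data are hypothetical.
        import numpy as np

        # Past incidents as rows of (x, y, days_ago).
        past = np.array([
            [1.0, 1.0, 1.0],
            [1.2, 0.9, 3.0],
            [4.0, 4.0, 20.0],
        ])

        def near_repeat_score(x, y, incidents, time_scale=7.0, dist_scale=0.5):
            """Sum of exponentially decaying contributions from past incidents."""
            dist = np.hypot(incidents[:, 0] - x, incidents[:, 1] - y)
            age = incidents[:, 2]
            return np.sum(np.exp(-age / time_scale) * np.exp(-dist / dist_scale))

        print(near_repeat_score(1.1, 1.0, past))    # near recent incidents: high
        print(near_repeat_score(4.0, 4.0, past))    # near only an old incident: lower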

    Spatiotemporal analysis: Uses the "environmental and temporal features of the crime location" [31] as the basis for predicting future crime. By combining the spatiotemporal features of the crime area with crime incident data, police could use the resultant information to predict the location and time of future crimes. Examples of factors that may be considered include the timing of crimes, weather, distance from highways, time from payday and many more.

    Risk terrain analysis: Analyses other factors that are useful in predicting crime, such as "the social, physical, and behavioural factors that make certain areas more likely to be affected by crime"[32].

    The various methods listed above are used, often together, to predict where and when a crime may take place, or even its potential victims. The unifying thread which relates these methods is their dependence on historical crime data.

    Examples of predictive policing

    Most uses of predictive policing that have been studied and reviewed in scholarly work come from the USA, though I will also detail one case study from Derbyshire, UK. Below is a collation of deployments that put into practice the methods described above.

    Hot Spot analysis in Sacramento: In February 2011, the Sacramento Police Department began using hot spot analysis, along with research on the optimal patrol time needed to act as a sufficient deterrent, to inform how it patrols high-risk areas. This policy was aimed at preventing serious crimes by patrolling the predicted hot spots. In places where there was such patrolling, serious crime fell by a quarter, with no significant increase in such crimes in surrounding areas[33].

    Data Mining and Hot Spot Mapping in Derbyshire, UK: The Safer Derbyshire Partnership, a group of law enforcement agencies and municipal authorities, sought to identify juvenile crime hotspots[34]. They used MapInfo software to combine "multiple discrete data sets to create detailed maps and visualisations of criminal activity, including temporal and spatial hotspots" [35]. This information informed law enforcement about how to optimally deploy their resources.

    Regression models in Pittsburgh: Researchers used reports from the Pittsburgh Bureau of Police about violent crimes and "leading indicator" [36] crimes, offences that were relatively minor but which could be a sign of potential future violent offences. The researchers analysed areas with violent crimes, treating violent crime as the dependent variable, to test whether violent crimes in certain areas could be predicted from the leading indicator data. Of the 93 significant violent crime areas that were studied, 19 were successfully predicted by the leading indicator data.[37]

    Risk terrain modelling analysis in Morris County, New Jersey: Police in Morris County used risk terrain analysis to tackle violent crimes and burglaries. They considered five inputs in their model: "past burglaries, the address of individuals recently arrested for property crimes, proximity to major highways, the geographic concentration of young men and the location of apartment complexes and hotels." [38] Morris County law enforcement officials linked the significant reductions in violent and property crime to their use of risk terrain modelling[39].

    Near-repeat & hot spot analysis used by the Santa Cruz Police Department: Uses PredPol software that applies Mohler's algorithm [40] to a database with five years' worth of crime data to assess the likelihood of future crime occurring in geographic areas within the city. Before going on shift, officers receive information identifying 15 such areas with the highest probability of crime[41]. The initiative has been cited as being very successful at reducing burglaries, and was also used in Los Angeles and Richmond, Virginia[42].

    Data Mining and Spatiotemporal analysis to predict future criminal activities in Chicago: Officers in the Chicago Police Department made visits to people their software predicted were likely to be involved in violent crimes[43], guided by an algorithm-generated "Heat List"[44]. Some of the inputs used in the predictions include certain types of arrest records, gun ownership, social networks[45] (police analysis of social networking is also a rising trend in predictive policing[46]) and, more generally, the type of people one is acquainted with [47], among others, but the full list of factors is not public. Based on the list, police officers visit people's homes (or letters are sometimes mailed) to offer social services or deliver warnings about the consequences of offending. Based in part on the information provided by the algorithm, officers may provide people on the Heat List with information about vocational training programs or warnings about how Federal law provides harsher punishments for reoffending[48].

    Predictive policing in India

    In this section, I map out some of the developments in the field of predictive policing within India. On the whole, predictive policing is still very new in India, with Jharkhand being the only state that appears to already have concrete plans in place to introduce predictive policing.

    Jharkhand Police

    The Jharkhand police began developing their IT infrastructure, such as a Geographic Information System (GIS) and a server room, when they received funding of Rs. 18.5 crore from the Ministry of Home Affairs[49]. The Open Group on E-governance (OGE), founded as a collaboration between the Jharkhand Police and the National Informatics Centre[50], is now a multi-disciplinary group which takes on different IT-related projects[51]. With regard to predictive policing, some members of OGE began developing data mining software in 2013 to scan digitised online records. The emerging crime trends "can be a building block in the predictive policing project that the state police want to try."[52]

    The Jharkhand Police was also reported in 2012 to be in the final stages of forming a partnership with IIM-Ranchi[53]. It was reported that the Jharkhand police aimed to tap into IIM's advanced business analytics skills [54], skills that can be very useful in a predictive policing context. Mr Pradhan suggested that "predictive policing was based on intelligence-based patrol and rapid response"[55] and that it could go a long way towards dealing with the threat of Naxalism in Jharkhand[56].

    However, in Jharkhand, the emphasis appears to be on developing a massive Domain Awareness system, collecting data and creating new ways to present that data to officers on the ground, rather than on architecting and using predictive policing software. For example, the Jharkhand police now have in place "a Naxal Information System, Crime Criminal Information System (to be integrated with the CCTNS) and a GIS that supplies customised maps that are vital to operations against Maoist groups"[57]. The Jharkhand police's "Crime Analytics Dashboard" [58] shows the incidence of crime by type and location, presents it in an accessible portal, provides up-to-date information and undoubtedly raises the situational awareness of officers. Arguably, the domain awareness systems that are taking shape in Jharkhand would pave the way for predictive policing methods to be applied in the future. These systems and hot spot maps seem to be the start of a new age of policing in Jharkhand.

    Predictive Policing Research

    One promising idea for predictive policing in India comes from research conducted by Lavanya Gupta and others entitled "Predicting Crime Rates for Predictive Policing"[59], which was a submission for the Gandhian Young Technological Innovation Award. The research uses regression modelling to predict future crime rates. Drawing on First Information Reports (FIRs) of violent crimes (murder, rape, kidnapping etc.) from the Chandigarh Police, the team attempted "to extrapolate annual crime rate trends developed through time series models. This approach also involves correlating past crime trends with factors that will influence the future scope of crime, in particular demographic and macro-economic variables" [60]. The researchers used early crime data as the training data for their model, which, after some testing, eventually reached an accuracy of around 88.2%.[61] On the face of it, ideas like this could be the starting point for the introduction of predictive policing into India.
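
    In the same spirit, though this is not the researchers' actual model, the sketch below fits a simple linear trend to invented annual crime counts and extrapolates it forward. A production model would use richer time series methods and additional demographic and macro-economic variables, as the quoted description suggests.

        # Toy extrapolation of an annual crime-rate trend. The figures below
        # are invented for illustration and do not come from any real dataset.
        import numpy as np

        years = np.arange(2005, 2015)
        # Hypothetical annual counts of one violent-crime category in one city.
        counts = np.array([410, 432, 455, 470, 498, 505, 530, 548, 571, 590])

        # Fit a simple linear trend and extrapolate two years ahead.
        slope, intercept = np.polyfit(years, counts, deg=1)
        for year in (2015, 2016):
            print(year, int(round(slope * year + intercept)))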

    The rest of India's law enforcement bodies do not appear to be lagging behind. At the 44th All India Police Science Congress, held in Gandhinagar, Gujarat in March this year, one of the themes for discussion was the "Role of Preventive Forensics and latest developments in Voice Identification, Tele-forensics and Cyber Forensics"[62]. Mr A K Singh (Additional Director General of Police, Administration), the chairman of the event, also said in an interview that a round table of DGs (Directors General of Police) would be held at the conference to discuss predictive policing[63]. Perhaps predictive policing in India may not be that far away from reality.

    CCTNS and the building blocks of Predictive policing

    The Ministry of Home Affairs conceived of the Crime and Criminal Tracking Network and Systems (CCTNS) as part of national e-Governance plans. According to the website of the National Crime Records Bureau (NCRB), CCTNS aims to develop "a nationwide networked infrastructure for evolution of IT-enabled state-of-the-art tracking system around 'investigation of crime and detection of criminals' in real time" [64].

    Plans for predictive policing seem to be in the works, but the first steps needed across India's police forces involve digitizing data collection by the police, as well as connecting law enforcement agencies. The NCRB's website describes the current possibility of exchanging information between neighbouring police stations, districts or states as "next to impossible"[65]. The aim of CCTNS is precisely to address this gap and to integrate and connect the segregated law enforcement arms of the state in India, which would be a foundational step in any initiative to apply predictive methods.

    What are the implications of using predictive policing? Lessons from the USA

    Despite the moves by law enforcement agencies to adopt predictive policing, the implications of predictive policing methods are far from clear. This section examines these implications for the administration of justice and the use of predictive evidence in law, as well as how predictive policing affects individual privacy. It frames the existing debates surrounding these issues and aims to apply these principles to an Indian context.

    Justice, Privacy & IV Amendment

    Two key concerns about how predictive policing methods may be used by law enforcement relate to how insights from predictive policing methods are acted upon and how courts interpret them. In the USA, this issue may find its place under the scope of IV Amendment jurisprudence. The IV Amendment provides that all citizens are "secure from unreasonable searches and seizures of property by the government"[66]. In this sense, the IV Amendment forms the basis for search and surveillance law in the USA.

    A central aspect of IV Amendment jurisprudence is drawn from Katz v. United States. In Katz, the FBI attached a microphone to the outside of a public phone booth to record the conversations of Charles Katz, who was making phone calls related to illegal gambling. The court ruled that such actions constituted a search within the scope of the Fourth Amendment. The ruling affirmed constitutional protection of all areas where someone has a "reasonable expectation of privacy"[67].

    Later cases have provided useful tests for situations where government surveillance tactics may or may not be lawful, depending on whether they violate one's reasonable expectation of privacy. For example, in United States v. Knotts, the court held that "police use of an electronic beeper to follow a suspect surreptitiously did not constitute a Fourth Amendment search"[68]. In fact, some argue that the Supreme Court's reasoning in such cases suggests that "any 'scientific enhancement' of the senses used by the police to watch activity falls outside of the Fourth Amendment's protections if the activity takes place in public"[69]. This reasoning is based on the third party doctrine, which holds that "if you voluntarily provide information to a third party, the IV Amendment does not preclude the government from accessing it without a warrant"[70]. The clearest exposition of this reasoning was in Smith v. Maryland, where the presiding judges noted that "this Court consistently has held that a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties"[71].

    However, the third party doctrine has seen some challenge in recent times. In United States v. Jones, it was ruled that the government's warrantless GPS tracking of the defendant's vehicle 24 hours a day for 28 days violated his Fourth Amendment rights[72]. Though the majority ruling was that the warrantless GPS tracking constituted a search, it was in a concurring opinion written by Justice Sonia Sotomayor that such intrusive warrantless surveillance was said to infringe one's reasonable expectation of privacy. As Newell reflected on Sotomayor's opinion,

    "Justice Sotomayor stated that the time had come for Fourth Amendment jurisprudence to discard the premise that legitimate expectations of privacy could only be found in situations of near or complete secrecy. Sotomayor argued that people should be able to maintain reasonable expectations of privacy in some information voluntarily disclosed to third parties"[73].

    She said that the court's current reasoning on what constitutes reasonable expectations of privacy in information disclosed to third parties, such as email or phone records or even purchase histories, is "ill-suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks"[74].

    Predictive policing vs. Mass surveillance and Domain Awareness Systems

    However, there is an important distinction to be drawn between these cases and evidence from predictive policing. This has to do with the different nature of the evidence collection involved. Arguably, what we see from Jones and similar cases is that the use of mass surveillance and domain awareness systems (drawing on Joh's categorisation of domain awareness systems as distinct from predictive policing, mentioned above) could potentially encroach on one's reasonable expectation of privacy. However, I think that predictive policing, and the possible implications for justice associated with it, its predictive harms, are quite distinct from what has been heard by courts thus far.

    The reason predictive harms carry risks distinct from the privacy harms of information gathering lies in the nature of predictive policing technologies and how they are used. It is highly unlikely that the evidence submitted by the State to indict an offender will be mainly predictive in nature. For example, would it be possible to convict an accused person solely on the premise that he was predicted to be highly likely to commit a crime, and that subsequently he did? The legal standard of proving guilt beyond a reasonable doubt [75] can hardly be met solely on predictive evidence, for a multitude of reasons. Predictive policing methods could, at most, be said to inform police about the risk of someone committing a crime or of crime happening at a certain location, as demonstrated above.

    Predictive policing and Criminal Procedure

    It may therefore pay to analyse how predictive policing may be used across the various processes within the criminal justice system. In an analysis of the various stages of criminal procedure, from opening an investigation to gathering evidence, followed by arrest, trial, conviction and sentencing, we see that as the individual becomes subject to more serious incursions or sanctions by the state, a higher standard of certainty about wrongdoing and a higher burden of proof are required to legitimise that particular action.

    Hence, at more advanced stages of the criminal justice process, such as seeking arrest warrants or trial, it is very unlikely that predictive policing on its own can have a tangible impact, because predictive evidence is probability based. It aims to calculate the risk of future crime occurring based on statistical analysis of past crime data[76]. While extremely useful, probabilities on their own will not come remotely close to meeting the legal standard of proving 'guilt beyond reasonable doubt'. It may be at the earlier stages of the criminal justice process that predictive policing evidence sees more widespread application, for instance in applying for search warrants and in searching suspicious persons while on patrol.

    In fact, in the law enforcement context, prediction as a concept is not new to justice. Both courts and law enforcement officials already make predictions about the future likelihood of crimes. In the case of issuing warrants, the IV Amendment requires law enforcement officials to show that the potential search is based "upon probable cause"[77] in order for a judge to grant a warrant. In Brinegar v. United States, probable cause was defined as existing "where the facts and circumstances within the officers' knowledge, and of which they have reasonably trustworthy information, are sufficient in themselves to warrant a belief by a man of reasonable caution that a crime is being committed" [78]. Again, this legal standard seems too high for predictive evidence alone to meet.

    However, the police also have an important role to play in preventing crimes by looking out for potential crimes while on patrol or while conducting surveillance. When the police stop a civilian on the road to search him, reasonable suspicion must be established. This standard of reasonable suspicion was defined most clearly in Terry v. Ohio, which required police to "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion"[79]. Therefore, "reasonable suspicion that 'criminal activity may be afoot' is at base a prediction that the facts and circumstances warrant the reasonable prediction that a crime is occurring or will occur"[80]. Despite the assertion that "there are as of yet no reported cases on predictive policing in the Fourth Amendment context"[81], examining the impact of predictive policing on the doctrine of reasonable suspicion could be very instructive in understanding the implications for justice and privacy [82].

    Predictive Policing and Reasonable Suspicion

    Ferguson's insightful contribution to this area of scholarship involves identifying existing areas where prediction already takes place in policing and analogising them to a predictive policing context[83]. These three areas are: responding to tips, profiling, and high crime areas (hot spots).

    Tips

    Tips are pieces of information shared with the police by members of the public. Often tips, either anonymous or from known police informants, may predict future actions of certain people and require the police to act on this information. The precedent for understanding the role of tips in probable cause comes from Illinois v. Gates[84]. It was held that "an informant's 'veracity,' 'reliability,' and 'basis of knowledge'" remain "'highly relevant in determining the value'"[85] of the tip. Anonymous tips need to be detailed, timely and individualised enough[86] to justify reasonable suspicion [87]. And when the informant is known to be reliable, his prior reliability may justify reasonable suspicion despite lacking a basis in knowledge[88].

    Ferguson argues that whereas predictive policing cannot provide individualised tips, it is possible to consider reliable tips about certain areas as a parallel to predictive policing[89]. And since the courts have shown a preference for reliability even in the face of a weak basis in knowledge, it is possible to see the reasonable suspicion standard change in its application[90]. It also implies that IV Amendment protections may be different in places where crime is predicted to occur [91].

    Profiling

    Despite the negative connotations and controversial overtones at the mere sound of the word, profiling is already a method commonly used by law enforcement. For example, after a crime has been committed and general features of the suspect have been identified by witnesses, police often stop civilians who fit this description. Another example of profiling is common in combating drug trafficking[92], where agents keep track of travellers at airports to watch for suspicious behaviour. Based on their experience of common traits which distinguish drug traffickers from regular travellers (a profile), agents may search travellers who fit the profile[93]. In United States v. Sokolow[94], the court "recognized that a drug courier profile is not an irrelevant or inappropriate consideration that, taken in the totality of circumstances, can be considered in a reasonable suspicion determination" [95]. Similar lines of thinking could be employed in observing people exchanging small amounts of money in an area known for high levels of drug activity, conceiving predictive actions as a form of profile[96].

    It is valid to consider predictive policing as a form of profiling[97], but Ferguson argues that the predictive policing context means this 'new form' of profiling could change IV Amendment analysis. The premise behind such an argument is that a prediction made by an algorithm about a potential high risk of crime in a certain area could be taken in conjunction with observations of ordinarily innocuous events. Read in the totality of circumstances, these two threads may justify individualised reasonable suspicion [98]. For example, a man looking into cars at a parking lot may not by itself justify reasonable suspicion, but taken together with a prediction of a high risk of car theft at that locality, it may well do so. It is this impact of predictive policing, influencing the analysis of reasonable suspicion within the totality of circumstances, that may present new implications for courts looking at IV Amendment protections.

    Profiling, Predictive Policing and Discrimination

    The above sections have already made the point that law enforcement agencies utilise profiling methods in their operations. Also, as the sections on how predictive analytics works and on methods of predictive policing make clear, predictive policing definitely incorporates the development of profiles for predicting future criminal activity. Concerns that predictive models may generate discriminatory predictions are therefore very serious, and need addressing. Potential discrimination may be either overt, though that is far less likely, or unintended. A valuable case study that sheds light on such discriminatory data mining practices can be found in US labour law. It was shown how predictive models could be discriminatory at various stages, from conceptualising the model and training it with training data, to eventually selecting inappropriate features to search for [99]. It is also possible for data scientists to (intentionally or not) use proxies for identifiers like race, income level, health condition and religion. Barocas and Selbst argue that "the current distribution of relevant attributes-attributes that can and should be taken into consideration in apportioning opportunities fairly-are demonstrably correlated with sensitive attributes" [100]. Hence, what may result is unintended discrimination, as the subjective and implicit biases embedded in predictive models are reflected in their predictions, or the discrimination may simply not be accounted for in the first place. While I have not found any case law where courts have examined such situations in a criminal context, at the very least, law enforcement agencies need to be aware of these possibilities and guard against any form of discriminatory profiling.

    However, Ferguson argues that "the precision of the technology may in fact provide more protection for citizens in broadly defined high crime areas" [101]. This is because the label of a 'high-crime area' may no longer apply to large areas but instead to very specific sites of criminal activity. This implies that previously defined areas of high crime, like entire neighbourhoods, may not be scrutinised in such detail. Instead, police may now be more precise in locating and policing areas of high crime, such as an individual street corner or a particular block of flats instead of an entire locality.

    Hot Spots

    Courts have also considered the existence of notoriously 'high-crime areas' as part of considering reasonable suspicion[102]. This was seen in Illinois v. Wardlow [103], where the "high crime nature of an area can be considered in evaluating the officer's objective suspicion"[104]. Many cases have since applied this reasoning without scrutinising the predictive value of such a label. In fact, Ferguson asserts that such labelling has questionable evidential value[105]. He uses the facts of the Wardlow case itself to challenge the 'high crime area' factor, citing the reasoning of one of the judges in the case:

    "While the area in question-Chicago's District 11-was a low-income area known for violent crimes, how that information factored into a predictive judgment about a man holding a bag in the afternoon is not immediately clear."[106]

    Especially because "the most basic models of predictive policing rely on past crimes"[107], it is likely that predictive policing methods like hot spot analysis, spatiotemporal analysis and risk terrain modelling may help to gather data or build data models about high crime areas. Furthermore, the mathematical rigour of the predictive modelling could help clarify the term 'high crime area'. As Ferguson argues, "courts may no longer need to rely on the generalized high crime area terminology when more particularized and more relevant information is available" [108].

    Summary

    Ferguson synthesises four themes which encapsulate reasonable suspicion analysis:

    1. Predictive information is not enough on its own. Instead, it is "considered relevant to the totality of circumstances, but must be corroborated by direct police observation"[109].
    2. The prediction must also "be particularized to a person, a profile, or a place, in a way that directly connects the suspected crime to the suspected person, profile, or place"[110].
    3. It must also be detailed enough to distinguish a person or place from others not the focus of the prediction [111].
    4. Finally, predicted information becomes less valuable over time. Hence it must be acted on quickly or be lost [112].

    Conclusions from America

    The main conclusion to draw from the analysis of the parallels between existing predictions in IV Amendment law and predictive policing is that "predictive policing will impact the reasonable suspicion calculus by becoming a factor within the totality of circumstances test"[113]. Naturally, this reaffirms the imperative for predictive techniques to collect reliable data [114] and analyse it transparently[115]. Moreover, in order for courts to evaluate the reliability of the data and the processes used (since predictive methods become part of the reasonable suspicion calculus), courts need to be able to analyse the predictive process. This has implications for how hearings may be conducted, for how legal adjudicators may require training, and more. Another important concern is that the model of predictive information plus police corroboration or direct observation[116] may mean that, in areas predicted to have a low risk of crime, the reasonable suspicion doctrine works against law enforcement; less effort may be devoted to patrolling these areas as a result of the predictions.

    Implications for India

    While there have been no Indian cases directly involving predictive policing methods, it would be prudent to examine the parts of Indian law which would inform the calculus on the lawfulness of using predictive policing methods. A useful lens through which to examine this is the observation that prediction is not in itself a novel concept in justice, and is already used by courts and law enforcement in numerous circumstances.

    Criminal Procedure in Non-Warrant Contexts

    The most logical way to begin analysing the legal implications of predictive policing in India is probably to identify parallels between American and Indian criminal procedure, specifically searching for instances where 'reasonable suspicion' or some analogous requirement exists for justifying police searches.

    In non-warrant scenarios, we find the conditions for officers to conduct a warrantless search in Section 165 of the Criminal Procedure Code (Cr PC). For clarity, I have stated Section 165(1) in full:

    "Whenever an officer in charge of a police station or a police officer making an investigation has reasonable grounds for believing that anything necessary for the purposes of an investigation into any offence which he is authorised to investigate may be found in any place with the limits of the police station of which he is in charge, or to which he is attached, and that such thing cannot in his opinion be otherwise obtained without undue delay, such officer may, after recording in writing the grounds of his belief and specifying in such writing, so far as possible, the thing for which search is to be made, search, or cause search to be made, for such thing in any place within the limits of such station." [117]

    However, India differs from the USA in that its Cr PC also allows the police to arrest individuals without a warrant. As observed in Gulab Chand Upadhyaya vs State Of U.P, "Section 41 Cr PC gives the power to the police to arrest without warrant in cognizable offences, in cases enumerated in that Section. One such case is of receipt of a 'reasonable complaint' or 'credible information' or 'reasonable suspicion'" [118]. As above, I have stated Section 41(1), subsection (a), in full:

    "41. When police may arrest without warrant.

    (1) Any police officer may without an order from a Magistrate and without a warrant, arrest any person-

    (a) who has been concerned in any cognizable offence, or against whom a reasonable complaint has been made, or credible information has been received, or a reasonable suspicion exists, of his having been so concerned"[119]

    In analysing the above sections of Indian criminal procedure from a predictive policing angle, one may find both similarities and differences between the proposed American approach and possible Indian approaches to interpreting or incorporating predictive policing evidence.

    Similarity of 'reasonable suspicion' requirement

    For one, the requirement of "reasonable grounds" or "reasonable suspicion" seems analogous to the American doctrine of reasonable suspicion. This suggests that the concepts used in forming reasonable suspicion, namely that the police should "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion"[120], may also be useful in the Indian context.

    One case which sheds light on an Indian interpretation of reasonable suspicion or grounds is State of Punjab v. Balbir Singh[121]. In that case, the court observes a requirement for "reason to believe that such an offence under Chapter IV has been committed and, therefore, an arrest or search was necessary as contemplated under these provisions"[122] in the context of Sections 41 and 42 of the Narcotic Drugs and Psychotropic Substances Act, 1985[123]. In examining the requirement of having "reason to believe", the court draws on Partap Singh (Dr) v. Director of Enforcement, Foreign Exchange Regulation Act[124], where the judge observed that "the expression 'reason to believe' is not synonymous with subjective satisfaction of the officer. The belief must be held in good faith; it cannot be merely a pretence….."[125]

    In light of this, the judge in Balbir Singh remarked that "whether there was such reason to believe and whether the officer empowered acted in a bona fide manner, depends upon the facts and circumstances of the case and will have a bearing in appreciation of the evidence" [126]. The standard considered by the courts in Balbir Singh and Partap Singh is different from the 'reasonable suspicion' or 'reasonable grounds' standard in Sections 41 and 165 of the Cr PC, but I think the discussion helps inform our analysis of the idea of reasonableness in law enforcement actions. Of importance was the courts' requirement of something more than mere "pretence", as well as of a belief held in good faith. This suggests that the reasoning in American jurisprudence about reasonable suspicion might be at least somewhat similar to how Indian courts would view reasonable suspicion or reasonable grounds in the context of predictive policing, and we could therefore conjecture that predictive evidence could form part of the reasonable suspicion calculus in India as well.

    Difference in judicial treatment of illegally obtained evidence - Indian lack of exclusionary rules

    However, the apparent similarity between how police in America and India may act in non-warrant situations, guided by the idea of reasonable suspicion, runs no deeper than these linguistic parallels. Despite the existence of such conditions governing searches without a warrant, I believe that Indian courts currently provide far less protection against unlawful use of predictive technologies. The main premise behind this argument is that Indian courts refuse to exclude evidence obtained in breach of the conditions of the Cr PC. What exists in place of evidentiary safeguards is a line of cases in which courts routinely admit unlawfully or illegally obtained evidence. Without protections against unlawfully gathered evidence being considered relevant by courts, any regulations on search, or conditions to be met before a search is lawful, become ineffective. Evidence may simply enter the courtroom through a backdoor.

    In the USA, this is, by and large, not the case. Although there are exceptions, exclusionary rules are set out to prevent the admission of evidence obtained in violation of the Constitution[127]. "The exclusionary rule applies to evidence gained from an unreasonable search or seizure in violation of the Fourth Amendment"[128]. Mapp v. Ohio [129] set the precedent for excluding unconstitutionally gathered evidence; the court ruled that "all evidence obtained by searches and seizures in violation of the Federal Constitution is inadmissible in a criminal trial in a state court" [130].

    Any such evidence which then leads law enforcement to collect new information may also be excluded, as part of the "fruit of the poisonous tree" doctrine[131], established in Silverthorne Lumber Co. v. United States [132]. The doctrine is a metaphor which suggests that if the source of certain evidence is tainted, so is the 'fruit', or anything derived from that unconstitutional evidence. One such application was in Beck v. Ohio[133], where the court overturned the petitioner's conviction because the evidence used to convict him was obtained via an unlawful arrest.

    However, in India's context, there is very little protection against the admission and use of unlawfully gathered evidence. In fact, there is a line of cases, both cases that deal specifically with the rules of the Indian Cr PC and cases from other contexts, which lays out the extent of consideration given to unlawfully gathered evidence and which follows and develops this reasoning of allowing illegally obtained evidence.

    One case to pay attention to is State of Maharashtra v. Natwarlal Damodardas Soni. In this case, the Anti-Corruption Bureau searched the house of the accused after receiving certain information as a tip. The police "had powers under the Code of Criminal Procedure to search and seize this gold if they had reason to believe that a cognizable offence had been committed in respect thereof"[134]. Justice Sarkaria, in delivering his judgement, observed that, for argument's sake, even if the search was illegal, "then also, it will not affect the validity of the seizure and further investigation"[135]. The judge drew reasoning from Radhakishan v. State of U.P[136], a case involving a postman from whose house certain undelivered postal items were recovered. As the judge in Radhakishan noted:

    "So far as the alleged illegality of the search is concerned, it is sufficient to say that even assuming that the search was illegal the seizure of the articles is not vitiated. It may be that where the provisions of Sections 103 and 165 of the Code of Criminal Procedure, are contravened the search could be resisted by the person whose premises are sought to be searched. It may also be that because of the illegality of the search the court may be inclined to examine carefully the evidence regarding the seizure. But beyond these two consequences no further consequence ensues." [137]

    Shyam Lal Sharma v. State of M.P.[138] was also drawn upon, where it was held that "even if the search is illegal being in contravention with the requirements of Section 165 of the Criminal Procedure Code, 1898, that provision ceases to have any application to the subsequent steps in the investigation"[139].

    Even in Gulab Chand Upadhyay, mentioned above, the presiding judge contended that even "if arrest is made, it does not require any, much less strong, reasons to be recorded or reported by the police. Thus so long as the information or suspicion of cognizable offence is "reasonable" or "credible", the police officer is not accountable for the discretion of arresting or no arresting"[140].

    A more complete articulation of the receptiveness of Indian courts to admit illegally gathered evidence can be seen in the aforementioned Balbir Singh. The judgement aimed to:

    "dispose of one of the contentions that failure to comply with the provisions of Cr PC in respect of search and seizure even up to that stage would also vitiate the trial. This aspect has been considered in a number of cases and it has been held that the violation of the provisions particularly that of Sections 100, 102, 103 or 165 Cr PC strictly per se does not vitiate the prosecution case. If there is such violation, what the courts have to see is whether any prejudice was caused to the accused and in appreciating the evidence and other relevant factors, the courts should bear in mind that there was such a violation and from that point of view evaluate the evidence on record."[141]

    The judges then consulted a series of authorities on the failure to comply with provisions of the Cr PC:

    1. State of Punjab v. Wassan Singh[142]: "irregularity in a search cannot vitiate the seizure of the articles"[143].
    2. Sunder Singh v. State of U.P[144]: "irregularity cannot vitiate the trial unless the accused has been prejudiced by the defect and it is also held that if reliable local witnesses are not available the search would not be vitiated."[145]
    3. Matajog Dobey v.H.C. Bhari[146]: "when the salutory provisions have not been complied with, it may, however, affect the weight of the evidence in support of the search or may furnish a reason for disbelieving the evidence produced by the prosecution unless the prosecution properly explains such circumstance which made it impossible for it to comply with these provisions."[147]
    4. R v. Sang[148]: "reiterated the same principle that if evidence was admissible it matters not how it was obtained."[149] Lord Diplock, one of the Lords adjudicating the case, observed that "however much the judge may dislike the way in which a particular piece of evidence was obtained before proceedings were commenced, if it is admissible evidence probative of the accused's guilt "it is no part of his judicial function to exclude it for this reason". [150] As the judge in Balbir Singh quoted from Lord Diplock, a judge "has no discretion to refuse to admit relevant admissible evidence on the ground that it was obtained by improper or unfair means. The court is not concerned with how it was obtained."[151]

    The body of case law presented above gives observers a clear picture of the courts' willingness to admit and consider illegally obtained evidence. This lack of safeguards against the admission of unlawfully gathered evidence is important from the standpoint of preventing the excessive or unlawful use of predictive policing methods. The affronts to justice and privacy, as well as the risks of profiling, become magnified when law enforcement uses predictive methods not just to augment its policing techniques but to replace some of them. The efficacy and expediency offered by predictive policing need to be balanced against the competing interest of ensuring the rule of law and due process. In the Indian context, it seems courts rarely consider this competing interest.

    Naturally, weighing which approach is better depends on a multitude of criteria, such as context, practicality and societal norms. It also draws on existing debates in administrative law about the role of courts, which may emphasise protecting individuals and preventing excessive state power (red light theory) or emphasise efficiency in the governing process, with courts assisting the state to achieve policy objectives (green light theory) [152].

    A practical response may be that India should aim to embrace both elements and balance them appropriately, although what counts as an appropriate balance may again vary. There are some who claim that this balance already exists in India. Evidence for such a claim may come from R.M. Malkani v. State of Maharashtra[153], where the court considered whether an illegally tape-recorded conversation could be admissible. In its reasoning, the court drew from Kuruma, Son of Kanju v. R. [154], noting that

    " if evidence was admissible it matters not how it was obtained. There is of course always a word of caution. It is that the Judge has a discretion to disallow evidence in a criminal case if the strict rules of admissibility would operate unfairly against the accused. That caution is the golden rule in criminal jurisprudence"[155].

    While this discretion exists in India at least in principle, in practice the cases presented above show that judges rarely exercise it to bar the admission of illegally obtained evidence, or of evidence obtained in a manner that infringed the provisions governing search or arrest in the Cr PC. The concern, then, is that the safeguards needed to keep law enforcement practices, including predictive policing techniques, in check would be better served by a greater focus on reconsidering the admissibility of unlawfully gathered evidence. Otherwise, evidence which should be inadmissible may find its way into consideration through existing legal backdoors.

    Risk of discriminatory predictive analysis

    Regarding the risk of discriminatory profiling, Article 15 of India's Constitution[156] states that "the State shall not discriminate against any citizen on grounds only of religion, race, caste, sex, place of birth or any of them"[157]. The existence of constitutional protection against such forms of discrimination suggests that India should be able to guard against discriminatory predictive policing. However, as mentioned before, predictive analytics often discriminates institutionally, "whereby unconscious implicit biases and inertia within society's institutions account for a large part of the disparate effects observed, rather than intentional choices"[158]. As in most jurisdictions, preventing these forms of discrimination is much harder. Especially in a jurisdiction whose courts are already receptive to admitting illegally obtained evidence, the risk of discriminatory data mining or prejudiced algorithms being used by police becomes magnified. Because the discrimination may be unintentional, it may be even harder for evidence from discriminatory predictive methods to be scrutinised or, where appropriate, excluded by the courts.
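    The following minimal sketch illustrates how such institutional bias can enter a "data-driven" model. The ward names, arrest counts and offence rates are invented for illustration only, and the scoring function is a deliberately naive stand-in for a real predictive system, not a description of any deployed tool.

        # Minimal illustration (hypothetical data): a "hot spot" score trained on
        # historical arrest records reproduces past over-policing rather than
        # measuring underlying offence rates.

        historical_arrests = {          # arrests recorded per 1,000 residents
            "Ward A": 40,               # heavily patrolled in the past
            "Ward B": 12,
            "Ward C": 11,
        }
        underlying_offence_rate = {     # what an unbiased measure might show
            "Ward A": 14,
            "Ward B": 13,
            "Ward C": 12,
        }

        def hotspot_score(arrest_counts):
            """Rank wards by past arrests: the naive 'predictive' model."""
            total = sum(arrest_counts.values())
            return {ward: round(count / total, 2) for ward, count in arrest_counts.items()}

        print(hotspot_score(historical_arrests))
        # Ward A dominates the scores, so it attracts still more patrols, which in
        # turn generate more arrests there: a feedback loop that looks data-driven
        # but simply encodes the institutional bias already present in the records.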

    Conclusion for India

    One thing which is eminently clear from the analysis of possible interpretations of predictive evidence is that Indian courts have had no experience with predictive policing cases, because the technology itself is still at a nascent stage. There is a long way to go before predictive policing is used in India on a scale comparable to that of the United States, for example.

    But even in places where predictive policing is used much more prominently, there is no precedent from which to observe how courts will view it. Ferguson's method of locating analogous situations which courts have already considered is one notable approach, but even this does not provide a complete answer. His main conclusion, that predictive policing will affect the reasonable suspicion calculus or, in India's case, contribute in some ways to 'reasonable grounds', is perhaps the most persuasive.

    However, what provides more cause for concern in India's context is the limited protection against the use of unlawfully gathered evidence. The lack of 'exclusionary rules' like those present in the US amplifies the various risks of predictive policing, because individuals have few means of redress where predictive policing is used unjustly against them.

    Yet the promise of predictive policing remains undeniably attractive for India. The successes predictive policing methods appear to have had in the US and UK, coupled with the more efficient allocation of law enforcement resources that follows from adopting them, evidence this point. The government recognises this and seems to be laying the foundation and basic digital infrastructure required to use predictive policing optimally. One ought also to ask whether it is even within the courts' purview to decide which policing methods are permissible by evaluating the nature of the evidence they produce. There is a case to be made for the legislative arm of the state to provide direction on how predictive policing is to be used in India. Perhaps the law must also evolve with changes in technology, especially if courts are to scrutinise the predictive policing methods themselves.


    [1] Joh, Elizabeth E. "Policing by Numbers: Big Data and the Fourth Amendment." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, February 1, 2014. http://papers.ssrn.com/abstract=2403028.

    [2] Tene, Omer, and Jules Polonetsky. "Big Data for All: Privacy and User Control in the Age of Analytics." Northwestern Journal of Technology and Intellectual Property 11, no. 5 (April 17, 2013): 239.

    [3] Datta, Rajbir Singh. "Predictive Analytics: The Use and Constitutionality of Technology in Combating Homegrown Terrorist Threats." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 1, 2013. http://papers.ssrn.com/abstract=2320160.

    [4] Johnson, Jeffrey Alan. "Ethics of Data Mining and Predictive Analytics in Higher Education." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 8, 2013. http://papers.ssrn.com/abstract=2156058.

    [5] Ibid.

    [6] Duhigg, Charles. "How Companies Learn Your Secrets." The New York Times, February 16, 2012. http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html.

    [7] Ibid.

    [8] Lijaya, A, M Pranav, P B Sarath Babu, and V R Nithin. "Predicting Movie Success Based on IMDB Data." International Journal of Data Mining Techniques and Applications 3 (June 2014): 365-68.

    [9] Johnson, Jeffrey Alan. "Ethics of Data Mining and Predictive Analytics in Higher Education." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 8, 2013. http://papers.ssrn.com/abstract=2156058.

    [10] Sangvinatsos, Antonios A. "Explanatory and Predictive Analysis of Corporate Bond Indices Returns." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, June 1, 2005. http://papers.ssrn.com/abstract=891641.

    [11] Barocas, Solon, and Andrew D. Selbst. "Big Data's Disparate Impact." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, February 13, 2015. http://papers.ssrn.com/abstract=2477899.

    [12] Joh, supra note 1.

    [13] US Environmental Protection Agency. "How We Use Data in the Mid-Atlantic Region." US EPA. Accessed November 6, 2015. http://archive.epa.gov/reg3esd1/data/web/html/.

    [14] See here for details of blackroom.

    [15] Joh, supra note 1, at pg 48.

    [16] Perry, Walter L., Brian McInnis, Carter C. Price, Susan Smith and John S. Hollywood. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Santa Monica, CA: RAND Corporation, 2013. http://www.rand.org/pubs/research_reports/RR233. Also available in print form.

    [17] Ibid, at pg 2.

    [18] Chan, Sewell. "Why Did Crime Fall in New York City?" City Room. Accessed November 6, 2015. http://cityroom.blogs.nytimes.com/2007/08/13/why-did-crime-fall-in-new-york-city/.

    [19] Bureau of Justice Assistance. "COMPSTAT: ITS ORIGINS, EVOLUTION, AND FUTURE IN LAW ENFORCEMENT AGENCIES," 2013. http://www.policeforum.org/assets/docs/Free_Online_Documents/Compstat/compstat%20-%20its%20origins%20evolution%20and%20future%20in%20law%20enforcement%20agencies%202013.pdf.

    [20] 1996 internal NYPD article "Managing for Results: Building a Police Organization that Dramatically Reduces Crime, Disorder, and Fear."

    [21] Bratton, William. "Crime by the Numbers." The New York Times, February 17, 2010. http://www.nytimes.com/2010/02/17/opinion/17bratton.html.

    [22] RAND CORP, supra note 16.

    [23] RAND CORP, supra note 16, at pg 19.

    [24] Joh, supra note 1, at pg 44.

    [25] RAND CORP, supra note 16, pg 38.

    [26] Ibid.

    [27] RAND CORP, supra note 16, at pg 39.

    [28] Ibid.

    [29] RAND CORP, supra note 16, at pg 41.

    [30] Data-Smart City Solutions. "Dr. George Mohler: Mathematician and Crime Fighter." Data-Smart City Solutions, May 8, 2013. http://datasmart.ash.harvard.edu/news/article/dr.-george-mohler-mathematician-and-crime-fighter-166.

    [31] RAND CORP, supra note 16, at pg 44.

    [32] Joh, supra note 1, at pg 45.

    [33] Ouellette, Danielle. "Dispatch - A Hot Spots Experiment: Sacramento Police Department," June 2012. http://cops.usdoj.gov/html/dispatch/06-2012/hot-spots-and-sacramento-pd.asp.

    [34] Pitney Bowes Business Insight. "The Safer Derbyshire Partnership." Derbyshire, 2013. http://www.mapinfo.com/wp-content/uploads/2013/05/safer-derbyshire-casestudy.pdf.

    [35] Ibid.

    [36] Daniel B Neill, Wilpen L. Gorr. "Detecting and Preventing Emerging Epidemics of Crime," 2007.

    [37] RAND CORP, supra note 16, at pg 33.

    [38] Joh, supra note 1, at pg 46.

    [39] Paul, Jeffery S, and Thomas M. Joiner. "Integration of Centralized Intelligence with Geographic Information Systems: A Countywide Initiative." Geography and Public Safety 3, no. 1 (October 2011): 5-7.

    [40] Mohler, supra note 30.

    [41] Ibid.

    [42] Bennett Moses, Lyria, and Janet Chan. "Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 2014. http://papers.ssrn.com/abstract=2513564.

    [43] Gorner, Jeremy. "Chicago Police Use Heat List as Strategy to Prevent Violence." Chicago Tribune. August 21, 2013. http://articles.chicagotribune.com/2013-08-21/news/ct-met-heat-list-20130821_1_chicago-police-commander-andrew-papachristos-heat-list.

    [44] Stroud, Matt. "The Minority Report: Chicago's New Police Computer Predicts Crimes, but Is It Racist?" The Verge. Accessed November 13, 2015. http://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist.

    [45] Moser, Whet. "The Small Social Networks at the Heart of Chicago Violence." Chicago Magazine, December 9, 2013. http://www.chicagomag.com/city-life/December-2013/The-Small-Social-Networks-at-the-Heart-of-Chicago-Violence/.

    [46] Lester, Aaron. "Police Clicking into Crimes Using New Software." Boston Globe, March 18, 2013. https://www.bostonglobe.com/business/2013/03/17/police-intelligence-one-click-away/DzzDbrwdiNkjNMA1159ybM/story.html.

    [47] Stanley, Jay. "Chicago Police 'Heat List' Renews Old Fears About Government Flagging and Tagging." American Civil Liberties Union, February 25, 2014. https://www.aclu.org/blog/chicago-police-heat-list-renews-old-fears-about-government-flagging-and-tagging.

    [48] Rieke, Aaron, David Robinson, and Harlan Yu. "Civil Rights, Big Data, and Our Algorithmic Future," September 2014. https://bigdata.fairness.io/wp-content/uploads/2015/04/2015-04-20-Civil-Rights-Big-Data-and-Our-Algorithmic-Future-v1.2.pdf.

    [49] Edmond, Deepu Sebastian. "Jhakhand's Digital Leap." Indian Express, September 15, 2013. http://www.jhpolice.gov.in/news/jhakhands-digital-leap-indian-express-15092013-18219-1379316969.

    [50] Jharkhand Police. "Jharkhand Police IT Vision 2020 - Effective Shared Open E-Governance." 2012. http://jhpolice.gov.in/vision2020. See slide 2

    [51] Edmond, supra note 49.

    [52] Edmond, supra note 49.

    [53] Kumar, Raj. "Enter, the Future of Policing - Cops to Team up with IIM Analysts to Predict & Prevent Incidents." The Telegraph. August 28, 2012. http://www.telegraphindia.com/1120828/jsp/jharkhand/story_15905662.jsp#.VkXwxvnhDWK.

    [54] Ibid.

    [55] Ibid.

    [56] Ibid.

    [57] See supra note 49.

    [58] See here for Jharkhand Police crime dashboard.

    [59] Lavanya Gupta, and Selva Priya. "Predicting Crime Rates for Predictive Policing." Gandhian Young Technological Innovation Award, December 29, 2014. http://gyti.techpedia.in/project-detail/predicting-crime-rates-for-predictive-policing/3545.

    [60] Gupta, Lavanya. "Minority Report: Minority Report." Accessed November 13, 2015. http://cmuws2014.blogspot.in/2015/01/minority-report.html.

    [61] See supra note 59.

    [62] See here for details about 44th All India Police Science Congress.

    [63] Press Trust of India. "Police Science Congress in Gujarat to Have DRDO Exhibition." Business Standard India, March 10, 2015. http://www.business-standard.com/article/pti-stories/police-science-congress-in-gujarat-to-have-drdo-exhibition-115031001310_1.html.

    [64] National Crime Records Bureau. "About Crime and Criminal Tracking Network & Systems - CCTNS." Accessed November 13, 2015. http://ncrb.gov.in/cctns.htm.

    [65] Ibid. (See index page)

    [66] U.S. Const. amend. IV, available here

    [67] United States v Katz, 389 U.S. 347 (1967) , see here

    [68] See supra note 1, at pg 60.

    [69] See supra note 1, at pg 60.

    [70] Villasenor, John. "What You Need to Know about the Third-Party Doctrine." The Atlantic, December 30, 2013. http://www.theatlantic.com/technology/archive/2013/12/what-you-need-to-know-about-the-third-party-doctrine/282721/.

    [71] Smith v Maryland, 442 U.S. 735 (1979), see here

    [72] United States v Jones, 565 U.S. ___ (2012), see here

    [73] Newell, Bryce Clayton. "Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, October 16, 2013. http://papers.ssrn.com/abstract=2341182, at pg 24.

    [74] See supra note 72.

    [75] Dahyabhai Chhaganbhai Thakker vs State Of Gujarat, 1964 AIR 1563

    [76] See supra note 16.

    [77] See supra note 66.

    [78] Brinegar v. United States, 338 U.S. 160 (1949), see here

    [79] Terry v. Ohio, 392 U.S. 1 (1968), see here

    [80] Ferguson, Andrew Guthrie. "Big Data and Predictive Reasonable Suspicion." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, April 4, 2014. http://papers.ssrn.com/abstract=2394683, at pg 287. See also supra note 79.

    [81] See supra note 80.

    [82] See supra note 80.

    [83] See supra note 80.

    [84] See supra note 80, at pg 289.

    [85] Illinois v. Gates, 462 U.S. 213 (1983). See here

    [86] See Alabama v. White, 496 U.S. 325 (1990). See here

    [87] See supra note 80, at pg 291.

    [88] See supra note 80, at pg 293.

    [89] See supra note 80, at pg 308.

    [90] Ibid.

    [91] Ibid.

    [92] Larissa Cespedes-Yaffar, Shayona Dhanak, and Amy Stephenson. "U.S. v. Mendenhall, U.S. v. Sokolow, and the Drug Courier Profile Evidence Controversy." Accessed July 6, 2015. http://courses2.cit.cornell.edu/sociallaw/student_projects/drugcourier.html.

    [93] Ibid.

    [94] United States v. Sokolow, 490 U.S. 1 (1989), see here

    [95] See supra note 80, at pg 295.

    [96] See supra note 80, at pg 297.

    [97] See supra note 80, at pg 308.

    [98] See supra note 80, at pg 310.

    [99] See supra note 11.

    [100] See supra note 11.

    [101] See supra note 80, at pg 303.

    [102] See supra note 80, at pg 300.

    [103] Illinois v. Wardlow, 528 U.S. 119 (2000), see here

    [104] Ibid.

    [105] See supra note 80, at pg 301.

    [106] Ibid.

    [107] See supra note 1, at pg 42.

    [108] See supra note 80, at pg 303.

    [109] See supra note 80, at pg 303.

    [110] Ibid.

    [111] Ibid.

    [112] Ibid.

    [113] See supra note 80, at pg 312.

    [114] See supra note 80, at pg 317.

    [115] See supra note 80, at pg 319.

    [116] See supra note 80, at pg 321.

    [117] Section 165 Indian Criminal Procedure Code, see here

    [118] Gulab Chand Upadhyaya vs State Of U.P, 2002 CriLJ 2907

    [119] Section 41 Indian Criminal Procedure Code

    [120] See supra note 79

    [121] State of Punjab v. Balbir Singh. (1994) 3 SCC 299

    [122] Ibid.

    [123] Section 41 and 42 in The Narcotic Drugs and Psychotropic Substances Act 1985, see here

    [124] Partap Singh (Dr) v. Director of Enforcement, Foreign Exchange Regulation Act. (1985) 3 SCC 72 : 1985 SCC (Cri) 312 : 1985 SCC (Tax) 352 : AIR 1985 SC 989

    [125] Ibid, at SCC pg 77-78.

    [126] See supra note 121, at pg 313.

    [127] Carlson, Mr David. "Exclusionary Rule." LII / Legal Information Institute, June 10, 2009. https://www.law.cornell.edu/wex/exclusionary_rule.

    [128] Ibid.

    [129] Mapp v Ohio, 367 U.S. 643 (1961), see here

    [130] Ibid.

    [131] Busby, John C. "Fruit of the Poisonous Tree." LII / Legal Information Institute, September 21, 2009. https://www.law.cornell.edu/wex/fruit_of_the_poisonous_tree.

    [132] Silverthorne Lumber Co., Inc. v. United States, 251 U.S. 385 (1920), see here.

    [133] Beck v. Ohio, 379 U.S. 89 (1964), see here.

    [134] State of Maharashtra v. Natwarlal Damodardas Soni, (1980) 4 SCC 669, at 673.

    [135] Ibid.

    [136] Radhakishan v. State of U.P. [AIR 1963 SC 822 : 1963 Supp 1 SCR 408, 411, 412 : (1963) 1 Cri LJ 809]

    [137] Ibid, at SCR pg 411-12.

    [138] Shyam Lal Sharma v. State of M.P. (1972) 1 SCC 764 : 1974 SCC (Cri) 470 : AIR 1972 SC 886

    [139] See supra note 135, at page 674.

    [140] See supra note 119, at para. 10.

    [141] See supra note 121, at pg 309.

    [142] State of Punjab v. Wassan Singh, (1981) 2 SCC 1 : 1981 SCC (Cri) 292

    [143] See supra note 121, at pg 309.

    [144] Sunder Singh v. State of U.P, AIR 1956 SC 411 : 1956 Cri LJ 801

    [145] See supra note 121, at pg 309.

    [146] Matajog Dobey v.H.C. Bhari, AIR 1956 SC 44 : (1955) 2 SCR 925 : 1956 Cri LJ 140

    [147] See supra note 121, at pg 309.

    [148] R v. Sang, (1979) 2 All ER 1222, 1230-31

    [149] See supra note 121, at pg 309.

    [150] Ibid.

    [151] Ibid.

    [152] Harlow, Carol, and Richard Rawlings. Law and Administration. 3rd ed. Law in Context. Cambridge University Press, 2009.

    [153] R.M. Malkani v. State of Maharashtra, (1973) 1 SCC 471

    [154] Kuruma, Son of Kanju v. R., (1955) AC 197

    [155] See supra note 154, at 477.

    [156] Indian Const. Art 15, see here

    [157] Ibid.

    [158] See supra note 11.

    Response by the Centre for Internet and Society to the Draft Proposal to Transition the Stewardship of the Internet Assigned Numbers Authority (IANA) Functions from the U.S. Commerce Department’s National Telecommunications and Information Administration

    by Pranesh Prakash last modified Nov 29, 2015 06:35 AM
    This proposal was made to the Global Multistakeholder Community on August 9, 2015. The proposal was drafted by Pranesh Prakash and Jyoti Panday. Research assistance was provided by Padmini Baruah and Vidushi Marda, with inputs from Sunil Abraham.

    For more than a year now, the customers and operational communities performing key internet functions related to domain names, numbers and protocols have been negotiating the transfer of IANA stewardship. India has dual interests in the ICANN IANA Transition negotiations: safeguarding independence, security and stability of the DNS for development, and promoting an effective transition agreement that internationalizes the IANA Functions Operator (IFO). Last month the IANA Stewardship Transition Coordination Group (ICG) set in motion a public review of its combined assessment of the proposals submitted by the names, numbers and protocols communities. In parallel to the transition of the NTIA oversight, the community has also been developing mechanisms to strengthen the accountability of ICANN and has devised two workstreams that consider both long term and short term issues. This is our response to the consolidated ICG proposal which considers the proposals for the transition of the NTIA oversight over the IFO.

    Click to download the submission.

    The Humpty-Dumpty Censorship of Television in India

    by Bhairav Acharya last modified Nov 29, 2015 08:37 AM
    The Modi government’s attack on Sathiyam TV is another manifestation of the Indian state’s paranoia of the medium of film and television, and consequently, the irrational controlling impulse of the law.

    The article originally published in the Wire on September 8, 2015 was also mirrored on the website Free Speech/Privacy/Technology.


    It is tempting to think of the Ministry of Information and Broadcasting’s (MIB) attack on Sathiyam TV solely as another authoritarian exhibition of Prime Minister Narendra Modi’s government’s intolerance of criticism and dissent. It certainly is. But it is also another manifestation of the Indian state’s paranoia of the medium of film and television, and consequently, the irrational controlling impulse of the law.

    Sathiyam TV’s transgressions

    Sathiyam’s transgressions began more than a year ago, on May 9, 2014, when it broadcast a preacher saying of an unnamed person: “Oh Lord! Remove this satanic person from the world!” The preacher also allegedly claimed this “dreadful person” was threatening Christianity. This, the MIB reticently claims, “appeared to be targeting a political leader”, referring presumably to Prime Minister Modi, to “potentially give rise to a communally sensitive situation and incite the public to violent tendencies.”

    The MIB was also offended by a “senior journalist” who, on the same day, participated in a non-religious news discussion to allegedly claim Modi “engineered crowds at his rallies” and used “his oratorical skills to make people believe his false statements”. According to the MIB, this was defamatory and “appeared to malign and slander the Prime Minister which was repugnant to (his) esteemed office”.

    For these two incidents, Sathiyam was served a show-cause notice on 16 December 2014, which it responded to the next day, denying the MIB's claims. Sathiyam was heard in person by a committee of bureaucrats on 6 February 2015. On 12 May 2015, the MIB handed Sathiyam an official "Warning" which appears to be unsupported by law. Sathiyam moved the Delhi High Court to challenge this.

    As Sathiyam sought judicial protection, the MIB issued the channel a second warning on August 26, 2015, citing three more objectionable news broadcasts: a child being subjected to cruelty by a traditional healer in Assam; a gun murder inside a government hospital in Madhya Pradesh; and a self-immolating man rushing the dais at a BJP rally in Telangana. All three news items were carried by other news channels and websites.

    Governing communications

    Most news providers use multiple media to transmit their content and suffer from complex and confusing regulation. Cable television is one such medium, so is the Internet; both media swiftly evolve to follow technological change. As the law struggles to keep up, governmental anxiety at the inability to perfectly control this vast field of speech and expression frequently expresses itself through acts of overreach and censorship.

    In the newly-liberalised media landscape of the early 1990s, cable television sprang up in a legal vacuum. Doordarshan, the sole broadcaster, flourished in the Centre’s constitutionally-sanctioned monopoly of broadcasting which was only broken by the Supreme Court in 1995. The same year, Parliament enacted the Cable Television Networks (Regulation) Act, 1995 (“Cable TV Act”) to create a licence regime to control cable television channels. The Cable TV Act is supplemented by the Cable Television Network Rules, 1994 (“Cable Rules”).

    The state’s disquiet with communications technology is a recurring motif in modern Indian history. When the first telegraph line was laid in India, the colonial state was quick to recognize its potential for transmitting subversive speech and responded with strict controls. The fourth iteration of the telegraph law represents the colonial government’s perfection of the architecture of control. This law is the Indian Telegraph Act, 1885, which continues to dominate communications governance in India today including, following a directive in 2004, broadcasting.

    Vague and arbitrary law

    The Cable TV Act requires cable news channels such as Sathiyam to obey a list of restrictions on content that is contained in the Cable Rules (“Programme Code“). Failure to conform to the Programme Code can result in seizure of equipment and imprisonment; but, more importantly, creates the momentum necessary to invoke the broad powers of censorship to ban a programme, channel, or even the cable operator. But the Programme Code is littered with vague phrases and undefined terms that can mean anything the government wants them to mean.

    By its first warning of May 12, 2015, the MIB claimed Sathiyam violated four rules in the Programme Code. These include rule 6(1)(c) which bans visuals or words “which promote communal attitudes”; rule 6(1)(d) which bans “deliberate, false and suggestive innuendos and half-truths”; rule 6(1)(e) which bans anything “which promotes anti-national attitudes”; and, rule 6(1)(i) which bans anything that “criticises, maligns or slanders any…person or…groups, segments of social, public and moral life of the country” (sic).

    The rest of the Programme Code is no less imprecise. It proscribes content that "offends against good taste" and "reflects a slandering, ironical and snobbish attitude" against communities. On the face of it, several provisions of the Programme Code travel beyond the permissible restrictions on free speech listed in Article 19(2) of the Constitution, calling their validity into question. The fiasco of implementing the vague provisions of the erstwhile section 66A of the Information Technology Act, 2000 is a recent reminder of the dangers presented by poorly-drafted censorship law, which is why that section was struck down by the Supreme Court for infringing the right to free speech. The Programme Code is an older creation; it has simply evaded scrutiny for two decades.

    The arbitrariness of the Programme Code is amplified manifold by the authorities responsible for interpreting and implementing it. An Inter-Ministerial Committee (IMC) of bureaucrats, supposedly a recommendatory body, interprets the Programme Code before the MIB takes action against channels. This is an executive power of censorship that must survive legal and constitutional scrutiny, but has never been subjected to it. Curiously, the courts have shied away from a proper analysis of the Programme Code and the IMC.

    Judicial challenges

    In 2011, a single judge of the Delhi High Court in the Star India case (2011) was asked to examine the legitimacy of the IMC as well as four separate clauses of the Programme Code including rule 6(1)(i), which has been invoked against Sathiyam. But the judge neatly sidestepped the issues. This feat of judicial adroitness was made possible by the crass indecency of the content in question, which could be reasonably restricted. Since the show clearly attracted at least one ground of legitimate censorship, the judge saw no cause to examine the other provisions of the Programme Code or even the composition of the IMC.

    This judicial restraint has proved detrimental. In May 2013, another single judge of the Delhi High Court, who was asked by Comedy Central to adjudge the validity of the IMC’s decision-making process, relied on Star India (2011) to uphold the MIB’s action against the channel. The channel’s appeal to the Supreme Court is currently pending. If the Supreme Court decides to examine the validity of the IMC, the Delhi High Court may put aside Sathiyam’s petition to wait for legal clarity.

    As it happens, in the Shreya Singhal case (2015) that struck down section 66A of the IT Act, the Supreme Court has an excellent precedent to follow to demand clarity and precision from the Programme Code, perhaps even strike it down, as well as due process from the MIB. On the accusation of defaming the Prime Minister, probably the only clearly stated objection by the MIB, the Supreme Court’s past law is clear: public servants cannot, for non-personal acts, claim defamation.

    Censorship by blunt force

    Beyond the IMC’s advisories and warnings, the Cable TV Act contains two broad powers of censorship. The first empowerment in section 19 enables a government official to ban any programme or channel if it fails to comply with the Programme Code or, “if it is likely to promote, on grounds of religion, race, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, linguistic or regional groups or castes or communities or which is likely to disturb the public tranquility.”

    The second empowerment is much wider. Section 20 of the Cable TV Act permits the Central Government to ban an entire cable television operator, as opposed to a single channel or programmes within channels, if it “thinks it necessary or expedient so to do in public interest”. No reasons need be given and no grounds need be considered. Such a blunt use of force creates an overwhelming power of censorship. It is not a coincidence that section 20 resembles some provisions of nineteenth-century telegraph laws, which were designed to enable the colonial state to control the flow of information to its native subjects.

    A manual for television bans

    Film and television have always attracted political attention and state censorship. In 1970, Justice Hidayatullah of the Supreme Court explained why: “It has been almost universally recognised that the treatment of motion pictures must be different from that of other forms of art and expression. This arises from the instant appeal of the motion picture… The motion picture is able to stir up emotions more deeply than any other product of art.”

    Within this historical narrative of censorship, television regulation is relatively new. Past governments have also been quick to threaten censorship for attacking an incumbent Prime Minister. There seems to be a pan-governmental consensus that senior political leaders ought to be beyond reproach, irrespective of their words and deeds.

    But on what grounds could the state justify these bans? Lord Atkin's celebrated war-time dissent in Liversidge (1941) offers an unlikely answer:

    “When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’”

    The Short-lived Adventure of India’s Encryption Policy

    by Bhairav Acharya last modified Nov 29, 2015 09:03 AM
    Written for the Berkeley Information Privacy Law Association (BIPLA).

    During his recent visit to Silicon Valley, Indian Prime Minister Narendra Modi said his government was “giving the highest importance to data privacy and security, intellectual property rights and cyber security”. But a proposed national encryption policy circulated in September 2015 would have achieved the opposite effect.

    The policy was comically short-lived. After its poorly-drafted provisions invited ridicule, it was swiftly withdrawn. But the government has promised to return with a fresh attempt to regulate encryption soon. The incident highlights the worrying assault on communications privacy and free speech in India, a concern compounded by the enormous scale of the telecommunications and Internet market.

    Even with only around 26 percent of its population online, India already has the world's second-largest population of Internet users, having recently overtaken the United States. The number of Internet users in India is set to grow exponentially, spurred by ambitious governmental schemes to build a 'Digital India' and a country-wide fiber-optic backbone. There will be a corresponding increase in the use of the Internet for communicating and conducting commerce.

    Encryption on the Internet

    Encryption protects the security of Internet users from invasions of privacy, theft of data, and other attacks. By applying a cipher with a secret key, ordinary data (plaintext) is encoded into an unintelligible form (ciphertext), which can be converted back into plaintext only with the key. The ciphertext can be intercepted but will remain unintelligible without the key. The key is secret.
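    A minimal sketch of this plaintext/key/ciphertext relationship is given below, using the third-party Python 'cryptography' package (assumed to be installed via pip). It is purely illustrative and does not describe the internals of any product or service discussed in this article.

        # Illustrative only: symmetric encryption with the Python 'cryptography'
        # package (pip install cryptography). The key is the shared secret; anyone
        # intercepting only the ciphertext learns nothing useful without it.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()          # the secret key
        f = Fernet(key)

        plaintext = b"meet at the usual place"
        ciphertext = f.encrypt(plaintext)    # unintelligible without the key
        print(ciphertext)

        recovered = f.decrypt(ciphertext)    # only a holder of 'key' can do this
        assert recovered == plaintext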

    There are several methods of encryption. SSL/TLS, a family of encryption protocols, is commonly used by major websites. But while some companies encrypt sensitive data, such as passwords and financial information, during its transit through the Internet, most data at rest on servers is largely unencrypted. For instance, email providers regularly store plaintext messages on their servers. As a result, governments simply demand and receive backdoor access to information directly from the companies that provide these services. However, governments have long insisted on blanket backdoor access to all communications data, both encrypted and unencrypted, and whether at rest or in transit.

    On the other hand, proper end-to-end encryption – full encryption from the sender to recipient, where the service provider simply passes on the ciphertext without storing it, and deletes the metadata – will defeat backdoors and protect privacy, but may not be profitable. End-to-end encryption alarms the surveillance establishment, which is why British Prime Minister David Cameron wants to ban it, and many in the US government want Silicon Valley companies to stop using it.

    Communications privacy

    Instead of relying on a company to secure communications, the surest way to achieve end-to-end encryption is for the sender to encrypt the message before it leaves her computer. Since only the sender and intended recipient have the key, even if the data is intercepted in transit or obtained through a backdoor, only the ciphertext will be visible.

    For almost all of human history, encryption relied on a single shared key; that is, both the sender and recipient used a pre-determined key. But, like all secrets, the more who know it, the less secure the key becomes. From the 1970s onwards, revolutionary advances in cryptography enabled the generation of a pair of dissimilar keys, one public and one private, which are uniquely and mathematically linked. This is asymmetric or public key cryptography, where the private key remains an exclusive secret. It offers the strongest protection for communications privacy because it returns autonomy to the individual and is immune to backdoors.
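    The asymmetric scheme described above can be sketched as follows, again using the third-party Python 'cryptography' package as an assumed dependency; the key size and padding choices here are common illustrative defaults, not a statement of any standard mandated in India.

        # Illustrative only: public-key (asymmetric) encryption. The public key
        # can be shared openly; only the holder of the private key can decrypt.
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()   # safe to publish

        oaep = padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        )

        ciphertext = public_key.encrypt(b"for your eyes only", oaep)
        plaintext = private_key.decrypt(ciphertext, oaep)  # private key never leaves the recipient
        assert plaintext == b"for your eyes only"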

    For those using public key encryption, Edward Snowden's revelation that the NSA had cracked several encryption protocols including SSL/TLS was worrying. Brute-force decryption (the use of supercomputers to mathematically attack keys) calls the integrity of public key encryption into question. But, since the difficulty of brute-force code-breaking grows exponentially with key length, generating longer keys will, notionally, thwart the NSA, for now.
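    As a rough illustration of why key length matters so much, the back-of-the-envelope sketch below compares keyspace sizes in plain Python. The one-billion-guesses-per-second attacker speed is an assumption chosen only to make the comparison concrete.

        # Every extra bit doubles the number of possible keys, so brute-force cost
        # grows exponentially, not linearly, with key length.
        for bits in (40, 56, 128, 256):
            print(f"{bits}-bit key: {2**bits:.3e} possible keys")

        # Assuming an attacker can test one billion keys per second: a 40-bit
        # keyspace falls in under 20 minutes, while a 128-bit keyspace would take
        # on the order of 1e22 years.
        seconds_40 = 2**40 / 1e9
        years_128 = 2**128 / 1e9 / (60 * 60 * 24 * 365)
        print(round(seconds_40), "seconds vs", f"{years_128:.1e}", "years")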

    The crypto-wars in India

    Where does India’s withdrawn encryption policy lie in this landscape of encryption and surveillance? It is difficult to say. Because it was so badly drafted, understanding the policy was a challenge. It could have been a ham-handed response to commercial end-to-end encryption, which many major providers such as Apple and WhatsApp are adopting following consumer demand. But curiously, this did not appear to be the case, because the government later exempted WhatsApp and other “mass use encryption products”.

    The Indian establishment has a history of battling commercial encryption. From 2008, it fought Blackberry for backdoor access to its encrypted communications, coming close to banning the service; the standoff dissipated only once the company lost its market share. There have been similar attempts to force Voice over Internet Protocol providers, including Skype and Google, to fall in line. And there is a new thrust underway to regulate over-the-top content providers, including US companies.

    The policy could represent a new phase in India’s crypto-wars. The government, emboldened by the sheer scale of the country’s market, might press an unyielding demand for communications backdoors. The policy made no bones of this desire: it sought to bind communications companies by mandatory contracts, regulate key-size and algorithms, compel surrender of encryption products including “working copies” of software (the key generation mechanism), and more.

    The motives of regulation

    The policy’s deeply intrusive provisions manifest a long-standing effort of the Indian state to dominate communications technology unimpeded by privacy concerns. From wiretaps to Internet metadata, intrusive surveillance is not judicially warranted, does not require the demonstration of probable cause, suffers no external oversight, and is secret. These shortcomings are enabling the creation of a sophisticated surveillance state that sits ill with India’s constitutional values.

    Those values are being steadily besieged. India’s Supreme Court is entertaining a surge of clamorous litigation to check an increasingly intrusive state. Only a few months ago, the Attorney-General – the government’s foremost lawyer – argued in court that Indians did not have a right to privacy, relying on 1950s case law which permitted invasive surveillance. Encryption which can inexpensively lock the state out of private communications alarms the Indian government, which is why it has skirmished with commercially-available encryption in the past.

    On the other hand, the conflict over encryption is fueled by irregular laws. Telecoms licensing regulations restrict Internet Service Providers to 40-bit symmetric keys, a primitively low standard; higher encryption requires permission and presumably surrender of the shared key to the government. Securities trading on the Internet requires 128-bit SSL/TLS encryption while the country’s central bank is pushing for end-to-end encryption for mobile banking. Seen in this light, the policy could simply be an attempt to rationalize an uneven field.

    Encryption and freedom

    Perhaps the government was trying to restrict the use of public key encryption and Internet anonymization services, such as Tor or I2P, by individuals. India's telecoms minister stated: "The purport of this encryption policy relates only to those who encrypt." This was not particularly illuminating. If the government wants to pre-empt terrorism (a legitimate duty), this approach is flawed, since regardless of the law's command no terrorist will plausibly disclose her key to the government. Besides, since there are very few users of Internet anonymizers in India, and they are in any case targeted for special monitoring, it would be more productive for the surveillance establishment to maintain the status quo.

    This leaves harmless encrypters: businesses, journalists, whistle-blowers, and innocent privacy enthusiasts. For this group, impediments to encryption interfere with their ability to communicate freely. There is a close link between encryption and the freedom of speech and expression, a fact acknowledged by Special Rapporteur David Kaye of the UN Human Rights Council, of which India is a member. Kaye notes: "Encryption and anonymity are especially useful for the development and sharing of opinions, which often occur through online correspondence such as e-mail, text messaging, and other online interactions."

    This is because encryption affords privacy which promotes free speech, a relationship reiterated by the previous UN Special Rapporteur, Frank La Rue. On the other hand, surveillance has a “chilling effect” on speech. In 1962, Justice Subba Rao’s famous dissent in the Indian Supreme Court presciently connected privacy and free speech:

    The act of surveillance is certainly a restriction on the [freedom of speech]. It cannot be suggested that the said freedom…will sustain only the mechanics of speech and expression. An illustration will make our point clear. A visitor, whether a wife, son or friend, is allowed to be received by a prisoner in the presence of a guard. The prisoner can speak with the visitor; but, can it be suggested that he is fully enjoying the said freedom? It is impossible for him to express his real and intimate thoughts to the visitor as fully as he would like. To extend the analogy to the present case is to treat the man under surveillance as a prisoner within the confines of our country and the authorities enforcing surveillance as guards. So understood, it must be held that the petitioner’s freedom under [the right to free speech under the Indian] Constitution is also infringed.

    Kharak Singh v. State of Uttar Pradesh (1964) 1 SCR 332, pr. 30.

    Perhaps the policy expressed the government’s discomfort at individual encrypters escaping surveillance, like free agents evading the state’s control. How should the law respond to this problem? Daniel Solove says the security of the state need not compromise individual privacy. On the other hand, as Ronald Dworkin influentially maintained, the freedoms of the individual precede the interests of the state.

    Security and trade interests

    However, even when assessed from the perspective of India’s security imperatives, the policy would have had harmful consequences. It required users of encryption, including businesses and consumers, to store plaintext versions of their communications for ninety days to surrender to the government upon demand. This outrageously ill-conceived provision would have created real ‘honeypots’ (originally, honeypots are decoy servers to lure hackers) of unencrypted data, ripe for theft. Note that India does not have a data breach law.

    The policy’s demand for encryption companies to register their products and give working copies of their software and encryption mechanisms to the Indian government would have flown in the face of trade secrecy and intellectual property protection. The policy’s hurried withdrawal was a public relations exercise on the eve of Prime Minister Modi’s visit to Silicon Valley. It was successful. Modi encountered no criticism of his government’s visceral opposition to privacy, even though the policy would have severely disrupted the business practices of US communications providers operating in India.

    Encryption invites a convergence of state interests between India and the US as well: both countries want to control it. Last month's joint statement from the US-India Strategic and Commercial Dialogue pledges "further cooperation on internet and cyber issues". This innocuous statement masks a robust information-gathering and -sharing regime. There is no guarantee against the sharing of encryption mechanisms or intercepted communications by India.

    The government has promised to return with a reworked proposal. It would be in India’s interest for this to be preceded by a broad-based national discussion on encryption and its links to free speech, privacy, security, and commerce.


    Click to read the post published on Free Speech / Privacy / Technology website.

    How India Regulates Encryption

    by Pranesh Prakash & Japreet Grewal — last modified Jul 23, 2016 01:24 PM
    Contributors: Geetha Hariharan

    Governments across the globe have been arguing for the need to regulate the use of encryption for law enforcement and national security purposes. Various means of regulation, such as backdoors, weak encryption standards and key escrows, have been widely employed, leaving the information of online users vulnerable not only to uncontrolled access by governments but also to cyber-criminals. The Indian regulatory space has not been untouched by this practice and contains laws and policies that control encryption. The regulatory requirements in relation to the use of encryption are fragmented across legislation such as the Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act), and several sector-specific regulations. The regulatory framework is designed either to limit encryption or to gain access to the means of decryption or to decrypted information.

    Limiting encryption

    The IT Act does not prescribe the level or type of encryption to be used by online users. Under Section 84A, it grants the Government the authority to prescribe modes and methods of encryption. The Government has not issued any rules in exercise of these powers so far, but it had released a draft encryption policy on September 21, 2015. Under the draft policy, only those encryption algorithms and key sizes that the Government notified were permitted to be used. The draft policy was withdrawn following widespread criticism of several of its requirements, notably the retention of unencrypted user information for 90 days and the mandatory registration of all encryption products offered in the country.

    The Internet Service Providers License Agreement (ISP License), entered into between the Department of Telecommunication (DoT) and an Internet Service Provider (ISP) for the provision of internet services (i.e. internet access and internet telephony services), permits the use of encryption only up to a 40-bit key length in symmetric algorithms, or its equivalent in other algorithms.[1] The restriction applies not only to ISPs but also to individuals, groups and organisations that use encryption. In the event an individual, group or organisation decides to deploy encryption stronger than 40 bits, prior permission from the DoT must be obtained and the decryption key must be deposited with the DoT. There are, however, no parameters laid down for use of the decryption key by the Government. Several issues arise in relation to the enforcement of these license conditions.

    1. While this requirement is applicable to all individuals, groups and organisations using encryption, it is difficult to enforce, as the ISP License only binds the DoT and the ISP and cannot be enforced against third parties.
    2. Further, a 40-bit symmetric key length is considered an extremely weak standard[2] and is inadequate for protecting data stored or communicated online. Various sector-specific regulations already in place in India prescribe encryption of more than 40 bits.
      • The Reserve Bank of India has issued guidelines for Internet banking[3] in which it prescribes 128-bit as the minimum level of encryption and acknowledges that constant advances in computer hardware and cryptanalysis may induce the use of larger key lengths.
      • The Securities and Exchange Board of India also prescribes[4] 64-bit/128-bit encryption for standard network security, and the use of secure socket layer security, preferably with 128-bit encryption, for securities trading over a mobile phone or a wireless application platform.
      • Further, under Rule 19(2) of the Information Technology (Certifying Authorities) Rules, 2000 (CA Rules), the Government has prescribed security guidelines for the management and implementation of information technology security by certifying authorities. Under these guidelines, the Government has suggested the use of suitable security software or even encryption software to protect sensitive information and the devices used to transmit or store it, such as routers, switches, network devices and computers (also called information assets). The guidelines acknowledge the need to use internationally proven encryption techniques to encrypt stored passwords, such as the PKCS#1 RSA Encryption Standard (512, 1024, 2048 bit), the PKCS#5 Password Based Encryption Standard or the PKCS#7 Cryptographic Message Syntax Standard, as mentioned under Rule 6 of the CA Rules (a short sketch of password-based key derivation follows this list). These standards are far stronger and more secure than a 40-bit encryption key.
      • The ISP License also contains a clause which provides that the use of any hardware or software that may render the network security vulnerable would be considered a violation of the license conditions.[5] Network security may be compromised by using a weak security measure, such as the 40-bit encryption or its equivalent prescribed by the DoT, but the liability will be imputed to the ISP. As a result, an ISP which is merely complying with the license conditions by employing no more than 40-bit encryption may be held liable under what appear to be contradictory license conditions.
      • It is noteworthy that the restriction on key size under the ISP License has not been imported into the Unified Service License Agreement (UL Agreement) formulated by the DoT. The UL Agreement does not prescribe a specific level of encryption to be used for the provision of services. Clause 37.5 of the UL Agreement, however, makes it clear that the use of encryption will be governed by the provisions of the IT Act. As noted earlier, the Government has not specified any limit on the level or type of encryption under the IT Act; it had, however, released a draft encryption policy, which was withdrawn following widespread criticism of its mandate.
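    The sketch below illustrates PKCS#5-style password-based key derivation of the kind referred to in the CA Rules discussion above, using only the Python standard library. The passphrase, salt length and iteration count are arbitrary illustrative choices, not values prescribed by any Indian regulation.

        # Illustrative only: deriving a 256-bit key from a password with PBKDF2
        # (PKCS#5). A key derived this way can drive a strong symmetric cipher,
        # in contrast to the 40-bit ceiling in the ISP License.
        import hashlib
        import os

        password = b"correct horse battery staple"   # example passphrase
        salt = os.urandom(16)                         # random per-user salt

        # 32-byte (256-bit) key from 200,000 iterations of HMAC-SHA256
        key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)
        print(len(key) * 8, "bit key:", key.hex())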

     

    The Telecom Licenses (the ISP License, the UL Agreement and the Unified Access Service License) prohibit service providers from using bulk encryption, yet the providers remain responsible for maintaining the privacy of communications and preventing unauthorized interception.

    Gaining access to means of decryption or decrypted information

    Besides restrictions on the level of encryption, the ISP License and the UL Agreement make it mandatory for service providers, including ISPs, to provide the DoT with all details of the technology employed for their operations and to furnish all documentary details, such as the concerned literature, drawings, installation materials and tools and testing instruments relating to the system intended to be used for operations, as and when required by the DoT.[6] While these license conditions do not expressly lay down that access to the means of decryption must be given to the government, the language is sufficiently broad to include such access as well. Further, ISPs are required to take the prior approval of the DoT for the installation of any equipment or the execution of any project in areas which are sensitive from a security point of view. The ISPs are in fact subject to, and further required to facilitate, continuous monitoring by the DoT. These obligations ensure that the Government has complete access to and control over the infrastructure for providing internet services, which includes any installation or equipment required for the purpose of encryption and decryption.

    The Government has also been granted the power to gain access to means of decryption or simply, decrypted information under Section 69 of the IT Act and the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

    1. A decryption order usually entails a direction to a decryption key holder to disclose a decryption key, allow access to or facilitate conversion of encrypted information and must contain reasons for such direction. In fact, Rule 8 of the Decryption Rules makes it mandatory for the authority to consider other alternatives to acquire the necessary information before issuing a decryption order.
    2. The Secretary in the Ministry of Home Affairs, or the Secretary in charge of the Home Department in a state or union territory, is authorised to issue an order of decryption in the interest of the sovereignty or integrity of India, the defense of India, the security of the state, friendly relations with foreign states, public order, or preventing incitement to the commission of any cognizable offence relating to the above, or for the investigation of any offence. It is useful to note that this provision was amended in 2009 to expand the grounds on which a direction for decryption can be passed: post 2009, the Government can issue a decryption order for the investigation of any offence. In the absence of any specific process laid down for the collection of digital evidence, do we follow the procedure under the ordinary criminal law, or is it necessary to draw a distinction between the investigation process in the digital and the physical environments and ask whether adequate safeguards exist to check the abuse of the investigatory powers of the police?
    3. The orders for decryption must be examined by a review committee constituted under Rule 419A of the Indian Telegraph Rules, 1951 to ensure compliance with the provisions of the IT Act. The review committee is required to convene at least once in two months for this purpose. However, we have been informed, in a response by the Department of Electronics and Information Technology to an RTI dated April 21, 2015 filed by our organisation, that since its constitution the review committee has met only once, in January 2013.

    Conclusion

    While studying a regulatory framework for encryption, it is necessary to identify the lens through which encryption is viewed, i.e. whether encryption is considered a means of information security or a threat to national security. As noted earlier, the encryption mandates for banking systems and certifying authorities in India contradict those under the telecom licenses and the Decryption Rules. Would it help to analyse whether the Government's prevailing scepticism is well founded when set against the need for strong encryption? It would be useful to survey statistics on cyber incidents where strong encryption was employed, as well as to look at instances that reflect on whether strong encryption has made it difficult for law enforcement agencies to prevent or resolve crimes. It would also help to record cyber incidents that have resulted from vulnerabilities, such as backdoors or key escrows, deliberately introduced by law. These statistics would certainly clear the air about the role of encryption in securing cyberspace and facilitate appropriate regulation.

    [1] Clause 2.2 (vii) of the ISP License

    [2] Schneier, Bruce (1996). Applied Cryptography (Second ed.). John Wiley & Sons

    [3] Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds- Implementation of recommendations, 2011

    [4] Report on Internet Based Trading by the SEBI Committee on Internet based Trading and Services, 2000; It is useful to note that subsequently SEBI had acknowledged that the level of encryption would be governed by DoT policy in a SEBI circular no CIR/MRD/DP/25/2010 dated August 27, 2010 on Securities Trading using Wireless Technology

    [5] Clause 34.25 of the ISP License

    [6] Clauses 22 and 23 of Part IV of the ISP License

    Concept Note: Network Neutrality in South Asia

    by Prasad Krishna last modified Dec 01, 2015 02:34 AM

    Network Neutrality South Asia Concept Note _ORF CIS.pdf — PDF document, 238 kB (244150 bytes)

    The Case of Whatsapp Group Admins

    by Japreet Grewal — last modified Dec 08, 2015 10:25 AM
    Contributors: Geetha Hariharan

    Censorship laws in India have now roped in group administrators of chat groups on instant messaging platforms such as Whatsapp (group admin(s)) for allegedly objectionable content that was posted by other users of these chat groups. Several incidents[1] were reported this year where group admins were arrested in different parts of the country for allowing content that was allegedly objectionable under law. A few reports mentioned that these arrests were made under Section 153A[2] read with Section 34[3] of the Indian Penal Code (IPC) and Section 67[4] of the Information Technology Act (IT Act).

    The targeting of a group admin for content posted by other members of a chat group has raised concerns about how this liability is imputed. Should a group admin be considered an intermediary under Section 2(w) of the IT Act? If so, would a group admin be protected from such liability?

    Group admin as an intermediary

    Whatsapp is an instant messaging platform which can be used for mass communication by opting to create a chat group. A chat group is a feature on Whatsapp that allows the joint participation of Whatsapp users. The number of Whatsapp users on a single chat group can be up to 100. Every chat group has one or more group admins, who control participation in the group by deleting or adding people.[5] It is imperative that we understand whether, by choosing to create a chat group on Whatsapp, a group admin can become liable for content posted by other members of the chat group.

    Section 34 of the IPC provides that when a number of persons engage in a criminal act with a common intention, each person is made liable as if he alone did the act. Common intention implies a pre-arranged plan and acting in concert pursuant to the plan. It is interesting to note that group admins have been arrested under Section 153A on the ground that a group admin and a member posting content actionable under this provision share a common intention to post such content on the group. But would this hold true when, for instance, a group admin creates a chat group for posting lawful content (say, for matchmaking purposes) and a member of the chat group posts content which is actionable under law (say, a video abusing Dalit women)? Common intention can be established by direct evidence or inferred from conduct, surrounding circumstances or any incriminating facts.[6]

We need to understand whether common intention can be established where a user is merely acting as a group admin. For this purpose it is necessary to examine how a group admin contributes to a chat group and whether he acts as an intermediary.

We know that the parameters for determining who is an intermediary differ across jurisdictions, and most global organisations have categorised intermediaries based on their role or technical functions.[7] Section 2(w) of the Information Technology Act, 2000 (IT Act) defines an intermediary as any person who, on behalf of another person, receives, stores or transmits messages or provides any service with respect to that message, and includes telecom service providers, network providers, internet service providers, web-hosting service providers, search engines, online payment sites, online auction sites, online marketplaces and cyber cafés. Does a group admin receive, store or transmit messages on behalf of group participants, provide any service with respect to their messages, or fall into any category mentioned in the definition? Whatsapp does not allow a group admin to receive or store messages on behalf of another participant in a chat group; every group member independently controls his own posts. However, a group admin helps transmit another participant's messages to the group by allowing that participant to be part of the group, and in that sense effectively provides a service in respect of those messages. A group admin should therefore be considered an intermediary, although his contribution to the chat group is limited to allowing participation; this is discussed in further detail in the section below.

According to a 2010 report of the Organisation for Economic Co-operation and Development (OECD),[8] an internet intermediary brings together or facilitates transactions between third parties on the Internet. It gives access to, hosts, transmits and indexes content, products and services originated by third parties on the Internet, or provides Internet-based services to third parties. A Whatsapp chat group allows people who are not on a user's contact list to interact with that user, so long as they are on the group admin's contact list. In facilitating this interaction, a group admin may be considered an intermediary under the OECD definition.

    Liability as an intermediary

Section 79(1) of the IT Act protects an intermediary from liability under any law in force (for instance, liability under Section 153A read with Section 34 of the IPC), provided the intermediary fulfils certain conditions laid down therein. An intermediary is required to carry out the due diligence obligations laid down in Rule 3 of the Information Technology (Intermediaries Guidelines) Rules, 2011 (the Rules). These obligations relate, among other things, to content that infringes intellectual property, threatens national security or public order, or is obscene, defamatory, or otherwise in violation of any law in force (Rule 3(2)).[9] An intermediary can be held liable for publishing or hosting such user-generated content, but, as mentioned earlier, this liability is conditional. Under Section 79 of the IT Act, an intermediary loses this protection if it initiates the transmission, selects the receiver of the transmission, or selects or modifies the information contained in the transmission. While a group admin can facilitate the sharing of information and select the receivers of that information, he has no direct editorial control over what is shared: group admins can only remove members, and cannot remove or modify the content posted by members of the chat group. An intermediary is also liable if it fails to comply with the due diligence obligations under Rules 3(2) and 3(3); however, since a group admin neither initiates transmission himself nor controls content, he cannot meaningfully comply with these obligations. Therefore, a group admin would be protected from liability arising out of third-party or user-generated content on his group under Section 79 of the IT Act.

It is, however, relevant to consider whether a group admin's ability to remove participants amounts to an indirect form of editorial control.
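To make the structure of the argument above easier to follow, the two questions, intermediary status under Section 2(w) and safe harbour under Section 79, can be read as small yes/no tests that compose. The Python sketch below is purely illustrative: the function names and boolean flags are simplifications invented for this post, not an authoritative statement of either provision or of how a court would apply it.

# Illustrative sketch only: a simplified, non-authoritative reading of the
# tests discussed above. All names and flags are assumptions made for this
# post; they are not drawn from the statute or from any library.

def is_intermediary_2w(receives_or_stores_for_others: bool,
                       transmits_or_provides_service: bool) -> bool:
    """Rough proxy for the Section 2(w) question discussed above."""
    return receives_or_stores_for_others or transmits_or_provides_service


def safe_harbour_79(initiates_transmission: bool,
                    selects_receiver: bool,
                    selects_or_modifies_content: bool,
                    observes_due_diligence: bool) -> bool:
    """Simplified composite of the Section 79 conditions discussed above."""
    passive_role = not (initiates_transmission
                        or selects_receiver
                        or selects_or_modifies_content)
    return passive_role and observes_due_diligence


# On the reading in this post, a group admin transmits messages only in the
# limited sense of enabling participation, so the Section 2(w) proxy is met.
print(is_intermediary_2w(receives_or_stores_for_others=False,
                         transmits_or_provides_service=True))  # True

Whether the Section 79 conditions are then satisfied for a group admin is exactly the question the preceding paragraphs debate; the sketch only shows how those conditions combine, not how they should be answered.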

    Other pertinent observations

In several reports[10] it has been suggested that holding a group admin liable is convenient because it is difficult to locate all the users of a particular group. This reasoning may not be correct: the Whatsapp policy[11] makes it mandatory for a prospective user to provide a mobile number in order to use the platform, and no additional information is collected from group admins that would justify singling them out. Investigation agencies can access the mobile numbers of Whatsapp users and obtain further information from telecom companies.

It is also notable that the group admins were arrested after a user, or someone known to a user, filed a complaint with the police about content being objectionable or hurtful. Earlier this year, the apex court ruled in Shreya Singhal v. Union of India[12] that an intermediary needs a court order or a government notification before taking down information. By acting against group admins on the basis of mere complaints, law enforcement officials have been overriding the mandate of the court.

    Conclusion

     

According to a study by the global research consultancy TNS Global, around 38% of internet users in India use instant messaging applications such as Snapchat and Whatsapp on a daily basis, with Whatsapp being the most widely used application. These figures indicate the scale of impact that arrests of group admins may have on everyday communication.

It is noteworthy that categorising a group admin as an intermediary would effectively make the Rules applicable to every Whatsapp user who creates a group. This would be difficult to enforce and would blur the distinction between users and intermediaries.

The critical question, however, is whether a chat group should be treated as part of the bundle of services that Whatsapp offers to its users, rather than as an independent platform that makes a group admin a separate entity. It is also worth asking whether it is apt to compare a Whatsapp group chat with a conference call on Skype, or with sharing a Google document with edit rights, in order to understand how far censorship laws now reach.

     

    Valuable contribution by Pranesh Prakash and Geetha Hariharan


[1] http://www.nagpurtoday.in/whatsapp-admin-held-for-hurting-religious-sentiment/06250951 ; http://www.catchnews.com/raipur-news/whatsapp-group-admin-arrested-for-spreading-obscene-video-of-mahatma-gandhi-1440835156.html ; http://www.financialexpress.com/article/india-news/whatsapp-group-admin-along-with-3-members-arrested-for-objectionable-content/147887/

[2] Section 153A. "Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony.— (1) Whoever— (a) by words, either spoken or written, or by signs or by visible representations or otherwise, promotes or attempts to promote, on grounds of religion, race, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, language or regional groups or castes or communities…" (2) "Whoever commits an offence specified in sub-section (1) in any place of worship or in any assembly engaged in the performance of religious worship or religious ceremonies, shall be punished with imprisonment which may extend to five years and shall also be liable to fine."

    [3] Section 34. Acts done by several persons in furtherance of common intention – When a criminal act is done by several persons in furtherance of common intention of all, each of such persons is liable for that act in the same manner as if it were done by him alone.

[4] Section 67. Publishing of information which is obscene in electronic form.—"Whoever publishes or transmits or causes to be published in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment of either description for a term which may extend to five years and with fine which may extend to one lakh rupees and in the event of a second or subsequent conviction with imprisonment of either description for a term which may extend to ten years and also with fine which may extend to two lakh rupees."

    [5] https://www.whatsapp.com/faq/en/general/21073373

    [6] Pandurang v. State of Hyderabad AIR 1955 SC 216

[7] https://www.eff.org/files/2015/07/08/manila_principles_background_paper.pdf ; http://unesdoc.unesco.org/images/0023/002311/231162e.pdf

    [8] http://www.oecd.org/internet/ieconomy/44949023.pdf

    [9] Rule 3(2) (b) of the Rules

[10] http://www.thehindu.com/news/national/other-states/if-you-are-a-whatsapp-group-admin-better-be-careful/article7531350.ece ; http://www.newindianexpress.com/states/tamil_nadu/Social-Media-Administrator-You-Could-Land-in-Trouble/2015/10/10/article3071815.ece ; http://www.medianama.com/2015/10/223-whatsapp-group-admin-arrest/ ; http://www.thenewsminute.com/article/whatsapp-group-admin-you-are-intermediary-and-here%E2%80%99s-what-you-need-know-35031

    [11] https://www.whatsapp.com/legal/

    [12] http://supremecourtofindia.nic.in/FileServer/2015-03-24_1427183283.pdf

    DNA Research

    by Vanya Rakesh last modified Jul 21, 2016 11:02 AM
In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. A revised Bill was released in 2012 and a group of experts was constituted to finalise it. In 2014, another version was released, whose approval is pending before Parliament. This legislation will allow the Government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Here is a collection of our research on privacy and security concerns related to the Bill.

     

    The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues related to the DNA Bill:

    1. Validity and legality of collection, usage and storage of DNA samples and information derived from the same.
    2. Monitoring projects and policies around Human DNA Profiling.
    3. Raising public awareness around issues concerning biometrics.

In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. A revised Bill was released in 2012 and a group of experts was constituted to finalise it. In 2014, another version was released, whose approval is pending before Parliament.

The Bill seeks to establish DNA databases at the state, regional and national levels. The databases would store DNA profiles of suspects, offenders, missing persons and deceased persons, and could be used by courts, law enforcement agencies (national and international) and other authorised persons for criminal and civil purposes. The Bill would also regulate DNA laboratories collecting DNA samples. Inadequate consent, the broad powers of the Board, and the question of deleting innocent persons' profiles are just a few of the concerns voiced about the Bill.

    DNA Profiling Bill - Infographic
    Download the infographic. Credit: Scott Mason and CIS team.

     

    1. DNA Bill

The Human DNA Profiling Bill is a piece of legislation that will allow the Government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Human rights groups, individuals and NGOs have raised many concerns about the infringement of privacy and the power that such information would give the government. The Bill proposes to profile people through their DNA, allowing the government to create a unique profile for each individual. Concerns include the loss of privacy entailed by such profiling and the manner in which it is conducted. Unless strictly controlled, monitored and protected, such a database of citizens' DNA profiles could lead to serious blowbacks in the form of security risks and privacy invasions. The following articles elaborate upon these matters.

       

      2. Comparative Analysis with other Legislatures

Human DNA profiling is not proposed only in India; such identification systems have been proposed and implemented in many countries, each differing from the others according to national and societal needs. The risks and criticisms that DNA profiling faces may be broadly the same, but the solutions adopted vary. The following articles look into the systems in place in different countries and compare them with the system proposed in India, to give a better understanding of the risks and implications of implementing such a system.

       

      Privacy Policy Research

      by Vanya Rakesh last modified Jan 03, 2016 09:40 AM
The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives:
1. Raising public awareness and dialogue around privacy;
2. Undertaking in-depth research of domestic and international policy pertaining to privacy; and
3. Driving comprehensive privacy legislation in India through research.

India does not have comprehensive legislation covering issues of privacy or establishing the right to privacy. In 2010, an "Approach Paper on Privacy" was published; in 2011, the Department of Personnel and Training released a draft Right to Privacy Bill; in 2012, the Planning Commission constituted a group of experts which published the Report of the Group of Experts on Privacy; in 2013, CIS drafted the citizens' Privacy Protection Bill; and in 2014, the Right to Privacy Bill was leaked. The Government is currently in the process of drafting and finalising the Bill.

      Draft Right to Privacy

      Privacy Research -

      1. Approach Paper on Privacy, 2010 -

The following article contains CIS's reply to the Approach Paper on Privacy, 2010. The Paper was drafted by a group of officers constituted to develop a framework for privacy legislation that would balance the need for privacy protection with security and sectoral interests, and respond to the existing domain legislation on the subject.

      2. Report on Privacy, 2012 -

The Report on Privacy, 2012 was drafted and published by a group of experts constituted under the Planning Commission, and examines the current legislation with respect to privacy. The following articles contain responses to and criticisms of the report and of the current legislation.

      3. Privacy Protection Bill, 2013 -

The Privacy Protection Bill, 2013 is a draft legislation that aims to set out the rules and law governing privacy protection. The following articles refer to this legislation, including a citizens' draft of the Bill.

      4. Right to Privacy Act, 2014 (Leaked Bill) -

The Right to Privacy Act, 2014 is a bill still under proposal whose draft was leaked; it is linked below.

      • Leaked Privacy Bill: 2014 vs. 2011 http://bit.ly/QV0Y0w

      Sectoral Privacy Research

      by Vanya Rakesh last modified Jan 03, 2016 09:46 AM
      The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues.
      1. Research on the issue of privacy in different sectors in India.
      2. Monitoring projects, practices, and policies around those sectors.
      3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

The right to privacy has evolved in India over many decades, and the question of whether it is a fundamental right has been debated many times in the courts. With the advent of information technology and the digitisation of services, privacy has become even more relevant in sectors such as banking, healthcare, telecommunications and ICT. The right to privacy is also addressed here in relation to sexual minorities, whistle-blowers, government services and other areas.

      Sectors -

      1. Consumer Privacy and other sectors -

Consumer privacy laws and regulations seek to protect individuals from loss of privacy due to failures or limitations of corporate customer privacy measures. The following articles deal with the consumer privacy laws currently in place in India and around the world; privacy concerns are also considered alongside other areas such as copyright law and data protection.

      § Consumer Privacy - How to Enforce an Effective Protective Regime? http://bit.ly/1a99P2z

      § Privacy and Information Technology Act: Do we have the Safeguards for Electronic Privacy? http://bit.ly/10VJp1P

§ Limits to Privacy http://bit.ly/19mPG6I

      § Copyright Enforcement and Privacy in India http://bit.ly/18fi9fM

§ Privacy in India: Country Report http://bit.ly/14pnNwl

      § Transparency and Privacy http://bit.ly/1a9dMnC

      § The Report of the Group of Experts on Privacy (Contributed by CIS) http://bit.ly/VqzKtr

      § The (In) Visible Subject: Power, Privacy and Social Networking http://bit.ly/15koqol

§ Privacy and the Indian Copyright Act, 1957 as Amended in 2010 http://bit.ly/1euwX0r

      § Should Ratan Tata be afforded the Right to Privacy? http://bit.ly/LRlXin

      § Comments on Information Technology (Guidelines for Cyber Café) Rules, 2011 http://bit.ly/15kojJn

      § Broadcasting Standards Authority Censures TV9 over Privacy Violations! http://bit.ly/16L4izl

      § Is Data Protection Enough? http://bit.ly/1bvaWx2

      § Privacy, speech at stake in cyberspace http://cis-india.org/news/privacy-speech-at-stake-in-cyberspace-1

      § Q&A to the Report of the Group of Experts on Privacy http://bit.ly/TPhzQQ

      § Privacy worries cloud Facebook's WhatsApp Deal http://cis-india.org/internet-governance/blog/economic-times-march-14-2014-sunil-abraham-privacy-worries-cloud-facebook-whatsapp-deal

      § GNI Assessment Finds ICT Companies Protect User Privacy and Freedom of Expression http://bit.ly/1mjbpmL

      § A Stolen Perspective http://bit.ly/1bWHyzv

      § Is Data Protection enough? http://cis-india.org/internet-governance/blog/privacy/is-data-protection-enough

      § I don't want my fingerprints taken http://bit.ly/aYdMia

      § Keeping it Private http://bit.ly/15wjTVc

      § Personal Data, Public Profile http://bit.ly/15vlFk4

      § Why your Facebook Stalker is Not the Real Problem http://bit.ly/1bI2MSc

      § The Private Eye http://bit.ly/173ypSI

      § How Facebook is Blatantly Abusing our Trust http://bit.ly/OBXGXk

      § Open Secrets http://bit.ly/1b5uvK0

      § Big Brother is Watching You http://bit.ly/1cGpg0K

      2. Banking/Finance -

Privacy in the banking and finance industry is crucial, as one person's records and funds must not be accessible to another without due authorisation. The following articles deal with the current system governing privacy in the financial and banking industry.

      § Privacy and Banking: Do Indian Banking Standards Provide Enough Privacy Protection? http://bit.ly/18fhsTM

      § Finance and Privacy http://bit.ly/15aUPh6

      § Making the Powerful Accountable http://bit.ly/1nvzSpC

      3. Telecommunications -

The telecommunications industry is the backbone of modern ICTs and is governed by its own rules and regulations. These rules are the focus of the following articles, which include both criticism and acclaim.

      § Privacy and Telecommunications: Do We Have the Safeguards? http://bit.ly/10VJp1P

      § Privacy and Media Law http://bit.ly/18fgDfF

      § IP Addresses and Expeditious Disclosure of Identity in India http://bit.ly/16dBy4N

§ Telecommunications and Internet Privacy http://bit.ly/16dEcaF

      § Encryption Standards and Practices http://bit.ly/KT9BTy

      § Encryption Standards and Practices http://cis-india.org/internet-governance/blog/privacy/privacy_encryption

§ Security: Privacy, Transparency and Technology http://cis-india.org/internet-governance/blog/security-privacy-transparency-and-technology

      4. Sexual Minorities -

While the internet is a forum for self-expression and acceptance for most of us, this does not hold true for sexual minorities. For those who do not conform to the identities society expects, the internet is often a place of secrecy, and privacy matters to them more than to most. When they reveal themselves, or are revealed by others, they frequently face hostility, which is why they value their privacy. The following article looks into their situation.

§ Privacy and Sexual Minorities http://bit.ly/19mQUyZ

      5. Health -

The confidentiality between a doctor and a patient is seen as vitally important, and the same holds for any situation in which a person reveals more about themselves than they ordinarily would, as with CT scans and other diagnostic procedures. The following articles look into the present state of privacy in places such as hospitals and diagnostic centres.

      § Health and Privacy http://bit.ly/16L1AJX

      § Privacy Concerns in Whole Body Imaging: A Few Questions http://bit.ly/1jmvH1z

      6. e-Governance -

A major focus of governments in relation to ICTs is their use for governance. Many laws have been passed by various countries, including India, in an effort to govern the universal space that is the internet, and surveillance is a major part of that governance and control. The articles listed below deal with the ethical issues and drawbacks of the current legal framework for ICTs.

      § E-Governance and Privacy http://bit.ly/18fiReX

      § Privacy and Governmental Databases http://bit.ly/18fmSy8

      § Killing Internet Softly with its Rules http://bit.ly/1b5I7Z2

      § Cyber Crime & Privacy http://bit.ly/17VTluv

      § Understanding the Right to Information http://bit.ly/1hojKr7

      § Privacy Perspectives on the 2012-2013 Goa Beach Shack Policy http://bit.ly/ThAovQ

      § Identifying Aspects of Privacy in Islamic Law http://cis-india.org/internet-governance/blog/identifying-aspects-of-privacy-in-islamic-law

§ What Does Facebook's Transparency Report Tell Us About the Indian Government's Record on Free Expression & Privacy? http://cis-india.org/internet-governance/blog/what-does-facebook-transparency-report-tell-us-about-indian-government-record-on-free-expression-and-privacy

      § Search and Seizure and the Right to Privacy in the Digital Age: A Comparison of US and India http://cis-india.org/internet-governance/blog/search-and-seizure-and-right-to-privacy-in-digital-age

§ Internet Privacy in India http://cis-india.org/telecom/knowledge-repository-on-internet-access/internet-privacy-in-india

      § Internet-driven Developments - Structural Changes and Tipping Points http://bit.ly/10s8HVH

      § Data Retention in India http://bit.ly/XR791u

      § 2012: Privacy Highlights in India http://bit.ly/1kWe3n7

      § Big Dog is Watching You! The Sci-fi Future of Animal and Insect Drones http://bit.ly/1kWee1W

      7. Whistle-blowers -

Whistle-blowers are always in a difficult position when they reveal the misdeeds of corporations and governments, because of the blowback that may follow if their identity becomes public. As the case of Edward Snowden and many others shows, a whistle-blower's identity must be kept strictly private to avoid the consequences of revealing the information they did. This is the main focus of the article below.

      § The Privacy Rights of Whistle-blowers http://bit.ly/18GWmM3

      8. Cloud and Open Source -

Cloud computing and open source software have grown rapidly over the past few decades. Cloud computing means that an individual or company uses offsite hardware, owned and provided by someone else, on a pay-per-use basis; the advantages include easy access and low upfront costs. Open source software, on the other hand, is software that, despite the innovation it embodies, is made available to the public at no charge. Such software is based on open standards and has the obvious advantage of being compatible with many different set-ups. The following article highlights these computing solutions.

      § Privacy, Free/Open Source, and the Cloud http://bit.ly/1cTmGoI

      9. e-Commerce -

One of the fastest growing applications of the internet is e-commerce, which includes many facets of commerce such as online trading and the stock exchange. In these cases, just as in the financial and banking industries, privacy is very important to protect one's investments and capital. The following article's main focus is the world of e-commerce and its current privacy scenario.

      § Consumer Privacy in e-Commerce http://bit.ly/1dCtgTs

      Security Research

      by Vanya Rakesh last modified Jan 03, 2016 09:55 AM
The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives.
      1. Research on the issue of privacy in different sectors in India.
      2. Monitoring projects, practices, and policies around those sectors.
      3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

State surveillance in India has been carried out by Government agencies for many years. Recent projects include NATGRID, the CMS and NETRA, which aim to overhaul the security and intelligence infrastructure in the country. The purpose of such initiatives has been to maintain national security and to ensure interconnectivity and interoperability between departments and agencies. The structure, regulatory frameworks (or lack thereof), and technologies used in these programmes and projects have attracted criticism.

      Surveillance/Security Research -

      1. Central Monitoring System -

      The Central Monitoring System or CMS is a clandestine mass electronic surveillance data mining program installed by the Center for Development of Telematics (C-DOT), a part of the Indian government. It gives law enforcement agencies centralized access to India's telecommunications network and the ability to listen in on and record mobile, landline, satellite, Voice over Internet Protocol (VoIP) calls along with private e-mails, SMS, MMS. It also gives them the ability to geo-locate individuals via cell phones in real time.

      • The Central Monitoring System: Some Questions to be Raised in Parliament http://bit.ly/1fln2vu

      2. Surveillance Industry : Global And Domestic -

The surveillance industry is a multi-billion-dollar economic sector that tracks individuals and their actions, such as e-mails and texts. Justified by terrorism and governments' attempts to fight it, a network has been created that leaves no one's privacy intact: everything an individual does in the digital world is subject to surveillance. This includes passive snooping, where an individual's phone calls, text messages and e-mails are monitored, as well as more active tracking, where cameras, sensors and other devices follow an individual's movements and actions. This information allows governments to bypass individual privacy in ways widely considered unethical, and the data collected is also vulnerable to cyber-attacks that pose serious risks to privacy and to the individuals themselves. The following set of articles looks into the ethics, risks, vulnerabilities and trade-offs of having a mass surveillance industry in place.

      • Surveillance Technologies http://bit.ly/14pxg74
      • New Standard Operating Procedures for Lawful Interception and Monitoring http://bit.ly/1mRRIo4

      3. Judgements By the Indian Courts -

      The surveillance industry in India has been brought before the court in different cases. The following articles look into the cause of action in these cases along with their impact on India and its citizens.

      4. International Privacy Laws -

      Due to the universality of the internet, many questions of accountability arise and jurisdiction becomes a problem. Therefore certain treaties, agreements and other international legal literature was created to answer these questions. The articles listed below look into the international legal framework which governs the internet.

      5. Indian Surveillance Framework -

The Indian government's mass surveillance systems are configured somewhat differently from those of countries such as the USA and the UK, owing to large differences in existing and required infrastructure. In many respects, India's surveillance regime is considered worse than that of other countries because of the shape of the legal framework currently in place. The articles below explore the system and its functioning, including the various methods through which we are monitored, as well as the ethics and vulnerabilities involved.

      • A Comparison of Indian Legislation to Draft International Principles on Surveillance of Communications http://bit.ly/U6T3xy
      • Surveillance and the Indian Constitution - Part 2: Gobind and the Compelling State Interest Test http://bit.ly/1dH3meL
      • Surveillance and the Indian Constitution - Part 3: The Public/Private Distinction and the Supreme Court's Wrong Turn http://bit.ly/1kBosnw
      • Mastering the Art of Keeping Indians Under Surveillance http://cis-india.org/internet-governance/blog/the-wire-may-30-2015-bhairav-acharya-mastering-the-art-of-keeping-indians-under-surveillance

      UID Research

      by Vanya Rakesh last modified Jan 03, 2016 09:59 AM
The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives.
      1. Researching the vision and implementation of the UID Scheme - both from a technical and regulatory perspective.
      2. Understanding the validity and legality of collection, usage and storage of Biometric information for this scheme.
      3. Raising public awareness around issues concerning privacy, data security and the objectives of the UID Scheme.

      The UID scheme seeks to provide all residents of India an identity number based on their biometrics that can be used to authenticate individuals for the purpose of Government benefits and services. A 2015 Supreme Court ruling has clarified that the UID can only be used in the PDS and LPG Schemes.

Concerns with the scheme include the broad consent taken at the time of enrolment, the lack of clarity as to what happens with transactional metadata, the centralised storage of biometric information in the CIDR, the seeding of the Aadhaar number into service providers' databases, and the possibility of function creep. There are also concerns about the absence of legislation addressing these privacy and security issues.

      UID Research -

1. Ramifications of the Aadhaar and UID schemes -

The UID and Aadhaar systems have been bombarded with criticism and plagued with issues ranging from privacy concerns to security risks. The following articles deal with the many problems and drawbacks of these systems.

      § UID and NPR: Towards Common Ground http://cis-india.org/internet-governance/blog/uid-npr-towards-common-ground

      § Public Statement to Final Draft of UID Bill http://bit.ly/1aGf1NN

      § UID Project in India - Some Possible Ramifications http://cis-india.org/internet-governance/blog/uid-in-india

      § Aadhaar Number vs the Social Security Number http://cis-india.org/internet-governance/blog/aadhaar-vs-social-security-number

      § Feedback to the NIA Bill http://cis-india.org/internet-governance/blog/cis-feedback-to-nia-bill

      § Unique ID System: Pros and Cons http://bit.ly/1jmxbZS

      § Submitted seven open letters to the Parliamentary Finance Committee on the UID covering the following aspects: SCOSTA Standards (http://bit.ly/1hq5Rqd), Centralized Database (http://bit.ly/1hsHJDg), Biometrics (http://bit.ly/196drke), UID Budget (http://bit.ly/1e4c2Op), Operational Design (http://bit.ly/JXR61S), UID and Transactions (http://bit.ly/1gY6B8r), and Deduplication (http://bit.ly/1c9TkSg)

      § Comments on Finance Committee Statements to Open Letters on Unique Identity: The Parliamentary Finance Committee responded to the open letters sent by CIS through an email on 12 October 2011. CIS has commented on the points raised by the Committee: http://bit.ly/1kz4H0F

      § Unique Identification Scheme (UID) & National Population Register (NPR), and Governance http://cis-india.org/internet-governance/blog/uid-and-npr-a-background-note

      § Financial Inclusion and the UID http://cis-india.org/internet-governance/privacy_uidfinancialinclusion

      § The Aadhaar Case http://cis-india.org/internet-governance/blog/the-aadhaar-case

      § Do we need the Aadhaar scheme http://bit.ly/1850wAz

      § 4 Popular Myths about UID http://bit.ly/1bWFoQg

      § Does the UID Reflect India? http://cis-india.org/internet-governance/blog/privacy/uid-reflects-india

      § Would it be a unique identity crisis? http://cis-india.org/news/unique-identity-crisis

      § UID: Nothing to Hide, Nothing to Fear? http://cis-india.org/internet-governance/blog/privacy/uid-nothing-to-hide-fear

      2. Right to Privacy and UID -

The UID system has attracted many privacy concerns from NGOs, private individuals and others. Sharing one's information, especially fingerprints and retinal scans, with a system that is controlled by the government and has not been vetted for adequate security troubles many people. These issues are dealt with in the following articles.

      § India Fears of Privacy Loss Pursue Ambitious ID Project http://cis-india.org/news/india-fears-of-privacy-loss

      § Analysing the Right to Privacy and Dignity with Respect to the UID http://bit.ly/1bWFoQg

      § Analysing the Right to Privacy and Dignity with Respect to the UID http://cis-india.org/internet-governance/blog/privacy/privacy-uiddevaprasad

      § Supreme Court order is a good start, but is seeding necessary? http://cis-india.org/internet-governance/blog/supreme-court-order-is-a-good-start-but-is-seeding-necessary

      § Right to Privacy in Peril http://cis-india.org/internet-governance/blog/right-to-privacy-in-peril

      3. Data Flow in the UID -

      The articles below deal with the manner in which data is moved around and handled in the UID system in India.

      § UIDAI Practices and the Information Technology Act, Section 43A and Subsequent Rules http://cis-india.org/internet-governance/blog/uid-practices-and-it-act-sec-43-a-and-subsequent-rules

      § Data flow in the Unique Identification Scheme of India http://cis-india.org/internet-governance/blog/data-flow-in-unique-identification-scheme-of-india

      CIS's Position on Net Neutrality

      by Sunil Abraham last modified Dec 09, 2015 01:06 PM
Contributors: Pranesh Prakash
      As researchers committed to the principle of pluralism we rarely produce institutional positions. This is also because we tend to update our positions based on research outputs. But the lack of clarity around our position on network neutrality has led some stakeholders to believe that we are advocating for forbearance. Nothing can be farther from the truth. Please see below for the current articulation of our common institutional position.

       

      1. Net Neutrality violations can potentially have multiple categories of harms — competition harms, free speech harms, privacy harms, innovation and ‘generativity’ harms, harms to consumer choice and user freedoms, and diversity harms thanks to unjust discrimination and gatekeeping by Internet service providers.

2. Net Neutrality violations (including those forms of zero-rating that violate net neutrality) can also have different kinds of benefits: enabling the right to freedom of expression and the freedom of association, especially when access to communication and publishing technologies is increased; increased competition [by enabling product differentiation, which can potentially allow small ISPs to compete against market incumbents]; increased access [usually to a subset of the Internet] by those who have no access because they cannot afford it; increased access [usually to a subset of the Internet] by those who do not yet see any value in the Internet; and reduced payments by those who already have access to the Internet, especially if their usage is dominated by certain services and destinations.

      3. Given the magnitude and variety of potential harms, complete forbearance from all regulation is not an option for regulators nor is self-regulation sufficient to address all the harms emerging from Net Neutrality violations, since incumbent telecom companies cannot be trusted to effectively self-regulate. Therefore, CIS calls for the immediate formulation of Net Neutrality regulation by the telecom regulator [TRAI] and the notification thereof by the government [Department of Telecom of the Ministry of Information and Communication Technology]. CIS also calls for the eventual enactment of statutory law on Net Neutrality.  All such policy must be developed in a transparent fashion after proper consultation with all relevant stakeholders, and after giving citizens an opportunity to comment on draft regulations.

4. Even though some of these harms may be large, CIS believes that a government cannot apply the precautionary principle in the case of Net Neutrality violations. Banning technical innovations and business model innovations is not an appropriate policy option. The regulation must toe a careful line to solve the optimization problem: refraining from over-regulation of ISPs, which would harm innovation at the carrier level (and the benefits of net neutrality violations mentioned above), while preventing ISPs from harming innovation and user choice. ISPs must be regulated to limit harms from unjust discrimination towards consumers as well as harms from unjust discrimination towards the services they carry on their networks.

5. Based on regulatory theory, we believe that the ideal regulatory framework is one that is technologically neutral, that factors in differences in technological context as well as market realities and existing regulation, and that is able to respond to new evidence.

  This means that we need a framework that has some bright-line rules, but which allows for flexibility in determining the scope of exceptions and in the application of the rules. Candidate principles to be embodied in the regulation include: transparency, non-exclusivity, and limiting unjust discrimination.

6. The harms emerging from walled gardens can be mitigated in a number of ways. On zero-rating, the form of regulation must depend on the specific model and the potential harms that result from that model. Zero-rating can be: paid for by the end consumer, or subsidised by ISPs, content providers or government, or a combination of these; deal-based, criteria-based or government-imposed; ISP-imposed, or offered by the ISP and chosen by consumers; transparent and understood by consumers, or non-transparent; based on content-type or agnostic to content-type; service-specific, service-class/protocol-specific or service-agnostic; and available on one ISP or on all ISPs (these dimensions are summarised in the sketch after this list). Zero-rating by a small ISP with 2% penetration will not have the same harms as zero-rating by the largest incumbent ISP. For service-agnostic / content-type-agnostic zero-rating, which Mozilla terms 'equal rating', CIS advocates no regulation.

      7. CIS believes that Net Neutrality regulation for mobile and fixed-line access must be different recognizing the fundamental differences in technologies.

8. On specialized services, CIS believes that there should be logical separation, and that all details of such specialized services and their impact on the Internet must be made transparent to consumers (both individual and institutional), to the general public and to the regulator. Further, such services should be available to the user only upon request and with their active choice, subject to the requirement that either the service cannot reasonably be provided with the 'best efforts' delivery available over the Internet, and hence requires discriminatory treatment, or that the discriminatory treatment does not unduly harm the provision of the rest of the Internet to other customers.

      9. On incentives for telecom operators, CIS believes that the government should consider different models such as waiving contribution to the Universal Service Obligation Fund for prepaid consumers, and freeing up additional spectrum for telecom use without royalty using a shared spectrum paradigm, as well as freeing up more spectrum for use without a licence.

      10. On reasonable network management CIS still does not have a common institutional position.
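Point 6 above lists several dimensions along which zero-rating models differ. As a purely illustrative aid, the Python sketch below (referenced in point 6) organises those dimensions into a simple schema; the class, field and enum names are invented for this note and do not come from any regulation, consultation paper or CIS filing.

# Illustrative schema only: one way to organise the zero-rating dimensions
# listed in point 6. All names here are invented for this note.
from dataclasses import dataclass
from enum import Enum
from typing import Tuple


class PaidBy(Enum):
    CONSUMER = "end consumer"
    ISP = "ISP"
    CONTENT_PROVIDER = "content provider"
    GOVERNMENT = "government"


class Basis(Enum):
    DEAL_BASED = "deal-based"
    CRITERIA_BASED = "criteria-based"
    GOVERNMENT_IMPOSED = "government-imposed"


class Scope(Enum):
    SERVICE_SPECIFIC = "service-specific"
    SERVICE_CLASS = "service-class/protocol-specific"
    SERVICE_AGNOSTIC = "service-agnostic"


@dataclass
class ZeroRatingModel:
    paid_by: Tuple[PaidBy, ...]        # may be a combination of payers
    basis: Basis
    chosen_by_consumer: bool           # offered and chosen vs. ISP-imposed
    transparent_to_consumer: bool
    content_type_agnostic: bool
    scope: Scope
    available_on_all_isps: bool
    isp_market_share_percent: float    # harms differ at 2% vs. a dominant ISP


# One plausible characterisation of 'equal rating' (service- and
# content-type-agnostic zero-rating), for which CIS advocates no regulation.
equal_rating = ZeroRatingModel(
    paid_by=(PaidBy.ISP,),
    basis=Basis.CRITERIA_BASED,
    chosen_by_consumer=True,
    transparent_to_consumer=True,
    content_type_agnostic=True,
    scope=Scope.SERVICE_AGNOSTIC,
    available_on_all_isps=False,
    isp_market_share_percent=2.0,
)
print(equal_rating.scope.value)  # service-agnostic

The point of such a schema is only that the appropriate regulatory response depends on where a given offer sits along these dimensions, as point 6 argues; it is not a proposal for how regulation should be drafted.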

      Smart Cities in India: An Overview

      by Vanya Rakesh last modified Jan 11, 2016 01:30 AM
      The Government of India is in the process of developing 100 smart cities in India which it sees as the key to the country's economic and social growth. This blog post gives an overview of the Smart Cities project currently underway in India. The smart cities mission in India is at a nascent stage and an evolving area for research. The Centre for Internet and Society will continue work in this area.

      Overview of the 100 Smart Cities Mission

The Government of India announced its flagship programme, the 100 Smart Cities Mission, in 2014, and the Mission was launched in June 2015 to achieve urban transformation, drive economic growth and improve people's quality of life by enabling local area development and harnessing technology. Initially, the Mission aims to cover 100 cities across the country (shortlisted on the basis of a Smart Cities Proposal prepared by every city), and its duration will be five years (FY 2015-16 to FY 2019-20). The Mission may be continued thereafter in the light of an evaluation by the Ministry of Urban Development (MoUD) and the incorporation of its learnings. The Mission focuses on area-based development, in the form of redevelopment of existing spaces or the development of new (greenfield) areas to accommodate the growing urban population, and on comprehensive planning to improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged.[1] On 27 August 2015 the Centre unveiled the 98 smart cities across India selected for this project. Across the selected cities, a population of about 13 crore (35% of the urban population) will be included in the development plans.[2] The vision is to preserve India's traditional architecture, culture and ethnicity while implementing modern technology to make cities liveable, use resources sustainably and create an inclusive environment.[3]

      The promises of the Smart City mission include reduction of carbon footprint, adequate water and electricity supply, proper sanitation, including solid waste management, efficient urban mobility and public transport, affordable housing, robust IT connectivity and digitalization, good governance, citizen participation, security of citizens, health and education.

      Questions unanswered

      • Why and How was the Smart Cities project conceptualized in India? What was the need for such a project in India?
      • What was the role of the public/citizens at the ideation and conceptualization stage of the project?
      • Which actors from the Government, Private industry and the civil society are involved in this mission? Though the smart cities mission has been initiated by the Government of India under the Ministry of Urban Development, there is no clarity about the involvement of the associated offices and departments of the Ministry.

      How are the Smart Cities being selected?

The 100 cities were to be selected through the Smart Cities Challenge,[4] which involves two stages. Stage I involved intra-state city selection on objective criteria to identify cities to compete in Stage II. In August 2015, the Ministry of Urban Development announced the 98 shortlisted smart cities,[5] evaluated on parameters such as service levels, financial and institutional capacity, and past track record. The shortlisted cities are now competing in the second stage of the challenge, an all-India competition. For this crucial stage, each potential smart city is required to prepare a Smart City Proposal (SCP) stating the model chosen (retrofitting, redevelopment, greenfield development or a mix), along with a pan-city dimension with smart solutions. The proposal must also include suggestions collected through consultations with city residents and other stakeholders, as well as a financing plan for the smart city, including the revenue model to attract private participation. The country saw wide citizen participation in voicing aspirations and concerns regarding the smart cities. 15 December 2015 was declared the deadline for submission of the SCPs, which must be in consonance with evaluation criteria set by the MoUD on the basis of professional advice.[6] On this basis, 20 cities will be selected for the first year. According to the latest reports, the Centre is planning to fund only 10 cities in the first phase if the proposals sent by the states do not meet the expected quality standards or if complete area-development plans are not submitted by the 15 December 2015 deadline.[7]

      Questions unanswered

      • Who would be undertaking the task of evaluating and selecting the cities for this project?
      • What are the criteria for selection of a city to qualify in the first 20 (or 10, depending on the Central Government) for the first phase of implementation?

      How are the smart cities going to be Funded?

The Smart City Mission will be operated as a Centrally Sponsored Scheme (CSS), and the Central Government proposes to give financial support to the extent of Rs. 48,000 crore over five years, i.e. an average of Rs. 100 crore per city per year.[8] Additional resources will have to be mobilised by the States/ULBs from external and internal sources. According to the scheme, once the list of shortlisted smart cities is finalised, Rs. 2 crore will be disbursed to each city for proposal preparation.[9]

According to estimates of the Central Government, around Rs. 4 lakh crore of funds will be infused, mainly through private investments and loans from multilateral institutions among other sources, which accounts for about 80% of the total spending on the Mission.[10] For this purpose, the Government will approach the World Bank and the Asian Development Bank (ADB) for loans of £500 million and £1 billion respectively for 2015-20. If ADB approves the loan, it will be the bank's highest funding to India's urban sector so far.[11] Foreign Direct Investment regulations have been relaxed to invite foreign capital into the Smart City Mission.[12]

      Questions unanswered

• The Government's note on financing of the project mentions PPPs for private funding and the leveraging of resources from internal and external sources. There is a lack of clarity on the external resources the Government has approached or will approach, and on the varied PPP agreements the Government is entering into, or plans to enter into, for private investment in the smart cities.

      How is the scheme being implemented?

Under this scheme, each city is required to establish a Special Purpose Vehicle (SPV) with flexibility regarding planning, implementation, management and operations. The body will be headed by a full-time CEO, with nominees of the Central Government, the State Government and the ULB on its Board. The SPV will be a limited company incorporated under the Companies Act, 2013 at the city level, in which the State/UT and the Urban Local Body (ULB) will be the promoters, with equity shareholding in the ratio 50:50. The private sector or financial institutions could be considered for an equity stake in the SPV, provided the 50:50 shareholding pattern of the State/UT and the ULB is maintained and the State/UT and the ULB together retain majority shareholding and control of the SPV. Funds provided by the Government of India under the Smart Cities Mission will be given to the SPV as a tied grant and kept in a separate Grant Fund.[13]

      For the purpose of implementation and monitoring of the projects, the MoUD has also established an Apex Committee and National Mission Directorate for National Level Monitoring[14], a State Level High Powered Steering Committee (HPSC) for State Level Monitoring[15] and a Smart City Advisory Forum at the City Level [16].

      Also, several consulting firms[17] have been assigned to the 100 cities to help them prepare action plans.[18] Some of them include CRISIL, KPMG, McKinsey, etc. [19]

      Questions unanswered

      • What policies and regulations have been put in place to account for the smart cities, apart from policies looking at issues of security, privacy, etc.?
• What international/national standards will be adopted during the development of the smart cities? Though the Bureau of Indian Standards is in the process of formulating standardised guidelines for smart cities in India,[20] there is a lack of clarity on the adoption of these national standards, as well as on the role of international standards such as those formulated by ISO.

      What is the role of Foreign Governments and bodies in the Smart cities mission?

Ever since the government announced this ambitious project and shortlisted the cities, many countries across the globe have shown keen interest in helping specific shortlisted cities become smart cities and are willing to invest financially. Countries such as Sweden, Malaysia, the UAE and the USA have agreed to partner with India for the Mission.[21] For example, the UK has partnered with the Government to develop three Indian cities: Pune, Amravati and Indore.[22] Israel's start-up city Tel Aviv has also entered into an agreement to help with urban transformation in Pune, Nagpur and Nashik, to foster innovation and share its technical know-how.[23] France has expressed interest in Nagpur and Puducherry, while the United States is interested in Ajmer, Vizag and Allahabad. Spain's Barcelona Regional Agency has expressed interest in exchanging technology with Delhi. Apart from foreign governments, many organisations and multilateral agencies are also keen to partner with the Indian government and have offered financial assistance by way of loans. These include the UK government-owned Department for International Development, the German government's KfW development bank, the Japan International Cooperation Agency, the US Trade and Development Agency, the United Nations Industrial Development Organization and the United Nations Human Settlements Programme.[24]

      Questions unanswered

• Do these governments or organisations have influence on any other component of the smart cities?
      • How much are the foreign governments and multilateral bodies spending on the respective cities?
      • What kind of technical know-how is being shared with the Indian government and cities?

      What is the way ahead?

On the basis of the SCPs, the MoUD will evaluate the proposals, assess their credibility and select 20 smart cities out of the shortlisted ones for execution of the plan in the first phase. Each selected city will set up an SPV and receive funding from the Government.

      Questions unanswered

      • Will the deadline of submission of the Smart Cities Proposal be pushed back?
      • After the SCP is submitted on the basis of consultation with the citizens and public, will they be further involved in the implementation of the project and what will be their role?
• How will the MoUD and other associated organisations and actors address the implementation realities of the project, such as land displacement and the rehabilitation of slum dwellers?
      • How are ICT based systems going to be utilized to make the cities and the infrastructure "smart"?
      • How is the MoUD going to respond to the concerns and criticism emerging from various sections of the society, as being reflected in the news items?
      • How will the smart cities impact and integrate the existing laws, regulations and policies? Does the Government intend to use the existing legislations in entirety, or update and amend the laws for implementation of the Smart Cities Mission?


      [1] Smart Cities, Mission Statement and Guidelines, Ministry of Urban Development, Government of India, June 2015, Available at : http://smartcities.gov.in/writereaddata/SmartCityGuidelines.pdf

      [2] http://articles.economictimes.indiatimes.com/2015-08-27/news/65929187_1_jammu-and-kashmir-12-cities-urban-development-venkaiah-naidu

      [3] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

      [4] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

      [5] Full list : http://www.scribd.com/doc/276467963/Smart-Cities-Full-List

      [6] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

      [7] http://www.ibtimes.co.in/modi-govt-select-only-10-cities-under-smart-city-project-this-year-report-658888

      [8] http://smartcities.gov.in/writereaddata/Financing%20of%20Smart%20Cities.pdf

      [9] Smart Cities presentation by MoUD : http://smartcities.gov.in/writereaddata/Presentation%20on%20Smart%20Cities%20Mission.pdf

      [10] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/

      [11] http://indianexpress.com/article/india/india-others/funding-for-smart-cities-key-to-coffer-lies-outside-india/#sthash.5lnW9Jsq.dpuf

      [12] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

      [13] http://smartcities.gov.in/writereaddata/SPVs.pdf

      [14] http://smartcities.gov.in/writereaddata/National%20Level%20Monitoring.pdf

      [15] http://smartcities.gov.in/writereaddata/State%20Level%20Monitoring.pdf

      [16] http://smartcities.gov.in/writereaddata/City%20Level%20Monitoring.pdf

      [17] http://smartcities.gov.in/writereaddata/List_of_Consulting_Firms.pdf

      [18] http://pib.nic.in/newsite/PrintRelease.aspx?relid=128457

      [20] http://www.business-standard.com/article/economy-policy/in-a-first-bis-to-come-up-with-standards-for-smart-cities-115060400931_1.html

      [21] http://accommodationtimes.com/foreign-countries-have-keen-interest-in-development-of-smart-cities/

      [22] http://articles.economictimes.indiatimes.com/2015-11-20/news/68440402_1_uk-trade-three-smart-cities-british-deputy-high-commissioner

      [23] http://www.jpost.com/Business-and-Innovation/Tech/Tel-Aviv-to-help-India-build-smart-cities-435161?utm_campaign=shareaholic&utm_medium=twitter&utm_source=socialnetwork

      [24] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/#sthash.nCMxEKkc.dpuf

      ISO/IEC JTC 1/SC 27 Working Groups Meeting, Jaipur

      by Vanya Rakesh last modified Dec 21, 2015 02:38 AM
      I attended this event held from October 26 to 30, 2015 in Jaipur.

      The Bureau of Indian Standards (BIS), in collaboration with the Data Security Council of India (DSCI), hosted the global standards meeting – the ISO/IEC JTC 1/SC 27 Working Groups Meeting – at the Hotel Marriott in Jaipur, Rajasthan, from 26 to 30 October 2015, followed by a half-day conference on Friday, 30 October on the importance of standards in the domain. The event brought together experts from across the globe to deliberate on new international standards on privacy, security and risk management in IoT, cloud computing and other contemporary technologies, and to update existing standards. Under SC 27, five working groups met in parallel on their respective projects and study periods. The five working groups are as follows:

      1. WG 1: Information Security Management Systems;
      2. WG 2: Cryptography and Security Mechanisms;
      3. WG 3: Security Evaluation, Testing and Specification;
      4. WG 4: Security Controls and Services; and
      5. WG 5: Identity Management and Privacy Technologies.

      This key set of Working Groups (WGs) met in India for the first time. Professionals discussed and debated the development of international standards under each working group to address issues of security, identity management and privacy.

      CIS had the opportunity to attend meetings under Working Group 5. This group, in turn, held parallel meetings on several topics, namely:

      • Privacy-enhancing data de-identification techniques, ISO/IEC NWIP 20889: De-identification techniques matter for PII because they allow the benefits of data processing to be exploited while maintaining compliance with regulatory requirements and the relevant ISO/IEC 29100 privacy principles. The selection, design, use and assessment of these techniques need to be performed appropriately in order to effectively address the risks of re-identification in a given context. There is thus a need to classify known de-identification techniques using standardised terminology and to describe their characteristics, including the underlying technologies, the applicability of each technique to reducing the risk of re-identification, and the usability of the de-identified data. This is the main goal of this International Standard (a small illustrative sketch follows this list). Meetings were held to resolve comments sent by organisations across the world, review draft documents and agree on next steps.
      • A study period on a Privacy Engineering Framework: This session deliberated on contributions and terms of reference, and discussed the scope of the emerging field of privacy engineering. The session also reviewed important terms to be included in the standard and identified possible improvements to existing privacy impact assessment and management standards. The goal of this standard is to integrate privacy into systems as part of the systems engineering process. Another concern raised was that the framework must be consistent with the privacy framework under ISO/IEC 29100 and the HL7 privacy and security standards.
      • A study period on user-friendly online privacy notices and consent: The purpose of this New Work Item Proposal is to assess the viability of producing, within WG 5, a guideline for PII controllers on providing easy-to-understand notices and consent procedures to PII principals. At the meeting, a brief overview of the contributions received was given, along with an assessment of liaison with ISO/IEC JTC 1/SC 35 and other entities. The proposed International Standard gives guidelines for the content and structure of online privacy notices, as well as of documents asking for consent to collect and process personally identifiable information (PII) from PII principals online, and is applicable to all situations where a PII controller or any other entity processing PII informs PII principals in any online context.
      • Other sessions under Working Group 5 covered Privacy Impact Assessment (ISO/IEC 29134), standardisation in the area of biometrics and biometric information protection, the Code of Practice for the protection of personally identifiable information, and related topics.
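      To make the class of techniques discussed under ISO/IEC NWIP 20889 concrete, the sketch below shows two common de-identification operations – pseudonymisation of a direct identifier and generalisation of a quasi-identifier – applied to a single record, with a third field suppressed outright. It is a minimal Python illustration using a hypothetical record layout and salt; it is not drawn from the draft standard itself, and a real deployment would need a properly protected key and a context-specific re-identification risk assessment.

# Minimal illustration of de-identification techniques of the kind ISO/IEC 20889
# aims to classify. The record layout, field names and SALT are hypothetical.
import hashlib

SALT = "example-salt"  # hypothetical; a real salt/key must be kept secret


def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted hash (a pseudonym)."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:12]


def generalise_age(age: int, band: int = 10) -> str:
    """Reduce the precision of a quasi-identifier by banding it."""
    lower = (age // band) * band
    return f"{lower}-{lower + band - 1}"


def de_identify(record: dict) -> dict:
    """Apply pseudonymisation, generalisation and suppression to one record."""
    return {
        "user_id": pseudonymise(record["name"]),    # direct identifier -> pseudonym
        "age_band": generalise_age(record["age"]),  # quasi-identifier -> coarser value
        "city": record["city"],                     # retained for analytic utility
        # phone number suppressed entirely: not needed for the analysis
    }


if __name__ == "__main__":
    sample = {"name": "A. Sharma", "age": 34, "city": "Jaipur", "phone": "98xxxxxxxx"}
    print(de_identify(sample))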

      ISO/IEC JTC 1/SC 27 is the subcommittee on IT security techniques of ISO/IEC JTC 1, the joint technical committee of the international standards bodies ISO and IEC, and it meets regularly around the world. JTC 1 has over 2,600 published standards developed under the broad umbrella of the committee and its 20 subcommittees. Draft International Standards adopted by the joint technical committee are circulated to the national bodies for voting; publication as an International Standard requires at least 75% of the national bodies casting a vote to vote in favour. In India, the Bureau of Indian Standards (BIS) is the national standards body. Its standards are formulated keeping in view national priorities, industrial development, technical needs, export promotion, health, safety, etc., and are harmonised with ISO/IEC standards (wherever they exist) to the extent possible, in order to facilitate adoption of ISO/IEC standards by all segments of industry and business. BIS participates actively in the technical committee work of ISO/IEC: it is currently a participating member in 417 ISO and 74 IEC technical committees/subcommittees, an observer member in 248 ISO and 79 IEC technical committees/subcommittees, and holds the secretariat of 2 technical committees and 6 subcommittees of ISO.
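      As a small illustration of the approval arithmetic described above, the sketch below checks whether a ballot clears the 75% threshold among national bodies that actually cast a vote. The function name and the treatment of abstentions (simply excluded from the count) are assumptions made for illustration; the full ISO/IEC ballot rules contain further conditions not covered in this note.

# Sketch of the 75% approval rule described in the text (illustrative only).
def ballot_passes(votes_for: int, votes_against: int) -> bool:
    """Return True if at least 75% of the votes cast are in favour."""
    cast = votes_for + votes_against  # abstentions are not counted as votes cast
    if cast == 0:
        return False
    return votes_for / cast >= 0.75


if __name__ == "__main__":
    print(ballot_passes(30, 10))  # 75.0% in favour -> True
    print(ballot_passes(29, 11))  # 72.5% in favour -> False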

      The previous meeting was held in May 2015 in Malaysia, followed by this meeting in Jaipur in October 2015. Fifty-one countries, including India, take an active role as participating members, while a number of others attend as observing members. Participating countries have the right to vote in all official ballots related to standards, and their representatives work on the preparation and development of the International Standards and provide feedback to their national organisations.

      There was an additional study group meeting on IoT to discuss comments on the previous drafts, suggest changes, review responses and identify gaps in existing SC 27 standards.

      On 30 October 2015, BIS and DSCI hosted a half-day international conference on Cyber Security and Privacy Standards, comprising keynotes and panel discussions that brought together national and international experts to share experience and exchange views on cyber security techniques, the protection of data and privacy in international standards, and their growing importance in society. The conference covered themes such as the role of standards in smart cities and responding to the challenges of investigating cyber crimes through standards. Speakers emphasised that, as the world becomes increasingly digital and its infrastructure globally connected, cyber threats are likewise distributed and not restricted by geographical boundaries, so there is broad agreement on the need for cyber security. The need for technical and policy solutions, along with standards, was therefore highlighted for the future protection of a digital world that is now deeply embedded in everyday life, business and government. Standards will help set up crucial infrastructure for data security and build associated infrastructure along these lines.

      The importance of standards was also highlighted in the context of smart cities, where experts discussed the need for standards. Harmonisation of regulations with standards must be considered, primarily by creating standards that regulators can refer to. Broadly, the challenges faced by smart cities are data security, privacy and the digital resilience of infrastructure, and it was suggested that standards development for smart cities should begin with these areas. ISO/IEC also has a working group and a strategic group focusing on smart cities. The risks of digitisation, networks, identity management, etc. must be examined in order to create the standards.

      The next meeting has been scheduled for April 2016 in Tampa (USA).

      This meeting was a good opportunity to interact with experts from various parts of the world and to understand the working of ISO meetings, which are held two or three times a year. The Centre for Internet and Society will continue this work and become more involved in the standard-setting process at future working group meetings.
