Comments on the Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10)

by Geetha Hariharan last modified Oct 16, 2015 02:44 AM
On 9 October 2015, the Zero Draft of the UN General Assembly's Overall Review of implementation of WSIS Outcomes was released. Comments were sought on the Zero Draft from diverse stakeholders. The Centre for Internet & Society's response to the call for comments is below.

These comments were prepared by Geetha Hariharan with inputs from Sumandro Chattapadhyay, Pranesh Prakash, Sunil Abraham, Japreet Grewal and Nehaa Chaudhari. Download the comments here.


  1. The Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (“Zero Draft”) is divided into three sections: (A) ICT for Development; (B) Internet Governance; (C) Implementation and Follow-up. CIS’ comments follow the same structure.
  2. The Zero Draft is a commendable document, covering crucial areas of growth and challenges surrounding the WSIS. The Zero Draft makes detailed references to development-related challenges, noting the persistent digital divide, the importance of universal access, innovation and investment, and of enabling legal and regulatory environments conducive to the same. It also takes note of financial mechanisms, without which principles would remain toothless. Issues surrounding Internet governance, particularly net neutrality, privacy and the continuation of the IGF are included in the Zero Draft.
  3. However, we believe that references to these issues are inadequate to make progress on existing challenges. Issues surrounding ICT for Development and Internet Governance have scarcely changed in the past ten years. Though we may laud the progress so far achieved, universal access and connectivity, the digital divide, insufficient funding, diverse and conflicting legal systems surrounding the Internet, the gender divide and online harassment persist. Moreover, the working of the IGF and the process of Enhanced Cooperation, both laid down with great anticipation in the Tunis Agenda, have been found wanting.
  4. These need to be addressed more clearly and strongly in the Zero Draft. In light of these shortcomings, we suggest the following changes to the Zero Draft, in the hope that they are accepted.
    A. ICT for Development
  5. Paragraphs 16-21 elaborate upon the digital divide – both the progress made and the challenges that remain. While the Zero Draft recognizes the disparities in access to the Internet among countries, between men and women, and in the languages of Internet content, it fails to attend to two issues.
  6. First, accessibility for persons with disabilities continues to be an immense challenge. Since the mandate of the WSIS involves universal access and the bridging of the digital divide, it is necessary that the Zero Draft take note of this continuing challenge.
  7. We suggest the insertion of Para 20A after Para 20:
    “20A. We draw attention also to the digital divide adversely affecting the accessibility of persons with disabilities. We call on all stakeholders to take immediate measures to ensure accessibility for persons with disabilities by 2020, and to enhance their capacity and access to ICTs.”
  8. Second, while the digital divide among the consumers of ICTs has decreased since 2003-2005, the digital production divide goes unmentioned. The developing world continues to have fewer producers of technology compared to their sheer concentration in the developed world – so much so that countries like India are currently pushing for foreign investment through missions like ‘Digital India’. Of course, the Zero Draft refers to the importance of private sector investment (Para 31). But it fails to point out that currently, such investment originates from corporations in the developed world. For this digital production divide to disappear, restrictions on innovation – restrictive patent or copyright regimes, for instance – should be removed, among other measures. Equitable development is the key.
  9. Ongoing negotiations of plurilateral agreements such as the Trans-Pacific Partnership (TPP) go unmentioned in the Zero Draft. This is shocking. The TPP has been criticized for its excessive leeway and support for IP rightsholders, while incorporating non-binding commitments involving the rights of users (see Clause QQ.G.17 on copyright exceptions and limitations, QQ.H.4 on damages and QQ.C.12 on ccTLD WHOIS, https://wikileaks.org/tpp-ip3/WikiLeaks-TPP-IP-Chapter/WikiLeaks-TPP-IP-Chapter-051015.pdf). Plaudits for progress made on the digital divide would be lip service if such agreements were not denounced.
  10. Therefore, we propose the addition of Para 20B after Para 20:
    “20B. We draw attention also to the digital production divide among countries, recognizing that domestic innovation and production are instrumental in achieving universal connectivity. Taking note of recent negotiations surrounding restrictive and unbalanced plurilateral trade agreements, we call on stakeholders to adopt policies to ensure globally equitable development, removing restrictions on innovation and conducive to fostering domestic and local production.”
  11. Paragraph 22 of the Zero Draft acknowledges that “school curriculum requirements for ICT, open access to data and free flow of information, fostering of competition, access to finance”, etc. have “in many countries, facilitated significant gains in connectivity and sustainable development”.
  12. This is, of course, true. However, as Para 23 also recognises, access to knowledge, data and innovation has come at a large cost, particularly for developing countries like India. These costs are heightened by the lack of promotion and adoption of open standards, open access, open educational resources, open data (including open government data), and other free and open source practices. Wider adoption of these practices can help alleviate costs, reduce duplication of effort, and provide an impetus to innovation and connectivity globally.
  13. Not only this, but the implications of open access to data and knowledge (including open government data), and responsible collection and dissemination of data are much larger in light of the importance of ICTs in today’s world. As Para 7 of the Zero Draft indicates, ICTs are now becoming an indicator of development itself, as well as being a key facilitator for achieving other developmental goals. As Para 56 of the Zero Draft recognizes, in order to measure the impact of ICTs on the ground – undoubtedly within the mandate of WSIS – it is necessary that there be an enabling environment to collect and analyse reliable data. Efforts towards the same have already been undertaken by the United Nations in the form of “Data Revolution for Sustainable Development”. In this light, the Zero Draft rightly calls for enhancement of regional, national and local capacity to collect and conduct analyses of development and ICT statistics (Para 56). Achieving the central goals of the WSIS process requires that such data is collected and disseminated under open standards and open licenses, leading to creation of global open data on the ICT indicators concerned.
  14. As such, we suggest that the following clause be inserted as Para 23A in the Zero Draft:

“23A. We recognize the importance of access to open, affordable, and reliable technologies and services, open access to knowledge, and open data, including open government data, and encourage all stakeholders to explore concrete options to facilitate the same.”

15. Paragraph 30 of the Zero Draft laments “the lack of progress on the Digital Solidarity Fund”, and calls “for a review of options for its future”.

16. The Digital Solidarity Fund was established with the objective of “transforming the digital divide into digital opportunities for the developing world” through voluntary contributions [Para 28, Tunis Agenda]. It was an innovative financial mechanism to help bridge the digital divide between developed and developing countries. This divide continues to exist, as the Zero Draft itself recognizes in Paragraphs 16-21.

17. Given the persistent digital divide, a “call for review of options” as to the future of the Digital Solidarity Fund is inadequate to enable developing countries to achieve parity with developed countries. A stronger and more definite commitment is required.

18. As such, we suggest the following language in place of the current Para 30:

“30. We express concern at the lack of progress on the Digital Solidarity Fund, welcomed in Tunis as an innovative financial mechanism of a voluntary nature, and we call for voluntary commitments from States to revive and sustain the Digital Solidarity Fund.”

19. Paragraph 31 of the Zero Draft recognizes the importance of “legal and regulatory frameworks conducive to investment and innovation”. This is eminently laudable. However, a broader vision is needed to pave the way for affordable and widespread access to the devices and technology necessary for universal connectivity.

20. We suggest the following additions to Para 31:

“31. We recognise the critical importance of private sector investment in ICT access, content and services, and of legal and regulatory frameworks conducive to local investment and expansive, permissionless innovation.”

B. Internet Governance

21. Paragraph 32 of the Zero Draft recognizes the “general agreement that the governance of the Internet should be open, inclusive, and transparent”. Para 37 takes into account “the report of the CSTD Working Group on improvements to the IGF”. Para 37 also affirms the intention of the General Assembly to extend the life of the IGF by (at least) another 5 years, and acknowledges the “unique role of the IGF”.

22. The IGF is, of course, unique and crucial to global Internet governance. In the last 10 years, major strides have been made among diverse stakeholders in beginning and sustaining conversations on issues critical to Internet governance. These include issues such as human rights, inclusiveness and diversity, universal access to connectivity, emerging issues such as net neutrality, the right to be forgotten, and several others. Through its many arms like the Dynamic Coalitions, the Best Practices Forums, Birds-of-a-Feather meetings and Workshops, the IGF has made it possible for stakeholders to connect.

23. However, the constitution and functioning of the IGF have not been without lament and controversy. Foremost among the laments was the IGF’s evident lack of outcome-orientation; this continues to be debatable. Second, the composition and functioning of the MAG, particularly its transparency, have come under the microscope several times. One of the suggestions of the CSTD Working Group on Improvements to the IGF concerned the structure and working methods of the Multistakeholder Advisory Group (MAG). The Working Group recommended that the “process of selection of MAG members should be inclusive, predictable, transparent and fully documented” (Section II.2, Clause 21(a), Page 5 of the Report).

24. Transparency in the structure and working methods of the MAG is critical to the credibility and impact of the IGF. The functioning of the IGF depends, in a large part, on the MAG. The UN Secretary General established the MAG, and it advises the Secretary General on the programme and schedule of the IGF meetings each year (see <http://www.intgovforum.org/cms/mag/44-about-the-mag>). Under its Terms of Reference, the MAG decides the main themes and sub-themes for each IGF, sets or modifies the rules of engagement, organizes the main plenary sessions, coordinates workshop panels and speakers, and crucially, evaluates the many submissions it receives to choose from amongst them the workshops for each IGF meeting. The content of each IGF, then, is in the hands of the MAG.

25. But the MAG is not inclusive or transparent. The MAG itself has lamented its opaque ‘black box approach’ to nomination and selection. Also, CIS’ research has shown that the process of nomination and selection of the MAG continues to be opaque. When CIS sought information on the nominators of the MAG, the IGF Secretariat responded that this information would not be made public (see <http://cis-india.org/internet-governance/blog/mag-analysis>).

26. Further, our analysis of MAG membership shows that since 2006, 26 persons have served for 6 years or more on the MAG. This is astounding, since under the MAG Terms of Reference, MAG members are nominated for a term of 1 year. This 1-year term is “automatically renewable for 2 more consecutive years”, but such renewal is contingent on an evaluation of the engagement of MAG members in their activities (see <http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference>). MAG members ought not to serve for more than 3 consecutive years, in accordance with their Terms of Reference. But out of 182 MAG members, around 62 have served beyond the 3-year maximum designated by the Terms of Reference (see <http://cis-india.org/internet-governance/blog/mag-analysis>).
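The tenure count described above amounts to a simple tally over membership records. As an illustrative sketch only (the roster below is made up, not CIS's actual dataset or code):

```python
# Illustrative sketch of the tenure tally described above, using made-up
# member records rather than the real MAG roster.

MAX_CONSECUTIVE_YEARS = 3  # 1-year term, automatically renewable for 2 more

def over_term_members(roster):
    """Return members whose total years served exceed the 3-year cap."""
    return [name for name, years in roster.items()
            if len(years) > MAX_CONSECUTIVE_YEARS]

# Hypothetical roster: member name -> set of years in which they served.
roster = {
    "Member A": {2006, 2007, 2008, 2009, 2010, 2011},  # 6 years
    "Member B": {2012, 2013},                          # 2 years
    "Member C": {2010, 2011, 2012, 2013},              # 4 years
}

print(over_term_members(roster))  # Members A and C exceed the cap
```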

27. Not only this, but our research showed 36% of all MAG members since 2006 have hailed from the Western European and Others Group (see <http://cis-india.org/internet-governance/blog/mag-analysis>). This indicates a lack of inclusiveness, though the MAG is certainly more inclusive than the composition and functioning of other I-Star organisations such as ICANN.

28. Tackling these infirmities within the MAG would go a long way in ensuring that the IGF lives up to its purpose. Therefore, we suggest the following additions to Para 37:

“37. We acknowledge the unique role of the Internet Governance Forum (IGF) as a multistakeholder platform for discussion of Internet governance issues, and take note of the report and recommendations of the CSTD Working Group on improvements to the IGF, which was approved by the General Assembly in its resolution, and ongoing work to implement the findings of that report. We reaffirm the principles of openness, inclusiveness and transparency in the constitution, organisation and functioning of the IGF, and in particular, in the nomination and selection of the Multistakeholder Advisory Group (MAG). We extend the IGF mandate for another five years with its current mandate as set out in paragraph 72 of the Tunis Agenda for the Information Society. We recognize that, at the end of this period, progress must be made on Forum outcomes and participation of relevant stakeholders from developing countries.”

29. Paragraphs 32-37 of the Zero Draft make mention of “open, inclusive, and transparent” governance of the Internet. They fail, however, to take note of the lack of inclusiveness and diversity in Internet governance organisations – a lack extending across the representation, participation and operations of these organisations. In many cases, mention of inclusiveness and diversity becomes tokenism or a formal (but not operational) principle. In substantive terms, the developing world is pitifully represented in standards organisations and in ICANN, and policy discussions in organisations like ISOC occur largely in cities like Geneva and New York. For example, the ‘diversity’ mailing list of the IETF has very low traffic. Within ICANN, 307 out of 672 registries listed in ICANN’s registry directory are based in the United States, while 624 of the 1010 ICANN-accredited registrars are US-based. Not only this, but 80% of the respondents to the ICG’s call for proposals were male. A truly global, open, inclusive and transparent governance of the Internet must not be so skewed.

30. We propose, therefore, the addition of a Para 37A after Para 37:

“37A. We draw attention to the challenges surrounding diversity and inclusiveness in organisations involved in Internet governance, and call upon these organisations to take immediate measures to ensure diversity and inclusiveness in a substantive manner.”

31. Paragraph 36 of the Zero Draft notes that “a number of member states have called for an international legal framework for Internet governance.” But it makes no reference to ICANN or to the importance of the ongoing IANA transition to global Internet governance. ICANN and its monopoly over several critical Internet resources was one of the key drivers of the WSIS in 2003-2005. Unfortunately, this focus seems to have shifted entirely. Principles of an open, inclusive, transparent and global Internet ring hollow when ICANN – and in effect, the United States – continues to have a monopoly over critical Internet resources. The allocation and administration of these resources should be decentralized and distributed, and should not be within the disproportionate control of any one jurisdiction.

32. Therefore, we suggest the following Para 37A after Para 37:

“37A. We affirm that the allocation, administration and policy involving critical Internet resources must be inclusive and decentralized, and call upon all stakeholders and in particular, states and organizations responsible for essential tasks associated with the Internet, to take immediate measures to create an environment that facilitates this development.”

33. Paragraph 43 of the Zero Draft encourages “all stakeholders to ensure respect for privacy and the protection of personal information and data”. But the Zero Draft inadvertently leaves out the report of the Office of the UN High Commissioner for Human Rights on digital privacy, ‘The right to privacy in the digital age’ (A/HRC/27/37). This report, adopted by the Human Rights Council in June 2014, affirms the importance of the right to privacy in our increasingly digital age, and offers crucial insight into recent erosions of privacy. It is both fitting and necessary that the General Assembly take note of and affirm the said report in the context of digital privacy.

34. We offer the following suggestion as an addition to Para 43:

“43. We emphasise that no person shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home, or correspondence, consistent with countries’ applicable obligations under international human rights law. In this regard, we acknowledge the report of the Office of the UN High Commissioner for Human Rights, ‘The right to privacy in the digital age’ (A/HRC/27/37, 30 June 2014), and take note of its findings. We encourage all stakeholders to ensure respect for privacy and the protection of personal information and data.”

35. Paragraphs 40-44 of the Zero Draft state that communication is a fundamental human need, reaffirming Article 19 of the International Covenant on Civil and Political Rights, with its attendant narrow limitations. The Zero Draft also underscores the need to respect the independence of the press. Particularly, it reaffirms the principle that the same rights that people enjoy offline must also be protected online.

36. Further, in Para 31, the Zero Draft recognizes the “critical importance of private sector investment in ICT access, content, and services”. This is true, of course, but corporations also play a crucial role in facilitating the freedom of speech and expression (and all other related rights) on the Internet. As the Internet is led largely by the private sector in the development and distribution of devices, protocols and content-platforms, corporations play a major role in facilitating – and sometimes, in restricting – human rights online. They are, in sum, intermediaries without whom the Internet cannot function.

37. Given this, it is essential that the outcome document of the WSIS+10 Overall Review recognize and affirm the role of the private sector, and crucially, its responsibilities to respect and protect human rights online.

38. We suggest, therefore, the insertion of the following paragraph Para 42A, after Para 42:

“42A. We recognize the critical role played by corporations and the private sector in facilitating human rights online. We affirm, in this regard, the responsibilities of the private sector set out in the Report of the Special Representative of the Secretary General on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31 (21 March 2011), and encourage policies and commitments towards respect and remedies for human rights.”

C. Implementation and Follow-up

39. Para 57 of the Zero Draft calls for a review of the WSIS Outcomes, and leaves a blank space inviting suggestions for the year of the review. How often, then, should the review of implementation of WSIS+10 Outcomes take place?

40. It is true, of course, that reviews of the implementation of WSIS Outcomes are necessary to take stock of progress and challenges. However, we caution against annual, biennial or other closely-spaced reviews due to concerns surrounding budgetary allocations.

41. Reviews of implementation of outcomes (typically followed by an Outcome Document) come at considerable cost, which is budgeted and met through contributions (sometimes voluntary) from states. Were reviews too closely spaced, budgets that ideally ought to be utilized to bridge digital divides and ensure universal connectivity, particularly for developing states, would be misspent on reviews. Moreover, closely-spaced reviews would only provide superficial quantitative assessments of progress, and would not throw light on longer-term or qualitative impacts.

Comments on the Zero Draft of the UN General Assembly

by Prasad Krishna last modified Oct 16, 2015 02:41 AM

Final_CIS_Comments_UNGA_WSIS_Zero_Draft.pdf — PDF document, 478 kB (490106 bytes)

CyFy Agenda

by Prasad Krishna last modified Oct 16, 2015 03:01 AM

CyFyAgendaFinal.pdf — PDF document, 190 kB (195156 bytes)

The 'Global Multistakeholder Community' is Neither Global Nor Multistakeholder

by Pranesh Prakash last modified Nov 03, 2016 10:42 AM
CIS research shows how Western, male, and industry-driven the IANA transition process actually is.

 

In March 2014, the US government announced that it would end its contract with ICANN to run something called the Internet Assigned Numbers Authority (IANA), and hand over control to the “global multistakeholder community”. It insisted that the plan for transition had to come through a multistakeholder process and have stakeholders “across the global Internet community”.

Analysis of the process since then shows that the “global multistakeholder community” that converges at ICANN has not actually represented the disparate interests and concerns of different stakeholders. CIS research has found that the discussions around the IANA transition have been driven not by a “global multistakeholder community”, but mostly by men from industry in North America and Western Europe.

CIS analysed the five main mailing lists where the IANA transition plan was formulated: ICANN’s ICG Stewardship and CCWG Accountability lists; IETF’s IANAPLAN list; and the NRO’s IANAXFER and CRISP lists. What we found was quite disheartening.

  • A total of 239 individuals participated cumulatively, across all five lists.
  • Only 98 contributed substantively to the final shape of the ICG proposal, taking 20 or more mails (admittedly an arbitrary cut-off) as the threshold for substantive contribution; 12 of these 98 were ICANN staff, some of whom were largely performing an administrative function.

We decided to look at the diversity within these substantive contributors using gender, stakeholder grouping, and region. We relied on public records, including GNSO SOI statements, and extensive searches on the Web. Given that, there may be inadvertent errors, but the findings are so stark that even a few errors wouldn’t affect them much.

  • 2 in 5 (39 of 98, or 40%) were from a single country: the United States of America.
  • 4 in 5 (77 of 98) were from countries which are part of the WEOG UN grouping (which includes Western Europe, US, Canada, Israel, Australia, and New Zealand), which only has developed countries.
  • None were from the Eastern European Group (which includes Russia), and only 5 of 98 were from all of GRULAC (the Latin American and Caribbean Group).
  • 4 in 5 (77 of 98) were male and 21 were female.
  • 4 in 5 (76 of 98) were from industry or the technical community, and only 4 (or 1 in 25) were identifiable as primarily speaking on behalf of governments.
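The shares above come down to counting participants by attribute and dividing by the total. A hedged sketch of that kind of tally, using hypothetical stand-in records rather than CIS's actual dataset:

```python
# Hedged sketch of a diversity tally like the one described above.
# The participant records below are illustrative stand-ins only.
from collections import Counter

def share(counter, key, total):
    """Percentage share of `key` among `total` participants, rounded."""
    return round(100 * counter[key] / total)

participants = [
    # (UN regional group, gender, stakeholder group) - hypothetical records
    ("WEOG", "male", "industry"),
    ("WEOG", "male", "technical"),
    ("WEOG", "female", "civil society"),
    ("GRULAC", "male", "government"),
    ("Asia-Pacific", "female", "industry"),
]

total = len(participants)
regions = Counter(p[0] for p in participants)
genders = Counter(p[1] for p in participants)

print(f"WEOG share: {share(regions, 'WEOG', total)}%")  # 3 of 5
print(f"male share: {share(genders, 'male', total)}%")  # 3 of 5
```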

This also shows that the process has utterly failed to achieve genuinely global multistakeholder representation. The same skew pervades ICANN itself:

  • 3 in 5 registrars are from the United States of America (624 out of 1010, as of March 2014, according to ICANN's accredited registrars list), with only 0.6% (7 out of 1010) from the 54 countries in Africa.

  • 45% of all the registries are from the United States of America! (307 out of 672 registries listed in ICANN’s registry directory in August 2015.)
  • 66% (34 of 51) of the Business Constituency at ICANN are from a single country: the United States of America. (N.B.: This page doesn’t seem to be up-to-date.)
  • This shows that businesses from the United States of America continue to dominate ICANN to a very significant degree, and this is also reflected in the nature of the dialogue within ICANN, including the fact that the proposal that came out of the ICANN ‘global multistakeholder community’ on the IANA transition proposes a clause that requires the ‘IANA Functions Operator’ to be a US-based entity. For more on that issue, see this post on the jurisdiction issue at ICANN (or rather, on the lack of a jurisdiction issue at ICANN).
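As a quick arithmetic check, the shares quoted in the bullets above follow directly from the counts given in the text:

```python
# Arithmetic check that the quoted shares follow from the counts given in
# the text (registrars as of March 2014, registries as of August 2015,
# Business Constituency membership as listed).

def pct(part, whole):
    """Percentage of `part` out of `whole`."""
    return 100 * part / whole

us_registrars = pct(624, 1010)    # ~61.8%, roughly 3 in 5
africa_registrars = pct(7, 1010)  # ~0.69%
us_registries = pct(307, 672)     # ~45.7%
us_business = pct(34, 51)         # ~66.7%, i.e. 2 in 3

print(round(us_registrars, 1), round(africa_registrars, 1),
      round(us_registries, 1), round(us_business, 1))
```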

    Policy Brief: Oversight Mechanisms for Surveillance

    by Elonnai Hickok last modified Nov 24, 2015 06:09 AM

    Download the PDF


    Introduction

    Across jurisdictions, the need for effective and relevant oversight mechanisms (coupled with legislative safeguards) for state surveillance has been highlighted by civil society, academia, citizens and other key stakeholders.[1] A key part of oversight of state surveillance is accountability of intelligence agencies. This has been recognized at the international level. Indeed, the Organization for Economic Co-operation and Development, the United Nations, the Organization for Security and Cooperation in Europe, the Parliamentary Assembly of the Council of Europe, and the Inter-Parliamentary Union have all recognized that intelligence agencies need to be subject to democratic accountability.[2] Since 2013, the need for oversight has received particular attention in light of the information disclosed through the 'Snowden Revelations'.[3] Some countries such as the US, Canada, and the UK have regulatory mechanisms for the oversight of state surveillance and the intelligence community, while many other countries – India included – have piecemeal oversight mechanisms in place. The existence of regulatory mechanisms for state surveillance does not necessarily equate to effective oversight, and piecemeal mechanisms, depending on how they are implemented, could be more effective than comprehensive mechanisms. This policy brief seeks to explore the purpose of oversight mechanisms for state surveillance, different forms of mechanisms, and what makes a mechanism effective and comprehensive. The brief also reviews different oversight mechanisms from the US, UK, and Canada and provides recommendations for ways in which India can strengthen its present oversight mechanisms for state surveillance and the intelligence community.

    What is the purpose and what are the different components of an oversight mechanism for State Surveillance?

    The International Principles on the Application of Human Rights to Communications Surveillance, developed through a global consultation with civil society groups, industry, and international experts, recommend that public oversight mechanisms for state surveillance be established to ensure transparency and accountability of Communications Surveillance. To achieve this, mechanisms should have the authority to:

    • Access all potentially relevant information about State actions, including, where appropriate, access to secret or classified information;
    • Assess whether the State is making legitimate use of its lawful capabilities;
    • Evaluate whether the State has been comprehensively and accurately publishing information about the use and scope of Communications Surveillance techniques and powers in accordance with its Transparency obligations;
    • Publish periodic reports and other information relevant to Communications Surveillance;
    • Make public determinations as to the lawfulness of those actions, including the extent to which they comply with these Principles.[4]

    What can inform oversight mechanisms for state surveillance?

    The development of effective oversight mechanisms for state surveillance can be informed by a number of factors including:

    • Rapidly changing technology – how can mechanisms adapt, account for, and evaluate perpetually changing intelligence capabilities?
    • Expanding surveillance powers – how can mechanisms evaluate and rationalize the use of expanding agency powers?
    • Tensions around secrecy, national interest, and individual rights – how can mechanisms respect, recognize, and uphold multiple competing interests and needs including an agency's need for secrecy, the government's need to protect national security, and the citizens need to have their constitutional and fundamental rights upheld?
    • The structure, purpose, and goals of specific intelligence agencies and circumstances– how can mechanisms be sensitive and attuned to the structure, purpose, and functions of differing intelligence agencies and circumstances?

    These factors lead to further questions around:

    • The purpose of an oversight mechanism: Is an oversight mechanism meant to ensure effectiveness of an agency? Perform general reviews of agency performance? Supervise the actions of an agency? Hold an agency accountable for misconduct?
    • The structure of an oversight mechanism: Is it internal? External? A combination of both? How many oversight mechanisms should agencies be held accountable to?
    • The functions of an oversight mechanism: Is an oversight mechanism meant to inspect? Evaluate? Investigate? Report?
    • The powers of an oversight mechanism: What extent of access to the internal workings of security agencies and law enforcement does an oversight mechanism need in order to carry out due diligence? What legal backing should it have to hold agencies legally accountable?

    What oversight mechanisms for State Surveillance exist in India?

    In India the oversight 'ecosystem' for state surveillance is comprised of:

    1. Review committee: Under the Indian Telegraph Act 1885 and the Rules issued thereunder (Rule 419A), a Central Review Committee consisting of the Cabinet Secretary, the Secretary of Legal Affairs to the Government of India, and the Secretary of the Department of Telecommunications to the Government of India is responsible for meeting on a bi-monthly basis and reviewing the legality of interception directions. The review committee has the power to revoke directions and order the destruction of intercepted material.[5] This review committee is also responsible for evaluating interception, monitoring, and decryption orders issued under section 69 of the Information Technology Act 2000,[6] and orders for the monitoring and collection of traffic data under section 69B of the Information Technology Act 2000.[7]
    2. Authorizing Authorities: The Secretary in the Ministry of Home Affairs of the Central Government is responsible for authorizing requests for the interception, monitoring, and decryption of communications issued by central agencies.[8] The Secretary in charge of the Home Department is responsible for authorizing requests for the interception, monitoring, and decryption of communications from state level agencies and law enforcement.[9] The Secretary to the Government of India in the Department of Information Technology under the Ministry of Communications and Information Technology is responsible for authorizing requests for the monitoring and collection of traffic data.[10] Any officer not below the rank of Joint Secretary to the Government of India, who has been authorised by the Union Home Secretary or the State Home Secretary in this behalf, may authorize the interception of communications in case of an emergency.[11] A Commissioner of Police, District Superintendent of Police or Magistrate may issue requests for stored data to any postal or telegraph authority.[12]
    3. Administrative authorities: India does not have an oversight mechanism for intelligence agencies, but agencies do report to different authorities. For example: the Intelligence Bureau reports to the Home Minister; the Research and Analysis Wing is under the Cabinet Secretariat and reports to the Prime Minister; the Joint Intelligence Committee (JIC), National Technical Research Organisation (NTRO) and Aviation Research Centre (ARC) report to the National Security Adviser; and the National Security Council Secretariat under the NSA serves the National Security Council.[13]

    It is important to note that though India has a Right to Information Act, most security agencies are exempt from the purview of the Act,[14] as is disclosure of any information that falls under the purview of the Official Secrets Act 1923.[15] The Official Secrets Act does not provide a definition of an 'official secret' and instead protects information pertaining to national security, the defence of the country, information affecting friendly relations with foreign states, etc.[16] Information in India is designated as classified in accordance with the Manual of Departmental Security Instructions circulated by the Ministry of Home Affairs. According to the Public Records Rules 1997, "'classified records' means the files relating to the public records classified as top-secret, confidential and restricted in accordance with the procedure laid down in the Manual of Departmental Security Instruction circulated by the Ministry of Home affairs from time to time".[17] Bi-annually, officers evaluate and de-classify classified information and share the same with the national archives.[18] In response to questions raised in the Lok Sabha on 5 May 2015 – regarding whether the Official Secrets Act, 1923 will be reviewed, the number of classified files stored with the Government under the Act, and whether the Government has any plans to declassify some of the files – the Ministry of Home Affairs clarified that a committee consisting of Secretaries of the Ministry of Home Affairs, the Department of Personnel and Training, and the Department of Legal Affairs has been established to examine the provisions of the Official Secrets Act, 1923, particularly in light of the Right to Information Act, 2005. The Ministry of Home Affairs also clarified that the classification and declassification of files is done by each Government Department as per the Manual of Departmental Security Instructions, 1994, and thus there is no 'central database of the total number of classified files'.[19]

    How can India's oversight mechanism for state surveillance be clarified?

    Though these mechanisms establish a basic framework for oversight of state surveillance in India, there are aspects of this framework that could be clarified, and ways in which it could be strengthened.

    Aspects of the present review committee that could be clarified:

    1. Powers of the review committee: Beyond having the authority to declare that orders for interception, monitoring, decryption, and collection of traffic data are not within the scope of the law, and to order the destruction of any collected information – what powers does the review committee have? Does the committee have the power to compel agencies to produce additional or supporting evidence? Does the committee have the power to compel information from the authorizing authority?
    2. Obligations of the review committee: The review committee is required to 'record its findings' as to whether the interception orders issued are in accordance with the law. Is there a standard set of questions/information that must be addressed by the committee when reviewing an order? Does the committee only review the content of the order, or does it also review the implementation of the order? Beyond recording its findings, are there any additional reporting obligations that the review committee must fulfill?
    3. Accountability of the review committee: Does the review committee answer to a higher authority? Does it have to submit its findings to other branches of the government – such as Parliament? Is there a mechanism to ensure that the review committee does indeed meet every two months and review all orders issued under the relevant sections of the Indian Telegraph Act 1885 and the Information Technology Act 2000?

    Proposed oversight mechanisms in India

    Oversight mechanisms can help avoid breaches of national security by ensuring efficiency and effectiveness in the functioning of security agencies. The need for oversight of state surveillance is not new in India. In 1999, the Union Government constituted a Committee with the mandate of reviewing the events leading up to Pakistani aggression in Kargil and recommending measures towards ensuring national security. Though the Kargil Committee was addressing surveillance from the perspective of gathering information on external forces, there are parallels in the lessons learned for state surveillance. Among other findings, in its Report the Committee identified a number of limitations in the system for collection, reporting, collation, and assessment of intelligence. The Committee also found that there was a lack of oversight of the intelligence community in India – resulting in no mechanisms for tasking the agencies, monitoring their performance and overall functioning, and evaluating the quality of their work.

    The Committee also noted that such a mechanism is a standard feature in jurisdictions across the world. The Committee emphasized this need from an economic perspective: without oversight, the Government and the nation have no way of evaluating whether or not they are receiving value for their money. The Committee recommended a review of the intelligence system with the objective of remedying such deficiencies.[20]

    In 2000, a Group of Ministers was established to review the security and intelligence apparatus of the country. In their report issued to the Prime Minister, the Group of Ministers recommended the establishment of an Intelligence Coordination Group for the purpose of providing oversight of intelligence agencies at the Central level. Specifically, the Intelligence Coordination Group would be responsible for:

    • Allocating resources to the intelligence agencies
    • Considering annual reviews on the quality of inputs
    • Approving the annual tasking for intelligence collection
    • Overseeing the functions of intelligence agencies
    • Examining national estimates and forecasts[21]

    Past critiques of the Indian surveillance regime have included the fact that intelligence agencies do not come under the purview of any overseeing mechanism, including Parliament, the Right to Information Act 2005, or the Comptroller and Auditor General of India.

    In 2011, Manish Tewari, who at the time was a Member of Parliament from Ludhiana, proposed a Private Member's Bill – "The Intelligence Services (Powers and Regulation) Bill" – which proposed stand-alone statutory regulation of intelligence agencies. In doing so, it sought to establish an oversight mechanism for intelligence agencies within and outside of India. The Bill was never introduced in Parliament.[22] Broadly, the Bill sought to establish: a National Intelligence and Security Oversight Committee, which would oversee the functioning of intelligence agencies and submit an annual report to the Prime Minister; a National Intelligence Tribunal for the purpose of investigating complaints against intelligence agencies; an Intelligence Ombudsman for overseeing and ensuring the efficient functioning of agencies; and a legislative framework regulating intelligence agencies.[23]

    Proposed policy in India has also explored the possibility of coupling surveillance regulation and oversight with privacy regulation and oversight. In 2011, the Right to Privacy Bill was drafted by the Department of Personnel and Training. The Bill proposed to establish a "Central Communication Interception Review Committee" for the purpose of reviewing orders for interception issued under the Telegraph Act. The Bill also sought to establish an authorization process for surveillance undertaken by following a person, through CCTVs, or through other electronic means.[24] In contrast, the 2012 Report of the Group of Experts on Privacy, which provided recommendations for a privacy framework for India, recommended that the Privacy Commissioner should exercise broad oversight functions with respect to interception/access, audio & video recordings, the use of personal identifiers, and the use of bodily or genetic material.[25]

    A 2012 report by the Institute for Defence Studies and Analyses (IDSA) titled "A Case for Intelligence Reforms in India" highlights at least four 'gaps' in intelligence that have resulted in breaches of national security: zero intelligence, inadequate intelligence, inaccurate intelligence, and excessive intelligence – particularly in light of additional technical inputs and open source inputs.[26] In some cases, an oversight mechanism could help remediate some of these gaps. The IDSA Report recommends the following steps towards an oversight mechanism for Indian intelligence:

    • Establishing an Intelligence Coordination Group (ICG) that will exercise oversight functions for the intelligence community at the Central level. This could include overseeing functions of the agencies, quality of work, and finances.
    • Enacting legislation defining the mandates, functions, and duties of intelligence agencies.
    • Holding intelligence agencies accountable to the Comptroller & Auditor General to ensure financial accountability.
    • Establishing a Minister for National Security & Intelligence for exercising administrative authority over intelligence agencies.
    • Establishing a Parliamentary Accountability Committee for oversight of intelligence agencies through parliament.
    • Defining the extent to which intelligence agencies can be held accountable to reply to requests pertaining to violations of privacy and other human rights issued under the Right to Information Act.

    Highlighting the importance of accountable surveillance frameworks, in 2015 Santosh Jha, Director General in India's Ministry of External Affairs, stated at the UN General Assembly that the global community needs "to create frameworks so that Internet surveillance practices motivated by security concerns are conducted within a truly transparent and accountable framework."[27]

    In what ways can India's mechanisms for state surveillance be strengthened?

    Building upon the recommendations from the Kargil Committee, the Report from the Group of Ministers, the Report of the Group of Experts on Privacy, the Draft Privacy Bill 2011, and the IDSA report, ways in which the framework for oversight of state surveillance in India could be strengthened include:

    • Oversight to enhance public understanding, debate, accountability, and democratic governance: State surveillance is unique in that it is enabled with the objective of protecting a nation's security. Yet, to do so it requires citizens to trust the actions taken by intelligence agencies, and to accept possible intrusions into their personal lives and possible activities that might infringe on their constitutional rights (such as freedom of expression) for the larger outcome of security. Because of this, oversight mechanisms for state surveillance must balance safeguarding national security with some form of accountability to the public.
    • Independence of oversight mechanisms: Given the Indian context, it is particularly important that an oversight mechanism for surveillance powers and the intelligence community is capable of addressing, and remaining independent from, political interference. Indeed, the majority of cases regarding illegal interceptions that have reached the public sphere pertain to the surveillance of political figures and political turf wars.[28] Furthermore, though the current Review Committee established under the Indian Telegraph Act does not have a member from the Ministry of Home Affairs (the Ministry responsible for authorizing interception requests), it is unclear how independent this committee is from the authorizing Ministry. To ensure non-biased oversight, it is important that oversight mechanisms are independent.
    • Legislative regulation of intelligence agencies: Currently, intelligence agencies are provided surveillance powers through the Information Technology Act and the Telegraph Act, but beyond the National Investigation Agency Act, which establishes the National Investigation Agency, there is no legal mechanism creating, regulating, and overseeing the intelligence agencies using these powers. In the 'surveillance ecosystem' this creates a policy vacuum, where an agency is enabled through law with a surveillance power and provided a procedure to follow, but is not held legally accountable for the effective, ethical, and legal use of that power. To ensure legal accountability of the use of surveillance techniques, it is important that intelligence agencies are created through legislation that includes oversight provisions.
    • Comprehensive oversight of all intrusive measures: Currently, the Review Committee established under the Telegraph Act is responsible for the evaluation of orders for the interception, monitoring, decryption, and collection of traffic data. The Review Committee is not responsible for reviewing the implementation or effectiveness of such orders, and is not responsible for reviewing orders for access to stored information or other forms of electronic surveillance. This situation is a result of: (1) present oversight mechanisms not having comprehensive mandates; (2) different laws in India enabling different levels of access without providing a harmonized oversight mechanism; and (3) Indian law not formally addressing and regulating emerging surveillance technologies and techniques. To ensure effectiveness, it is important for oversight mechanisms to be comprehensive in mandate and scope.
    • Establishment of a tribunal or redress mechanism: India currently does not have a specified means for individuals to seek redress for unlawful surveillance or surveillance that they feel has violated their rights. Thus, individuals must take any complaint to the courts. The downsides of such a system include the fact that the judiciary might not be equipped to make determinations regarding the violation, that the court system in India is overwhelmed and due process is therefore slow, and that, given the sensitive nature of the topic, courts might not have the ability to immediately access relevant documentation. To ensure redress, it is important that a tribunal or redress mechanism with appropriate powers is established to address complaints or violations pertaining to surveillance.
    • Annual reporting by security agencies, law enforcement, and service providers: Information regarding orders for surveillance and the implementation of the same is not disclosed by the government or by service providers in India.[29] Indeed, service providers are required by law to maintain the confidentiality of orders for the interception, monitoring, or decryption of communications and the monitoring or collection of traffic data. At a minimum, an oversight mechanism should receive annual reports from security agencies, law enforcement, and service providers with respect to the surveillance undertaken. Edited versions of these reports could be shared with Parliament and the public.
    • Consistent and mandatory reviews of relevant legislation: Though committees have been established to review various legislation and policy pertaining to state surveillance, the time frame for these reviews is not clearly defined by law. These reviews should take place on a consistent and publicly stated time frame. Furthermore, legislation enabling surveillance in India does not require review and assessment for relevance, adequacy, necessity, and proportionality after a certain period of time. Mandating that legislation regulating surveillance is subject to review on a consistent schedule is important in ensuring that the provisions are relevant, proportionate, adequate, and necessary.
    • Transparency of classification and declassification process and centralization of de-classified records: Currently, the Ministry of Home Affairs establishes the process that government departments must follow for classifying and de-classifying information. This process is not publicly available, and de-classified information is stored only with the respective department. For transparency purposes, it is important that the process for classification of records be made public and that the classification of information take place only in exceptional cases. Furthermore, de-classified records should be stored centrally and made easily accessible to the public.
    • Executive and administrative orders establishing agencies and surveillance projects should be in the public domain: Intelligence agencies and surveillance projects in India are typically enabled through executive orders. For example, NATGRID was established via an executive order, but this order is not publicly available. As a form of transparency and accountability to the public, it is important that executive orders establishing an agency or a surveillance project are made available to the public to the extent possible.
    • Oversight of surveillance should incorporate privacy and cyber/national security: Increasingly, issues of surveillance, privacy, and cyber security are interlinked. Any move to establish an oversight mechanism for surveillance and the intelligence community must incorporate and take into consideration privacy and cyber security. This could mean that an oversight mechanism for surveillance in India works closely with CERT-In and a potential privacy commissioner, or that the oversight mechanism contains internal expertise in these areas to ensure that they are adequately considered.
    • Oversight by design: Just as the concept of privacy by design promotes the ideal that principles of privacy are built into devices, processes, services, organizations, and regulation from the outset, oversight mechanisms for state surveillance should also be built in from the outset of surveillance projects and enabling legislation. In the past, this has not been the practice in India. The National Intelligence Grid was an intelligence system that sought to link twenty-one databases together, making such information easily and readily accessible to security agencies, but the oversight of such a system was never defined.[30] Similarly, the Centralized Monitoring System was conceptualized to automate and internalize the process of intercepting communications by allowing security agencies to intercept communications directly and bypass the service provider.[31] Despite amending the Telecom Licenses to provide for the technical components of this project, oversight of the project, or of security agencies directly accessing information, has yet to be defined.[32]

    Examples of oversight mechanisms for State Surveillance: United States, United Kingdom and Canada

    United States

    In the United States, the oversight 'ecosystem' for state surveillance is made up of:

    The Foreign Intelligence Surveillance Court

    The U.S. Foreign Intelligence Surveillance Court (FISC) is the predominant oversight mechanism for state surveillance and oversees and authorizes the actions of the Federal Bureau of Investigation and the National Security Agency.[33] The Court was established by the enactment of the Foreign Intelligence Surveillance Act (FISA) 1978 and is governed by Rules of Procedure, the current Rules having been formulated in 2010.[34] The Court is empowered to ensure compliance with the orders that it issues, and the government is obligated to inform the Court if orders are breached.[35] FISA allows individuals who receive an order from the Court to challenge the same,[36] and public filings are available on the Court's website.[37] Additionally, organizations including the American Civil Liberties Union[38] and the Electronic Frontier Foundation have filed motions with the Court for release of records.[39] Similarly, Google has approached the Court for the ability to publish aggregate information regarding FISA orders that the company receives.[40]

    Government Accountability Office

    The U.S. Government Accountability Office (GAO) is an independent office that works for Congress and conducts audits and investigations, provides recommendations, and issues legal decisions and opinions with regard to federal government spending of taxpayers' money by the government and associated agencies, including the Defense Department, the FBI, and Homeland Security.[41] The head of the GAO is the Comptroller General of the United States, who is appointed by the President. The GAO will initiate an investigation if requested by congressional committees or subcommittees, or if required under public law or committee reports. The GAO has reviewed topics relating to Homeland Security, Information Security, Justice and Law Enforcement, National Defense, and Telecommunications.[42] For example, in June 2015 the GAO completed an investigation and report on "Foreign Terrorist Organization Process and U.S. Agency Enforcement Actions"[43] and an investigation on "Cyber Security: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies".[44]

    Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence

    The U.S. Senate Select Committee on Intelligence is a standing committee of the U.S. Senate with the mandate to review intelligence activities and programs and ensure that these are in line with the Constitution and other relevant laws. The Committee is also responsible for submitting to the Senate appropriate proposals for legislation, and for reporting to the Senate on intelligence activities and programs.[45] The House Permanent Select Committee on Intelligence holds similar jurisdiction. The House Permanent Select Committee is committed to secrecy and cannot disclose classified information except when authorized to do so. Such an obligation does not exist for the Senate Select Committee on Intelligence, which can disclose classified information publicly on its own.[46]

    Privacy and Civil Liberties Oversight Board (PCLOB)

    The Privacy and Civil Liberties Oversight Board was established by the Implementing Recommendations of the 9/11 Commission Act of 2007 and is located within the executive branch.[47] The objective of the PCLOB is to ensure that the Federal Government's actions to combat terrorism are balanced against privacy and civil liberties. Towards this, the Board has the mandate to review and analyse anti-terrorism measures the executive takes and ensure that such actions are balanced with privacy and civil liberties, and to ensure that privacy and civil liberties are adequately considered in the development and implementation of anti-terrorism laws, regulations, and policies.[48] The Board is responsible for developing principles to guide why, whether, when, and how the United States conducts surveillance for authorized purposes. Additionally, officers of eight federal agencies must submit reports to the PCLOB regarding the reviews that they have undertaken, the number and content of complaints received, and a summary of how each complaint was handled. In order to fulfill its mandate, the Board is authorized to access all relevant records, reports, audits, reviews, documents, papers, recommendations, and classified information. The Board may also interview and take statements from necessary personnel. The Board may request the Attorney General to subpoena, on the Board's behalf, individuals outside of the executive branch.[49]

    To the extent possible, the Reports of the Board are made public. Recommendations that the Board made in its 2015 Report include:

    • End the NSA's bulk telephone records program
    • Add additional privacy safeguards to the bulk telephone records program
    • Enable the FISC to hear independent views on novel and significant matters
    • Expand opportunities for appellate review of FISC decisions
    • Take advantage of existing opportunities for outside legal and technical input in FISC matters
    • Publicly release new and past FISC and FISCR decisions that involve novel legal, technical, or compliance questions
    • Publicly report on the operation of the FISC Special Advocate Program
    • Permit companies to disclose information about their receipt of FISA production orders, and disclose more detailed statistics on surveillance
    • Inform the PCLOB of FISA activities and provide relevant congressional reports and FISC decisions
    • Begin to develop principles for transparency
    • Disclose the scope of surveillance authorities affecting US citizens[50]

    The Wiretap Report

    The Wiretap Report is an annual compilation of information provided by federal and state officials regarding applications for interception orders of wire, oral, or electronic communications, including data on the offenses under investigation, the types and locations of interception devices, and the costs and duration of authorized intercepts.[51] When submitting information for the report, a judge will include the name and jurisdiction of the prosecuting official who applied for the order, the criminal offense under investigation, the type of intercept device used, the physical location of the device, and the duration of the intercept. Prosecutors provide information related to the cost of the intercept, the number of days the intercept device was in operation, the number of persons whose communications were intercepted, the number of intercepts, and the number of incriminating intercepts recorded. Results of the interception orders, such as arrests, trials, convictions, and the number of motions to suppress evidence, are also noted in the prosecutor reports. The Report is submitted to Congress and is legally required under Title III of the Omnibus Crime Control and Safe Streets Act of 1968. The Report is issued by the Administrative Office of the United States Courts.[52]

    United Kingdom

    The Intelligence and Security Committee (ISC) of Parliament

    The Intelligence and Security Committee was established by the Intelligence Services Act 1994. Members are appointed by the Prime Minister, and the Committee reports directly to the same. Additionally, the Committee submits annual reports to Parliament. Towards this, the Committee can take evidence from cabinet ministers, senior officials, and members of the public.[53] The most recent report of the Committee is the 2015 "Report on Privacy and Security".[54] Members of the Committee are subject to the Official Secrets Act 1989 and have access to classified material when carrying out investigations.[55]

    Joint Intelligence Committee (JIC)

    The Joint Intelligence Committee is located in the Cabinet Office and is broadly responsible for overseeing national intelligence organizations and providing advice to the Cabinet on issues related to security, defence, and foreign affairs. The JIC is overseen by the Intelligence and Security Committee.[56]

    The Interception of Communications Commissioner

    The Interception of Communications Commissioner is appointed by the Prime Minister under the Regulation of Investigatory Powers Act 2000 for the purpose of reviewing surveillance conducted by intelligence agencies, police forces, and other public authorities. Specifically, the Commissioner inspects the interception of communications, the acquisition and disclosure of communications data, the interception of communications in prisons, and unintentional electronic interception.[57] The Commissioner submits an annual report to the Prime Minister. The Reports of the Commissioner are publicly available.[58]

    The Intelligence Services Commissioner

    The Intelligence Services Commissioner is an independent body appointed by the Prime Minister and legally empowered through the Regulation of Investigatory Powers Act (RIPA) 2000. The Commissioner provides independent oversight of the use of surveillance by UK intelligence services.[59] Specifically, the Commissioner is responsible for reviewing authorized interception orders and the actions and performance of the intelligence services.[60] The Commissioner is also responsible for providing assistance to the Investigatory Powers Tribunal, submitting annual reports to the Prime Minister on the discharge of the Commissioner's functions, and advising the Home Office on the need for extending the Terrorism Prevention and Investigation Measures regime.[61] Towards these ends, the Commissioner conducts in-depth audits of interception orders to ensure that the surveillance is within the scope of the law, that the surveillance was necessary for a legally established reason, that the surveillance was proportionate, that the information accessed was justified by the privacy invaded, and that the surveillance was authorized by the appropriate official. The Commissioner also conducts 'site visits' to ensure that orders are being implemented as per the law.[62] As a note, the Intelligence Services Commissioner does not undertake any subject that falls within the remit of the Interception of Communications Commissioner. The Commissioner has access to any information that he feels is necessary to carry out his investigations. The Reports of the Intelligence Services Commissioner are publicly available.[63]

    Investigatory Powers Tribunal

    The Investigatory Powers Tribunal is a court which investigates complaints of unlawful surveillance by public authorities or intelligence/law enforcement agencies.[64] The Tribunal was established under the Regulation of Investigatory Powers Act 2000 and has a range of oversight functions to ensure that public authorities and agencies act in compliance with the Human Rights Act 1998.[65] The Tribunal is specifically an avenue of redress for anyone who believes that they have been a victim of unlawful surveillance under RIPA or of wider human rights infringements under the Human Rights Act 1998. The Tribunal can provide seven possible outcomes for any application: 'found in favour of complainant, no determination in favour of complainant, frivolous or vexatious, out of time, out of jurisdiction, withdrawn, or no valid complaint'.[66] The Tribunal has the authority to receive and consider evidence in any form, even if inadmissible in an ordinary court.[67] Where possible, cases are available on the Tribunal's website. Decisions by the Tribunal cannot be appealed, but can be challenged in the European Court of Human Rights.[68]

    Canada

    In Canada, the oversight 'ecosystem' for state surveillance includes:

    Security Intelligence Review Committee

    The Security Intelligence Review Committee is an independent body that is accountable to the Parliament of Canada and reports on the Canadian Security Intelligence Service (CSIS).[69] Members of the Security Intelligence Review Committee are appointed by the Prime Minister of Canada. The Committee conducts reviews on a pro-active basis and investigates complaints. Committee members have access to classified information to conduct reviews. The Committee submits an annual report to Parliament, and an edited version is publicly available. The 2014 Report, titled "Lifting the Shroud of Secrecy",[70] includes reviews of CSIS's activities, reports on complaints and subsequent investigations, and provides recommendations.

    Office of the Communications Security Establishment Commissioner

    The Communications Security Establishment Commissioner conducts independent reviews of Communications Security Establishment (CSE) activities to evaluate whether they are within the scope of Canadian law.[71] The Commissioner submits a report to Parliament on an annual basis and has a number of powers, including the power to subpoena documents and personnel.[72] If the Commissioner believes that the CSE has not complied with the law, the Commissioner must report this to the Attorney General of Canada and to the Minister of National Defence. The Commissioner may also receive information from persons bound to secrecy if they deem it to be in the public interest to disclose such information.[73] The Commissioner is also responsible for verifying that the CSE does not surveil Canadians and for promoting measures to protect the privacy of Canadians.[74] When conducting a review, the Commissioner has the ability to examine records, receive briefings, interview relevant personnel, assess the veracity of information, listen to intercepted voice recordings, observe CSE operators and analysts to verify their work, and examine CSE electronic tools, systems and databases to ensure compliance with the law.[75]

    Office of the Privacy Commissioner

    The Office of the Privacy Commissioner of Canada (OPC) oversees the implementation of and compliance with the Privacy Act and the Personal Information Protection and Electronic Documents Act.[76]

    The OPC is an independent body that has the authority to investigate complaints regarding the handling of personal information by government and private companies, but can only comment on the activities of security and intelligence agencies. For example, in 2014 the OPC issued the report “Checks and Controls: Reinforcing Privacy Protection and Oversight for the Canadian Intelligence Community in an Era of Cyber Surveillance”.[77] The OPC can also provide testimony to Parliament and other government bodies.[78] For example, the OPC has made appearances before the Senate Standing Committee on National Security and Defence on Bill C-51.[79] The OPC cannot conduct joint audits or investigations with other bodies.[80]

    Annual Interception Reports

    Under the Criminal Code of Canada, regional governments must issue annual interception reports. The reports must include the number of individuals affected by interceptions, the average duration of interceptions, the types of crimes investigated, the number of cases brought to court, and the number of individuals notified that interception had taken place.[81]

    Conclusion

    The presence of multiple and robust oversight mechanisms for state surveillance does not necessarily correlate with effective oversight. The oversight mechanisms in the UK, Canada, and the U.S. have all been criticised. For example, the Canadian regime has been characterised as weakened since the removal of one of its key oversight mechanisms – the Inspector General of the Canadian Security Intelligence Service, which was responsible for certifying that the Service was in compliance with the law.[82]

    Other weaknesses highlighted in the Canadian regime include the fact that different oversight bodies do not have the authority to share information with each other, and that transparency reports do not cover many new forms of surveillance.[83] Oversight mechanisms in the U.S., on the other hand, have been criticised as opaque[84] or as lacking the political support needed to be effective.[85] The UK oversight mechanism has been criticised for lacking judicial authorisation of surveillance requests, for opaque laws, and for not providing a strong right of redress for affected individuals.[86] These critiques demonstrate that a number of factors must come together for an oversight mechanism to be effective. Public transparency and accountability to decision-making bodies such as Parliament or Congress can make oversight mechanisms effective; they give the public the means to debate issues of state surveillance in an informed manner, and give different bodies within the government the ability to hold the state accountable for its actions.


      [1]. For example, “Public Oversight” is one of the thirteen Necessary and Proportionate principles on state communications surveillance developed by civil society and academia globally, that should be incorporated by states into communication surveillance regimes. The principles can be accessed here: https://en.necessaryandproportionate.org/

      [2]. Hans Born and Ian Leigh, “Making Intelligence Accountable. Legal Standards and Best Practice for Oversight of Intelligence Agencies.” Pg. 13. 2005. Available at: http://www.prsindia.org/theprsblog/wp-content/uploads/2010/07/making-intelligence.pdf. Last accessed: August 6, 2015.

      [3]. For example, this point was made in the context of the UK. For more information see: Nick Clegg, 'Edward Snowden's revelations made it clear: security oversight must be fit for the internet age'. The Guardian. March 3rd 2014. Available at: http://www.theguardian.com/commentisfree/2014/mar/03/nick-clegg-snowden-security-oversight-internet-age. Accessed: July 27, 2015.

      [4]. International Principles on the Application of Human Rights to Communications Surveillance. Available at: https://en.necessaryandproportionate.org/

      [5]. Sub Rules (16) and (17) of Rule 419A, Indian Telegraph Rules, 1951. Available at: http://www.dot.gov.in/sites/default/files/march2007.pdf Note: This review committee is responsible for overseeing interception orders issued under the Indian Telegraph Act and the Information Technology Act.

      [6]. Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information Rules 2009. Definition q. Available at: http://dispur.nic.in/itact/it-procedure-interception-monitoring-decryption-rules-2009.pdf

      [7]. Information Technology (Procedure and safeguard for Monitoring and Collecting Traffic Data or Information Rules, 2009). Definition (n). Available at: http://cis-india.org/internet-governance/resources/it-procedure-and-safeguard-for-monitoring-and-collecting-traffic-data-or-information-rules-2009

      [8]. This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act. Section 2, Indian Telegraph Act 1885 and Section 4, Information Technology (Procedure and Safeguards for Interception, Monitoring, and Decryption of Information) Rules, 2009

      [9]. This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act. Section 2, Indian Telegraph Act 1885 and Section 4, Information Technology (Procedure and Safeguards for Interception, Monitoring, and Decryption of Information) Rules, 2009

      [10]. Definition (d) and section 3 of the Information Technology (Procedure and safeguard for Monitoring and Collecting Traffic Data or Information Rules, 2009). Available at: http://cis-india.org/internet-governance/resources/it-procedure-and-safeguard-for-monitoring-and-collecting-traffic-data-or-information-rules-2009

      [11]. Rule 1 of the 419A Rules, Indian Telegraph Act 1885. Available at: http://www.dot.gov.in/sites/default/files/march2007.pdf This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act.

      [12]. Section 92, CrPc. Available at: http://www.icf.indianrailways.gov.in/uploads/files/CrPC.pdf

      [13]. Press Information Bureau GOI. Reconstitution of Cabinet Committees. June 19th 2014. Available at: http://pib.nic.in/newsite/PrintRelease.aspx?relid=105747. Accessed August 6, 2015.

      [14]. Press Information Bureau, Government of India. Home minister proposes radical restructuring of security architecture. Available at: http://www.pib.nic.in/newsite/erelease.aspx?relid=56395. Accessed August 6, 2015.

      [15]. Section 24 read with Schedule II of the Right to Information Act 2005. Available at: http://rti.gov.in/rti-act.pdf

      [16]. Section 8 of the Right to Information Act 2005. Available at: http://rti.gov.in/rti-act.pdf

      [17]. Abhimanyu Ghosh. “Open Government and the Right to Information”. Legal Services India. Available at: http://www.legalservicesindia.com/articles/og.htm. Accessed: August 8, 2015

      [18]. Public Record Rules 1997. Section 2. Definition c. Available at: http://nationalarchives.nic.in/writereaddata/html_en_files/html/public_records97.html. Accessed: August 8, 2015

      [19]. Times of India. Classified information is reviewed after 25-30 years. April 13th 2015. Available at: http://timesofindia.indiatimes.com/india/Classified-information-is-reviewed-after-25-30-years/articleshow/46901878.cms. Accessed: August 8, 2015.

      [20]. Government of India. Ministry of Home Affairs. Lok Sabha Starred Question No 557. Available at: http://mha1.nic.in/par2013/par2015-pdfs/ls-050515/557.pdf.

      [21]. The Kargil Committee Report Executive Summary. Available at: http://fas.org/news/india/2000/25indi1.htm. Accessed: August 6, 2015.

      [22]. PIB Releases. “Group of Ministers Report on Reforming the National Security System”. Available at: http://pib.nic.in/archieve/lreleng/lyr2001/rmay2001/23052001/r2305200110.html. Last accessed: August 6, 2015

      [23]. The Observer Research Foundation. “Manish Tewari introduces Bill on Intelligence Agencies Reform. August 5th 2011. Available at: http://www.observerindia.com/cms/sites/orfonline/modules/report/ReportDetail.html?cmaid=25156&mmacmaid=20327. Last accessed: August 6, 2015.

      [24]. The Intelligence Services (Powers and Regulation) Bill, 2011. Available at: http://www.observerindia.com/cms/export/orfonline/documents/Int_Bill.pdf. Accessed: August 6, 2015.

      [25]. The Privacy Bill 2011. Available at: https://bourgeoisinspirations.files.wordpress.com/2010/03/draft_right-to-privacy.pdf

      [26]. The Report of Group of Experts on Privacy. Available at: http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf

      [27]. Institute for Defence Studies and Analyses. “A Case for Intelligence Reforms in India”. Available at: http://www.idsa.in/book/AcaseforIntelligenceReformsinIndia.html. Accessed: August 6, 2015.

      [28]. India Calls for Transparency in internet Surveillance. NDTV. July 3rd 2015. Available at: http://gadgets.ndtv.com/internet/news/india-calls-for-transparency-in-internet-surveillance-710945. Accessed: July 6, 2015.

      [29]. Lovisha Aggarwal. “Analysis of News Items and Cases on Surveillance and Digital Evidence in India”. Available at: http://cis-india.org/internet-governance/blog/analysis-of-news-items-and-cases-on-surveillance-and-digital-evidence-in-india.pdf

      [30]. Rule 25 (4) of the Information Technology (Procedures and Safeguards for the Interception, Monitoring, and Decryption of Information Rules) 2011. Available at: http://dispur.nic.in/itact/it-procedure-interception-monitoring-decryption-rules-2009.pdf

      [31]. Ministry of Home Affairs, GOI. National Intelligence Grid. Available at: http://www.davp.nic.in/WriteReadData/ADS/eng_19138_1_1314b.pdf. Last accessed: August 6, 2015

      [32]. Press Information Bureau, Government of India. Centralised System to Monitor Communications Rajya Sabha. Available at: http://pib.nic.in/newsite/erelease.aspx?relid=54679. Last accessed: August 6, 2015.

      [33]. Department of Telecommunications. Amendment to the UAS License agreement regarding Central Monitoring System. June 2013. Available at: http://cis-india.org/internet-governance/blog/uas-license-agreement-amendment

      [34]. United States Foreign Intelligence Surveillance Court. July 29th 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf. Last accessed: August 8, 2015

      [35]. United States Foreign Intelligence Surveillance Court. Rules of Procedure 2010. Available at: http://www.fisc.uscourts.gov/sites/default/files/FISC%20Rules%20of%20Procedure.pdf

      [36]. United States Foreign Intelligence Court. Honorable Patrick J. Leahy. 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf

      [37]. United States Foreign Intelligence Surveillance Court. July 29th 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf. Last accessed: August 8, 2015

      [38]. Public Filings – U.S Foreign Intelligence Surveillance Court. Available at: http://www.fisc.uscourts.gov/public-filings

      [39]. ACLU. FISC Public Access Motion – ACLU Motion for Release of Court Records Interpreting Section 215 of the Patriot Act. Available at: https://www.aclu.org/legal-document/fisc-public-access-motion-aclu-motion-release-court-records-interpreting-section-215

      [40]. United States Foreign Intelligence Surveillance Court Washington DC. In Re motion for consent to disclosure of court records or, in the alternative a determination of the effect of the Court's rules on statutory access rights. Available at: https://www.eff.org/files/filenode/misc-13-01-opinion-order.pdf

      [41]. Google Official Blog. Shedding some light on Foreign Intelligence Surveillance Act (FISA) requests. February 3rd 2014. Available at: http://googleblog.blogspot.in/2014/02/shedding-some-light-on-foreign.html

      [42]. U.S Government Accountability Office. Available at: http://www.gao.gov/key_issues/overview#t=1. Last accessed: August 8, 2015.

      [43]. Report to Congressional Requesters. Combating Terrorism: Foreign Terrorist Organization Designation Process and U.S Agency Enforcement Actions. Available at: http://www.gao.gov/assets/680/671028.pdf. Accessed: August 8, 2015

      [44]. United States Government Accountability Office. Cybersecurity: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies. Available: http://www.gao.gov/assets/680/670935.pdf. Last accessed: August 6, 2015.

      [45]. Committee Legislation. Available at: http://ballotpedia.org/United_States_Senate_Committee_on_Intelligence_(Select)#Committee_legislation

      [46]. Congressional Research Service. Congressional Oversight of Intelligence: Current Structure and Alternatives. May 14th 2012. Available at: https://fas.org/sgp/crs/intel/RL32525.pdf. Last Accessed: August 8, 2015

      [47]. The Privacy and Civil Liberties Oversight Board: About the Board. Available at: https://www.pclob.gov/aboutus.html

      [48]. The Privacy and Civil Liberties Oversight Board: About the Board. Available at: https://www.pclob.gov/aboutus.html

      [49]. Congressional Research Service. Congressional Oversight of Intelligence: Current Structure and Alternatives. May 14th 2012. Available at: https://fas.org/sgp/crs/intel/RL32525.pdf. Last Accessed: August 8th 2015

      [50]. United States Courts. Wiretap Reports. Available at: http://www.uscourts.gov/statistics-reports/analysisreports/wiretap-reports

      [51]. United States Courts. Wiretap Reports. Available at: http://www.uscourts.gov/statistics-reports/analysis-reports/wiretap-reports/faqs-wiretap-reports#faq-What-information-does-the-AO-receive-from-prosecutors?. Last Accessed: August 8th 2015

      [52]. Intelligence and Security Committee of Parliament. Transcripts and Public Evidence. Available at: http://isc.independent.gov.uk/public-evidence. Last accessed: August 8th 2015.

      [53]. Intelligence and Security Committee of Parliament. Special Reports. Available at http://isc.independent.gov.uk/committee-reports/special-reports. Last accessed: August 8th 2015.

      [54]. Hugh Segal. The U.K. has legislative oversight of surveillance. Why not Canada? The Globe and Mail. June 12th 2013. Available at: http://www.theglobeandmail.com/globe-debate/uk-has-legislative-oversight-of-surveillance-why-not-canada/article12489071/. Last accessed: August 8th 2015

      [55]. The Joint Intelligence Committee home page. For more information see: https://www.gov.uk/government/organisations/national-security/groups/joint-intelligence-committee

      [56]. Interception of Communications Commissioner's Office. RIPA. Available at: http://www.iocco-uk.info/sections.asp?sectionID=2&type=top. Last accessed: August 8th 2015

      [57]. Interception of Communications Commissioner's Office. Reports. Available at: http://www.iocco-uk.info/sections.asp?sectionID=1&type=top. Last accessed: August 8th 2015

      [58]. The Intelligence Services Commissioner's Office Homepage. For more information see: http://intelligencecommissioner.com/

      [59]. The Intelligence Services Commissioner's Office – The Commissioner's Statutory Functions. Available at: http://intelligencecommissioner.com/content.asp?id=4

      [60]. The Intelligence Services Commissioner's Office – The Commissioner's Statutory Functions. Available at: http://intelligencecommissioner.com/content.asp?id=4

      [61]. The Intelligence Services Commissioner's Office. What we do. Available at: http://intelligencecommissioner.com/content.asp?id=5. Last Accessed: August 8th 2015.

      [62]. The Intelligence Services Commissioner's Office. Intelligence Services Commissioner's Annual Reports. Available at: http://intelligencecommissioner.com/content.asp?id=19. Last accessed: August 8th 2015

      [63]. The Investigatory Powers Tribunal Homepage. Available at: http://www.ipt-uk.com/

      [64]. The Investigatory Powers Tribunal – Functions – Key role. Available at: http://www.ipt-uk.com/section.aspx?pageid=1

      [65]. Investigatory Powers Tribunal. Functions – Decisions available to the Tribunal. Available at: http://www.ipt-uk.com/section.aspx?pageid=4. Last accessed: August 8th 2015

      [66]. Investigatory Powers Tribunal. Operation. Available at: http://www.ipt-uk.com/section.aspx?pageid=7

      [67]. Investigatory Powers Tribunal. Operation- Differences to the ordinary court system. Available at: http://www.ipt-uk.com/section.aspx?pageid=7. Last accessed: August 8th 2015

      [68]. Security Intelligence Review Committee – Homepage. Available at: http://www.sirc-csars.gc.ca/index-eng.html

      [69]. SIRC Annual Report 2013-2014: Lifting the Shroud of Secrecy. Available at: http://www.sirc-csars.gc.ca/anrran/2013-2014/index-eng.html. Last accessed: August 6th 2015.

      [70]. The Office of the Communications Security Establishment – Homepage. Available at: http://www.ocsec-bccst.gc.ca/index_e.php

      [71]. The Office of the Communications Security Establishment – Homepage. Available at: http://www.ocsec-bccst.gc.ca/index_e.php

      [72]. The Office of the Communications Security Establishment – Mandate. Available at: http://www.ocsec-bccst.gc.ca/mandate/index_e.php

      [73]. The Office of the Communications Security Establishment – Functions. Available at: http://www.ocsec-bccst.gc.ca/functions/review_e.php

      [74]. The Office of the Communications Security Establishment – Functions. Available at: http://www.ocsec-bccst.gc.ca/functions/review_e.php

      [75]. Office of the Privacy Commissioner of Canada. Homepage. Available at: https://www.priv.gc.ca/index_e.ASP

      [76]. Office of the Privacy Commissioner of Canada. Reports and Publications. Special Report to Parliament “Checks and Controls: Reinforcing Privacy Protection and Oversight for the Canadian Intelligence Community in an Era of Cyber-Surveillance”. January 28th 2014. Available at: https://www.priv.gc.ca/information/srrs/201314/sr_cic_e.asp

      [77]. Office of the Privacy Commissioner of Canada. Available at: https://www.priv.gc.ca/index_e.asp. Last accessed: August 6th 2015.

      [78]. Office of the Privacy Commissioner of Canada. Appearance before the Senate Standing Committee on National Security and Defence on Bill C-51, the Anti-Terrorism Act, 2015. Available at: https://www.priv.gc.ca/parl/2015/parl_20150423_e.asp. Last accessed: August 6th 2015.

      [79]. Office of the Privacy Commissioner of Canada. Special Report to Parliament. January 8th 2014. Available at: https://www.priv.gc.ca/information/sr-rs/201314/sr_cic_e.asp. Last accessed: August 6th 2015.

      [80]. Telecom Transparency Project. The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians. Available at: http://www.telecomtransparency.org/wp-content/uploads/2015/05/Governance-of-Telecommunications-Surveillance-Final.pdf. Last accessed: August 6th 2015.

      [81]. Patrick Baud. The Elimination of the Inspector General of the Canadian Security Intelligence Service. May 2013. Ryerson University. Available at: http://www.academia.edu/4731993/The_Elimination_of_the_Inspector_General_of_the_Canadian_Security_Intelligence_Service

      [82]. Telecom Transparency Project. The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians. Available at: http://www.telecomtransparency.org/wp-content/uploads/2015/05/Governance-of-Telecommunications-Surveillance-Final.pdf. Last accessed: August 6th 2015.

      [83]. Glenn Greenwald. Fisa court oversight: a look inside a secret and empty process. The Guardian. June 19th 2013. Available at: http://www.theguardian.com/commentisfree/2013/jun/19/fisa-court-oversight-process-secrecy; Nadia Kayyali. Privacy and Civil Liberties Oversight Board to NSA: Why is Bulk Collection of Telephone Records Still Happening? February 2015. Available at: https://www.eff.org/deeplinks/2015/02/privacy-and-civil-liberties-oversight-board-nsa-why-bulk-collection-telephone. Last accessed: August 8th 2015.

      [84]. Scott Shane. The Troubled Life of the Privacy and Civil Liberties Oversight Board. August 9th 2012. The Caucus. Available at: http://thecaucus.blogs.nytimes.com/2012/08/09/the-troubled-life-of-the-privacy-and-civil-liberties-oversight-board/?_r=0. Last accessed: August 8th 2015

      [85]. The Open Rights Group. Don't Spy on Us. Reforming Surveillance in the UK. September 2014. Available at: https://www.openrightsgroup.org/assets/files/pdfs/reports/DSOU_Reforming_surveillance_old.pdf

      [86].

    Do we need a Unified Post Transition IANA?

    by Pranesh Prakash, Padmini Baruah and Jyoti Panday — last modified Oct 27, 2015 12:46 AM
    As we stand at the threshold of the IANA Transition, we at CIS find that there has been little discussion on the question of how the transition will manifest. The question we wanted to raise was whether there is any merit in dividing the three IANA functions – names, numbers and protocols – given that there is no real technical stability to be gained from a unified Post Transition IANA. The analysis of this idea has been detailed below.

    The Internet Architecture Board, in a submission to the NTIA in 2011, claimed that splitting the IANA functions would not be desirable.[1] The IAB notes, “There exists synergy and interdependencies between the functions, and having them performed by a single operator facilitates coordination among registries, even those that are not obviously related,” and also that the IETF makes certain policy decisions relating to names and numbers as well, so a single body is useful. But the IAB does not say why having a single email address for all these correspondences, rather than three, makes any difference: surely, what matters is cooperation and coordination. Just as the IETF, ICANN and the NRO being different entities does not harm the Internet, splitting the IANA function relating to each entity will not harm the Internet either. Instead, it will aid stability by making each community responsible for the running of its own registries, rather than leaving a single point of failure: ICANN and/or “PTI”.

    A number of commentators have supported this viewpoint in the past: Bill Manning of University of Southern California’s ISI (who has been involved in DNS operations since DNS started), Paul M. Kane (former Chairman of CENTR's Board of Directors), Jean-Jacques Subrenat (who is currently an ICG member), Association française pour le nommage Internet en coopération (AFNIC), the Internet Governance Project, InternetNZ, and the Coalition Against Domain Name Abuse (CADNA).

    The Internet Governance Project stated: “IGP supports the comments of Internet NZ and Bill Manning regarding the feasibility and desirability of separating the distinct IANA functions. Structural separation is not only technically feasible, it has good governance and accountability implications. By decentralizing the functions we undermine the possibility of capture by governmental or private interests and make it more likely that policy implementations are based on consensus and cooperation.”[2]

    Similarly, CADNA in its 2011 submission to the NTIA notes that, in the current climate of technical innovation and the exponential expansion of the Internet community, specialisation of the IANA functions would result in their being better executed. It also argues that delegating the technical and administrative functions among other capable entities (such as the IETF and IAB for protocol parameters, or an international, neutral organisation with an understanding of address space protocols, as opposed to the RIRs) would ensure accountability in Internet operation. Given that the IANA functions are mainly registry-maintenance functions, they can to a large extent be automated. However, a single system of automation would not fit all three.
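    To make the registry-maintenance point concrete, here is a purely illustrative sketch (not IANA's actual tooling; the `Registry` class and its methods are hypothetical) of why such work lends itself to automation: each function is, at its core, a table of unique values with associated references.

```python
# Hypothetical sketch of automated registry maintenance: an IANA-style
# registry is essentially a table of unique values, each backed by a
# reference (e.g. an RFC). Uniqueness and lookups are trivially automatable.

class Registry:
    def __init__(self, name):
        self.name = name
        self._entries = {}  # registered value -> metadata

    def register(self, value, reference):
        # Registry values must be unique; duplicates are rejected.
        if value in self._entries:
            raise ValueError(f"{value!r} already registered in {self.name}")
        self._entries[value] = {"reference": reference}

    def lookup(self, value):
        # Returns the metadata for a registered value, or None.
        return self._entries.get(value)

# Example: a port-number registry (values and references illustrative only).
ports = Registry("service-names-port-numbers")
ports.register(443, "RFC 7230")
```

The sketch also illustrates the closing caveat above: a names registry (with delegation data), a numbers registry (with address blocks) and a protocol-parameters registry would each need different metadata and validation rules, so one automation system would not fit all three.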

    Instead of a single institution serving three masters, it is better for the functions to be separated. Most importantly, if one of the current customers wishes to shift its contract to another IANA functions operator, even if it is not limited by contract, it is limited by the institutional design, since iana.org serves as a central repository. This limitation did not exist, for instance, when the IETF decided to enter into a new contract for the RFC Editor role. This transition presents the best opportunity to cleave the functions logically and make each community responsible for the functioning of its own registries, with the IETF, which is mostly funded by ISOC, taking on the responsibility of handling the residual registries, including a discussion about the .ARPA and .INT TLDs.

    From the above discussion, three main points emerge:

    • Splitting of the IANA functions allows for technical specialisation leading to greater efficiency of the IANA functions.
    • Splitting of the IANA functions allows for more direct accountability, and no concentration of power.
    • Splitting of the IANA functions allows for ease of shifting of the {names,number,protocol parameters} IANA functions operator without affecting the legal structure of any of the other IANA function operators.

    [1]. IAB response to the IANA FNOI, July 28, 2011. See: https://www.iab.org/wp-content/IAB-uploads/2011/07/IANA-IAB-FNOI-2011.pdf

    [2]. Internet Governance Project, Comments of the Internet Governance Project on the NTIA's "Request for Comments on the Internet Assigned Numbers Authority (IANA) Functions" (Docket # 110207099-1099-01) February 25, 2011 See: http://www.ntia.doc.gov/federal-register-notices/2011/request-comments-internet-assigned-numbers-authority-iana-functions

    Connected Trouble

    by Sunil Abraham last modified Oct 28, 2015 04:47 PM
    The internet of things (IoT) phenomenon is based on a paradigm shift from thinking of the internet merely as a means to connect individuals, corporations and other institutions to an internet where all devices in (insulin pumps and pacemakers), on (wearable technology) and around (domestic appliances and vehicles) human beings are connected.

    The guest column was published in The Week, issue dated November 1, 2015.


    Proponents of IoT are clear that the network effects, efficiency gains, and scientific and technological progress unlocked would be unprecedented, much like the internet itself.

    Privacy and security are two sides of the same coin: you cannot have one without the other. The age of IoT is going to be less secure thanks to big data. Globally accepted privacy principles, articulated in privacy and data protection laws across the world, are in conflict with the big data ideology. As a consequence, the age of the internet of things is going to be less stable, secure and resilient. Three privacy principles are violated by most IoT products and services.

    Data minimisation

    According to this privacy principle, the less personal information about the data subject is collected and stored by the data controller, the better the data subject's right to privacy is protected. But big data by definition requires more volume, more variety and more velocity, and IoT products usually collect a lot of data, thereby multiplying risk.

    Purpose limitation

    This privacy principle follows from the data minimisation principle. If only the bare minimum of personal information is collected, then it can only be put to a limited number of uses; going beyond those uses would harm the data subject. IoT innovators and entrepreneurs are trying to rapidly increase features, efficiency gains and convenience. They therefore do not know what purposes their technology will be put to tomorrow and, again almost by definition, resist the principle of purpose limitation.

    Privacy by design

    Data protection regulation requires that products and services be secure and protect privacy by design, not as a superficial afterthought. IoT products are increasingly being built by startups that are disrupting markets and taking down large technology incumbents. The trouble, however, is that most of these startups do not have sufficient internal security expertise, and in their tearing hurry to take products to market, many IoT products may not be comprehensively tested or audited from a privacy perspective.

    There are other cyber security principles and internet design principles that are disregarded by the IoT phenomenon, further compromising security and privacy of users.

    Centralisation

    Most of the network effects that IoT products contribute to require centralisation of data collected from users and their devices. For instance, if users of a wearable physical activity tracker would like to use gamification to keep each other motivated during exercise, the vendor of that device has to collect and store information about all its users. Since some users wear these devices at all times, the resulting stores of data become highly granular and can also be used to inflict privacy harms.

    Decentralisation was a key design principle when the internet was first built. The argument was that you can never take down a decentralised network by bombing any of the nodes. Unfortunately, because of the rise of internet monopolies like Google, the age of cloud computing, and the success of social media giants, the internet is increasingly becoming centralised and, therefore, is much more fragile than it used to be. IoT is going to make this worse.

    Complexity

    The more complex a technology is, the more fragile and vulnerable it tends to be. This is not universally true, but it is usually the case, since more complex technology needs more quality control, more testing and more fixes. IoT raises complexity exponentially, because the devices being connected are complex themselves and were not originally engineered to be connected to the internet. The networks they constitute are nothing like the internet as it has existed until now, which consisted of clients, web servers, chat servers, file servers and database servers, usually quite removed from the physical world. Compromised IoT devices, on the other hand, could be used to inflict direct harm on life and property.

    Death of the air gap

    The things that will be connected to the internet were previously separated from the internet through the means of an air gap. This kept them secure but also less useful and usable. In other words, the very act of connecting devices that were previously unconnected will expose them to a range of attacks. Security and privacy related laws, standards, audits and enforcement measures are the best way to address these potential pitfalls. Governments, privacy commissioners and data protections authorities across the world need to act so that the privacy of people and the security of our information society are protected.

    Breaking Down ICANN Accountability: What It Is and What the Internet Community Wants

    by Ramya Chandrasekhar last modified Nov 05, 2015 03:29 PM
    At the recent ICANN conference held in Dublin (ICANN54), one issue that was rehashed and extensively deliberated was ICANN's accountability and means to enhance the same. In light of the impending IANA stewardship transition from the NTIA to the internet's multi-stakeholder community, accountability of ICANN to the internet community becomes that much more important. In this blog post, some aspects of the various proposals to enhance ICANN's accountability have been deconstructed and explained.

    The Internet Corporation for Assigned Names and Numbers, known as ICANN, is a private not-for-profit organization registered in California. Among other functions, it is tasked with carrying out the IANA functions[1], pursuant to a contract between itself and the US Government (through the National Telecommunications and Information Administration, or NTIA). This means that, at present, the US Government exercises legal oversight over ICANN with regard to the discharge of these IANA functions.[2]

    However, in 2014, the NTIA decided to hand over stewardship of the IANA functions entirely to the internet’s ‘global multistakeholder community’. But the USG put down certain conditions before this transition could be effected, one of which was to ensure proper accountability within ICANN.[3]

    The reason for this was the internet community’s fear that, post the IANA transition, ICANN would turn into a FIFA-esque organization with no one to keep it in check if these accountability concerns weren’t addressed.[4]

    And thus, to answer these concerns, the Cross Community Working Group (CCWG-Accountability) has come up with reports that propose certain changes to the structure and functioning of ICANN.

    In light of the discussions that took place at ICANN54 in Dublin, this blog post summarizes some of these proposals - those pertaining to the Independent Review Process or IRP (explained below), as well as the various accountability models that are the subject of extensive debate both on and off the internet.

    Building Blocks Identified by the CCWG-Accountability

    The CCWG-Accountability put down four “building blocks” on which all its work is based. One of these is the Independent Review Process (or IRP), a mechanism by which internal complaints, whether by individuals or by SOs/ACs[5], are addressed. However, the current version of the IRP has been criticized as an inefficient mechanism of dispute resolution,[6] and the CCWG-Accountability has therefore proposed a variety of amendments to it.

    Another building block that the CCWG-Accountability identified is the need for an “empowered internet community”: more engagement between the ICANN Board and the internet community, as well as increased oversight by the community over the Board. As of now, the USG acts as the oversight entity. Post the IANA transition, however, the community feels it should step in and have an increased say in decisions taken by the ICANN Board.

    As part of empowering the community, the CCWG-Accountability identified five core areas in which the community needs to possess some kind of powers or rights. These areas are: review and rejection of the ICANN budget, strategic plans and operating plans; review, rejection and/or approval of standard bylaws as well as fundamental bylaws; review and rejection of Board decisions pertaining to IANA functions; appointment and removal of individual directors on the Board; and recall of the entire Board itself. It is with regard to the kind of powers and rights to be vested in the community that a variety of accountability models have been proposed, both by the CCWG-Accountability and by the ICANN Board. Of all these models, discussion is now primarily centred on three: the Sole Member Model (SMM), the Sole Designator Model (SDM) and the Multistakeholder Enforcement Model (MEM).

    What is the IRP?

    The Independent Review Process or IRP is the dispute resolution mechanism by which complaints and/or objections by individuals with regard to Board resolutions are addressed. Article 4 of the ICANN bylaws lays down the specifics of the IRP. As of now, a standing panel of six to nine arbitrators is constituted, from which a panel is selected to hear each complaint. However, the primary criticism of the current version of the IRP is the restricted scope of issues on which the panel passes decisions.[7]

    The bylaws explicitly state that the panel is to focus on a set of procedural questions while hearing a complaint – such as whether the Board acted in good faith or exercised due diligence in passing the disputed resolution.

    Changes Proposed by the Internet Community to Enhance the IRP

    To tackle this and other concerns with the existing version of the IRP, the CCWG-Accountability proposed a slew of changes in the second draft proposal that it released in August this year. It proposed that the IRP arbitral panel hear complaints and decide matters on both procedural (as it does now) and substantive grounds. In addition, it proposed broadening who has standing to initiate an IRP, to include individuals, groups and other entities. Further, it proposed a more precedent-based method of dispute resolution, in which a panel refers to and uses decisions passed by past panels in arriving at its own.

    At the 19th October “Enhancing ICANN-Accountability Engagement Session” that took place in Dublin as part of ICANN54, the mechanism to initiate an IRP was explained by Thomas Rickert, CCWG Co-Chair.[8]

    Briefly, the modified process is as follows:

    • An objection may be raised by any individual, even a non-member.
    • This individual needs to find an SO or an AC that shares the objection.
    • A “pre-call” or remote meeting between all the SOs and ACs is scheduled, to see if the objection receives the prescribed threshold of approval from the community.
    • If this threshold is met, dialogue is undertaken with the Board, to see if the objection is sustained by the Board.
    • If this dialogue also fails, then the IRP can be initiated.
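    Purely as an illustration, the escalation steps above can be modelled as a simple decision function. The threshold value, the group names and the data shapes below are assumptions invented for this sketch; they are not drawn from the CCWG's actual proposal.

    ```python
    def escalate_objection(supporting_groups, all_groups, threshold=0.5,
                           board_sustains_objection=False):
        """Walk an objection through the community escalation steps.

        Returns one of: "dropped", "resolved with Board", "IRP initiated".
        """
        # Steps 1-2: the objector must find at least one SO/AC backing the objection.
        if not supporting_groups:
            return "dropped"
        # Step 3: the "pre-call" checks whether support meets the community threshold.
        support = len(supporting_groups) / len(all_groups)
        if support < threshold:
            return "dropped"
        # Step 4: dialogue with the Board; if the Board sustains the objection, stop here.
        if board_sustains_objection:
            return "resolved with Board"
        # Step 5: dialogue failed - the IRP can now be initiated.
        return "IRP initiated"

    # Hypothetical community of six SO/ACs, four of which share the objection.
    groups = ["GNSO", "ccNSO", "ASO", "ALAC", "GAC", "SSAC"]
    print(escalate_objection(["GNSO", "ALAC", "GAC", "ccNSO"], groups))
    ```

    The point of the sketch is simply that the IRP sits at the end of a funnel: an objection with no SO/AC backing, or with backing below the threshold, never reaches the Board at all.
    
    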

    The question of which “enforcement model” best empowers the community arises after an IRP has been initiated: either when the community receives an unfavourable decision through the IRP, or when the ICANN Board refuses to implement the IRP decision. All of the “enforcement models” therefore retain the IRP as the primary method of internal dispute resolution.

    The direction that the CCWG-Accountability has taken with regard to enhancement of the IRP is heartening, and these proposals have received broad support from the community. What remains to be seen is whether the Board will fully implement them, along with the other proposals made by the CCWG.

    Enforcement – An Overview of the Different Models

    In addition to trying to enhance the existing dispute resolution mechanism, the CCWG-Accountability also came up with a variety of “enforcement models” by which the internet community would be vested with certain powers. In response to these models, the ICANN Board came up with a counter-proposal, called the MEM.

    Below is a summary of the powers vested in the community under the SMM, the SDM and the MEM, for each of the core areas identified.

    Reject/review the budget, strategies and operating plans; review/reject Board decisions with regard to IANA functions.

    • SMM: The Sole Member has the reserved power to reject the budget up to two times. The member also has standing to enforce bylaw restrictions on the budget, etc.
    • SDM: The Sole Designator can only trigger Board consultations if opposition to the budget, etc. exists; further, the bylaws specify how many times such a consultation can be triggered. The designator only possesses standing to enforce this consultation.
    • MEM: The community can reject the budget up to two times. The Board is required by the bylaws to reconsider the budget after such rejection, by consulting with the community. If still no change is made, the community can initiate the process to recall the Board.

    Reject/review amendments to standard bylaws and fundamental bylaws.

    • SMM: The Sole Member has the right to veto these changes, and also has standing to enforce this right under the relevant Californian law.
    • SDM: The Sole Designator can also veto these changes; however, there is ambiguity regarding the designator's standing to enforce this right.
    • MEM: No veto power is granted to any SO or AC. Each SO and AC evaluates whether it wants to voice the objection; if a certain threshold of agreement is reached, then, as per the bylaws, the Board cannot go ahead with the amendment.

    Appointment and removal of individual ICANN directors.

    • SMM: The Sole Member can appoint and remove individual directors based on direction from the applicable Nominating Committee.
    • SDM: The Sole Designator can likewise appoint and remove individual directors based on direction from the applicable Nominating Committee.
    • MEM: The SOs/ACs cannot appoint individual directors, but they can initiate the process for their removal. However, directors can only be removed for breach of, or on the basis of, certain clauses in a “pre-service letter” that they sign.

    Recall of the ICANN Board.

    • SMM: The Sole Member has the power to recall the Board, and has standing to enforce this right in Californian courts.
    • SDM: The Sole Designator also has the power to recall the Board; however, there is ambiguity regarding standing to enforce this right.
    • MEM: The community is not vested with the power to recall the Board. However, in some scenarios, if a simultaneous trigger of pre-service letters occurs, something similar to a recall of the Board can occur.

    A Critique of these Models

    SMM:

    The Sole Member Model (or SMM) was discussed and adopted in the second draft proposal, released in August 2015. This model is the simplest and most feasible of the membership-based models, and has received substantial support from the internet community. The SMM proposes only one amendment to the ICANN bylaws - a move from having no members to having one member, while ICANN itself retains its character as a non-profit corporation under Californian law.

    This “sole member” will be the community as a whole, represented by the various SOs and ACs. The SOs and ACs require no separate legal personhood to be a part of this “sole member”, but can directly participate. This participation is to be effected by a voting system, explained in the second draft, which allocates the maximum number of votes each SO and AC can cast. This ensures that each SO/AC doesn’t have to cast a unanimous vote, but each differing opinion within an SO/AC is given equal weight.
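    The fractional-voting idea described above can be sketched in a few lines: each SO/AC holds a fixed allocation of votes and may split it in proportion to internal opinion, instead of casting a unanimous bloc vote. The group names, allocations and support fractions below are invented for illustration; they are not the CCWG's actual figures.

    ```python
    def cast_votes(allocation, support_fraction):
        """Split one SO/AC's vote allocation between 'for' and 'against'."""
        votes_for = allocation * support_fraction
        return votes_for, allocation - votes_for

    def tally(ballots):
        """Sum split votes across all SO/ACs.

        ballots maps each group to (allocation, fraction_in_favour)."""
        total_for = sum(cast_votes(a, f)[0] for a, f in ballots.values())
        total_against = sum(cast_votes(a, f)[1] for a, f in ballots.values())
        return total_for, total_against

    # An SO whose members are 60% in favour casts 3 of its 5 votes for and
    # 2 against - no unanimity needed for the group's voice to count.
    ballots = {"GNSO": (5, 0.6), "ALAC": (5, 1.0), "ccNSO": (4, 0.5)}
    print(tally(ballots))
    ```

    The design point is that a divided SO/AC still participates at full weight, with its internal split carried through to the community-wide tally rather than flattened into a single bloc vote.
    
    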

    SDM:

    A slightly modified and watered-down version of the SMM, proposed by the CCWG-Accountability as an alternative, is the “Sole Designator Model” or the SDM. This model requires an amendment to the ICANN bylaws by which certain SOs/ACs are assigned “designator” status. By virtue of this status, they may then exercise certain rights - the right to recall the Board in certain scenarios, and the right to veto budgets and strategic plans.

    However, there is some uncertainty in Californian law regarding who can be a designator - only an individual, or an entity as well. Whether unincorporated associations such as the SOs and ACs can be “designators” under the law is a question that doesn’t yet have a clear answer.

    Most discussion of the SDM has centred on the designator being vested with the power to “spill”, or remove, all the members of the ICANN Board. The designator holds this power as a last-resort mechanism for the community’s voice to be heard. However, an interesting point raised in one of the accountability sessions at ICANN54 was the almost negligible probability of this course of action ever being taken, i.e., of the Board being “spilled”. So while in theory this model seems to vest the community with massive power, in reality, because the right to “spill” the Board may never be invoked, the SDM is actually a weak enforceability model.

    Other Variants of the Designator Model:

    The CCWG-Accountability, in both its first and second reports, discussed variants of the designator model as well. A generic SO/AC Designator model was discussed in the first draft. The Enhanced SO/AC Designator model, discussed in the second draft, functions along similar lines; under it, however, only those SOs and ACs that want designator status apply for it, as opposed to the mandatory sole designator under the SDM.

    After the CCWG-Accountability released its second draft and the ICANN Board released its counter-proposal (see below), discussion was mostly directed towards the SMM and the MEM. However, discussion of the designator model was recently revived by members of the ALAC at ICANN54 in Dublin, who unanimously issued a statement supporting the SDM.[9] Following this, many more in the community have expressed support for adopting the designator model.[10]

    MEM:

    The Multistakeholder Enforcement Model or MEM was the ICANN Board’s counter-model to the models put forth by the CCWG-Accountability, specifically the SMM. However, the specifics of this model remain unclear; indeed, the vagueness surrounding the model is one of the biggest criticisms of it.

    The CCWG-Accountability accounts for the possible consequences of implementing every model through a mechanism known as “stress tests”. The Board’s proposal, on the other hand, rejects the SMM due to its “unintended consequences”, but does not provide any clarity on what these consequences are, or what the problems with the SMM itself are.[11]

    In addition, many oppose the Board’s proposal in general because it wasn’t created by the community and is therefore, unlike the SMM, not reflective of the community’s views.[12]

    Instead, the Board’s solution is to propose a counter-model that doesn’t in fact fix the existing problems of accountability.

    What is known of the MEM, gathered primarily from an FAQ published on the ICANN community forum, is this: the community, through the various SOs and ACs, can challenge only those actions of the Board that contradict the fundamental bylaws, through binding arbitration. The arbitration panel will be decided by the Board, and the arbitration itself will be financed by ICANN. Further, this process will not replace the existing Independent Review Process or IRP, but will run in parallel.

    Even this small snippet of the MEM raises problems: concerns have been voiced about the neutrality of the arbitral panel and about challenges to the award itself.[13]

    Further, the MEM seems to be in direct opposition to ICANN’s ‘gold standard’ multi-stakeholder model. Essentially, there is no increased accountability of ICANN under the MEM, which has elicited severe opposition from the community.

    What is interesting about all these models is that they are all premised on ICANN continuing to remain within the jurisdiction of the United States; even more surprising is that hardly anyone questions this premise. At ICANN54 the issue did receive a small amount of traction, enough for an ad-hoc committee to be set up to address these jurisdictional concerns, but not more. The only option now is to wait and see what this ad-hoc committee, as well as the CCWG-Accountability through its third draft proposal to be released later this year, comes up with.


    [1]. The IANA functions or the technical functions are the name, number and protocol functions with regard to the administration of the Domain Name System or the DNS.

    [2]. http://www.theguardian.com/technology/2015/sep/21/icann-internet-us-government

    [3]. http://www.theregister.co.uk/2015/10/19/congress_tells_icann_quit_escaping_accountability/?page=1

    [4]. http://www.theguardian.com/technology/2015/sep/21/icann-internet-us-government

    [5]. SOs are Supporting Organizations and ACs are Advisory Committees. They form part of ICANN’s operational structure.

    [6]. Leon Sanchez (ALAC member from the Latin American and Caribbean Region), speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see page 5): https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [7]. Leon Sanchez (ALAC member from the Latin American and Caribbean Region), speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see page 5): https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [8]. Thomas Rickert (GNSO-appointed CCWG co-chair), speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see pages 15-16): https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [9]. http://www.brandregistrygroup.org/alac-throws-spanner-in-icann-accountability-discussions

    [10]. http://www.theregister.co.uk/2015/10/22/internet_community_icann_accountability/

    [11]. http://www.theregister.co.uk/2015/09/07/icann_accountability_latest/

    [12]. http://www.circleid.com/posts/20150923_empire_strikes_back_icann_accountability_at_the_inflection_point/

    [13]. http://www.internetgovernance.org/2015/09/06/icann-accountability-a-three-hour-call-trashes-a-year-of-work/

    Bios and Photos of Speakers for Big Data in the Global South International Workshop

    by Prasad Krishna last modified Nov 06, 2015 02:01 AM

    Bios&Photos_BigDataWorkshop.pdf — PDF document, 1825 kB

    Comments on the Draft Outcome Document of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10)

    by Geetha Hariharan last modified Nov 18, 2015 06:33 AM
    Following the comment-period on the Zero Draft, the Draft Outcome Document of the UN General Assembly's Overall Review of implementation of WSIS Outcomes was released on 4 November 2015. Comments were sought on the Draft Outcome Document from diverse stakeholders. The Centre for Internet & Society's response to the call for comments is below.

     

    The WSIS+10 Overall Review of the Implementation of WSIS Outcomes, scheduled for December 2015, comes as a review of the WSIS process initiated in 2003-05. At the December summit of the UN General Assembly, the WSIS vision and the mandate of the IGF are to be discussed. The Draft Outcome Document, released on 4 November 2015, is a step towards the outcome document for that summit. Comments were sought on the Draft Outcome Document; our comments are below.

    1. The Draft Outcome Document of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (“the current Draft”) stands considerably altered from the Zero Draft. With references to development-related challenges, the Zero Draft covered areas of growth and challenges of the WSIS. It noted the persisting digital divide, the importance of innovation and investment, and of conducive legal and regulatory environments, and the inadequacy of financial mechanisms. Issues crucial to Internet governance such as net neutrality, privacy and the mandate of the IGF found mention in the Zero Draft.
    2. The current Draft retains these, and adds to them. Some previously-omitted issues such as surveillance, the centrality of human rights and the intricate relationship of ICTs to the Sustainable Development Goals, now stand incorporated in the current Draft. This is most commendable. However, the current Draft still lacks teeth with regard to some of these issues, and fails to address several others.
    3. In our comments to the Zero Draft, CIS had called for these issues to be addressed. We reiterate our call in the following paragraphs.

    (1) ICT for Development

    4. In the current Draft, paragraphs 14-36 deal with ICTs for development. While the draft contains rubrics like ‘Bridging the digital divide’, ‘Enabling environment’, and ‘Financial mechanisms’, the following issues are unaddressed:
    • Equitable development for all;
    • Accessibility to ICTs for persons with disabilities;
    • Access to knowledge and open data.

    Equitable development

    5. In the Geneva Declaration of Principles (2003), two goals are set forth as the Declaration’s “ambitious goal”: (a) the bridging of the digital divide; and (b) equitable development for all (¶ 17). The current Draft speaks in detail about bridging the digital divide, but the goal of equitable development is conspicuously absent. At WSIS+10, as the WSIS vision evolves towards the creation of inclusive ‘knowledge societies’, equitable development should be both a key principle and a goal to stand by.
    6. Indeed, inequitable development underlies the persistence of the digital divide. The current Draft itself refers to several instances of inequitable development: for example, the uneven production capabilities and deployment of ICT infrastructure and technology in developing countries, landlocked countries, small island developing states, countries under occupation or suffering natural disasters, and other vulnerable states; the lack of adequate financial mechanisms in vulnerable parts of the world; and the variably affordable (or in many cases, unaffordable) spread of ICT devices, technology and connectivity.
    7. What underlies these challenges is the inequitable and uneven spread of ICTs across states and communities, including in their production, capacity-building, technology transfers, gender-concentrated adoption of technology, and inclusiveness.
    8. As such, it is essential that the WSIS+10 Draft Outcome Document reaffirm our commitment to equitable development for all peoples, communities and states.
    9. We suggest the following inclusion to paragraph 5 of the current Draft:
    “5. We reaffirm our common desire and commitment to the WSIS vision to build an equitable, people-centred, inclusive, and development-oriented Information Society…”

    Accessibility for persons with disabilities

    10. Paragraph 13 of the Geneva Declaration of Principles (2003) pledges to “pay particular attention to the special needs of marginalized and vulnerable groups of society” in the forging of an Information Society. Particularly, ¶ 13 recognises the special needs of older persons and persons with disabilities.

    11. Moreover, ¶ 31 of the Geneva Declaration of Principles calls for the special needs of persons with disabilities, and also of disadvantaged and vulnerable groups, to be taken into account while promoting the use of ICTs for capacity-building. Accessibility for persons with disabilities is thus core to bridging the digital divide – as important as bridging the gender divide in access to ICTs.

    12. Not only this, but the WSIS+10 Statement on the Implementation of WSIS Outcomes (June 2014) also reaffirms the commitment to “provide equitable access to information and knowledge for all… including… people with disabilities”, recognizing that it is “crucial to increase the participation of vulnerable people in the building process of Information Society…” (¶8).

    13. In our previous submission, CIS had suggested language drawing attention to this. The current Draft, however, only acknowledges that “particular attention should be paid to the specific ICT challenges facing… persons with disabilities…” (paragraph 11). It also acknowledges that accessibility for persons with disabilities now constitutes one of the core elements of quality (paragraph 22). However, there is a glaring omission of a call to action, or a reaffirmation of our commitment to bridging the divide experienced by persons with disabilities.

    14. We suggest, therefore, the addition of the following language as paragraph 24A to the current Draft. Sections of this suggestion are drawn from ¶8 of the WSIS+10 Statement on the Implementation of WSIS Outcomes.

    "24A. Recalling the UN Convention on the Rights of Persons with Disabilities, paragraphs 11, 13, 14 and 15 of the Geneva Declaration of Principles and paragraphs 20, 22 and 24 of the Tunis Commitment, and reaffirming the commitment to providing equitable access to information and knowledge for all, building ICT capacity for all and confidence in the use of ICTs by all, including youth, older persons, women, indigenous and nomadic peoples, people with disabilities, the unemployed, the poor, migrants, refugees and internally displaced people and remote and rural communities, it is crucial to increase the participation of vulnerable people in the building process of the Information Society and to make their voices heard by stakeholders and policy-makers at different levels. This can allow the most fragile groups of citizens worldwide to become an integrated part of their economies, and also raise awareness among the target actors of existing ICT solutions (such as tools for e-participation, e-government, e-learning applications, etc.) designed to make their everyday lives better. We recognise the need for continued extension of access for people with disabilities and vulnerable people to ICTs, especially in developing countries and among marginalized communities, and reaffirm our commitment to promoting and ensuring accessibility for persons with disabilities. In particular, we call upon all stakeholders to honour and meet the targets set out in Target 2.5.B of the Connect 2020 Agenda: that enabling environments ensuring accessible telecommunication/ICT for persons with disabilities should be established in all countries by 2020.”

    Access to knowledge and open data

    15. The Geneva Declaration of Principles dedicates a section to access to information and knowledge (B.3). It notes, in ¶26, that a “rich public domain” is essential to the growth of Information Society. It urges that public institutions be strengthened to ensure free and equitable access to information (¶26), and also that assistive technologies and universal design can remove barriers to access to information and knowledge (¶25). Particularly, the Geneva Declaration advocates the use of free and open source software, in addition to proprietary software, to meet these ends (¶27).

    16. It was also recognized in the WSIS+10 Statement on the Implementation of WSIS Outcomes (‘Challenges during implementation of Action Lines and new challenges that have emerged’) that there is a need to promote access to all information and knowledge, and to encourage open access to publications and information (C, ¶¶9 and 12).

    17. In our previous submission, CIS had highlighted the importance of open access to knowledge thus: “…the implications of open access to data and knowledge (including open government data), and responsible collection and dissemination of data are much larger in light of the importance of ICTs in today’s world. As Para 7 of the Zero Draft indicates, ICTs are now becoming an indicator of development itself, as well as being a key facilitator for achieving other developmental goals. As Para 56 of the Zero Draft recognizes, in order to measure the impact of ICTs on the ground – undoubtedly within the mandate of WSIS – it is necessary that there be an enabling environment to collect and analyse reliable data. Efforts towards the same have already been undertaken by the United Nations in the form of ‘Data Revolution for Sustainable Development’. In this light, the Zero Draft rightly calls for enhancement of regional, national and local capacity to collect and conduct analyses of development and ICT statistics (Para 56). Achieving the central goals of the WSIS process requires that such data is collected and disseminated under open standards and open licenses, leading to creation of global open data on the ICT indicators concerned.”

    18. This crucial element is missing from the current Draft of the WSIS+10 Outcome Document. The current Draft does note the importance of access to information and the free flow of data. But it stops short of endorsing and advocating the importance of access to knowledge and free and open source software, which are essential to fostering competition and innovation, diversity of consumer/user choice, and universal access.

    19. We suggest the following addition – of paragraph 23A to the current Draft:

    "23A. We recognize the need to promote access for all to information and knowledge, open data, and open, affordable, and reliable technologies and services, while respecting individual privacy, and to encourage open access to publications and information, including scientific information and in the research sector, and particularly in developing and least developed countries.”

    (2) Human Rights in Information Society

    20. The current Draft recognizes that human rights have been central to the WSIS vision, and reaffirms that rights offline must be protected online as well. However, the current Draft omits to recognise the role played by corporations and intermediaries in facilitating access to and use of the Internet.

    21. In our previous submission, CIS had noted that “the Internet is led largely by the private sector in the development and distribution of devices, protocols and content-platforms, corporations play a major role in facilitating – and sometimes, in restricting – human rights online”.

    22. We reiterate our suggestion for the inclusion of paragraph 43A to the current Draft:

    "43A. We recognize the critical role played by corporations and the private sector in facilitating human rights online. We affirm, in this regard, the responsibilities of the private sector set out in the Report of the Special Representative of the Secretary General on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31 (21 March 2011), and encourage policies and commitments towards respect and remedies for human rights.”

    (3) Internet Governance

    The support for multilateral governance of the Internet

    23. While the section on Internet governance is not considerably altered from the zero draft, there is one large substantive change in the current Draft. The current Draft states that the governance of the Internet should be “multilateral, transparent and democratic, with full involvement of all stakeholders” (¶50). Previously, the zero draft recognized “the general agreement that the governance of the Internet should be open, inclusive, and transparent”.

    24. A return to purely ‘multilateral’ Internet governance would be regressive. Governments are, without doubt, crucial to Internet governance. As scholarship and experience have both shown, governments have played a substantial role in shaping the Internet as it is today, whether in the availability of content, the spread of infrastructure, or licensing and regulation. However, these were and remain contentious spaces.

    25. As such, it is essential to recognize that a plurality of governance models serves the Internet, in which the private sector, civil society, the technical community and academia play important roles. We recommend a return to the language of the zero draft’s ¶32: “open, inclusive and transparent governance of the Internet”.

    Governance of Critical Internet Resources

    26. It is curious that the section on Internet governance in both the zero and the current Draft makes no reference to ICANN, and in particular, to the ongoing transition of IANA stewardship and the discussions surrounding the accountability of ICANN and the IANA operator. The stewardship of critical Internet resources, such as the root, is crucial to the evolution and functioning of the Internet. Today, ICANN and a few other institutions have a monopoly over the management and policy-formulation of several critical Internet resources.

    27. While the WSIS in 2003-05 considered this a troubling issue, that focus seems to have shifted entirely. An ‘open, inclusive, transparent and global’ Internet is a misnomer so long as ICANN – and in effect, the United States – continues to have a monopoly over critical Internet resources. The allocation and administration of these resources should be decentralized and distributed, and should not be within the disproportionate control of any one jurisdiction.

    28. Therefore, we reiterate our suggestion to add paragraph 53A after Para 53:

    "53A. We affirm that the allocation, administration and policy involving critical Internet resources must be inclusive and decentralized, and call upon all stakeholders and in particular, states and organizations responsible for essential tasks associated with the Internet, to take immediate measures to create an environment that facilitates this development.”

    Inclusiveness and Diversity in Internet Governance

    29. The current Draft, in ¶52, recognizes that there is a need to “promote greater participation and engagement in Internet governance of all stakeholders…”, and calls for “stable, transparent and voluntary funding mechanisms to this end.” This is most commendable.

    30. The issue of inclusiveness and diversity in Internet governance is crucial: today, Internet governance organisations and platforms suffer from a lack of inclusiveness and diversity, extending across the representation, participation and operations of these organisations. As CIS submitted previously, the mention of inclusiveness and diversity amounts, in many cases, to tokenism or to a formal (but not operational) principle.

    31. As we submitted before, the developing world is pitifully represented in standards organisations and in ICANN, and policy discussions in organisations like ISOC occur largely in cities like Geneva and New York. For example, 307 out of 672 registries listed in ICANN’s registry directory are based in the United States, while 624 of the 1010 ICANN-accredited registrars are US-based.

    32. Not only this, but 80% of the responses received by ICANN during the ICG’s call for proposals came from men. A truly global, open, inclusive and transparent governance of the Internet must not be so skewed. Representation must include not only those from developing countries, but must also extend across gender and communities.

    33. We propose, therefore, the addition of a paragraph 51A after Para 51:

    "51A. We draw attention to the challenges surrounding diversity and inclusiveness in organisations involved in Internet governance, including in their representation, participation and operations. We note with concern that the representation of developing countries, of women, persons with disabilities and other vulnerable groups, is far from equitable and adequate. We call upon organisations involved in Internet governance to take immediate measures to ensure diversity and inclusiveness in a substantive manner.”

     


    Prepared by Geetha Hariharan, with inputs from Sunil Abraham and Japreet Grewal. All comments submitted towards the Draft Outcome Document may be found at this link.

    Summary Report Internet Governance Forum 2015

    by Jyoti Panday last modified Nov 30, 2015 10:47 AM
    Centre for Internet and Society (CIS), India participated in the Internet Governance Forum (IGF) held at Poeta Ronaldo Cunha Lima Conference Center, Joao Pessoa in Brazil from 10 November 2015 to 13 November 2015. The theme of IGF 2015 was ‘Evolution of Internet Governance: Empowering Sustainable Development’. Sunil Abraham, Pranesh Prakash & Jyoti Panday from CIS actively engaged and made substantive contributions to several key issues affecting internet governance at the IGF 2015. The issue-wise detail of their engagement is set out below.

    INTERNET GOVERNANCE

    I. The Multi-stakeholder Advisory Group to the IGF organised a discussion on Sustainable Development Goals (SDGs) and Internet Economy at the Main Meeting Hall from 9:00 am to 12:30 pm on 11 November, 2015. The discussions at this session focused on the importance of enabling policies and an ecosystem for the Internet economy in fulfilling the different SDGs. Several concerns were addressed, relating to internet entrepreneurship, effective ICT capacity building, protection of intellectual property within and across borders, and the availability of local applications and content. The panel also discussed the need to identify SDGs where internet-based technologies could make the most effective contribution. Sunil Abraham contributed to the panel discussions by addressing the issue of development and promotion of local content and applications. The list of speakers included:

    1. Lenni Montiel, Assistant-Secretary-General for Development, United Nations

    2. Helani Galpaya, CEO LIRNEasia

    3. Sergio Quiroga da Cunha, Head of Latin America, Ericsson

    4. Raúl L. Katz, Adjunct Professor, Division of Finance and Economics, Columbia Institute of Tele-information

    5. Jimson Olufuye, Chairman, Africa ICT Alliance (AfICTA)

    6. Lydia Brito, Director of the Office in Montevideo, UNESCO

    7. H.E. Rudiantara, Minister of Communication & Information Technology, Indonesia

    8. Daniel Sepulveda, Deputy Assistant Secretary, U.S. Coordinator for International and Communications Policy at the U.S. Department of State  

    9. Deputy Minister, Department of Telecommunications and Postal Services, Republic of South Africa

    10. Sunil Abraham, Executive Director, Centre for Internet and Society, India

    11. H.E. Junaid Ahmed Palak, Information and Communication Technology Minister of Bangladesh

    12. Jari Arkko, Chairman, IETF

    13. Silvia Rabello, President, Rio Film Trade Association

    14. Gary Fowlie, Head of Member State Relations & Intergovernmental Organizations, ITU

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/igf2015-main-sessions

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2327-2015-11-11-internet-economy-and-sustainable-development-main-meeting-room

    Video link Internet economy and Sustainable Development here https://www.youtube.com/watch?v=D6obkLehVE8

     II. Public Knowledge organised a workshop on The Benefits and Challenges of the Free Flow of Data at Workshop Room 5 from 11:00 am to 12:00 pm on 12 November, 2015. The discussions in the workshop focused on the benefits and challenges of the free flow of data and also the concerns relating to data flow restrictions including ways to address them. Sunil Abraham contributed to the panel discussions by addressing the issue of jurisdiction of data on the internet. The panel for the workshop included the following.

    1. Vint Cerf, Google

    2. Lawrence Strickling, U.S. Department of Commerce, NTIA

    3. Richard Leaning, European Cyber Crime Centre (EC3), Europol

    4. Marietje Schaake, European Parliament

    5. Nasser Kettani, Microsoft

    6. Sunil Abraham, CIS India

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2467-2015-11-12-ws65-the-benefits-and-challenges-of-the-free-flow-of-data-workshop-room-5

    Video link https://www.youtube.com/watch?v=KtjnHkOn7EQ

    III. Article 19 and Privacy International organised a workshop on Encryption and Anonymity: Rights and Risks at Workshop Room 1 from 11:00 am to 12:30 pm on 12 November, 2015. The workshop fostered a discussion about the latest challenges to the protection of anonymity and encryption, and ways in which law enforcement demands could be met while ensuring that individuals still enjoy strong encryption and unfettered access to anonymity tools. Pranesh Prakash contributed to the panel discussions by addressing concerns about existing South Asian regulatory frameworks on encryption and anonymity, and by emphasizing the need for pervasive encryption. The panel for this workshop included the following.

    1. David Kaye, UN Special Rapporteur on Freedom of Expression

    2. Juan Diego Castañeda, Fundación Karisma, Colombia

    3. Edison Lanza, Organisation of American States Special Rapporteur

    4. Pranesh Prakash, CIS India

    5. Ted Hardie, Google

    6. Elvana Thaci, Council of Europe

    7. Professor Chris Marsden, Oxford Internet Institute

    8. Alexandrine Pirlot de Corbion, Privacy International

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2407-2015-11-12-ws-155-encryption-and-anonymity-rights-and-risks-workshop-room-1

    Video link available here https://www.youtube.com/watch?v=hUrBP4PsfJo

     IV. Chalmers & Associates organised a session on A Dialogue on Zero Rating and Network Neutrality at the Main Meeting Hall from 2:00 pm to 4:00 pm on 12 November, 2015. The Dialogue provided access to expert insight on zero-rating and a full spectrum of diverse views on this issue. The Dialogue also explored alternative approaches to zero rating such as use of community networks. Pranesh Prakash provided a detailed explanation of harms and benefits related to different approaches to zero-rating. The panellists for this session were the following.

    1. Jochai Ben-Avie, Senior Global Policy Manager, Mozilla, USA

    2. Igor Vilas Boas de Freitas, Commissioner, ANATEL, Brazil

    3. Dušan Caf, Chairman, Electronic Communications Council, Republic of Slovenia

    4. Silvia Elaluf-Calderwood, Research Fellow, London School of Economics, UK/Peru

    5. Belinda Exelby, Director, Institutional Relations, GSMA, UK

    6. Helani Galpaya, CEO, LIRNEasia, Sri Lanka

    7. Anja Kovacs, Director, Internet Democracy Project, India

    8. Kevin Martin, VP, Mobile and Global Access Policy, Facebook, USA

    9. Pranesh Prakash, Policy Director, CIS India

    10. Steve Song, Founder, Village Telco, South Africa/Canada

    11. Dhanaraj Thakur, Research Manager, Alliance for Affordable Internet, USA/West Indies

    12. Christopher Yoo, Professor of Law, Communication, and Computer & Information Science, University of Pennsylvania, USA

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/igf2015-main-sessions

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2457-2015-11-12-a-dialogue-on-zero-rating-and-network-neutrality-main-meeting-hall-2

     V. The Internet & Jurisdiction Project organised a workshop on Transnational Due Process: A Case Study in MS Cooperation at Workshop Room 4 from 11:00 am to 12:00 pm on 13 November, 2015. The workshop discussion focused on the challenges in developing an enforcement framework for the internet that guarantees transnational due process and legal interoperability. The discussion also focused on innovative approaches to multi-stakeholder cooperation such as issue-based networks, inter-sessional work methods and transnational policy standards. The panellists for this discussion were the following.

    1. Anne Carblanc, Head of Division, Directorate for Science, Technology and Industry, OECD

    2. Eileen Donahoe, Director of Global Affairs, Human Rights Watch

    3. Byron Holland, President and CEO, CIRA (Canadian ccTLD)

    4. Christopher Painter, Coordinator for Cyber Issues, US Department of State

    5. Sunil Abraham, Executive Director, CIS India

    6. Alice Munyua, Lead, dotAfrica Initiative and GAC representative, African Union Commission

    7. Will Hudson, Senior Advisor for International Policy, Google

    8. Dunja Mijatovic, Representative on Freedom of the Media, OSCE

    9. Thomas Fitschen, Director for the United Nations, for International Cooperation against Terrorism and for Cyber Foreign Policy, German Federal Foreign Office

    10. Hartmut Glaser, Executive Secretary, Brazilian Internet Steering Committee

    11. Matt Perault, Head of Policy Development, Facebook

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2475-2015-11-13-ws-132-transnational-due-process-a-case-study-in-ms-cooperation-workshop-room-4

    Video link Transnational Due Process: A Case Study in MS Cooperation available here https://www.youtube.com/watch?v=M9jVovhQhd0

    VI. The Internet Governance Project organised a meeting of the Dynamic Coalition on Accountability of Internet Governance Venues at Workshop Room 2 from 14:00 – 15:30 on 12 November, 2015. The coalition brought together panelists to highlight the challenges in developing an accountability framework for internet governance venues, including setting standards and developing a set of concrete criteria. Jyoti Panday provided the perspective of civil society on why accountability is necessary in internet governance processes and organizations. The panelists for this workshop included the following.

    1. Robin Gross, IP Justice

    2. Jeanette Hofmann, Director Alexander von Humboldt Institute for Internet and Society

    3. Farzaneh Badiei, Internet Governance Project

    4. Erika Mann, Managing Director, Public Policy, Facebook, and Board of Directors, ICANN

    5. Paul Wilson, APNIC

    6. Izumi Okutani, Japan Network Information Center (JPNIC)

    7. Keith Drazek, Verisign

    8. Jyoti Panday, CIS

    9. Jorge Cancio, GAC representative

    Detailed description of the workshop is available here http://igf2015.sched.org/event/4c23/dynamic-coalition-on-accountability-of-internet-governance-venues?iframe=no&w=&sidebar=yes&bg=no

    Video link https://www.youtube.com/watch?v=UIxyGhnch7w

    VII. The Digital Infrastructure Netherlands Foundation organized an open forum at Workshop Room 3 from 11:00 – 12:00 on 10 November, 2015. The open forum discussed governments’ increasing engagement with “the internet” to protect their citizens against crime and abuse and to protect economic interests and critical infrastructures. It brought together panelists to present ideas about an agenda for the international protection of ‘the public core of the internet’, and to collect and discuss ideas for the formulation of norms and principles and for the identification of practical steps towards that goal. Pranesh Prakash participated in the open forum. Other speakers included:

    1. Bastiaan Goslings, AMS-IX, NL

    2. Pranesh Prakash, CIS, India

    3. Marilia Maciel, FGV, Brazil

    4. Dennis Broeders, NL Scientific Council for Government Policy

    Detailed description of the open forum is available here http://schd.ws/hosted_files/igf2015/3d/DINL_IGF_Open%20Forum_The_public_core_of_the_internet.pdf

    Video link available here https://www.youtube.com/watch?v=joPQaMQasDQ

    VIII. UNESCO, the Council of Europe, Oxford University, the Office of the High Commissioner on Human Rights, Google and the Internet Society organised a workshop on hate speech and youth radicalisation at Room 9 on Thursday, November 12. UNESCO shared the initial outcomes from its commissioned research on online hate speech, including practical recommendations on combating online hate speech by understanding the challenges, mobilizing civil society, lobbying the private sector and intermediaries, and educating individuals in media and information literacy. The workshop also discussed how to help empower youth to address online radicalization and extremism, and to realize their aspirations to contribute to a more peaceful and sustainable world. Sunil Abraham provided his inputs. Other speakers included:

    1. Chaired by Ms Lidia Brito, Director for UNESCO Office in Montevideo

    2. Frank La Rue, Former Special Rapporteur on Freedom of Expression

    3. Lillian Nalwoga, President ISOC Uganda and rep CIPESA, Technical community

    4. Bridget O’Loughlin, CoE, IGO

    5. Gabrielle Guillemin, Article 19

    6. Iyad Kallas, Radio Souriali

    7. Sunil Abraham, Executive Director, Centre for Internet and Society, Bangalore, India

    8. Eve Salomon, Global Chairman of the Regulatory Board of RICS

    9. Javier Lesaca Esquiroz, University of Navarra

    10. Representative GNI

    11. Remote Moderator: Xianhong Hu, UNESCO

    12. Rapporteur: Guilherme Canela De Souza Godoi, UNESCO

    Detailed description of the workshop is available here http://igf2015.sched.org/event/4c1X/ws-128-mitigate-online-hate-speech-and-youth-radicalisation?iframe=no&w=&sidebar=yes&bg=no

    Video link to the panel is available here https://www.youtube.com/watch?v=eIO1z4EjRG0

     INTERMEDIARY LIABILITY

    IX. The Electronic Frontier Foundation, Centre for Internet & Society India, Open Net Korea and Article 19 collaborated to organize a workshop on the Manila Principles on Intermediary Liability at Workshop Room 9 from 11:00 am to 12:00 pm on 13 November 2015. The workshop elaborated on the Manila Principles, a high-level framework of best practices and safeguards for content restriction and for addressing intermediary liability for third-party content. The workshop saw participants engaged in overlapping projects on content restriction practices come together to give feedback and highlight recent developments across liability regimes. Jyoti Panday laid down the key details of the Manila Principles framework in this session. The panelists for this workshop included the following.

    1. Kelly Kim, Open Net Korea

    2. Jyoti Panday, CIS India

    3. Gabrielle Guillemin, Article 19

    4. Rebecca MacKinnon, on behalf of UNESCO

    5. Giancarlo Frosio, Center for Internet and Society, Stanford Law School

    6. Nicolo Zingales, Tilburg University

    7. Will Hudson, Google

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2423-2015-11-13-ws-242-the-manila-principles-on-intermediary-liability-workshop-room-9

    Video link available here https://www.youtube.com/watch?v=kFLmzxXodjs

     ACCESSIBILITY

    X. The Dynamic Coalition on Accessibility and Disability and the Global Initiative for Inclusive ICTs organised a workshop on Empowering the Next Billion by Improving Accessibility at Workshop Room 6 from 9:00 am to 10:30 am on 13 November, 2015. The discussion focused on the need for, and ways of, removing accessibility barriers that prevent over one billion potential users from benefiting from the Internet, including for essential services. Sunil Abraham spoke specifically about the lack of compliance of existing ICT infrastructure with well-established accessibility standards, particularly accessibility barriers in the disaster management process. He discussed the barriers faced by persons with physical or psychosocial disabilities. The panelists for this discussion were the following.

    1. Francesca Cesa Bianchi, G3ICT

    2. Cid Torquato, Government of Brazil

    3. Carlos Lauria, Microsoft Brazil

    4. Sunil Abraham, CIS India

    5. Derrick L. Cogburn, Institute on Disability and Public Policy (IDPP) for the ASEAN (Association of Southeast Asian Nations) Region

    6. Fernando H. F. Botelho, F123 Consulting

    7. Gunela Astbrink, GSA InfoComm

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2438-2015-11-13-ws-253-empowering-the-next-billion-by-improving-accessibility-workshop-room-3

    Video Link Empowering the next billion by improving accessibility https://www.youtube.com/watch?v=7RZlWvJAXxs

     OPENNESS

    XI. A workshop on FOSS & a Free, Open Internet: Synergies for Development was organized at Workshop Room 7 from 2:00 pm to 3:30 pm on 13 November, 2015. The discussion focused on the increasing risks to the openness of the internet and to the ability of present and future generations to use technology to improve their lives. The panel shared different perspectives on the future co-development of FOSS and a free, open Internet; the threats that are emerging; and ways for communities to surmount them. Sunil Abraham emphasised the importance of free software, open standards, open access and access to knowledge, noted the absence of this mandate in the draft outcome document for the upcoming WSIS+10 review, and called for its inclusion. Pranesh Prakash further contributed to the discussion by emphasizing the need for free and open source software with end-to-end encryption and traffic-level encryption based on open standards that are decentralized and work through federated networks. The panellists for this discussion were the following.

    1. Satish Babu, Technical Community, Chair, ISOC-TRV, Kerala, India

    2. Judy Okite, Civil Society, FOSS Foundation for Africa

    3. Mishi Choudhary, Private Sector, Software Freedom Law Centre, New York

    4. Fernando Botelho, Private Sector, heads F123 Systems, Brazil

    5. Sunil Abraham, CIS India

    6. Pranesh Prakash, CIS India

    7. Nnenna Nwakanma, World Wide Web Foundation

    8. Yves MIEZAN EZO, Open Source strategy consultant

    9. Corinto Meffe, Advisor to the President and Directors, SERPRO, Brazil

    10. Frank Coelho de Alcantara, Professor, Universidade Positivo, Brazil

    11. Caroline Burle, Institutional and International Relations, W3C Brazil Office and Center of Studies on Web Technologies

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2468-2015-11-13-ws10-foss-and-a-free-open-internet-synergies-for-development-workshop-room-7

    Video link available here https://www.youtube.com/watch?v=lwUq0LTLnDs



    WhatsApps with fireworks, apps with diyas: Why Diwali needs to go beyond digital

    by Nishant Shah last modified Nov 23, 2015 01:27 PM
    The idea of a 'digital' Diwali reduces our social relationships to a ledger of give and take. The last fortnight, I have been bombarded with advertisements selling the idea of a “Digital Diwali”. We have become so used to the idea that everything that is digital is modern, better and more efficient.

    The article was published in the Indian Express on November 22, 2015.


    I have WhatsApp messages with exploding fireworks, singing greeting cards that chant mystic sounding messages, an app that turns my smartphone into a flickering diya, another app that remotely controls the imitation LED candles on my windows, an invitation to Skype in for a puja at a friend’s house 3,000 km away, and the surfeit of last minute shopping deals, each one offering a dhamaka of discounts.

    However, to me, the digitality of Diwali is beyond the surface level of seductive screens and one-click shopping, or messages of love and apps of light. Think of Diwali as sharing the fundamental logic that governs the digital — the logic of counting. As we explode with joy this festive season, we count our blessings, our loved ones, the gifts and presents that we exchange. If we are on the new Fitbit trend, we count the calories we consume and burn as we make our way through parties where it is important to see and be seen, compare and contrast, connect with all the people who could be thought of as friends, followers, connectors, or connections.

    While there is no denying that there is a sociality that the festival brings in, there is also a cruel algebra of counting that comes along with it. It is no surprise that as we celebrate the victory of good over evil and right over wrong, we also simultaneously bow our heads to the goddess of wealth in this season.

    Look beyond the glossy surface of Diwali festivities, and you realise that it is exactly like the digital. Digital is about counting. It is right there in the name — digits refer to numbers. Or digits refer to fingers — these counting appendages which we can manipulate and flex in order to achieve desired results. At the core of digital systems is the logic of counting, and counting, as anybody will tell us, is not a benign process. What gets counted, gets accounted for, thus producing a ledger of give and take which often becomes the measure of our social relationships.

    I remember, as a child, my mother meticulously making a note of every gift or envelope filled with money that ever came our way from the relatives, so that there would be precise and exact reciprocation. I am certain that there is now an app which can keep a track of these exchanges. I am not suggesting that these occasions of gifting are merely mercenary, but they are embodiments of finely calibrated values and worth of relationships defined by proximity, intimacy, hierarchy and distance. The digital produces and works on a similar algorithm, which is often as inscrutable and opaque as the unspoken codes of the Diwali ledger.

    There is something else that happens with counting. The only things that can have value are things that can be counted. I don’t know which ledger counts the coming together of my very distributed family for an evening of chatting, talking, sharing lives and laughter. I don’t know how anybody would reciprocate that one late night when a cousin came to our home and spent hours with my younger brother making a rangoli to surprise the rest of us. I have no idea how they will ever reciprocate gifts that one of the younger kids made at school for all the members of the family.

    Diwali is about the things, but like the digital system, these are things that cannot be counted. And within the digital system, things that cannot be counted are things that get discounted. They become unimportant. They become noise, or rubbish. Our social networks are counting systems that might notice the low frequency of my connections with my extended family but they cannot quantify the joy I hear in the voice of my grandmother when I call her from a different time-zone to catch up with her. Digital systems can only deal with things with value and not their worth.

    I do want to remind myself that there is more to this occasion than merely counting. And for once, I want to go beyond the digital, where my memories of the past and the expectations of the future are not shaped by the digital systems of counting and quantifying. Instead, I want Diwali to be analogue. I shall still be mediating my collectivity with the promises of connectivity, but I want to think of this moment as beyond the logics and logistics of counting that codify our social transactions and take such a central location in our personal functioning. This Diwali, I am rooting for a post-digital Diwali, that accounts for all those things that cannot be counted, but are sometimes the only things that really count.

    CIS Submission on CCWG-Accountability 2nd Draft Proposal on Work Stream 1 Recommendations

    by Pranesh Prakash last modified Nov 23, 2015 02:58 PM
    The Centre for Internet & Society (CIS) submitted the below to ICANN's CCWG-Accountability.

    The CCWG Accountability proposal is longer than many countries' constitutions.  Given that, we will keep our comments brief, addressing a very limited set of the issues in very broad terms.

    Human Rights

    ICANN is unique in many ways.  It is a global regulator that has powers of taxation to fund its own operation.  ICANN is not a mere corporation. For such a regulator, ensuring fair process (what is often referred to as "natural justice") as well as substantive human rights (such as the freedom of expression, right against discrimination, right to privacy, and cultural diversity), are important.  Given the narrow framing of "free expression and the free flow of information" in Option 1, we believe Option 2 is preferable.

    Diversity

    We are glad that diversity is being recognized as an important principle.  As we noted during the open floor session at ICANN49: [We are] extremely concerned about the accountability of ICANN to the global community.  Due to various decisions made by the US government relating to ICANN's birth, ICANN has had a troubled history with legitimacy.  While it has managed to gain and retain the confidence of the technical community, it still lacks political legitimacy due to its history.  The NTIA's decision has presented us an opportunity to correct this.

    However, ICANN can't hope to do so without going beyond the current ICANN community, which while nominally being 'multistakeholder' and open to all, grossly under-represents those parts of the world that aren't North America and Western Europe.

    Of the 1010 ICANN-accredited registrars, 624 are from the United States, and 7 from the 54 countries of Africa.  In a session yesterday, a large number of the policies that favour entrenched incumbents from richer countries were discussed.  But without adequate representation from poorer countries, and adequate representation from the rest of the world's Internet population, there is no hope of changing these policies.

    This is true not just of the business sector, but of all the 'stakeholders' that are part of global Internet policymaking, whether they follow the ICANN multistakeholder model or another.  A look at the board members of the Internet Architecture Board, for instance, would reveal how skewed the technical community can be, whether in terms of geographic or gender diversity.

    Without greater diversity within the global Internet policymaking communities, there is no hope of equity, respect for human rights — civil, political, cultural, social and economic — and democratic functioning, no matter how 'open' the processes seem to be, and no hope of ICANN accountability either.

    Meanwhile, there are those who are concerned that diversity should not prevail over skill and experience.  Those who have the greatest skill and experience will be those who are insiders in the ICANN system.  To believe that being an insider in the ICANN system ought to be privileged over diversity is wrong.  A call for diversity isn't just political correctness.  It is essential for legitimacy of ICANN as a globally-representative body, and not just one where the developed world (primarily US-based persons) makes policies for the whole globe, which is what it has so far been.  Of course, this cannot be corrected overnight, but it is crucial that this be a central focus of the accountability initiative.

    Jurisdiction, Membership Models and Voting Rights

    The Sole-Member Community Mechanism (SMCM) that has been proposed seems in large part the best manner of dealing with accountability issues available under Californian law relating to public benefit corporations, and is the lynchpin of the whole accountability mechanism under Work Stream 1.

    However, the jurisdictional analysis laid down in 11.3 will only be completed post-transition, as part of Work Stream 2. Thus the SMCM may not necessarily be the best model under a different legal jurisdiction, and it would be useful to discuss the dependency between the two more clearly.  In this vein, it is essential that Article XVIII, Section 1 not be designated a fundamental bylaw.  Further, it would be useful to add that for some limited aspects of the transition (such as IANA functioning), ICANN should seek to enter into a host country agreement providing legal immunity, thus qualifying para 125 ("ICANN accountability requires compliance with applicable legislation, in jurisdictions where it operates."), since the IANA functions operator ought not be forced by a country to refuse requests made by, for example, North Korea.

    It should also be noted that accountability requires independence, which may be of two kinds: independence of funding and independence of appointment.  From what one can gather from the CCWG proposal, the Independent Review Panel will be funded out of the budget the ICANN Board prepares, while the appointment process remains unclear.

    One of the most important accountability mechanisms with regard to the IANA functions is that of changing the IANA Functions Operator (IFO).  As per the CWG-Stewardship's current proposal, the "Post-Transition IANA" won't be an entity that is independent of ICANN.  If the PTI's governance is permanently made part of ICANN's fundamental bylaws (as an affiliate controlled by ICANN), how is it proposed that the IFO be moved from PTI to some other entity if the IANA Functions Review Team so decides? Additionally, for such an important function, the composition of the IFRT should not be left unspecified.

    While the proposed separation between the IANA budget and the budget for the rest of ICANN's functioning is welcome, the current discussion around budgets seems to assume that all IANA functions will be funded by ICANN, whereas if the IANA functions are separated, each community might fund its function separately.  That would provide two levels of insulation to the IANA functions operator(s): separate sources of operational revenue, as well as separate budgets within ICANN.

    It should be noted that some responses express concern about the shifting of existing power structures within ICANN through some of the proposed alternative voting allocations in the SMCM. However, rather than present arguments as to why these shifts would be beneficial or harmful for ICANN's overall accountability, these responses seem to assume that any shift from the current power structures is harmful.  This is an unfounded assumption and cannot be a valid reason, nor can speculation about how the United States Congress will behave be a valid reason for rejecting an otherwise valid proposal.  If there are harms, they ought to be clearly articulated: departure from the status quo and fear of the US Congress are not valid harms.  Thus, while it is important to consider how different voting rights models might change the status quo, that cannot be the sole criterion for judging their merits.  Further, as the French government notes:

    [T]he French Government still considers that linking Stress Test 18 to a risk of capture of ICANN by governments and NTIA’s requirement that no “government-led or intergovernmental organization solution would be acceptable”, makes no sense. . . . Logically, the risk of capture of ICANN by governments in the future is as low as it is now and in any case, it cannot lead to a “government-led or intergovernmental organization solution”.

    While dealing with the question of relative voting proportions, the community must remember that not all parts of the world are as developed, with regard to the domain name industry and civil society, as North America, Western Europe and other developed regions, and thus may not find adequate representation via the SOs.  In many parts of the world, civil society organizations, especially those focussed on Internet governance and domain name policies, are non-existent.  Thus a system that privileges the SOs to the exclusion of other components of a multistakeholder governance model would not be representative or diverse.  A multistakeholder model cannot disproportionately represent business interests over all other interests.

    In this regard, the comments of former ICANN Chairperson, Rod Beckstrom, at ICANN43 ought to be recalled:

    ICANN must be able to act for the public good while placing commercial and financial interests in the appropriate context . . . How can it do this if all top leadership is from the very domain name industry it is supposed to coordinate independently?

    As Kieren McCarthy points out about ICANN:

    The Board does have too many conflicted members
    The NomCom is full of conflicts
    There are not enough independent voices within the organization

    Reforms in these areas ought to be as crucial to accountability as the membership model.

    The current mechanisms for ensuring transparency, such as the DIDP process, are wholly inadequate.  We have summarized our experience with the DIDP process, and how often we were denied information on baseless grounds, in this table.

    Predictive Policing: What is it, How it works, and its Legal Implications

    by Rohan George — last modified Nov 24, 2015 04:31 PM
    This article reviews literature surrounding big data and predictive policing and provides an analysis of the legal implications of using predictive policing techniques in the Indian context.

    Introduction

    For the longest time, humans have been obsessed with prediction. Perhaps the most well-known oracle in history, Pythia, the supposedly infallible Oracle of Delphi, was said to predict future events in hysterical outbursts on the seventh day of the month, inspired by the god Apollo himself. This fascination with foreknowledge of future events has hardly subsided. What has changed, however, are the methods we employ. The development of big data technologies, for one, has found radical applications in many parts of life as we know it, including enhancing our ability to make accurate predictions about the future.

    One notable application of big data to prediction caters to another need as old as human civilisation: the need to protect our communities and cities. The word 'police' itself originates from the Greek word 'polis', meaning city. The melding of these two concepts, prediction and policing, has come together in the practice of predictive policing, which is the application of computer modelling to historical crime data and metadata to predict future criminal activity[1]. In the subsequent sections, I will introduce predictive policing and explain some of the main methods within the domain. Because of the disruptive nature of these technologies, it will also be prudent to expand on the implications predictive technologies have for justice, privacy protections and protections against discrimination, among others.

    In introducing the concept of predictive policing, my first step is to give a short explanation of current predictive analytics techniques, because these are the techniques that are applied in a law enforcement context as predictive policing.

    What is predictive analytics?

    Facilitated by the availability of big data, predictive analytics uses algorithms to recognise data patterns and predict future outcomes[2]. Predictive analytics encompasses data mining, predictive modeling, machine learning, and forecasting[3], and relies heavily on machine learning and artificial intelligence approaches[4]. The aim of such analysis is to identify relationships among variables that may not be immediately apparent using hypothesis-driven methods.[5] In the mainstream media, one of the most infamous stories about the use of predictive analytics comes from the USA, regarding the department store Target and its data analytics practices[6]. Target mined data from the purchasing patterns of people who signed onto its baby registry. From this it was able to predict approximately when customers were due, and to target advertisements accordingly. In the noted story, the predictions were so successful that Target identified a teenage customer's pregnancy before her father knew she was pregnant.[7]

    Examples of predictive analytics

    • Predicting the success of a movie based on its online ratings[8]
    • Many universities, sometimes in partnership with other firms, use predictive analytics to provide course recommendations to students, track student performance, personalise curricula to individual students and foster networking between students.[9]
    • Predictive Analysis of Corporate Bond Indices Returns[10]

    Relationship between predictive analytics and predictive policing

    The same techniques used in many of the predictive methods mentioned above find application in some predictive policing methods. However, two important points need to be raised:

    First, predictive analytics is actually a subset of predictive policing. This is because, while the steps in creating a predictive model (defining a target variable, exposing the model to training data, selecting appropriate features and finally running predictive analysis[11]) may be the same in a policing context, there are other methods which may be used to predict crime but which do not rely on data mining. These techniques may instead use other methods, such as some of those detailed below, along with data about historical crime to generate predictions.

    Second, in her article "Policing by Numbers: Big Data and the Fourth Amendment"[12], Joh categorises three main applications of big data in policing: predictive policing, domain awareness systems and genetic data banks. Genetic data banks refer to large databases of DNA collected as part of the justice system. Issues arise when the DNA collected is repurposed to conduct familial searches, instead of being used to corroborate identity; familial searches may have disproportionate impacts on minority races. Domain awareness systems use various computer software and other digital surveillance tools, such as Geographical Information Systems[13] or more illicit ones such as Black Rooms[14], to "help police create a software-enhanced picture of the present, using thousands of data points from multiple sources within a city"[15]. Joh was right to separate predictive policing from domain awareness systems, especially when it comes to analysing the implications of the various applications of big data in policing.

    In such an analysis, the issues surrounding predictive technologies often get conflated with larger issues about the application of big data in law enforcement. That opens the debate up to questions about overly intrusive evidence gathering and mass surveillance systems which, though used alongside predictive technology, are not themselves predictive in nature. In this article, I concentrate on the specific implications that arise from predictive methods.

    One important point regarding the impact of predictive policing is how the insights that predictive methods offer are used. There is much support for the idea that predictive policing does not replace existing policing methods, but augments them. The RAND report specifically cites as one myth about predictive policing that "the computer will do everything for you"[16]. In reality, police officers need to act on the recommendations provided by the technologies.

    What is Predictive policing?

    Predictive policing is the "application of analytical techniques, particularly quantitative techniques, to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions".[17] It is important to note that the use of data and statistics to inform policing is not new. Even twenty years ago, before the deluge of big data we have today, law enforcement agencies such as the New York Police Department (NYPD) were already using crime data in a major way. To keep track of crime trends, the NYPD used CompStat[18] to map "crime statistics along with other indicators of problems, such as the locations of crime victims and gun arrests"[19]. Senior officers used the information provided by CompStat to monitor crime trends on a daily basis, and such monitoring became an instrumental way to track the performance of police agencies[20]. CompStat has since seen application in many other jurisdictions[21].

    But what is new is the amount of data available for collection, as well as the ease with which organisations can analyse and draw insightful results from that data. Specifically, new technologies allow for far more rigorous interrogation of data and wide-ranging applications, including adding greater accuracy to the prediction of future incidence of crime.

    Predictive Policing methods

    Some methods of predictive policing apply standard statistical methods, while others modify those techniques. Predictive techniques that forecast future criminal activity can be framed around six analytic categories. These categories overlap, in the sense that multiple techniques are combined in actual predictive policing software, and similar theories of criminology undergird many of the methods; but categorising them in this way helps clarify the concept of predictive policing. The categorisation below comes from a RAND Corporation report entitled 'Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations'[22], a comprehensive and detailed contribution to scholarship in this nascent area.

    Hot spot analysis: Methods involving hot spot analysis attempt to "predict areas of increased crime risk based on historical crime data"[23]. The premise behind such methods lies in the adage that "crime tends to be lumpy"[24]. Hot spot analysis seeks to map out previous incidences of crime in order to anticipate where future crime may occur.
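As a rough illustration (not any agency's actual system), hot spot analysis at its simplest can be sketched as binning historical incident coordinates into a spatial grid and ranking cells by incident count. The coordinates and cell size below are entirely hypothetical:

```python
from collections import Counter

def hot_spots(incidents, cell_size=1.0, top_n=3):
    """Bin (x, y) incident locations into grid cells and return the
    cells with the most historical incidents."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return counts.most_common(top_n)

# Hypothetical incident coordinates; most fall in the cell around (2, 3).
past_crimes = [(2.1, 3.2), (2.4, 3.8), (2.9, 3.1), (7.5, 1.2), (2.2, 3.9)]
print(hot_spots(past_crimes, top_n=1))  # → [((2, 3), 4)]
```

Real systems use far finer grids, kernel density estimation and decay over time, but the core step is the same: count where crime has been, and patrol there.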

    Regression methods: A regression aims to find relationships between independent variables (factors that may influence criminal activity) and certain variables that one aims to predict. Hence, this method would track more variables than just crime history.
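A toy sketch of the regression idea, with entirely made-up numbers: fit an ordinary least-squares line relating an independent variable (here, counts of a hypothetical minor "leading indicator" offence) to later violent crime counts, then extrapolate:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical data: minor-offence counts vs. violent crimes next quarter.
minor = [10, 20, 30, 40]
violent = [3, 5, 7, 9]
a, b = fit_line(minor, violent)
print(round(a + b * 50, 1))  # predicted violent crimes if 50 minor offences
```

An operational model would, of course, use many independent variables at once and validate its fit before any prediction is acted on.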

    Data mining techniques: Data mining attempts to recognise patterns in data and use them to make predictions about the future. An important variant among the data mining methods used in policing is the type of algorithm used to mine the data, which depends on the nature of the data the predictive model was trained on and will interrogate in future. Two broad categories of algorithms commonly used are clustering algorithms and classification algorithms:

    · Clustering algorithms "form a class of data mining approaches that seek to group data into clusters with similar attributes" [25]. One example of clustering algorithms is spatial clustering algorithms, which use geospatial crime incident data to predict future hot spots for crime[26].

    · Classification algorithms "seek to establish rules assigning a class or label to events"[27]. These algorithms use training data sets "to learn the patterns that determine the class of an observation"[28]. The patterns identified by the algorithm are then applied to future data and, where applicable, the algorithm will recognise similar patterns in that data. This can be used, for example, to make predictions about future criminal activity.
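To make the spatial clustering variant concrete, here is a minimal pure-Python k-means sketch grouping hypothetical geocoded incidents into clusters whose centres could seed future hot spot predictions. Real systems use richer features and robust initialisation; everything below is illustrative:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: group 2-D incident locations into k spatial clusters."""
    centroids = list(points[:k])  # naive initialisation from the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each incident to its nearest centroid (squared distance)
            nearest = min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                          + (p[1] - centroids[j][1]) ** 2)
            clusters[nearest].append(p)
        # recompute each centroid as the mean of its assigned points
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids

# Hypothetical geocoded incidents forming two spatial groups.
pts = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
print(sorted(kmeans(pts, 2)))  # two centroids, near (1, 1) and (8, 8)
```

A classification algorithm would instead start from labelled examples (say, incidents tagged by crime type) and learn rules to label new observations, rather than discovering groups unsupervised as above.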

    Near-repeat methods: Near-repeat methods work off the assumption that future crimes will take place close, in time and location, to current crimes. Hence, it could be postulated that areas of high crime will experience more crime in the near future[29]. This involves the use of a 'self-exciting' algorithm, very similar to algorithms modelling earthquake aftershocks[30]. The premise undergirding such methods is very similar to that of hot spot analysis.
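The "self-exciting" idea can be sketched as a simplified Hawkes-style intensity: each past crime temporarily raises the estimated crime rate nearby, with the boost decaying over elapsed time and distance, much like aftershock models. All parameters and event data below are illustrative, not drawn from any deployed model:

```python
import math

def intensity(t, x, past_events, mu=0.1, alpha=0.5, decay=1.0, bandwidth=1.0):
    """Self-exciting rate at time t and location x: a background rate mu
    plus a contribution from each earlier event that decays with elapsed
    time and with distance (cf. earthquake-aftershock models)."""
    rate = mu
    for t_i, x_i in past_events:
        if t_i < t:
            rate += (alpha * math.exp(-decay * (t - t_i))
                     * math.exp(-((x - x_i) ** 2) / bandwidth))
    return rate

events = [(0.0, 5.0), (1.0, 5.2)]  # hypothetical (time, location) pairs
near = intensity(1.5, 5.1, events)
far = intensity(1.5, 20.0, events)
print(near > far)  # risk is elevated near recent crimes
```

Software in this family ranks map cells by such an intensity and directs patrols to the highest-rate cells for the coming shift.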

    Spatiotemporal analysis: Using "environmental and temporal features of the crime location" [31] as the basis for predicting future crime. By combining the spatiotemporal features of the crime area with crime incident data, police could use the resultant information to predict the location and time of future crimes. Examples of factors that may be considered include timing of crimes, weather, distance from highways, time from payday and many more.

    Risk terrain analysis: Analyses other factors that are useful in predicting crimes, such as "the social, physical, and behavioural factors that make certain areas more likely to be affected by crime"[32].
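Risk terrain analysis can be sketched as combining weighted map-layer factors into a single risk score per cell. The factor names and weights below are invented for illustration only:

```python
def risk_score(cell, weights):
    """Weighted sum of risk-terrain factors for one map cell.
    Factors and weights are illustrative, not from any deployed system."""
    return sum(weights[f] * cell.get(f, 0) for f in weights)

# Hypothetical factor weights and per-cell factor values.
weights = {"past_burglaries": 0.5, "near_highway": 0.2, "apartment_density": 0.3}
cells = {
    "A": {"past_burglaries": 4, "near_highway": 1, "apartment_density": 2},
    "B": {"past_burglaries": 1, "near_highway": 0, "apartment_density": 1},
}
riskiest = max(cells, key=lambda c: risk_score(cells[c], weights))
print(riskiest)  # → A
```

In practice the weights would themselves be estimated from historical data rather than set by hand as here.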

    The various methods listed above are used, often together, to predict where and when a crime may take place, or even its potential victims. The unifying thread relating these methods is their dependence on historical crime data.

    Examples of predictive policing

    Most uses of predictive policing that have been studied and reviewed in scholarly work come from the USA, though I will also detail one case study from Derbyshire, UK. Below is a collation of practical applications of the methods described above.

    Hot spot analysis in Sacramento: In February 2011, the Sacramento Police Department began using hot spot analysis, along with research on the optimal patrol time needed to act as a deterrent, to inform how it patrols high-risk areas. The policy was aimed at preventing serious crimes by patrolling predicted hot spots. In places where there was such patrolling, serious crime fell by a quarter, with no significant increase in such crimes in surrounding areas[33].

    Data mining and hot spot mapping in Derbyshire, UK: The Safer Derbyshire Partnership, a group of law enforcement agencies and municipal authorities, sought to identify juvenile crime hotspots[34]. They used MapInfo software to combine "multiple discrete data sets to create detailed maps and visualisations of criminal activity, including temporal and spatial hotspots"[35]. This information informed law enforcement about how to optimally deploy their resources.

    Regression models in Pittsburgh: Researchers used reports from the Pittsburgh Bureau of Police about violent crimes and "leading indicator"[36] crimes: crimes that were relatively minor but which could signal potential future violent offences. The researchers analysed areas with violent crimes, used as the dependent variable, to test whether violent crimes in certain areas could be predicted by the leading indicator data. Of the 93 significant violent crime areas studied, 19 were successfully predicted by the leading indicator data.[37]

    Risk terrain modelling in Morris County, New Jersey: Police in Morris County used risk terrain analysis to tackle violent crimes and burglaries. They considered five inputs in their model: "past burglaries, the address of individuals recently arrested for property crimes, proximity to major highways, the geographic concentration of young men and the location of apartment complexes and hotels."[38] Morris County law enforcement officials linked the significant reductions in violent and property crime to their use of risk terrain modelling[39].

    Near-repeat and hot spot analysis by the Santa Cruz Police Department: The department uses PredPol software, which applies Mohler's algorithm[40] to a database of five years' worth of crime data to assess the likelihood of future crime occurring in geographic areas within the city. Before going on shift, officers receive information identifying the 15 such areas with the highest probability of crime[41]. The initiative has been cited as very successful at reducing burglaries, and has also been used in Los Angeles and Richmond, Virginia[42].

    Data mining and spatiotemporal analysis to predict future criminal activity in Chicago: Officers in the Chicago Police Department made visits to people their software predicted were likely to be involved in violent crimes[43], guided by an algorithm-generated "Heat List"[44]. Inputs used in the predictions include certain types of arrest records, gun ownership, social networks[45] (police analysis of social networking is also a rising trend in predictive policing[46]) and, more generally, the type of people one is acquainted with[47], among others; the full list of factors is not public. Based in part on the information provided by the algorithm, officers visit the homes of people on the Heat List (or sometimes mail letters) to offer social services or deliver warnings about the consequences of offending, such as information about vocational training programs or warnings that Federal law provides harsher punishments for reoffending[48].

    Predictive policing in India

    In this section, I map out some of the developments in the field of predictive policing within India. On the whole, predictive policing is still very new in India, with Jharkhand being the only state that appears to already have concrete plans in place to introduce predictive policing.

    Jharkhand Police

    The Jharkhand police began developing their IT infrastructure, such as a Geographic Information System (GIS) and a server room, when they received funding of Rs. 18.5 crore from the Ministry of Home Affairs[49]. The Open Group on E-governance (OGE), founded as a collaboration between the Jharkhand Police and the National Informatics Centre[50], is now a multi-disciplinary group which takes on different IT-related projects[51]. With regard to predictive policing, some members of the OGE began developing, in 2013, data mining software that will scan digitised online records. The emerging crime trends "can be a building block in the predictive policing project that the state police want to try."[52]

    The Jharkhand Police was also reported, in 2012, to be in the final stages of forming a partnership with IIM-Ranchi[53]. The Jharkhand police reportedly aimed to tap into IIM's advanced business analytics skills[54], skills that can be very useful in a predictive policing context. Mr Pradhan suggested that "predictive policing was based on intelligence-based patrol and rapid response"[55] and that it could go a long way towards dealing with the threat of Naxalism in Jharkhand[56].

    However, in Jharkhand, the emphasis appears to be on developing a massive domain awareness system, collecting data and creating new ways to present that data to officers on the ground, rather than on architecting and using predictive policing software. For example, the Jharkhand police now have in place "a Naxal Information System, Crime Criminal Information System (to be integrated with the CCTNS) and a GIS that supplies customised maps that are vital to operations against Maoist groups"[57]. The Jharkhand police's "Crime Analytics Dashboard"[58] shows the incidence of crime by type and location, presenting it in an accessible portal that provides up-to-date information and undoubtedly raises the situational awareness of officers. Arguably, the domain awareness systems taking shape in Jharkhand will pave the way for predictive policing methods to be applied in the future. These systems and hot spot maps seem to be the start of a new age of policing in Jharkhand.

    Predictive Policing Research

    One promising idea for predictive policing in India comes from research conducted by Lavanya Gupta and others, entitled "Predicting Crime Rates for Predictive Policing"[59], a submission for the Gandhian Young Technological Innovation Award. The research uses regression modelling to predict future crime rates. Drawing from First Information Reports (FIRs) of violent crimes (murder, rape, kidnapping, etc.) from the Chandigarh Police, the team attempted "to extrapolate annual crime rate trends developed through time series models. This approach also involves correlating past crime trends with factors that will influence the future scope of crime, in particular demographic and macro-economic variables"[60]. The researchers used early crime data as the training data for their model, which, after some testing, achieved an accuracy of around 88.2%.[61] On the face of it, ideas like this could be the starting point for the introduction of predictive policing into India.
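As a greatly simplified stand-in for the time series models the researchers describe (their actual models are richer and incorporate demographic and macro-economic variables), annual trend extrapolation can be sketched as fitting a straight line to yearly counts and projecting it forward; the counts below are invented:

```python
def trend_forecast(series, ahead=1):
    """Fit a straight-line trend to an annual series (index = year offset)
    and extrapolate `ahead` steps; a stand-in for richer time series models."""
    n = len(series)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(series) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, series)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a + b * (n - 1 + ahead)

# Hypothetical annual counts of reported violent crimes.
annual = [100, 110, 120, 130]
print(trend_forecast(annual))  # next-year extrapolation → 140.0
```

Correlating the trend with external variables, as the Chandigarh team did, would mean adding those variables as further regressors rather than relying on time alone.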

    The rest of India's law enforcement bodies do not appear to be lagging behind. At the 44th All India Police Science Congress, held in Gandhinagar, Gujarat in March this year, one of the themes for discussion was the "Role of Preventive Forensics and latest developments in Voice Identification, Tele-forensics and Cyber Forensics"[62]. Mr A. K. Singh (Additional Director General of Police, Administration), the chairman of the event, also said in an interview that a round-table of DGs (Directors General of Police) would be held at the conference to discuss predictive policing[63]. Perhaps predictive policing in India is not that far from reality.

    CCTNS and the building blocks of Predictive policing

    The Ministry of Home Affairs conceived of the Crime and Criminal Tracking Network and Systems (CCTNS) as part of national e-Governance plans. According to the website of the National Crime Records Bureau (NCRB), CCTNS aims to develop "a nationwide networked infrastructure for evolution of IT-enabled state-of-the-art tracking system around 'investigation of crime and detection of criminals' in real time"[64].

    Plans for predictive policing seem to be in the works, but the first steps needed across India's police forces involve digitising data collection by the police, as well as connecting law enforcement agencies. The NCRB's website describes the current possibility of exchange of information between neighbouring police stations, districts or states as "next to impossible"[65]. The aim of CCTNS is precisely to address this gap and to integrate and connect the segregated law enforcement arms of the state in India, which would be a foundational step in any initiative to apply predictive methods.

    What are the implications of using predictive policing? Lessons from the USA

    Despite the moves by law enforcement agencies to adopt predictive policing, the implications of predictive policing methods are far from clear. This section examines these implications for the administration of justice and the use of predictive evidence in law, as well as the privacy concerns they raise for the individual. It frames the existing debates surrounding predictive policing and aims to apply these principles in an Indian context.

    Justice, Privacy and the IV Amendment

    Two key concerns about how predictive policing methods may be used by law enforcement relate to how insights from predictive methods are acted upon and how courts interpret them. In the USA, this issue may find its place under the scope of IV Amendment jurisprudence. The IV Amendment provides that citizens are "secure from unreasonable searches and seizures of property by the government"[66]; in this sense, it forms the basis for search and surveillance law in the USA.

    A central aspect of IV Amendment jurisprudence is drawn from Katz v. United States. In Katz, the FBI attached a microphone to the outside of a public phone booth to record the conversations of Charles Katz, who was making phone calls related to illegal gambling. The court ruled that such actions constituted a search within the scope of the IV Amendment. The ruling affirmed constitutional protection of all areas where someone has a "reasonable expectation of privacy"[67].

    Later cases have provided useful tests for situations where government surveillance tactics may or may not be lawful, depending on whether they violate one's reasonable expectation of privacy. For example, in United States v. Knotts, the court held that "police use of an electronic beeper to follow a suspect surreptitiously did not constitute a Fourth Amendment search"[68]. In fact, some argue that the Supreme Court's reasoning in such cases suggests "any 'scientific enhancement' of the senses used by the police to watch activity falls outside of the Fourth Amendment's protections if the activity takes place in public"[69]. This reasoning is based on the third party doctrine, which holds that "if you voluntarily provide information to a third party, the IV Amendment does not preclude the government from accessing it without a warrant"[70]. The clearest exposition of this reasoning was in Smith v. Maryland, where the presiding judges noted that "this Court consistently has held that a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties"[71].

    However, the third party doctrine has seen some challenge in recent times. In United States v. Jones, it was ruled that the government's warrantless GPS tracking of the defendant's vehicle, 24 hours a day for 28 days, violated his Fourth Amendment rights[72]. Though the majority held that warrantless GPS tracking constituted a search, it was in a concurring opinion by Justice Sonia Sotomayor that such intrusive warrantless surveillance was said to infringe one's reasonable expectation of privacy. As Newell reflects on Sotomayor's opinion,

    "Justice Sotomayor stated that the time had come for Fourth Amendment jurisprudence to discard the premise that legitimate expectations of privacy could only be found in situations of near or complete secrecy. Sotomayor argued that people should be able to maintain reasonable expectations of privacy in some information voluntarily disclosed to third parties"[73].

    She said that the court's current reasoning on what constitutes reasonable expectations of privacy in information disclosed to third parties, such as email or phone records or even purchase histories, is "ill-suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks"[74].

    Predictive policing vs. Mass surveillance and Domain Awareness Systems

    However, there is an important distinction to be drawn between these cases and evidence from predictive policing, having to do with the nature of the evidence collection. Arguably, what we see from Jones and other cases is that the use of mass surveillance and domain awareness systems (recalling Joh's categorisation of domain awareness systems as distinct from predictive policing, mentioned above) could potentially encroach on one's reasonable expectation of privacy. Predictive policing, by contrast, and the possible harms to justice associated with it, are quite distinct from what has been heard by courts thus far.

    The reason the risks of predictive harms are distinct from the privacy harms originating in information gathering lies in the nature of predictive policing technologies and how they are used. It is highly unlikely that the evidence submitted by the State to indict an offender will be mainly predictive in nature. For example, would it be possible to convict an accused person solely on the premise that he was predicted to be highly likely to commit a crime, and that subsequently he did? The legal standard of proving guilt beyond a reasonable doubt[75] can hardly be met solely on predictive evidence, for a multitude of reasons. Predictive policing methods can, at most, be said to inform police about the risk of someone committing a crime, or of crime happening at a certain location, as demonstrated above.

    Predictive policing and Criminal Procedure

    It may therefore pay to analyse how predictive policing may be used across the various stages of the criminal justice system. An analysis of those stages, from opening an investigation to gathering evidence, and on to arrest, trial, conviction and sentencing, shows that as the individual is subjected to more serious incursions or sanctions by the state, a higher standard of certainty about wrongdoing and a higher burden of proof are required to legitimise that particular action.

    Hence, at the more advanced stages of the criminal justice process, such as seeking arrest warrants or trial, it is very unlikely that predictive policing on its own can have a tangible impact, because predictive evidence is probability-based: it calculates the risk of future crime occurring from statistical analysis of past crime data[76]. While extremely useful, probabilities on their own will not come remotely close to meeting the legal standard of proving guilt 'beyond reasonable doubt'. It is at the earlier stages of the criminal justice process that predictive evidence might see more widespread application, in applications for search warrants and in searching suspicious people while on patrol.
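    A simple worked example, using entirely hypothetical numbers, illustrates why a probabilistic prediction falls so far short of these standards: even a fairly accurate model, applied to a population in which the predicted crime is rare, yields only a modest probability that any given flagged individual is an offender.

```python
# Illustrative only: all numbers below are hypothetical assumptions,
# not figures from any real predictive policing deployment.
def posterior_probability(base_rate: float, sensitivity: float,
                          false_positive_rate: float) -> float:
    """P(offender | flagged) via Bayes' rule."""
    flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return (sensitivity * base_rate) / flagged

# Suppose 1 in 1,000 people in an area will commit the predicted crime,
# and a model flags 90% of future offenders but also 5% of everyone else.
p = posterior_probability(base_rate=0.001, sensitivity=0.9,
                          false_positive_rate=0.05)
print(f"P(offender | flagged) = {p:.3f}")  # roughly 0.018, i.e. under 2%
```

    On these hypothetical figures, roughly 98% of the people the model flags would never have committed the crime, which is why predictive scores can at most direct police attention, never establish guilt.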

    In fact, in the law enforcement context, prediction as a concept is not new to justice. Both courts and law enforcement officials already make predictions about the future likelihood of crimes. In the case of issuing warrants, the Fourth Amendment requires law enforcement officials to show that the potential search is based "upon probable cause"[77] in order for a judge to grant a warrant. In US v. Brinegar, probable cause was defined as existing "where the facts and circumstances within the officers' knowledge, and of which they have reasonably trustworthy information, are sufficient in themselves to warrant a belief by a man of reasonable caution that a crime is being committed" [78]. Again, this legal standard seems too high for predictive evidence to meet.

    However, the police also have an important role to play in preventing crimes by looking out for potential crimes while on patrol or while conducting surveillance. When the police stop a civilian on the road to search him, reasonable suspicion must be established. This standard of reasonable suspicion was defined most clearly in Terry v. Ohio, which required police to "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion"[79]. Therefore, "reasonable suspicion that 'criminal activity may be afoot' is at base a prediction that the facts and circumstances warrant the reasonable prediction that a crime is occurring or will occur"[80]. Despite the assertion that "there are as of yet no reported cases on predictive policing in the Fourth Amendment context"[81], examining the impact of predictive policing on the doctrine of reasonable suspicion could be very instructive in understanding the implications for justice and privacy [82].

    Predictive Policing and Reasonable Suspicion

    Ferguson's insightful contribution to this area of scholarship involves the identification of existing areas where prediction already takes place in policing, and analogising them into a predictive policing context[83]. These three areas are: responding to tips, profiling, and high crime areas (hot spots).

    Tips

    Tips are pieces of information shared with the police by members of the public. Tips, whether anonymous or from known police informants, may predict the future actions of certain people and require the police to act on that information. The precedent for understanding the role of tips in probable cause comes from Illinois v. Gates[84]. It was held that "an informant's 'veracity,' 'reliability,' and 'basis of knowledge'-remain 'highly relevant in determining the value'"[85] of the said tip. Anonymous tips need to be detailed, timely and individualised enough[86] to justify reasonable suspicion [87]. And when the informant is known to be reliable, his prior reliability may justify reasonable suspicion despite a weak basis of knowledge[88].

    Ferguson argues that whereas predictive policing cannot provide individualised tips, it is possible to treat reliable tips about certain areas as a parallel to predictive policing[89]. And since the courts have shown a preference for reliability even in the face of a weak basis of knowledge, it is possible to see the reasonable suspicion standard change in its application[90]. It also implies that Fourth Amendment protections may be different in places where crime is predicted to occur [91].

    Profiling

    Despite its negative connotations and controversial overtones, profiling is already a method commonly used by law enforcement. For example, after a crime has been committed and general features of the suspect have been identified by witnesses, police often stop civilians who fit that description. Another example of profiling is common in combating drug trafficking[92], where agents keep track of travellers at airports to watch for suspicious behaviour. Based on their experience of common traits which distinguish drug traffickers from regular travellers (a profile), agents may search travellers who fit the profile[93]. In United States v. Sokolow[94], the courts "recognized that a drug courier profile is not an irrelevant or inappropriate consideration that, taken in the totality of circumstances, can be considered in a reasonable suspicion determination" [95]. Similar lines of thinking could be employed in observing people exchanging small amounts of money in an area known for high levels of drug activity, treating predicted actions as a form of profile[96].

    It is valid to consider predictive policing as a form of profiling[97], but Ferguson argues that the predictive policing context means this 'new form' of profiling could change Fourth Amendment analysis. The premise behind this argument is that a prediction made by some algorithm about a high risk of crime in a certain area could be taken in conjunction with observations of ordinarily innocuous events. Read in the totality of circumstances, these two threads may justify individualised reasonable suspicion [98]. For example, a man looking into cars at a parking lot may not by itself justify reasonable suspicion, but taken together with a prediction of a high risk of car theft in that locality, it may well do so. It is this influence on the analysis of reasonable suspicion within the totality of circumstances that may carry new implications for courts examining Fourth Amendment protections.

    Profiling, Predictive Policing and Discrimination

    The above sections have already made the point that law enforcement agencies utilise profiling methods in their operations. And, as the sections on how predictive analytics works and on methods of predictive policing make clear, predictive policing incorporates the development of profiles for predicting future criminal activity. Concerns that predictive models generate discriminatory predictions are therefore very serious, and need addressing. Potential discrimination may be either overt, though that is far less likely, or unintended. A valuable case study that sheds light on such discriminatory data mining practices can be found in US labour law, where it was shown how predictive models could be discriminatory at various stages: in conceptualising the model, in training it with training data, and eventually in selecting inappropriate features to search for [99]. It is also possible for data scientists to (intentionally or not) use proxies for identifiers like race, income level, health condition and religion. Barocas and Selbst argue that "the current distribution of relevant attributes-attributes that can and should be taken into consideration in apportioning opportunities fairly-are demonstrably correlated with sensitive attributes" [100]. Hence, what may result is unintended discrimination, as the subjective and implicit biases of predictive models are reflected in predicted decisions, or discrimination that is not even accounted for in the first place. While I have not found any case law where courts have examined such situations in a criminal context, at the very least, law enforcement agencies need to be aware of these possibilities and guard against any form of discriminatory profiling.
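    A minimal sketch, using entirely invented data, shows how such proxy discrimination can arise even when no sensitive attribute is ever fed to a model: a facially neutral feature (here, a hypothetical postcode) can carry the demographic skew of the historical records it was learned from.

```python
# Illustrative only: the postcodes, communities and counts below are
# invented to show the mechanism, not drawn from any real dataset.
from collections import Counter

# Hypothetical historical stop records as (postcode, community) pairs.
# The sensitive attribute (community) is never given to the "model".
records = [
    ("110001", "A"), ("110001", "A"), ("110001", "A"), ("110001", "A"),
    ("110001", "B"),
    ("110002", "A"), ("110002", "B"), ("110002", "B"), ("110002", "B"),
]

# A naive rule, "patrol the postcode with the most past stops", sees
# only the facially neutral postcode feature...
stops_by_postcode = Counter(pc for pc, _ in records)
target = stops_by_postcode.most_common(1)[0][0]

# ...yet the demographic skew of the historical data passes straight
# through the proxy: the chosen postcode is 80% community A.
impact = Counter(c for pc, c in records if pc == target)
print(target, dict(impact))  # 110001 {'A': 4, 'B': 1}
```

    The point of the sketch is that the disparate effect needs no intent at all; it is inherited silently from the training data, which is why auditing the data and features matters as much as auditing the algorithm.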

    However, Ferguson argues that "the precision of the technology may in fact provide more protection for citizens in broadly defined high crime areas" [101]. This is because the label of a 'high-crime area' may no longer apply to large areas but instead to very specific sites of criminal activity. Previously broadly defined areas of high crime, like entire neighbourhoods, may no longer be scrutinised in such detail. Instead, police may be more precise in locating and policing areas of high crime, such as an individual street corner or a particular block of flats instead of an entire locality.

    Hot Spots

    Courts have also considered the existence of notoriously 'high-crime areas' as part of the reasonable suspicion inquiry[102]. This was seen in Illinois v. Wardlow [103], where the "high crime nature of an area can be considered in evaluating the officer's objective suspicion"[104]. Many cases have since applied this reasoning without scrutinising the predictive value of such a label. In fact, Ferguson asserts that such labelling has questionable evidential value[105]. He uses the facts of Wardlow itself to challenge the 'high crime area' factor, citing the reasoning of one of the judges in the case:

    "While the area in question-Chicago's District 11-was a low-income area known for violent crimes, how that information factored into a predictive judgment about a man holding a bag in the afternoon is not immediately clear."[106]

    Especially because "the most basic models of predictive policing rely on past crimes"[107], it is likely that predictive policing methods like hot spot or spatiotemporal analysis and risk terrain modelling may help to gather or build data models about high crime areas. Furthermore, the mathematical rigour of the predictive modelling could help clarify the term 'high crime area'. As Ferguson argues, "courts may no longer need to rely on the generalized high crime area terminology when more particularized and more relevant information is available" [108].
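    To illustrate the point about precision, a minimal, purely hypothetical sketch of the simplest form of hot spot analysis (not any vendor's actual algorithm) bins past incidents into small grid cells and ranks the cells by count, producing a far narrower label than 'high crime area'.

```python
# Illustrative sketch with invented coordinates; real systems use far
# richer spatiotemporal models than simple grid counts.
from collections import Counter

def hot_spots(incidents, cell_size=0.01, top_k=3):
    """Bin past incidents (lat, lon) into grid cells; rank cells by count."""
    cells = Counter(
        (int(lat // cell_size), int(lon // cell_size))
        for lat, lon in incidents
    )
    return cells.most_common(top_k)

# Four of these five hypothetical incidents fall in the same ~1 km cell,
# so the ranking singles out one small cell rather than a whole locality.
past = [(28.613, 77.209), (28.613, 77.208), (28.614, 77.209),
        (28.700, 77.100), (28.613, 77.209)]
print(hot_spots(past, top_k=1))
```

    Under this kind of model, the 'high crime' label attaches to a single cell backed by a specific incident count, which is exactly the "more particularized and more relevant information" a court could interrogate.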

    Summary

    Ferguson synthesises four themes which encapsulate reasonable suspicion analysis:

    1. Predictive information is not enough on its own. Instead, it is "considered relevant to the totality of circumstances, but must be corroborated by direct police observation"[109].
    2. The prediction must also "be particularized to a person, a profile, or a place, in a way that directly connects the suspected crime to the suspected person, profile, or place"[110].
    3. It must also be detailed enough to distinguish a person or place from others not the focus of the prediction [111].
    4. Finally, predicted information becomes less valuable over time. Hence it must be acted on quickly or be lost [112].

    Conclusions from America

    The main conclusion to draw from the parallels between existing predictions in Fourth Amendment law and predictive policing is that "predictive policing will impact the reasonable suspicion calculus by becoming a factor within the totality of circumstances test"[113]. Naturally, this reaffirms the imperative for predictive techniques to collect reliable data [114] and to analyse it transparently[115]. Moreover, since predictive methods become part of the reasonable suspicion calculus, courts need to be able to analyse the predictive process in order to evaluate the reliability of the data and the processes used. This has implications for how hearings are conducted, for the training legal adjudicators may require, and more. Another important concern is that the model of predictive information plus police corroboration or direct observation[116] may mean that in areas predicted to have a low risk of crime, the reasonable suspicion doctrine works against law enforcement. Less effort may be paid to patrolling these areas as a result of predictions.

    Implications for India

    While there have been no Indian cases directly involving predictive policing methods, it would be prudent to examine the parts of Indian law that would inform the calculus on the lawfulness of using such methods. A useful lens may be the observation that prediction is not in itself a novel concept in justice, and is already used by courts and law enforcement in numerous circumstances.

    Criminal Procedure in Non-Warrant Contexts

    The most logical way to begin analysing the legal implications of predictive policing in India is to identify parallels between American and Indian criminal procedure, specifically instances where 'reasonable suspicion' or some analogous requirement exists to justify police searches.

    In non-warrant scenarios, the conditions for officers to conduct a warrantless search are found in Section 165 of the Criminal Procedure Code (Cr PC). For clarity, I have set out Section 165(1) in full:

    "Whenever an officer in charge of a police station or a police officer making an investigation has reasonable grounds for believing that anything necessary for the purposes of an investigation into any offence which he is authorised to investigate may be found in any place within the limits of the police station of which he is in charge, or to which he is attached, and that such thing cannot in his opinion be otherwise obtained without undue delay, such officer may, after recording in writing the grounds of his belief and specifying in such writing, so far as possible, the thing for which search is to be made, search, or cause search to be made, for such thing in any place within the limits of such station." [117]

    However, India differs from the USA in that its Cr PC also allows the police to arrest individuals without a warrant. As observed in Gulab Chand Upadhyaya vs State Of U.P., "Section 41 Cr PC gives the power to the police to arrest without warrant in cognizable offences, in cases enumerated in that Section. One such case is of receipt of a 'reasonable complaint' or 'credible information' or 'reasonable suspicion'" [118]. As above, I have set out Section 41(1)(a) in full:

    "41. When police may arrest without warrant.

    (1) Any police officer may without an order from a Magistrate and without a warrant, arrest any person-

    (a) who has been concerned in any cognizable offence, or against whom a reasonable complaint has been made, or credible information has been received, or a reasonable suspicion exists, of his having been so concerned"[119]

    In analysing the above sections of Indian criminal procedure from a predictive policing angle, one may find both similarities and differences between the proposed American approach and possible Indian approaches to interpreting or incorporating predictive policing evidence.

    Similarity of 'reasonable suspicion' requirement

    For one, the requirement for "reasonable grounds" or "reasonable suspicion" seems to be analogous to the American doctrine of reasonable suspicion. This suggests that the concepts used in forming reasonable suspicion, for the police to "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion"[120] may also be useful in the Indian context.

    One case which sheds light on an Indian interpretation of reasonable suspicion or grounds is State of Punjab v. Balbir Singh[121]. In that case, the court observes a requirement for "reason to believe that such an offence under Chapter IV has been committed and, therefore, an arrest or search was necessary as contemplated under these provisions"[122] in the context of Section 41 and 42 in The Narcotic Drugs and Psychotropic Substances Act, 1985[123]. In examining the requirement of having "reason to believe", the court draws on Partap Singh (Dr) v. Director of Enforcement, Foreign Exchange Regulation Act[124], where the judge observed that "the expression 'reason to believe' is not synonymous with subjective satisfaction of the officer. The belief must be held in good faith; it cannot be merely a pretence….."[125]

    In light of this, the judge in Balbir Singh remarked that "whether there was such reason to believe and whether the officer empowered acted in a bona fide manner, depends upon the facts and circumstances of the case and will have a bearing in appreciation of the evidence" [126]. The standard considered in Balbir Singh and Partap Singh differs from the 'reasonable suspicion' or 'reasonable grounds' standard of Sections 41 and 165 of the Cr PC, but the discussion helps to inform our analysis of the idea of reasonableness in law enforcement actions. Of importance was the court's requirement of something more than mere "pretence", as well as a belief held in good faith. This suggests that American reasoning about reasonable suspicion may be at least somewhat similar to how Indian courts would view reasonable suspicion or grounds in the context of predictive policing, and that predictive evidence could likewise form part of the reasonable suspicion calculus in India.

    Difference in judicial treatment of illegally obtained evidence - Indian lack of exclusionary rules

    However, the apparent similarity between how police in America and India may act in non-warrant situations - both guided by the idea of reasonable suspicion - rests on little more than linguistic parallels. Despite the existence of conditions governing searches without a warrant, I believe that Indian courts currently provide far less protection against the unlawful use of predictive technologies. The main premise behind this argument is that Indian courts refuse to exclude evidence obtained in breach of the conditions of the Cr PC. In place of evidentiary safeguards stands a line of cases in which courts routinely admit unlawfully or illegally obtained evidence. Without protection against unlawfully gathered evidence being considered relevant by courts, any regulations on search, or conditions to be met before a search is lawful, become ineffective. Evidence may simply enter the courtroom through a backdoor.

    In the USA, this is by and large not the case. Although there are exceptions, exclusionary rules prevent the admission of evidence gathered in violation of the Constitution[127]. "The exclusionary rule applies to evidence gained from an unreasonable search or seizure in violation of the Fourth Amendment"[128]. Mapp v. Ohio [129] set the precedent for excluding unconstitutionally gathered evidence, the court ruling that "all evidence obtained by searches and seizures in violation of the Federal Constitution is inadmissible in a criminal trial in a state court" [130].

    Any such evidence which then leads law enforcement to collect new information may also be excluded, as part of the "fruit of the poisonous tree" doctrine[131], established in Silverthorne Lumber Co. v. United States [132]. The doctrine is a metaphor which suggests that if the source of certain evidence is tainted, so is 'fruit' or derivatives from that unconstitutional evidence. One such application was in Beck v. Ohio[133], where the courts overturned a petitioner's conviction because the evidence used to convict him was obtained via an unlawful arrest.

    In India's context, however, there is very little protection against the admission and use of unlawfully gathered evidence. In fact, there is a line of cases which lays out the extent of consideration given to unlawfully gathered evidence - both cases that deal specifically with the rules of the Indian Cr PC and cases from other contexts - and which follows and develops this reasoning of allowing illegally obtained evidence.

    One case to pay attention to is State of Maharashtra v. Natwarlal Damodardas Soni, in which the Anti-Corruption Bureau searched the house of the accused after receiving certain information as a tip. The police "had powers under the Code of Criminal Procedure to search and seize this gold if they had reason to believe that a cognizable offence had been committed in respect thereof"[134]. Justice Sarkaria, delivering the judgment, observed that even if, for argument's sake, the search was illegal, "then also, it will not affect the validity of the seizure and further investigation"[135]. The judge drew reasoning from Radhakishan v. State of U.P.[136], a case involving a postman from whose house certain undelivered postal items were recovered. As the judge in Radhakishan noted:

    "So far as the alleged illegality of the search is concerned, it is sufficient to say that even assuming that the search was illegal the seizure of the articles is not vitiated. It may be that where the provisions of Sections 103 and 165 of the Code of Criminal Procedure, are contravened the search could be resisted by the person whose premises are sought to be searched. It may also be that because of the illegality of the search the court may be inclined to examine carefully the evidence regarding the seizure. But beyond these two consequences no further consequence ensues." [137]

    Shyam Lal Sharma v. State of M.P.[138] was also drawn upon, where it was held that "even if the search is illegal being in contravention with the requirements of Section 165 of the Criminal Procedure Code, 1898, that provision ceases to have any application to the subsequent steps in the investigation"[139].

    Even in Gulab Chand Upadhyay, mentioned above, the presiding judge contended that even "if arrest is made, it does not require any, much less strong, reasons to be recorded or reported by the police. Thus so long as the information or suspicion of cognizable offence is "reasonable" or "credible", the police officer is not accountable for the discretion of arresting or no arresting"[140].

    A more complete articulation of the receptiveness of Indian courts to admit illegally gathered evidence can be seen in the aforementioned Balbir Singh. The judgement aimed to:

    "dispose of one of the contentions that failure to comply with the provisions of Cr PC in respect of search and seizure even up to that stage would also vitiate the trial. This aspect has been considered in a number of cases and it has been held that the violation of the provisions particularly that of Sections 100, 102, 103 or 165 Cr PC strictly per se does not vitiate the prosecution case. If there is such violation, what the courts have to see is whether any prejudice was caused to the accused and in appreciating the evidence and other relevant factors, the courts should bear in mind that there was such a violation and from that point of view evaluate the evidence on record."[141]

    The judges then consulted a series of authorities on the failure to comply with provisions of the Cr PC:

    1. State of Punjab v. Wassan Singh[142]: "irregularity in a search cannot vitiate the seizure of the articles"[143].
    2. Sunder Singh v. State of U.P[144]: "irregularity cannot vitiate the trial unless the accused has been prejudiced by the defect and it is also held that if reliable local witnesses are not available the search would not be vitiated."[145]
    3. Matajog Dobey v. H.C. Bhari[146]: "when the salutary provisions have not been complied with, it may, however, affect the weight of the evidence in support of the search or may furnish a reason for disbelieving the evidence produced by the prosecution unless the prosecution properly explains such circumstance which made it impossible for it to comply with these provisions."[147]
    4. R v. Sang[148]: "reiterated the same principle that if evidence was admissible it matters not how it was obtained."[149] Lord Diplock, one of the Lords adjudicating the case, observed that "however much the judge may dislike the way in which a particular piece of evidence was obtained before proceedings were commenced, if it is admissible evidence probative of the accused's guilt "it is no part of his judicial function to exclude it for this reason". [150] As the judge in Balbir Singh quoted from Lord Diplock, a judge "has no discretion to refuse to admit relevant admissible evidence on the ground that it was obtained by improper or unfair means. The court is not concerned with how it was obtained."[151]

    The body of case law presented above gives a clear image of the courts' willingness to admit and consider illegally obtained evidence. This lack of safeguards against the admission of unlawful evidence is important from the standpoint of preventing the excessive or unlawful use of predictive policing methods. The affronts to justice and privacy, as well as the risks of profiling, become magnified when law enforcement uses predictive methods not merely to augment its policing techniques but to replace some of them. The efficacy and expediency offered by predictive policing need to be balanced against the competing interest of ensuring the rule of law and due process. In the Indian context, it seems courts sparsely consider this competing interest.

    Naturally, weighing in on which approach is better depends on a multitude of criteria like context, practicality, societal norms and many more. It also draws on existing debates in administrative law about the role of courts, which may emphasise protecting individuals and preventing excessive state power (red light theory) or emphasise efficiency in the governing process with courts assisting the state to achieve policy objectives (green light theory) [152].

    A practical response may be that India should aim to embrace both elements and balance them appropriately, although what counts as an appropriate balance may vary. Some claim that this balance already exists in India. Evidence for such a claim may come from R.M. Malkani v. State of Maharashtra[153], where the court considered whether an illegally tape-recorded conversation could be admissible. In its reasoning, the court drew from Kuruma, Son of Kanju v. R. [154], noting that

    " if evidence was admissible it matters not how it was obtained. There is of course always a word of caution. It is that the Judge has a discretion to disallow evidence in a criminal case if the strict rules of admissibility would operate unfairly against the accused. That caution is the golden rule in criminal jurisprudence"[155].

    While this discretion exists in India at least in principle, the cases presented above show that in practice judges rarely exercise it to bar the admission of illegally obtained evidence, or of evidence obtained in a manner that infringed the provisions governing search or arrest in the Cr PC. Indeed, the concern is that the safeguards needed to keep law enforcement practices, including predictive policing techniques, in check would be better served by a greater focus on reconsidering the admissibility of unlawfully gathered evidence. If not, evidence which should otherwise be inadmissible may find its way into consideration through existing legal backdoors.

    Risk of discriminatory predictive analysis

    Regarding the risk of discriminatory profiling, Article 15 of India's Constitution[156] states that "the State shall not discriminate against any citizen on grounds only of religion, race, caste, sex, place of birth or any of them" [157]. The existence of constitutional protection against such forms of discrimination suggests that India can guard against overtly discriminatory predictive policing. However, as mentioned before, predictive analytics often discriminates institutionally, "whereby unconscious implicit biases and inertia within society's institutions account for a large part of the disparate effects observed, rather than intentional choices"[158]. As in most jurisdictions, preventing these forms of discrimination is much harder. Especially in a jurisdiction whose courts are already receptive to admitting illegally obtained evidence, the risk of police using discriminatory data mining or prejudiced algorithms becomes magnified. Because the discrimination may be unintentional, it may be even harder for evidence from discriminatory predictive methods to be scrutinised or, where applicable, dismissed by the courts.

    Conclusion for India

    One thing which is eminently clear from the above analysis is that Indian courts have had no experience with predictive policing cases, because the technology itself is still at a nascent stage in India. There is a long way to go before predictive policing is used on a scale similar to that of the USA.

    But even in places where predictive policing is used much more prominently, there is no precedent showing how courts will view it. Ferguson's method of locating analogous situations which courts have already considered is one notable approach, but even it does not provide a complete answer. His main conclusion, that predictive policing will affect the reasonable suspicion calculus, or in India's case contribute to 'reasonable grounds' in some way, is perhaps the most persuasive.

    What provides more cause for concern in India's context, however, is the limited protection against the use of unlawfully gathered evidence. The lack of 'exclusionary rules' like those present in the US amplifies the various risks of predictive policing, because individuals have little means of redress where predictive policing is used unjustly against them.

    Yet the promise of predictive policing remains undeniably attractive for India. The successes predictive policing methods appear to have had in the US and UK, coupled with the more efficient allocation of law enforcement resources that adopting them promises, evidence this point. The government recognises this, and seems to be laying the foundation and basic digital infrastructure required to utilise predictive policing optimally. One ought also to ask whether it is even within the courts' purview to decide what kinds of policing methods are permissible by evaluating the nature of evidence. There is a case to be made for the legislative arm of the state to provide direction on how predictive policing is to be used in India. Perhaps the law must also evolve with changes in technology, especially if courts are to scrutinise the predictive policing methods themselves.


    [1] Joh, Elizabeth E. "Policing by Numbers: Big Data and the Fourth Amendment." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, February 1, 2014. http://papers.ssrn.com/abstract=2403028.

    [2] Tene, Omer, and Jules Polonetsky. "Big Data for All: Privacy and User Control in the Age of Analytics." Northwestern Journal of Technology and Intellectual Property 11, no. 5 (April 17, 2013): 239.

    [3] Datta, Rajbir Singh. "Predictive Analytics: The Use and Constitutionality of Technology in Combating Homegrown Terrorist Threats." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 1, 2013. http://papers.ssrn.com/abstract=2320160.

    [4] Johnson, Jeffrey Alan. "Ethics of Data Mining and Predictive Analytics in Higher Education." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 8, 2013. http://papers.ssrn.com/abstract=2156058.

    [5] Ibid.

    [6] Duhigg, Charles. "How Companies Learn Your Secrets." The New York Times, February 16, 2012. http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html.

    [7] Ibid.

    [8] Lijaya, A, M Pranav, P B Sarath Babu, and V R Nithin. "Predicting Movie Success Based on IMDB Data." International Journal of Data Mining Techniques and Applications 3 (June 2014): 365-68.

    [9] Johnson, Jeffrey Alan. "Ethics of Data Mining and Predictive Analytics in Higher Education." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 8, 2013. http://papers.ssrn.com/abstract=2156058.

    [10] Sangvinatsos, Antonios A. "Explanatory and Predictive Analysis of Corporate Bond Indices Returns." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, June 1, 2005. http://papers.ssrn.com/abstract=891641.

    [11] Barocas, Solon, and Andrew D. Selbst. "Big Data's Disparate Impact." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, February 13, 2015. http://papers.ssrn.com/abstract=2477899.

    [12] Joh, supra note 1.

    [13] US Environmental Protection Agency. "How We Use Data in the Mid-Atlantic Region." US EPA. Accessed November 6, 2015. http://archive.epa.gov/reg3esd1/data/web/html/.

    [14] See here for details of blackroom.

    [15] Joh, supra note 1, at pg 48.

    [16] Perry, Walter L., Brian McInnis, Carter C. Price, Susan Smith and John S. Hollywood. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Santa Monica, CA: RAND Corporation, 2013. http://www.rand.org/pubs/research_reports/RR233. Also available in print form.

    [17] Ibid, at pg 2.

    [18] Chan, Sewell. "Why Did Crime Fall in New York City?" City Room. Accessed November 6, 2015. http://cityroom.blogs.nytimes.com/2007/08/13/why-did-crime-fall-in-new-york-city/.

    [19] Bureau of Justice Assistance. "Compstat: Its Origins, Evolution, and Future in Law Enforcement Agencies," 2013. http://www.policeforum.org/assets/docs/Free_Online_Documents/Compstat/compstat%20-%20its%20origins%20evolution%20and%20future%20in%20law%20enforcement%20agencies%202013.pdf.

    [20] Internal NYPD article (1996). "Managing for Results: Building a Police Organization that Dramatically Reduces Crime, Disorder, and Fear."

    [21] Bratton, William. "Crime by the Numbers." The New York Times, February 17, 2010. http://www.nytimes.com/2010/02/17/opinion/17bratton.html.

    [22] RAND CORP, supra note 16.

    [23] RAND CORP, supra note 16, at pg 19.

    [24] Joh, supra note 1, at pg 44.

    [25] RAND CORP, supra note 16, pg 38.

    [26] Ibid.

    [27] RAND CORP, supra note 16, at pg 39.

    [28] Ibid.

    [29] RAND CORP, supra note 16, at pg 41.

    [30] Data-Smart City Solutions. "Dr. George Mohler: Mathematician and Crime Fighter." Data-Smart City Solutions, May 8, 2013. http://datasmart.ash.harvard.edu/news/article/dr.-george-mohler-mathematician-and-crime-fighter-166.

    [31] RAND CORP, supra note 16, at pg 44.

    [32] Joh, supra note 1, at pg 45.

    [33] Ouellette, Danielle. "Dispatch - A Hot Spots Experiment: Sacramento Police Department," June 2012. http://cops.usdoj.gov/html/dispatch/06-2012/hot-spots-and-sacramento-pd.asp.

    [34] Pitney Bowes Business Insight. "The Safer Derbyshire Partnership." Derbyshire, 2013. http://www.mapinfo.com/wp-content/uploads/2013/05/safer-derbyshire-casestudy.pdf.

    [35] Ibid.

    [36] Neill, Daniel B., and Wilpen L. Gorr. "Detecting and Preventing Emerging Epidemics of Crime," 2007.

    [37] RAND CORP, supra note 16, at pg 33.

    [38] Joh, supra note 1, at pg 46.

    [39] Paul, Jeffery S, and Thomas M. Joiner. "Integration of Centralized Intelligence with Geographic Information Systems: A Countywide Initiative." Geography and Public Safety 3, no. 1 (October 2011): 5-7.

    [40] Mohler, supra note 30.

    [41] Ibid.

    [42] Bennett Moses, Lyria, and Janet Chan. "Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 2014. http://papers.ssrn.com/abstract=2513564.

    [43] Gorner, Jeremy. "Chicago Police Use Heat List as Strategy to Prevent Violence." Chicago Tribune. August 21, 2013. http://articles.chicagotribune.com/2013-08-21/news/ct-met-heat-list-20130821_1_chicago-police-commander-andrew-papachristos-heat-list.

    [44] Stroud, Matt. "The Minority Report: Chicago's New Police Computer Predicts Crimes, but Is It Racist?" The Verge. Accessed November 13, 2015. http://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist.

    [45] Moser, Whet. "The Small Social Networks at the Heart of Chicago Violence." Chicago Magazine, December 9, 2013. http://www.chicagomag.com/city-life/December-2013/The-Small-Social-Networks-at-the-Heart-of-Chicago-Violence/.

    [46] Lester, Aaron. "Police Clicking into Crimes Using New Software." Boston Globe, March 18, 2013. https://www.bostonglobe.com/business/2013/03/17/police-intelligence-one-click-away/DzzDbrwdiNkjNMA1159ybM/story.html.

    [47] Stanley, Jay. "Chicago Police 'Heat List' Renews Old Fears About Government Flagging and Tagging." American Civil Liberties Union, February 25, 2014. https://www.aclu.org/blog/chicago-police-heat-list-renews-old-fears-about-government-flagging-and-tagging.

    [48] Rieke, Aaron, David Robinson, and Harlan Yu. "Civil Rights, Big Data, and Our Algorithmic Future," September 2014. https://bigdata.fairness.io/wp-content/uploads/2015/04/2015-04-20-Civil-Rights-Big-Data-and-Our-Algorithmic-Future-v1.2.pdf.

    [49] Edmond, Deepu Sebastian. "Jhakhand's Digital Leap." Indian Express, September 15, 2013. http://www.jhpolice.gov.in/news/jhakhands-digital-leap-indian-express-15092013-18219-1379316969.

    [50] Jharkhand Police. "Jharkhand Police IT Vision 2020 - Effective Shared Open E-Governance." 2012. http://jhpolice.gov.in/vision2020. See slide 2

    [51] Edmond, supra note 49.

    [52] Edmond, supra note 49.

    [53] Kumar, Raj. "Enter, the Future of Policing - Cops to Team up with IIM Analysts to Predict & Prevent Incidents." The Telegraph. August 28, 2012. http://www.telegraphindia.com/1120828/jsp/jharkhand/story_15905662.jsp#.VkXwxvnhDWK.

    [54] Ibid.

    [55] Ibid.

    [56] Ibid.

    [57] See supra note 49.

    [58] See here for Jharkhand Police crime dashboard.

    [59] Gupta, Lavanya, and Selva Priya. "Predicting Crime Rates for Predictive Policing." Gandhian Young Technological Innovation Award, December 29, 2014. http://gyti.techpedia.in/project-detail/predicting-crime-rates-for-predictive-policing/3545.

    [60] Gupta, Lavanya. "Minority Report: Minority Report." Accessed November 13, 2015. http://cmuws2014.blogspot.in/2015/01/minority-report.html.

    [61] See supra note 59.

    [62] See here for details about 44th All India Police Science Congress.

    [63] Press Trust of India. "Police Science Congress in Gujarat to Have DRDO Exhibition." Business Standard India, March 10, 2015. http://www.business-standard.com/article/pti-stories/police-science-congress-in-gujarat-to-have-drdo-exhibition-115031001310_1.html.

    [64] National Crime Records Bureau. "About Crime and Criminal Tracking Network & Systems - CCTNS." Accessed November 13, 2015. http://ncrb.gov.in/cctns.htm.

    [65] Ibid. (See index page)

    [66] U.S. Const. amend. IV, available here

    [67] Katz v. United States, 389 U.S. 347 (1967), see here

    [68] See supra note 1, at pg 60.

    [69] See supra note 1, at pg 60.

    [70] Villasenor, John. "What You Need to Know about the Third-Party Doctrine." The Atlantic, December 30, 2013. http://www.theatlantic.com/technology/archive/2013/12/what-you-need-to-know-about-the-third-party-doctrine/282721/.

    [71] Smith v Maryland, 442 U.S. 735 (1979), see here

    [72] United States v Jones, 565 U.S. ___ (2012), see here

    [73] Newell, Bryce Clayton. "Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, October 16, 2013. http://papers.ssrn.com/abstract=2341182, at pg 24.

    [74] See supra note 72.

    [75] Dahyabhai Chhaganbhai Thakker vs State Of Gujarat, 1964 AIR 1563

    [76] See supra note 16.

    [77] See supra note 66.

    [78] Brinegar v. United States, 338 U.S. 160 (1949), see here

    [79] Terry v. Ohio, 392 U.S. 1 (1968), see here

    [80] Ferguson, Andrew Guthrie. "Big Data and Predictive Reasonable Suspicion." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, April 4, 2014. http://papers.ssrn.com/abstract=2394683, at pg 287. See also supra note 79.

    [81] See supra note 80.

    [82] See supra note 80.

    [83] See supra note 80.

    [84] See supra note 80, at pg 289.

    [85] Illinois v. Gates, 462 U.S. 213 (1983). See here

    [86] See Alabama v. White, 496 U.S. 325 (1990). See here

    [87] See supra note 80, at pg 291.

    [88] See supra note 80, at pg 293.

    [89] See supra note 80, at pg 308.

    [90] Ibid.

    [91] Ibid.

    [92] Larissa Cespedes-Yaffar, Shayona Dhanak, and Amy Stephenson. "U.S. v. Mendenhall, U.S. v. Sokolow, and the Drug Courier Profile Evidence Controversy." Accessed July 6, 2015. http://courses2.cit.cornell.edu/sociallaw/student_projects/drugcourier.html.

    [93] Ibid.

    [94] United States v. Sokolow, 490 U.S. 1 (1989), see here

    [95] See supra note 80, at pg 295.

    [96] See supra note 80, at pg 297.

    [97] See supra note 80, at pg 308.

    [98] See supra note 80, at pg 310.

    [99] See supra note 11.

    [100] See supra note 11.

    [101] See supra note 80, at pg 303.

    [102] See supra note 80, at pg 300.

    [103] Illinois v. Wardlow, 528 U.S. 119 (2000), see here

    [104] Ibid.

    [105] See supra note 80, at pg 301.

    [106] Ibid.

    [107] See supra note 1, at pg 42.

    [108] See supra note 80, at pg 303.

    [109] See supra note 80, at pg 303.

    [110] Ibid.

    [111] Ibid.

    [112] Ibid.

    [113] See supra note 80, at pg 312.

    [114] See supra note 80, at pg 317.

    [115] See supra note 80, at pg 319.

    [116] See supra note 80, at pg 321.

    [117] Section 165 Indian Criminal Procedure Code, see here

    [118] Gulab Chand Upadhyaya vs State Of U.P, 2002 CriLJ 2907

    [119] Section 41 Indian Criminal Procedure Code

    [120] See supra note 79

    [121] State of Punjab v. Balbir Singh. (1994) 3 SCC 299

    [122] Ibid.

    [123] Section 41 and 42 in The Narcotic Drugs and Psychotropic Substances Act 1985, see here

    [124] Partap Singh (Dr) v. Director of Enforcement, Foreign Exchange Regulation Act. (1985) 3 SCC 72 : 1985 SCC (Cri) 312 : 1985 SCC (Tax) 352 : AIR 1985 SC 989

    [125] Ibid, at SCC pg 77-78.

    [126] See supra note 121, at pg 313.

    [127] Carlson, David. "Exclusionary Rule." LII / Legal Information Institute, June 10, 2009. https://www.law.cornell.edu/wex/exclusionary_rule.

    [128] Ibid.

    [129] Mapp v Ohio, 367 U.S. 643 (1961), see here

    [130] Ibid.

    [131] Busby, John C. "Fruit of the Poisonous Tree." LII / Legal Information Institute, September 21, 2009. https://www.law.cornell.edu/wex/fruit_of_the_poisonous_tree.

    [132] Silverthorne Lumber Co., Inc. v. United States, 251 U.S. 385 (1920), see here.

    [133] Beck v. Ohio, 379 U.S. 89 (1964), see here.

    [134] State of Maharashtra v. Natwarlal Damodardas Soni, (1980) 4 SCC 669, at 673.

    [135] Ibid.

    [136] Radhakishan v. State of U.P. [AIR 1963 SC 822 : 1963 Supp 1 SCR 408, 411, 412 : (1963) 1 Cri LJ 809]

    [137] Ibid, at SCR pg 411-12.

    [138] Shyam Lal Sharma v. State of M.P. (1972) 1 SCC 764 : 1974 SCC (Cri) 470 : AIR 1972 SC 886

    [139] See supra note 135, at page 674.

    [140] See supra note 119, at para. 10.

    [141] See supra note 121, at pg 309.

    [142] State of Punjab v. Wassan Singh, (1981) 2 SCC 1 : 1981 SCC (Cri) 292

    [143] See supra note 121, at pg 309.

    [144] Sunder Singh v. State of U.P, AIR 1956 SC 411 : 1956 Cri LJ 801

    [145] See supra note 121, at pg 309.

    [146] Matajog Dobey v.H.C. Bhari, AIR 1956 SC 44 : (1955) 2 SCR 925 : 1956 Cri LJ 140

    [147] See supra note 121, at pg 309.

    [148] R v. Sang, (1979) 2 All ER 1222, 1230-31

    [149] See supra note 121, at pg 309.

    [150] Ibid.

    [151] Ibid.

    [152] Harlow, Carol, and Richard Rawlings. Law and Administration. 3rd ed. Law in Context. Cambridge University Press, 2009.

    [153] R.M. Malkani v. State of Maharashtra, (1973) 1 SCC 471

    [154] Kuruma, Son of Kanju v. R., (1955) AC 197

    [155] See supra note 154, at 477.

    [156] Indian Const. Art 15, see here

    [157] Ibid.

    [158] See supra note 11.

    Response by the Centre for Internet and Society to the Draft Proposal to Transition the Stewardship of the Internet Assigned Numbers Authority (IANA) Functions from the U.S. Commerce Department’s National Telecommunications and Information Administration

    by Pranesh Prakash last modified Nov 29, 2015 06:35 AM
    This proposal was made to the Global Multistakeholder Community on August 9, 2015. The proposal was drafted by Pranesh Prakash and Jyoti Panday. Research assistance was provided by Padmini Baruah and Vidushi Marda, with inputs from Sunil Abraham.

    For more than a year now, the customers and operational communities performing key internet functions related to domain names, numbers and protocols have been negotiating the transfer of IANA stewardship. India has dual interests in the ICANN IANA Transition negotiations: safeguarding independence, security and stability of the DNS for development, and promoting an effective transition agreement that internationalizes the IANA Functions Operator (IFO). Last month the IANA Stewardship Transition Coordination Group (ICG) set in motion a public review of its combined assessment of the proposals submitted by the names, numbers and protocols communities. In parallel to the transition of the NTIA oversight, the community has also been developing mechanisms to strengthen the accountability of ICANN and has devised two workstreams that consider both long term and short term issues. This is our response to the consolidated ICG proposal, which considers the proposals for the transition of the NTIA oversight over the IFO.

    Click to download the submission.

    The Humpty-Dumpty Censorship of Television in India

    by Bhairav Acharya last modified Nov 29, 2015 08:37 AM
    The Modi government’s attack on Sathiyam TV is another manifestation of the Indian state’s paranoia of the medium of film and television, and consequently, the irrational controlling impulse of the law.

    The article originally published in the Wire on September 8, 2015 was also mirrored on the website Free Speech/Privacy/Technology.


    It is tempting to think of the Ministry of Information and Broadcasting’s (MIB) attack on Sathiyam TV solely as another authoritarian exhibition of Prime Minister Narendra Modi’s government’s intolerance of criticism and dissent. It certainly is. But it is also another manifestation of the Indian state’s paranoia of the medium of film and television, and consequently, the irrational controlling impulse of the law.

    Sathiyam TV’s transgressions

    Sathiyam’s transgressions began more than a year ago, on May 9, 2014, when it broadcast a preacher saying of an unnamed person: “Oh Lord! Remove this satanic person from the world!” The preacher also allegedly claimed this “dreadful person” was threatening Christianity. This, the MIB reticently claims, “appeared to be targeting a political leader”, referring presumably to Prime Minister Modi, to “potentially give rise to a communally sensitive situation and incite the public to violent tendencies.”

    The MIB was also offended by a “senior journalist” who, on the same day, participated in a non-religious news discussion to allegedly claim Modi “engineered crowds at his rallies” and used “his oratorical skills to make people believe his false statements”. According to the MIB, this was defamatory and “appeared to malign and slander the Prime Minister which was repugnant to (his) esteemed office”.

    For these two incidents, Sathiyam was served a show-cause notice on 16 December 2014, which it responded to the next day, denying the MIB's claims. Sathiyam was heard in person by a committee of bureaucrats on 6 February 2015. On 12 May 2015, the MIB handed Sathiyam an official "Warning" which appears to be unsupported by law. Sathiyam moved the Delhi High Court to challenge this.

    As Sathiyam sought judicial protection, the MIB issued the channel a second warning on August 26, 2015, citing three more objectionable news broadcasts: a child being subjected to cruelty by a traditional healer in Assam; a gun murder inside a government hospital in Madhya Pradesh; and a self-immolating man rushing the dais at a BJP rally in Telangana. All three news items were carried by other news channels and websites.

    Governing communications

    Most news providers use multiple media to transmit their content and suffer from complex and confusing regulation. Cable television is one such medium, so is the Internet; both media swiftly evolve to follow technological change. As the law struggles to keep up, governmental anxiety at the inability to perfectly control this vast field of speech and expression frequently expresses itself through acts of overreach and censorship.

    In the newly-liberalised media landscape of the early 1990s, cable television sprang up in a legal vacuum. Doordarshan, the sole broadcaster, flourished in the Centre’s constitutionally-sanctioned monopoly of broadcasting which was only broken by the Supreme Court in 1995. The same year, Parliament enacted the Cable Television Networks (Regulation) Act, 1995 (“Cable TV Act”) to create a licence regime to control cable television channels. The Cable TV Act is supplemented by the Cable Television Network Rules, 1994 (“Cable Rules”).

    The state’s disquiet with communications technology is a recurring motif in modern Indian history. When the first telegraph line was laid in India, the colonial state was quick to recognize its potential for transmitting subversive speech and responded with strict controls. The fourth iteration of the telegraph law represents the colonial government’s perfection of the architecture of control. This law is the Indian Telegraph Act, 1885, which continues to dominate communications governance in India today including, following a directive in 2004, broadcasting.

    Vague and arbitrary law

    The Cable TV Act requires cable news channels such as Sathiyam to obey a list of restrictions on content that is contained in the Cable Rules (“Programme Code“). Failure to conform to the Programme Code can result in seizure of equipment and imprisonment; but, more importantly, creates the momentum necessary to invoke the broad powers of censorship to ban a programme, channel, or even the cable operator. But the Programme Code is littered with vague phrases and undefined terms that can mean anything the government wants them to mean.

    By its first warning of May 12, 2015, the MIB claimed Sathiyam violated four rules in the Programme Code. These include rule 6(1)(c) which bans visuals or words “which promote communal attitudes”; rule 6(1)(d) which bans “deliberate, false and suggestive innuendos and half-truths”; rule 6(1)(e) which bans anything “which promotes anti-national attitudes”; and, rule 6(1)(i) which bans anything that “criticises, maligns or slanders any…person or…groups, segments of social, public and moral life of the country” (sic).

    The rest of the Programme Code is no less imprecise. It proscribes content that "offends against good taste" and "reflects a slandering, ironical and snobbish attitude" against communities. On the face of it, several provisions of the Programme Code travel beyond the permissible restrictions on free speech listed in Article 19(2) of the Constitution, calling their validity into question. The fiasco of implementing the vague provisions of the erstwhile section 66A of the Information Technology Act, 2000 is a recent reminder of the dangers presented by poorly-drafted censorship law – which is why it was struck down by the Supreme Court for infringing the right to free speech. The Programme Code is an older creation; it has simply evaded scrutiny for two decades.

    The arbitrariness of the Programme Code is amplified manifold by the authorities responsible for interpreting and implementing it. An Inter-Ministerial Committee (IMC) of bureaucrats, supposedly a recommendatory body, interprets the Programme Code before the MIB takes action against channels. This is an executive power of censorship that must survive legal and constitutional scrutiny, but has never been subjected to it. Curiously, the courts have shied away from a proper analysis of the Programme Code and the IMC.

    Judicial challenges

    In 2011, a single judge of the Delhi High Court in the Star India case (2011) was asked to examine the legitimacy of the IMC as well as four separate clauses of the Programme Code including rule 6(1)(i), which has been invoked against Sathiyam. But the judge neatly sidestepped the issues. This feat of judicial adroitness was made possible by the crass indecency of the content in question, which could be reasonably restricted. Since the show clearly attracted at least one ground of legitimate censorship, the judge saw no cause to examine the other provisions of the Programme Code or even the composition of the IMC.

    This judicial restraint has proved detrimental. In May 2013, another single judge of the Delhi High Court, who was asked by Comedy Central to adjudge the validity of the IMC’s decision-making process, relied on Star India (2011) to uphold the MIB’s action against the channel. The channel’s appeal to the Supreme Court is currently pending. If the Supreme Court decides to examine the validity of the IMC, the Delhi High Court may put aside Sathiyam’s petition to wait for legal clarity.

    As it happens, in the Shreya Singhal case (2015) that struck down section 66A of the IT Act, the Supreme Court has an excellent precedent to follow to demand clarity and precision from the Programme Code, perhaps even strike it down, as well as due process from the MIB. On the accusation of defaming the Prime Minister, probably the only clearly stated objection by the MIB, the Supreme Court’s past law is clear: public servants cannot, for non-personal acts, claim defamation.

    Censorship by blunt force

    Beyond the IMC’s advisories and warnings, the Cable TV Act contains two broad powers of censorship. The first empowerment in section 19 enables a government official to ban any programme or channel if it fails to comply with the Programme Code or, “if it is likely to promote, on grounds of religion, race, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, linguistic or regional groups or castes or communities or which is likely to disturb the public tranquility.”

    The second empowerment is much wider. Section 20 of the Cable TV Act permits the Central Government to ban an entire cable television operator, as opposed to a single channel or programmes within channels, if it “thinks it necessary or expedient so to do in public interest”. No reasons need be given and no grounds need be considered. Such a blunt use of force creates an overwhelming power of censorship. It is not a coincidence that section 20 resembles some provisions of nineteenth-century telegraph laws, which were designed to enable the colonial state to control the flow of information to its native subjects.

    A manual for television bans

    Film and television have always attracted political attention and state censorship. In 1970, Justice Hidayatullah of the Supreme Court explained why: “It has been almost universally recognised that the treatment of motion pictures must be different from that of other forms of art and expression. This arises from the instant appeal of the motion picture… The motion picture is able to stir up emotions more deeply than any other product of art.”

    Within this historical narrative of censorship, television regulation is relatively new. Past governments have also been quick to threaten censorship for attacking an incumbent Prime Minister. There seems to be a pan-governmental consensus that senior political leaders ought to be beyond reproach, irrespective of their words and deeds.

    But on what grounds could the state justify these bans? Lord Atkin's celebrated war-time dissent in Liversidge (1941) offers an unlikely answer:

    “When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’”

    The Short-lived Adventure of India’s Encryption Policy

    by Bhairav Acharya last modified Nov 29, 2015 09:03 AM
    Written for the Berkeley Information Privacy Law Association (BIPLA).

    During his recent visit to Silicon Valley, Indian Prime Minister Narendra Modi said his government was “giving the highest importance to data privacy and security, intellectual property rights and cyber security”. But a proposed national encryption policy circulated in September 2015 would have achieved the opposite effect.

    The policy was comically short-lived. After its poorly-drafted provisions invited ridicule, it was swiftly withdrawn. But the government has promised to return with a fresh attempt to regulate encryption soon. The incident highlights the worrying assault on communications privacy and free speech in India, a concern compounded by the enormous scale of the telecommunications and Internet market.

    Even with only around 26 percent of its population online, India already has the world's second-largest Internet user base, recently overtaking the United States. The number of Internet users in India is set to grow exponentially, spurred by ambitious governmental schemes to build a 'Digital India' and a country-wide fiber-optic backbone. There will be a corresponding increase in the use of the Internet for communicating and conducting commerce.

    Encryption on the Internet

    Encryption protects the security of Internet users from invasions of privacy, theft of data, and other attacks. An encryption algorithm (cipher), applied with a secret key, encodes ordinary data (plaintext) into an unintelligible form (ciphertext); the same key is needed to decrypt it. The ciphertext can be intercepted but remains unintelligible without the key.
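    The round trip described above can be sketched with a toy symmetric cipher. This is a deliberately insecure illustration (a repeating-key XOR, not a real cipher such as AES), intended only to show how a shared secret key turns plaintext into ciphertext and back:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR each byte with the repeating key.
    # Encryption and decryption are the same operation, since
    # XORing twice with the same key restores the original byte.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)              # the secret key
plaintext = b"meet me at noon"
ciphertext = xor_cipher(plaintext, key)    # unintelligible without the key
recovered = xor_cipher(ciphertext, key)    # same key decrypts
assert recovered == plaintext
```

    An interceptor who obtains only `ciphertext` learns nothing useful; whoever holds `key` recovers the message, which is why the key must stay secret.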

    There are several methods of encryption. SSL/TLS, a family of encryption protocols, is commonly used by major websites. But while some companies encrypt sensitive data, such as passwords and financial information, during its transit through the Internet, data at rest on servers is largely unencrypted. For instance, email providers regularly store plaintext messages on their servers. As a result, governments simply demand and receive backdoor access to information directly from the companies that provide these services. However, governments have long insisted on blanket backdoor access to all communications data, both encrypted and unencrypted, and whether at rest or in transit.

    On the other hand, proper end-to-end encryption – full encryption from the sender to recipient, where the service provider simply passes on the ciphertext without storing it, and deletes the metadata – will defeat backdoors and protect privacy, but may not be profitable. End-to-end encryption alarms the surveillance establishment, which is why British Prime Minister David Cameron wants to ban it, and many in the US government want Silicon Valley companies to stop using it.

    Communications privacy

    Instead of relying on a company to secure communications, the surest way to achieve end-to-end encryption is for the sender to encrypt the message before it leaves her computer. Since only the sender and intended recipient have the key, even if the data is intercepted in transit or obtained through a backdoor, only the ciphertext will be visible.

    For almost all of human history, encryption relied on a single shared key; that is, both the sender and recipient used a pre-determined key. But, like all secrets, the more who know it, the less secure the key becomes. From the 1970s onwards, revolutionary advances in cryptography enabled the generation of a pair of dissimilar keys, one public and one private, which are uniquely and mathematically linked. This is asymmetric or public key cryptography, where the private key remains an exclusive secret. It offers the strongest protection for communications privacy because it returns autonomy to the individual and is immune to backdoors.

    For those using public key encryption, Edward Snowden’s revelation that the NSA had cracked several encryption protocols including SSL/TLS was worrying. Brute-force decryption (the use of supercomputers to mathematically attack keys) questions the integrity of public key encryption. But, since the difficulty of code-breaking is directly proportional to key size, notionally, generating longer keys will thwart the NSA, for now.

    The crypto-wars in India

    Where does India’s withdrawn encryption policy lie in this landscape of encryption and surveillance? It is difficult to say. Because it was so badly drafted, understanding the policy was a challenge. It could have been a ham-handed response to commercial end-to-end encryption, which many major providers such as Apple and WhatsApp are adopting following consumer demand. But curiously, this did not appear to be the case, because the government later exempted WhatsApp and other “mass use encryption products”.

    The Indian establishment has a history of battling commercial encryption. From 2008, it fought Blackberry for backdoor access to its encrypted communications, coming close to banning the service, which dissipated only once the company lost its market share. There have been similar attempts to force Voice over Internet Protocol providers to fall in line, including Skype and Google. And there is a new thrust underway to regulate over-the-top content providers, including US companies.

    The policy could represent a new phase in India's crypto-wars. The government, emboldened by the sheer scale of the country's market, might press an unyielding demand for communications backdoors. The policy made no bones about this desire: it sought to bind communications companies by mandatory contracts, regulate key sizes and algorithms, compel surrender of encryption products including "working copies" of software (the key generation mechanism), and more.

    The motives of regulation

    The policy’s deeply intrusive provisions manifest a long-standing effort of the Indian state to dominate communications technology unimpeded by privacy concerns. From wiretaps to Internet metadata, intrusive surveillance is not judicially warranted, does not require the demonstration of probable cause, suffers no external oversight, and is secret. These shortcomings are enabling the creation of a sophisticated surveillance state that sits ill with India’s constitutional values.

    Those values are being steadily besieged. India’s Supreme Court is entertaining a surge of clamorous litigation to check an increasingly intrusive state. Only a few months ago, the Attorney-General – the government’s foremost lawyer – argued in court that Indians did not have a right to privacy, relying on 1950s case law which permitted invasive surveillance. Encryption which can inexpensively lock the state out of private communications alarms the Indian government, which is why it has skirmished with commercially-available encryption in the past.

    On the other hand, the conflict over encryption is fueled by inconsistent laws. Telecoms licensing regulations restrict Internet Service Providers to 40-bit symmetric keys, a primitively weak standard; stronger encryption requires permission and, presumably, surrender of the shared key to the government. Securities trading on the Internet requires 128-bit SSL/TLS encryption, while the country’s central bank is pushing for end-to-end encryption for mobile banking. Seen in this light, the policy could simply be an attempt to rationalize an uneven field.
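    A back-of-the-envelope calculation shows just how weak a 40-bit ceiling is. The Python sketch below assumes an attacker testing one billion keys per second – an illustrative figure, not a measurement – and compares worst-case exhaustive-search times for 40-bit and 128-bit symmetric keys:

```python
# Back-of-the-envelope comparison of symmetric key spaces. The attack rate
# below is an assumption for illustration, not a measured capability.
def years_to_brute_force(key_bits: int, keys_per_second: float) -> float:
    """Worst-case exhaustive key-search time, in years."""
    keyspace = 2 ** key_bits
    seconds = keyspace / keys_per_second
    return seconds / (365 * 24 * 3600)

RATE = 1e9  # assumed: one billion keys tested per second

print(f"40-bit : {years_to_brute_force(40, RATE):.6f} years")   # minutes of work
print(f"128-bit: {years_to_brute_force(128, RATE):.3e} years")  # astronomically long
```

    At that assumed rate, the entire 40-bit keyspace falls in roughly eighteen minutes, while a 128-bit keyspace would take on the order of 10^22 years.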

    Encryption and freedom

    Perhaps the government was trying to restrict individuals’ use of public key encryption and Internet anonymization services, such as Tor or I2P. India’s telecoms minister stated: “The purport of this encryption policy relates only to those who encrypt.” This was not particularly illuminating. If the government wants to pre-empt terrorism – a legitimate duty – this approach is flawed, since regardless of the law’s command no terrorist will plausibly disclose her key to the government. Besides, since the few users of Internet anonymizers in India are in any case targeted for special monitoring, it would be more productive for the surveillance establishment to maintain the status quo.

    This leaves harmless encrypters – businesses, journalists, whistleblowers, and innocent privacy enthusiasts. For this group, impediments to encryption interfere with the ability to communicate freely. There is a direct link between encryption and the freedom of speech and expression, a fact acknowledged by Special Rapporteur David Kaye of the UN Human Rights Council, of which India is a member. Kaye notes: “Encryption and anonymity are especially useful for the development and sharing of opinions, which often occur through online correspondence such as e-mail, text messaging, and other online interactions.”

    This is because encryption affords privacy which promotes free speech, a relationship reiterated by the previous UN Special Rapporteur, Frank La Rue. On the other hand, surveillance has a “chilling effect” on speech. In 1962, Justice Subba Rao’s famous dissent in the Indian Supreme Court presciently connected privacy and free speech:

    The act of surveillance is certainly a restriction on the [freedom of speech]. It cannot be suggested that the said freedom…will sustain only the mechanics of speech and expression. An illustration will make our point clear. A visitor, whether a wife, son or friend, is allowed to be received by a prisoner in the presence of a guard. The prisoner can speak with the visitor; but, can it be suggested that he is fully enjoying the said freedom? It is impossible for him to express his real and intimate thoughts to the visitor as fully as he would like. To extend the analogy to the present case is to treat the man under surveillance as a prisoner within the confines of our country and the authorities enforcing surveillance as guards. So understood, it must be held that the petitioner’s freedom under [the right to free speech under the Indian] Constitution is also infringed.

    Kharak Singh v. State of Uttar Pradesh (1964) 1 SCR 332, pr. 30.

    Perhaps the policy expressed the government’s discomfort at individual encrypters escaping surveillance, like free agents evading the state’s control. How should the law respond to this problem? Daniel Solove says the security of the state need not compromise individual privacy. On the other hand, as Ronald Dworkin influentially maintained, the freedoms of the individual precede the interests of the state.

    Security and trade interests

    However, even when assessed from the perspective of India’s security imperatives, the policy would have had harmful consequences. It required users of encryption, including businesses and consumers, to store plaintext versions of their communications for ninety days and surrender them to the government on demand. This outrageously ill-conceived provision would have created veritable ‘honeypots’ of unencrypted data, ripe for theft (a honeypot, originally, is a decoy server set up to lure hackers). Note that India does not have a data breach law.

    The policy’s demand that encryption companies register their products and give working copies of their software and encryption mechanisms to the Indian government would have flown in the face of trade secrecy and intellectual property protection. The policy’s hurried withdrawal was a public relations exercise on the eve of Prime Minister Modi’s visit to Silicon Valley. It was successful: Modi encountered no criticism of his government’s visceral opposition to privacy, even though the policy would have severely disrupted the business practices of US communications providers operating in India.

    Encryption invites a convergence of state interests between India and the US as well: both countries want to control it. Last month’s joint statement from the US-India Strategic and Commercial Dialogue pledges “further cooperation on internet and cyber issues”. This innocuous statement masks a robust information-gathering and -sharing regime. There is no guarantee against the sharing of encryption mechanisms or intercepted communications by India.

    The government has promised to return with a reworked proposal. It would be in India’s interest for this to be preceded by a broad-based national discussion on encryption and its links to free speech, privacy, security, and commerce.


    This post was originally published on the Free Speech / Privacy / Technology website.

    How India Regulates Encryption

    by Pranesh Prakash & Japreet Grewal — last modified Jul 23, 2016 01:24 PM
    Contributors: Geetha Hariharan

    Governments across the globe have been arguing for the need to regulate the use of encryption for law enforcement and national security purposes. Various means of regulation, such as backdoors, weak encryption standards and key escrows, have been widely employed, leaving the information of online users vulnerable not only to uncontrolled access by governments but also to cyber-criminals. The Indian regulatory space has not been untouched by this practice and contains laws and policies that control encryption. The regulatory requirements relating to the use of encryption are fragmented across legislation such as the Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act), and several sector-specific regulations. The regulatory framework is designed either to limit encryption or to gain access to the means of decryption or to decrypted information.
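    To make the key-escrow idea mentioned above concrete, the Python sketch below shows one simple way an escrow scheme can be built, using 2-of-2 XOR secret sharing. This is a toy model under our own assumptions, not a description of any actual government scheme; the point is that whoever assembles both shares – say, the state together with the provider – can reconstruct the session key.

```python
import os

# Toy key escrow via 2-of-2 XOR secret sharing: either share alone reveals
# nothing about the key, but both shares together reconstruct it exactly.
# Illustrative only; not any government's actual escrow mechanism.
def split_key(key: bytes) -> tuple[bytes, bytes]:
    share_a = os.urandom(len(key))                        # kept by the provider
    share_b = bytes(k ^ a for k, a in zip(key, share_a))  # deposited in escrow
    return share_a, share_b

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = os.urandom(16)             # a 128-bit session key
a, b = split_key(key)
assert reconstruct(a, b) == key  # both shares together recover the key
```

    The same arithmetic that makes the scheme work is what worries critics: the escrowed share turns a private key into something two institutions can jointly recover without the user’s participation.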

    Limiting encryption

    The IT Act does not prescribe the level or type of encryption to be used by online users. Under Section 84A, it grants the Government the authority to prescribe modes and methods of encryption. The Government has not issued any rules in exercise of these powers so far, but it had released a draft encryption policy on September 21, 2015. Under the draft policy, only those encryption algorithms and key sizes notified by the Government were permitted to be used. The draft policy was withdrawn following widespread criticism of several of its requirements, notably the retention of unencrypted user information for 90 days and the mandatory registration of all encryption products offered in the country.

    The Internet Service Providers License Agreement (ISP License), entered into between the Department of Telecommunications (DoT) and an Internet Service Provider (ISP) to provide internet services (i.e. internet access and internet telephony services), permits the use of encryption up to a 40-bit key length in symmetric algorithms, or its equivalent in others.[1] The restriction applies not only to ISPs but also to individuals, groups and organisations that use encryption. If an individual, group or organisation decides to deploy encryption stronger than 40 bits, prior permission from the DoT must be obtained and the decryption key must be deposited with the DoT. There are, however, no parameters laid down for use of the decryption key by the Government. Several issues arise in relation to the enforcement of these license conditions.

    1. While this requirement is applicable to all individuals, groups and organisations using encryption, it is difficult to enforce, as the ISP License binds only the DoT and the ISP and cannot be enforced against third parties.
    2. Further, a 40-bit symmetric key length is considered an extremely weak standard[2] and is inadequate for the protection of data stored or communicated online. Various sector-specific regulations already in place in India prescribe encryption of more than 40 bits.
      • The Reserve Bank of India has issued guidelines for Internet banking[3] that prescribe 128-bit encryption as the minimum and acknowledge that constant advances in computer hardware and cryptanalysis may induce the use of larger key lengths.
      • The Securities and Exchange Board of India also prescribes[4] 64-bit/128-bit encryption for standard network security, and the use of Secure Sockets Layer (SSL) security, preferably with 128-bit encryption, for securities trading over a mobile phone or a wireless application platform.
      • Further, under Rule 19(2) of the Information Technology (Certifying Authorities) Rules, 2000 (CA Rules), the Government has prescribed security guidelines for the management and implementation of information technology security by certifying authorities. Under these guidelines, the Government suggests the use of suitable security software, or even encryption software, to protect sensitive information and the devices used to transmit or store it, such as routers, switches, network devices and computers (also called information assets). The guidelines acknowledge the need to use internationally proven encryption techniques to encrypt stored passwords, such as the PKCS#1 RSA Encryption Standard (512, 1024, 2048 bit), the PKCS#5 Password-Based Encryption Standard or the PKCS#7 Cryptographic Message Syntax Standard, as mentioned under Rule 6 of the CA Rules. These encryption algorithms are far stronger and more secure than a 40-bit key standard.
      • The ISP License also contains a clause providing that the use of any hardware or software that may render the network security vulnerable would be considered a violation of the license conditions.[5] Network security may be compromised by a weak security measure such as the 40-bit encryption (or its equivalent) prescribed by the DoT, yet the liability would be imputed to the ISP. As a result, an ISP that is merely complying with the license conditions by employing no more than 40-bit encryption may be held liable under what appear to be contradictory license conditions.
      • It is noteworthy that the restriction on key size under the ISP License has not been imported into the Unified Service License Agreement (UL Agreement) formulated by the DoT. The UL Agreement does not prescribe a specific level of encryption for the provision of services. Clause 37.5 of the UL Agreement, however, makes it clear that the use of encryption will be governed by the provisions of the IT Act. As noted earlier, the Government has not specified any limit on the level or type of encryption under the IT Act, though it had released a draft encryption policy that was withdrawn following widespread criticism.
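    The PKCS#5 standard cited in the CA Rules guidelines corresponds to password-based key derivation, implemented today as PBKDF2 (PKCS#5 v2.0). The sketch below uses Python’s standard library; the hash function, salt size and iteration count are illustrative choices on our part, not values mandated by the Rules:

```python
import hashlib
import os

# Minimal sketch of PKCS#5-style password-based key derivation (PBKDF2, as
# defined in PKCS#5 v2.0 / RFC 2898). Parameters below are illustrative
# choices, not values mandated by the CA Rules.
def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2 with HMAC-SHA256 yields a 32-byte (256-bit) key by default.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)    # unique random salt per stored password
key = derive_key("correct horse battery staple", salt)
print(len(key) * 8)      # 256
```

    Contrast the 256-bit key derived here with the 40-bit ceiling in the ISP License: the sector-specific guidance and the telecom licenses pull in opposite directions.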

     

    The Telecom Licenses (ISP License, UL Agreement, and Unified Access Service License) prohibit the use of bulk encryption by service providers, yet service providers remain responsible for maintaining the privacy of communications and preventing unauthorized interception.

    Gaining access to means of decryption or decrypted information

    Besides restricting the level of encryption, the ISP License and the UL Agreement make it mandatory for service providers, including ISPs, to provide to the DoT all details of the technology employed for operations, and to furnish all documentary details – such as concerned literature, drawings, installation materials, tools and testing instruments – relating to the system intended to be used for operations, as and when required by the DoT.[6] While these license conditions do not expressly lay down that access to the means of decryption must be given to the government, the language is sufficiently broad to include such access as well. Further, ISPs are required to take prior approval of the DoT for the installation of any equipment or the execution of any project in areas that are sensitive from a security point of view. ISPs are in fact subject to, and further required to facilitate, continuous monitoring by the DoT. These obligations ensure that the Government has complete access to and control over the infrastructure for providing internet services, which includes any installation or equipment used for encryption and decryption.

    The Government has also been granted the power to gain access to the means of decryption or, simply, to decrypted information under Section 69 of the IT Act and the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 (Decryption Rules).

    1. A decryption order usually entails a direction to a decryption key holder to disclose a decryption key, or to allow access to or facilitate the conversion of encrypted information, and must contain reasons for such direction. In fact, Rule 8 of the Decryption Rules makes it mandatory for the authority to consider other means of acquiring the necessary information before issuing a decryption order.
    2. The Secretary in the Ministry of Home Affairs, or the Secretary in charge of the Home Department in a state or union territory, is authorised to issue an order of decryption in the interest of the sovereignty or integrity of India, the defence of India, the security of the state, friendly relations with foreign states, public order, preventing incitement to the commission of any cognizable offence relating to the above, or for the investigation of any offence. It is useful to note that this provision was amended in 2009 to expand the grounds on which a direction for decryption may be passed: post-2009, the Government can issue a decryption order for the investigation of any offence. In the absence of any specific process laid down for the collection of digital evidence, do we follow the procedure under the criminal law, or is it necessary to draw a distinction between the investigation process in the digital and the physical environments and examine whether adequate safeguards exist to check the abuse of the investigatory powers of the police?
    3. Orders for decryption must be examined by a review committee constituted under Rule 419A of the Indian Telegraph Rules, 1951 to ensure compliance with the provisions of the IT Act. The review committee is required to convene at least once every two months for this purpose. However, in a response to an RTI dated April 21, 2015 filed by our organisation, the Department of Electronics and Information Technology informed us that since its constitution, the review committee has met only once, in January 2013.

    Conclusion

    While studying a regulatory framework for encryption, it is necessary to identify the lens through which encryption is viewed: is it a means of information security, or a threat to national security? As noted earlier, the encryption mandates for banking systems and certifying authorities in India contradict those under the telecom licenses and the Decryption Rules. It would help to weigh whether the Government’s prevailing scepticism is well founded against the need for strong encryption. It would be useful to survey statistics on cyber incidents where strong encryption was employed, and to look at instances that reflect whether strong encryption has made it difficult for law enforcement agencies to prevent or resolve crimes. It would also help to record cyber incidents that have resulted from vulnerabilities, such as backdoors or key escrows, deliberately introduced by law. These statistics would clear the air about the role of encryption in securing cyberspace and facilitate appropriate regulation.

    [1] Clause 2.2 (vii) of the ISP License

    [2] Schneier, Bruce (1996). Applied Cryptography (Second ed.). John Wiley & Sons

    [3] Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds- Implementation of recommendations, 2011

    [4] Report on Internet Based Trading by the SEBI Committee on Internet based Trading and Services, 2000; It is useful to note that subsequently SEBI had acknowledged that the level of encryption would be governed by DoT policy in a SEBI circular no CIR/MRD/DP/25/2010 dated August 27, 2010 on Securities Trading using Wireless Technology

    [5] Clause 34.25 of the ISP License

    [6] Clauses 22 and 23 of Part IV of the ISP License

    Concept Note: Network Neutrality in South Asia

    by Prasad Krishna last modified Dec 01, 2015 02:34 AM

    Network Neutrality South Asia Concept Note _ORF CIS.pdf — PDF document, 238 kB (244150 bytes)

    The Case of Whatsapp Group Admins

    by Japreet Grewal — last modified Dec 08, 2015 10:25 AM
    Contributors: Geetha Hariharan

    Censorship laws in India have now roped in group administrators of chat groups on instant messaging platforms such as Whatsapp (group admin(s)) for allegedly objectionable content that was posted by other users of these chat groups. Several incidents[1] were reported this year where group admins were arrested in different parts of the country for allowing content that was allegedly objectionable under law. A few reports mentioned that these arrests were made under Section 153A[2] read with Section 34[3] of the Indian Penal Code (IPC) and Section 67[4] of the Information Technology Act (IT Act).

    The targeting of a group admin for content posted by other members of a chat group has raised concerns about how this liability is imputed. Should a group admin be considered an intermediary under Section 2(w) of the IT Act? If yes, would a group admin be protected from such liability?

    Group admin as an intermediary

    Whatsapp is an instant messaging platform that can be used for mass communication by opting to create a chat group. A chat group is a feature on Whatsapp that allows joint participation of Whatsapp users; a single chat group can have up to 100 users. Every chat group has one or more group admins, who control participation in the group by adding or deleting people.[5] It is imperative that we understand whether, by choosing to create a chat group on Whatsapp, a group admin can become liable for content posted by other members of the chat group.

    Section 34 of the IPC provides that when a number of persons engage in a criminal act with a common intention, each person is made liable as if he alone did the act. Common intention implies a pre-arranged plan and acting in concert pursuant to the plan. It is interesting to note that group admins have been arrested under Section 153A on the ground that a group admin and a member posting actionable content on a chat group share a common intention to post such content. But would this hold true when, for instance, a group admin creates a chat group for posting lawful content (say, for matchmaking purposes) and a member of the chat group posts content that is actionable under law (say, a video abusing Dalit women)? Common intention can be established by direct evidence, or inferred from conduct, surrounding circumstances or any incriminating facts.[6]

    We need to understand whether common intention can be established in case of a user merely acting as a group admin. For this purpose it is necessary to see how a group admin contributes to a chat group and whether he acts as an intermediary.

    We know that the parameters for determining an intermediary differ across jurisdictions, and most global organisations have categorised intermediaries based on their roles or technical functions.[7] Section 2(w) of the Information Technology Act, 2000 (IT Act) defines an intermediary as any person who, on behalf of another person, receives, stores or transmits messages or provides any service with respect to that message, and includes telecom service providers, network providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online marketplaces and cyber cafés. Does a group admin receive, store or transmit messages on behalf of group participants, provide any service with respect to their messages, or fall into any category mentioned in the definition? Whatsapp does not allow a group admin to receive or store messages on behalf of another participant in a chat group; every group member independently controls his posts in the group. However, a group admin helps transmit the messages of another participant to the group by allowing that participant to be part of the group, thus effectively providing a service in respect of messages. A group admin should therefore be considered an intermediary, though his contribution to the chat group is limited to allowing participation; this is discussed in further detail in the section below.

    According to a 2010 report[8] by the Organisation for Economic Co-operation and Development (OECD), an internet intermediary brings together or facilitates transactions between third parties on the Internet. It gives access to, hosts, transmits and indexes content, products and services originated by third parties on the Internet, or provides Internet-based services to third parties. A Whatsapp chat group allows people who are not on a user’s contact list to interact with that user, provided they are on the group admin’s contact list. In facilitating this interaction, a group admin may be considered an intermediary under the OECD definition.

    Liability as an intermediary

    Section 79(1) of the IT Act protects an intermediary from liability under any law in force (for instance, liability under Section 153A pursuant to the rule laid down in Section 34 of the IPC) if the intermediary fulfils certain conditions laid down therein. An intermediary is required to carry out certain due diligence obligations laid down in Rule 3 of the Information Technology (Intermediaries Guidelines) Rules, 2011 (Rules). These obligations include monitoring content that infringes intellectual property, threatens national security or public order, or is obscene or defamatory or violates any law in force (Rule 3(2)).[9] An intermediary is liable for publishing or hosting such user-generated content; however, as mentioned earlier, this liability is conditional. Section 79 of the IT Act states that an intermediary would be liable only if it initiates the transmission, selects the receiver of the transmission, or selects or modifies the information contained in the transmission that falls under any category mentioned in Rule 3(2) of the Rules. While a group admin has the ability to facilitate the sharing of information and to select the receivers of such information, he has no direct editorial control over the information shared: group admins can only remove members, not remove or modify the content posted by members of the chat group. An intermediary is also liable if it fails to comply with the due diligence obligations laid down under Rules 3(2) and 3(3); however, since a group admin lacks the authority to initiate transmission himself or to control content, he cannot comply with these obligations. Therefore, a group admin would be protected under Section 79 of the IT Act from any liability arising out of third-party or user-generated content in his group.

    It is, however, relevant to consider whether the ability of a group admin to remove participants amounts to an indirect form of editorial control.

    Other pertinent observations

    Several reports[10] have discussed how holding a group admin liable makes the process convenient, as it is difficult to locate all the users of a particular group. This reasoning may not be correct: the Whatsapp policy[11] makes it mandatory for a prospective user to provide his mobile number in order to use the platform, and no additional information is collected from group admins that might justify why they are targeted. Investigation agencies can access the mobile numbers of Whatsapp users and obtain further information from telecom companies.

    It is also interesting to note that the group admins were arrested after a user, or someone familiar to a user, filed a complaint with the police about content being objectionable or hurtful. Earlier this year, the apex court ruled in Shreya Singhal v. Union of India[12] that an intermediary needs a court order or a government notification before taking down information. With actions taken against group admins on mere complaints filed by anyone, it is clear that law enforcement officials have been overriding the mandate of the court.

    Conclusion

     

    According to a study by the global research consultancy TNS Global, around 38% of internet users in India use instant messaging applications such as Snapchat and Whatsapp on a daily basis, with Whatsapp the most widely used. These figures indicate the scale of impact that arrests of group admins may have on our daily communication.

    It is noteworthy that categorising a group admin as an intermediary would effectively make the Rules applicable to all Whatsapp users who create groups, render the Rules difficult to enforce, and perhaps blur the distinction between users and intermediaries.

    The critical question, however, is whether a chat group should be considered part of the bundle of services that Whatsapp offers its users, rather than an independent platform that makes a group admin a separate entity. Also, would it be correct to compare a Whatsapp group chat with a conference call on Skype, or with sharing a Google document with edit rights, to understand the domain into which censorship laws are penetrating today?

     

    Valuable contribution by Pranesh Prakash and Geetha Hariharan


    [1] http://www.nagpurtoday.in/whatsapp-admin-held-for-hurting-religious-sentiment/06250951 ; http://www.catchnews.com/raipur-news/whatsapp-group-admin-arrested-for-spreading-obscene-video-of-mahatma-gandhi-1440835156.html ; http://www.financialexpress.com/article/india-news/whatsapp-group-admin-along-with-3-members-arrested-for-objectionable-content/147887/

    [2] Section 153A. “Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony.— (1) Whoever— (a) by words, either spoken or written, or by signs or by visible representations or otherwise, promotes or attempts to promote, on grounds of religion, race, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different reli­gious, racial, language or regional groups or castes or communi­ties…” or 2) Whoever commits an offence specified in sub-section (1) in any place of worship or in any assembly engaged in the performance of religious wor­ship or religious ceremonies, shall be punished with imprisonment which may extend to five years and shall also be liable to fine.

    [3] Section 34. Acts done by several persons in furtherance of common intention – When a criminal act is done by several persons in furtherance of common intention of all, each of such persons is liable for that act in the same manner as if it were done by him alone.

    [4] Section 67 Publishing of information which is obscene in electronic form. -Whoever publishes or transmits or causes to be published in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment of either description for a term which may extend to five years and with fine which may extend to one lakh rupees and in the event of a second or subsequent conviction with imprisonment of either description for a term which may extend to ten years and also with fine which may extend to two lakh rupees."

    [5] https://www.whatsapp.com/faq/en/general/21073373

    [6] Pandurang v. State of Hyderabad AIR 1955 SC 216

    [7]https://www.eff.org/files/2015/07/08/manila_principles_background_paper.pdf;  http://unesdoc.unesco.org/images/0023/002311/231162e.pdf

    [8] http://www.oecd.org/internet/ieconomy/44949023.pdf

    [9] Rule 3(2) (b) of the Rules

    [10] http://www.thehindu.com/news/national/other-states/if-you-are-a-whatsapp-group-admin-better-be-careful/article7531350.ece ; http://www.newindianexpress.com/states/tamil_nadu/Social-Media-Administrator-You-Could-Land-in-Trouble/2015/10/10/article3071815.ece ; http://www.medianama.com/2015/10/223-whatsapp-group-admin-arrest/ ; http://www.thenewsminute.com/article/whatsapp-group-admin-you-are-intermediary-and-here%E2%80%99s-what-you-need-know-35031

    [11] https://www.whatsapp.com/legal/

    [12] http://supremecourtofindia.nic.in/FileServer/2015-03-24_1427183283.pdf

    DNA Research

    by Vanya Rakesh last modified Jul 21, 2016 11:02 AM
    In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. In 2012, a revised Bill was released and a group of experts was constituted to finalize it. In 2014, another version was released; its approval is pending before Parliament. This legislation would allow the Government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Here is a collection of our research on the privacy and security concerns related to the Bill.

     

    The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues related to the DNA Bill:

    1. Validity and legality of collection, usage and storage of DNA samples and information derived from the same.
    2. Monitoring projects and policies around Human DNA Profiling.
    3. Raising public awareness around issues concerning biometrics.

    In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. In 2012, a revised Bill was released and a group of experts was constituted to finalize it. In 2014, another version was released; its approval is pending before Parliament.

    The Bill seeks to establish DNA databases at the state and regional levels as well as a national-level database. The databases would store the DNA profiles of suspects, offenders, missing persons and deceased persons, and could be used by courts, law enforcement agencies (national and international) and other authorized persons for criminal and civil purposes. The Bill would also regulate DNA laboratories collecting DNA samples. Inadequate consent, the broad powers of the Board, and the deletion of innocent persons’ profiles are just a few of the concerns voiced about the Bill.

    DNA Profiling Bill - Infographic
    Download the infographic. Credit: Scott Mason and CIS team.

     

    1. DNA Bill

    The Human DNA Profiling Bill is legislation that would allow the Government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Human rights groups, individuals and NGOs have raised many concerns about the infringement of privacy and the power that such information would give the government. The Bill proposes to profile people through identifiers such as fingerprints and retinal scans, allowing the government to create a unique profile for each individual. Concerns include the loss of privacy entailed by such profiling and the manner in which it is conducted. Unless strictly controlled, monitored and protected, a database of citizens' fingerprints and retinal scans could lead to serious security risks and privacy invasions. The following articles elaborate upon these matters.

       

      2. Comparative Analysis with Other Jurisdictions

      Human DNA Profiling is not proposed only in India; systems of DNA-based identification have been proposed and implemented in many nations. Each system differs from the others depending on the nation's and society's needs. The risks and criticisms that DNA profiling faces may be the same everywhere, but the solutions adopted vary. The following articles examine the systems in place in different countries and compare them with the system proposed in India, to give a better understanding of the risks and implications of implementing such a system.

       

      Privacy Policy Research

      by Vanya Rakesh last modified Jan 03, 2016 09:40 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives:
      1. Raising public awareness and dialogue around privacy;
      2. Undertaking in-depth research on domestic and international policy pertaining to privacy;
      3. Driving comprehensive privacy legislation in India through research.

      India does not have comprehensive legislation covering issues of privacy or establishing the right to privacy. In 2010, an "Approach Paper on Privacy" was published; in 2011, the Department of Personnel and Training released a draft Right to Privacy Bill; in 2012, the Planning Commission constituted a group of experts which published the Report of the Group of Experts on Privacy; in 2013, CIS drafted the citizens' Privacy Protection Bill; and in 2014, the Right to Privacy Bill was leaked. Currently, the Government is in the process of drafting and finalizing the Bill.

      Draft Right to Privacy

      Privacy Research -

      1. Approach Paper on Privacy, 2010 -

      The following article contains the reply drafted by CIS in response to the Approach Paper on Privacy, 2010. The Paper was drafted by a group of officers tasked with developing a framework for privacy legislation that would balance the need for privacy protection, security and sectoral interests, and respond to existing domain legislation on the subject.

      2. Report on Privacy, 2012 -

      The Report on Privacy, 2012 was drafted and published by a group of experts constituted under the Planning Commission, examining the then-current legislation with respect to privacy. The following articles contain responses and criticisms of the report and that legislation.

      3. Privacy Protection Bill, 2013 -

      The Privacy Protection Bill, 2013 was draft legislation that aimed to formulate rules governing privacy protection. The following articles refer to this legislation, including a citizens' draft of the Bill.

      4. Right to Privacy Act, 2014 (Leaked Bill) -

      The Right to Privacy Act, 2014 is a bill still under proposal; a leaked draft is linked below.

      • Leaked Privacy Bill: 2014 vs. 2011 http://bit.ly/QV0Y0w

      Sectoral Privacy Research

      by Vanya Rakesh last modified Jan 03, 2016 09:46 AM
      The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues.
      1. Research on the issue of privacy in different sectors in India.
      2. Monitoring projects, practices, and policies around those sectors.
      3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

      The right to privacy has evolved in India over many decades, and the question of whether it is a fundamental right has been debated many times in courts of law. With the advent of information technology and the digitisation of services, privacy holds even more relevance in sectors like banking, healthcare, telecommunications and ICT. The right to privacy is also addressed in the context of sexual minorities, whistle-blowers, government services and more.

      Sectors -

      1. Consumer Privacy and other sectors -

      Consumer privacy laws and regulations seek to protect individuals from loss of privacy due to failures or limitations of corporate customer-privacy measures. The following articles deal with the consumer privacy laws currently in place in India and around the world, as well as privacy concerns arising in related areas such as copyright law and data protection.

      § Consumer Privacy - How to Enforce an Effective Protective Regime? http://bit.ly/1a99P2z

      § Privacy and Information Technology Act: Do we have the Safeguards for Electronic Privacy? http://bit.ly/10VJp1P

      § Limits to Privacy http://bit.ly/19mPG6I

      § Copyright Enforcement and Privacy in India http://bit.ly/18fi9fM

      § Privacy in India: Country Report http://bit.ly/14pnNwl

      § Transparency and Privacy http://bit.ly/1a9dMnC

      § The Report of the Group of Experts on Privacy (Contributed by CIS) http://bit.ly/VqzKtr

      § The (In) Visible Subject: Power, Privacy and Social Networking http://bit.ly/15koqol

      § Privacy and the Indian Copyright Act, 1957 as Amended in 2010 http://bit.ly/1euwX0r

      § Should Ratan Tata be afforded the Right to Privacy? http://bit.ly/LRlXin

      § Comments on Information Technology (Guidelines for Cyber Café) Rules, 2011 http://bit.ly/15kojJn

      § Broadcasting Standards Authority Censures TV9 over Privacy Violations! http://bit.ly/16L4izl

      § Is Data Protection Enough? http://bit.ly/1bvaWx2

      § Privacy, speech at stake in cyberspace http://cis-india.org/news/privacy-speech-at-stake-in-cyberspace-1

      § Q&A to the Report of the Group of Experts on Privacy http://bit.ly/TPhzQQ

      § Privacy worries cloud Facebook's WhatsApp Deal http://cis-india.org/internet-governance/blog/economic-times-march-14-2014-sunil-abraham-privacy-worries-cloud-facebook-whatsapp-deal

      § GNI Assessment Finds ICT Companies Protect User Privacy and Freedom of Expression http://bit.ly/1mjbpmL

      § A Stolen Perspective http://bit.ly/1bWHyzv

      § Is Data Protection enough? http://cis-india.org/internet-governance/blog/privacy/is-data-protection-enough

      § I don't want my fingerprints taken http://bit.ly/aYdMia

      § Keeping it Private http://bit.ly/15wjTVc

      § Personal Data, Public Profile http://bit.ly/15vlFk4

      § Why your Facebook Stalker is Not the Real Problem http://bit.ly/1bI2MSc

      § The Private Eye http://bit.ly/173ypSI

      § How Facebook is Blatantly Abusing our Trust http://bit.ly/OBXGXk

      § Open Secrets http://bit.ly/1b5uvK0

      § Big Brother is Watching You http://bit.ly/1cGpg0K

      2. Banking/Finance -

      Privacy in the banking and finance industry is crucial, as one person's records and funds must not be accessible to another without due authorisation. The following articles deal with the current rules governing privacy in the financial and banking industry.

      § Privacy and Banking: Do Indian Banking Standards Provide Enough Privacy Protection? http://bit.ly/18fhsTM

      § Finance and Privacy http://bit.ly/15aUPh6

      § Making the Powerful Accountable http://bit.ly/1nvzSpC

      3. Telecommunications -

      The telecommunications industry is the backbone of modern ICTs, and it is governed by its own rules and regulations. These rules, along with criticism and acclaim, are the focus of the following articles.

      § Privacy and Telecommunications: Do We Have the Safeguards? http://bit.ly/10VJp1P

      § Privacy and Media Law http://bit.ly/18fgDfF

      § IP Addresses and Expeditious Disclosure of Identity in India http://bit.ly/16dBy4N

      § Telecommunications and Internet Privacy http://bit.ly/16dEcaF

      § Encryption Standards and Practices http://bit.ly/KT9BTy

      § Encryption Standards and Practices http://cis-india.org/internet-governance/blog/privacy/privacy_encryption

      § Security: Privacy, Transparency and Technology http://cis-india.org/internet-governance/blog/security-privacy-transparency-and-technology

      4. Sexual Minorities -

      While the internet is a global forum of self-expression and acceptance for most of us, the same does not hold true for sexual minorities. For those who do not conform to the identities sanctioned by society, the internet is a place of secrecy; when they reveal themselves, or are revealed by others, they often face hostility, and so privacy matters more to them than to most. The following article looks into their situation.

      · Privacy and Sexual Minorities http://bit.ly/19mQUyZ

      5. Health -

      The confidentiality between a doctor and a patient is seen as incredibly important, and the same should hold in any situation where a person reveals more than they otherwise would, such as CT scans and other diagnostic procedures. The following articles look into the present state of privacy in settings like hospitals and diagnostic centres.

      § Health and Privacy http://bit.ly/16L1AJX

      § Privacy Concerns in Whole Body Imaging: A Few Questions http://bit.ly/1jmvH1z

      6. e-Governance -

      A main focus of governments' interest in ICTs is their use for governance. Many laws have been passed by various countries, including India, in an effort to govern the universal space that is the internet, and surveillance is a major part of that governance and control. The articles listed below deal with the ethics of, and the drawbacks in, the current legal scenario involving ICTs.

      § E-Governance and Privacy http://bit.ly/18fiReX

      § Privacy and Governmental Databases http://bit.ly/18fmSy8

      § Killing Internet Softly with its Rules http://bit.ly/1b5I7Z2

      § Cyber Crime & Privacy http://bit.ly/17VTluv

      § Understanding the Right to Information http://bit.ly/1hojKr7

      § Privacy Perspectives on the 2012-2013 Goa Beach Shack Policy http://bit.ly/ThAovQ

      § Identifying Aspects of Privacy in Islamic Law http://cis-india.org/internet-governance/blog/identifying-aspects-of-privacy-in-islamic-law

      § What Does Facebook's Transparency Report Tell Us About the Indian Government's Record on Free Expression & Privacy? http://cis-india.org/internet-governance/blog/what-does-facebook-transparency-report-tell-us-about-indian-government-record-on-free-expression-and-privacy

      § Search and Seizure and the Right to Privacy in the Digital Age: A Comparison of US and India http://cis-india.org/internet-governance/blog/search-and-seizure-and-right-to-privacy-in-digital-age

      § Internet Privacy in India http://cis-india.org/telecom/knowledge-repository-on-internet-access/internet-privacy-in-india

      § Internet-driven Developments - Structural Changes and Tipping Points http://bit.ly/10s8HVH

      § Data Retention in India http://bit.ly/XR791u

      § 2012: Privacy Highlights in India http://bit.ly/1kWe3n7

      § Big Dog is Watching You! The Sci-fi Future of Animal and Insect Drones http://bit.ly/1kWee1W

      7. Whistle-blowers -

      Whistle-blowers are in a difficult position when they reveal the misdeeds of corporations and governments, given the blowback possible if their identities become public. As the cases of Edward Snowden and many others show, a whistle-blower's identity must be kept private to shield them from the consequences of revealing what they did. This is the main focus of the article below.

      § The Privacy Rights of Whistle-blowers http://bit.ly/18GWmM3

      8. Cloud and Open Source -

      Cloud computing and open source software have grown rapidly over the past few decades. Cloud computing lets an individual or company use offsite hardware, provided and owned by someone else, on a pay-per-use basis; the advantages are easy access and low upfront costs. Open source software is software whose source code is publicly available under licences that permit use, study, modification and redistribution. Such software is typically built on open standards, with the advantage of being compatible with many different set-ups, and is often available at no charge. The following article highlights these computing models.

      § Privacy, Free/Open Source, and the Cloud http://bit.ly/1cTmGoI

      9. e-Commerce -

      One of the fastest growing applications of the internet is e-commerce, which includes many facets of commerce such as online trading and the stock exchange. In these cases, just as in the financial and banking industries, privacy is essential to protect one's investments and capital. The following article's main focus is the world of e-commerce and its current privacy scenario.

      § Consumer Privacy in e-Commerce http://bit.ly/1dCtgTs

      Security Research

      by Vanya Rakesh last modified Jan 03, 2016 09:55 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives:
      1. Research on the issue of privacy in different sectors in India.
      2. Monitoring projects, practices, and policies around those sectors.
      3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

      State surveillance in India has been carried out by Government agencies for many years. Recent projects include NATGRID, CMS and NETRA, which aim to overhaul the overall security and intelligence infrastructure in the country. The purpose of such initiatives has been to maintain national security and ensure interconnectivity and interoperability between departments and agencies. The structure, regulatory frameworks (or lack thereof), and technologies used in these programmes and projects have attracted concern and criticism.

      Surveillance/Security Research -

      1. Central Monitoring System -

      The Central Monitoring System or CMS is a clandestine mass electronic surveillance data mining program installed by the Center for Development of Telematics (C-DOT), a part of the Indian government. It gives law enforcement agencies centralized access to India's telecommunications network and the ability to listen in on and record mobile, landline, satellite, Voice over Internet Protocol (VoIP) calls along with private e-mails, SMS, MMS. It also gives them the ability to geo-locate individuals via cell phones in real time.

      • The Central Monitoring System: Some Questions to be Raised in Parliament http://bit.ly/1fln2vu

      2. Surveillance Industry: Global and Domestic -

      The surveillance industry is a multi-billion-dollar economic sector that tracks individuals along with their communications, such as e-mails and texts. Justified by terrorism and governments' attempts to fight it, a network has been created that leaves no one with their privacy: everything an individual does in the digital world is subject to surveillance. This includes passive snooping, where an individual's phone calls, text messages and e-mails are monitored, and more active tracking, where cameras, sensors and other devices follow an individual's movements and actions. Such collection allows governments to bypass individual privacy in ways widely considered unethical, and the information collected is itself vulnerable to cyber-attacks that pose serious risks to privacy and to the individuals concerned. The following articles look into the ethics, risks, vulnerabilities and trade-offs of having a mass surveillance industry in place.

      • Surveillance Technologies http://bit.ly/14pxg74
      • New Standard Operating Procedures for Lawful Interception and Monitoring http://bit.ly/1mRRIo4

      3. Judgements By the Indian Courts -

      The surveillance industry in India has been brought before the court in different cases. The following articles look into the cause of action in these cases along with their impact on India and its citizens.

      4. International Privacy Laws -

      Due to the universality of the internet, many questions of accountability arise and jurisdiction becomes a problem. Certain treaties, agreements and other international legal instruments were therefore created to answer these questions. The articles listed below look into the international legal framework that governs the internet.

      5. Indian Surveillance Framework -

      The Indian government's mass surveillance systems are configured somewhat differently from those of countries such as the USA and the UK, owing to vast differences in both existing and required infrastructure. In many respects, India's surveillance network is considered worse than those of other countries because of the present state of its legal framework. The articles below explore the system and its functioning, including the various methods through which citizens are spied on, along with its ethics and vulnerabilities.

      • A Comparison of Indian Legislation to Draft International Principles on Surveillance of Communications http://bit.ly/U6T3xy
      • Surveillance and the Indian Constitution - Part 2: Gobind and the Compelling State Interest Test http://bit.ly/1dH3meL
      • Surveillance and the Indian Constitution - Part 3: The Public/Private Distinction and the Supreme Court's Wrong Turn http://bit.ly/1kBosnw
      • Mastering the Art of Keeping Indians Under Surveillance http://cis-india.org/internet-governance/blog/the-wire-may-30-2015-bhairav-acharya-mastering-the-art-of-keeping-indians-under-surveillance

      UID Research

      by Vanya Rakesh last modified Jan 03, 2016 09:59 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives:
      1. Researching the vision and implementation of the UID Scheme - both from a technical and regulatory perspective.
      2. Understanding the validity and legality of collection, usage and storage of Biometric information for this scheme.
      3. Raising public awareness around issues concerning privacy, data security and the objectives of the UID Scheme.

      The UID scheme seeks to provide all residents of India an identity number based on their biometrics that can be used to authenticate individuals for the purpose of Government benefits and services. A 2015 Supreme Court ruling has clarified that the UID can only be used in the PDS and LPG Schemes.

      Concerns with the scheme include the broad consent taken at the time of enrolment, the lack of clarity as to what happens with transactional metadata, the centralized storage of biometric information in the CIDR, the seeding of the Aadhaar number into service providers' databases, and the possibility of function creep. There is also the absence of legislation addressing these privacy and security concerns.

      UID Research -

      1. Ramifications of the Aadhaar and UID Schemes -

      The UID and Aadhaar systems have been bombarded with criticism and plagued by issues ranging from privacy concerns to security risks. The following articles deal with the many problems and drawbacks of these systems.

      § UID and NPR: Towards Common Ground http://cis-india.org/internet-governance/blog/uid-npr-towards-common-ground

      § Public Statement to Final Draft of UID Bill http://bit.ly/1aGf1NN

      § UID Project in India - Some Possible Ramifications http://cis-india.org/internet-governance/blog/uid-in-india

      § Aadhaar Number vs the Social Security Number http://cis-india.org/internet-governance/blog/aadhaar-vs-social-security-number

      § Feedback to the NIA Bill http://cis-india.org/internet-governance/blog/cis-feedback-to-nia-bill

      § Unique ID System: Pros and Cons http://bit.ly/1jmxbZS

      § Submitted seven open letters to the Parliamentary Finance Committee on the UID covering the following aspects: SCOSTA Standards (http://bit.ly/1hq5Rqd), Centralized Database (http://bit.ly/1hsHJDg), Biometrics (http://bit.ly/196drke), UID Budget (http://bit.ly/1e4c2Op), Operational Design (http://bit.ly/JXR61S), UID and Transactions (http://bit.ly/1gY6B8r), and Deduplication (http://bit.ly/1c9TkSg)

      § Comments on Finance Committee Statements to Open Letters on Unique Identity: The Parliamentary Finance Committee responded to the open letters sent by CIS through an email on 12 October 2011. CIS has commented on the points raised by the Committee: http://bit.ly/1kz4H0F

      § Unique Identification Scheme (UID) & National Population Register (NPR), and Governance http://cis-india.org/internet-governance/blog/uid-and-npr-a-background-note

      § Financial Inclusion and the UID http://cis-india.org/internet-governance/privacy_uidfinancialinclusion

      § The Aadhaar Case http://cis-india.org/internet-governance/blog/the-aadhaar-case

      § Do we need the Aadhaar scheme http://bit.ly/1850wAz

      § 4 Popular Myths about UID http://bit.ly/1bWFoQg

      § Does the UID Reflect India? http://cis-india.org/internet-governance/blog/privacy/uid-reflects-india

      § Would it be a unique identity crisis? http://cis-india.org/news/unique-identity-crisis

      § UID: Nothing to Hide, Nothing to Fear? http://cis-india.org/internet-governance/blog/privacy/uid-nothing-to-hide-fear

      2. Right to Privacy and UID -

      The UID system has attracted many privacy concerns from NGOs, private individuals and others. Sharing one's information, especially fingerprints and retinal scans, with a government-controlled system that has not been vetted for security troubles many people. These issues are dealt with in the following articles.

      § India Fears of Privacy Loss Pursue Ambitious ID Project http://cis-india.org/news/india-fears-of-privacy-loss

      § Analysing the Right to Privacy and Dignity with Respect to the UID http://bit.ly/1bWFoQg

      § Analysing the Right to Privacy and Dignity with Respect to the UID http://cis-india.org/internet-governance/blog/privacy/privacy-uiddevaprasad

      § Supreme Court order is a good start, but is seeding necessary? http://cis-india.org/internet-governance/blog/supreme-court-order-is-a-good-start-but-is-seeding-necessary

      § Right to Privacy in Peril http://cis-india.org/internet-governance/blog/right-to-privacy-in-peril

      3. Data Flow in the UID -

      The articles below deal with the manner in which data is moved around and handled in the UID system in India.

      § UIDAI Practices and the Information Technology Act, Section 43A and Subsequent Rules http://cis-india.org/internet-governance/blog/uid-practices-and-it-act-sec-43-a-and-subsequent-rules

      § Data flow in the Unique Identification Scheme of India http://cis-india.org/internet-governance/blog/data-flow-in-unique-identification-scheme-of-india

      CIS's Position on Net Neutrality

      by Sunil Abraham last modified Dec 09, 2015 01:06 PM
      Contributors: pranesh
      As researchers committed to the principle of pluralism, we rarely produce institutional positions. This is also because we tend to update our positions based on research outputs. But the lack of clarity around our position on network neutrality has led some stakeholders to believe that we are advocating forbearance. Nothing could be further from the truth. Please see below for the current articulation of our common institutional position.

       

      1. Net Neutrality violations can potentially have multiple categories of harms — competition harms, free speech harms, privacy harms, innovation and ‘generativity’ harms, harms to consumer choice and user freedoms, and diversity harms thanks to unjust discrimination and gatekeeping by Internet service providers.

      2. Net Neutrality violations (including those forms of zero-rating that violate net neutrality) can also have different kinds of benefits — enabling the right to freedom of expression and the freedom of association, especially when access to communication and publishing technologies is increased; increased competition [by enabling product differentiation, zero-rating can potentially allow small ISPs to compete against market incumbents]; increased access [usually to a subset of the Internet] for those without any access because they cannot afford it; increased access [usually to a subset of the Internet] for those who do not yet see value in the Internet; and reduced payments by those who already have access to the Internet, especially if their usage is dominated by certain services and destinations.

      3. Given the magnitude and variety of potential harms, complete forbearance from all regulation is not an option for regulators nor is self-regulation sufficient to address all the harms emerging from Net Neutrality violations, since incumbent telecom companies cannot be trusted to effectively self-regulate. Therefore, CIS calls for the immediate formulation of Net Neutrality regulation by the telecom regulator [TRAI] and the notification thereof by the government [Department of Telecom of the Ministry of Information and Communication Technology]. CIS also calls for the eventual enactment of statutory law on Net Neutrality.  All such policy must be developed in a transparent fashion after proper consultation with all relevant stakeholders, and after giving citizens an opportunity to comment on draft regulations.

      4. Even though some of these harms may be large, CIS believes that a government cannot apply the precautionary principle in the case of Net Neutrality violations. Banning technical innovations and business model innovations is not an appropriate policy option. The regulation must toe a careful line to solve the optimization problem: refraining from over-regulation of ISPs and harming innovation at the carrier level (and benefits of net neutrality violations mentioned above) while preventing ISPs from harming innovation and user choice.  ISPs must be regulated to limit harms from unjust discrimination towards consumers as well as to limit harms from unjust discrimination towards the services they carry on their networks.

      5. Based on regulatory theory, we believe the ideal regulatory framework is one that is technologically neutral, that factors in differences in technological context as well as market realities and existing regulation, and that is able to respond to new evidence.

        This means that we need a framework that has some bright-line rules, but which allows for flexibility in determining the scope of exceptions and in the application of the rules. Candidate principles to be embodied in the regulation include: transparency, non-exclusivity, and limiting unjust discrimination.

      6. The harms emerging from walled gardens can be mitigated in a number of ways. On zero-rating, the form of regulation must depend on the specific model and the potential harms that result from it. Zero-rating can be: paid for by the end consumer, subsidized by ISPs, subsidized by content providers, subsidized by government, or a combination of these; deal-based, criteria-based, or government-imposed; ISP-imposed, or offered by the ISP and chosen by consumers; transparent and understood by consumers vs. non-transparent; based on content-type or agnostic to content-type; service-specific, service-class/protocol-specific, or service-agnostic; available on one ISP or on all ISPs. Zero-rating by a small ISP with 2% penetration will not have the same harms as zero-rating by the largest incumbent ISP. For service-agnostic / content-type-agnostic zero-rating, which Mozilla terms ‘equal rating’, CIS advocates no regulation.

      7. CIS believes that Net Neutrality regulation for mobile and fixed-line access must differ, recognizing the fundamental differences in the technologies.

      8. On specialized services, CIS believes that there should be logical separation, and that all details of such specialized services and their impact on the Internet must be made transparent to consumers (both individual and institutional), the general public, and the regulator. Further, such services should be available to the user only upon request and active choice, subject to the requirements that the service cannot reasonably be provided with the ‘best efforts’ delivery available over the Internet (and hence requires discriminatory treatment), and that the discriminatory treatment does not unduly harm the provision of the rest of the Internet to other customers.

      9. On incentives for telecom operators, CIS believes that the government should consider different models such as waiving contribution to the Universal Service Obligation Fund for prepaid consumers, and freeing up additional spectrum for telecom use without royalty using a shared spectrum paradigm, as well as freeing up more spectrum for use without a licence.

      10. On reasonable network management CIS still does not have a common institutional position.

      Smart Cities in India: An Overview

      by Vanya Rakesh last modified Jan 11, 2016 01:30 AM
      The Government of India is in the process of developing 100 smart cities in India which it sees as the key to the country's economic and social growth. This blog post gives an overview of the Smart Cities project currently underway in India. The smart cities mission in India is at a nascent stage and an evolving area for research. The Centre for Internet and Society will continue work in this area.

      Overview of the 100 Smart Cities Mission

      The Government of India announced its flagship programme, the 100 Smart Cities Mission, in 2014; it was launched in June 2015 to achieve urban transformation, drive economic growth and improve people's quality of life by enabling local area development and harnessing technology. Initially, the Mission aims to cover 100 cities across the country (shortlisted on the basis of a Smart Cities Proposal prepared by each city), and its duration will be five years (FY 2015-16 to FY 2019-20). The Mission may be continued thereafter in light of an evaluation by the Ministry of Urban Development (MoUD) and the incorporation of its learnings. The Mission focuses on area-based development in the form of redevelopment of existing spaces, or the development of new (greenfield) areas to accommodate the growing urban population, and on comprehensive planning to improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged. [1] On 27th August 2015, the Centre unveiled the 98 smart cities across India selected for the project. Across the selected cities, a population of 13 crore (35% of the urban population) will be included in the development plans. [2] The vision is to preserve India's traditional architecture, culture and ethnicity while implementing modern technology to make cities liveable, use resources in a sustainable manner and create an inclusive environment. [3]

      The promises of the Smart City mission include reduction of carbon footprint, adequate water and electricity supply, proper sanitation, including solid waste management, efficient urban mobility and public transport, affordable housing, robust IT connectivity and digitalization, good governance, citizen participation, security of citizens, health and education.

      Questions unanswered

      • Why and how was the Smart Cities project conceptualized in India? What was the need for such a project?
      • What was the role of the public/citizens at the ideation and conceptualization stage of the project?
      • Which actors from the government, private industry and civil society are involved in this mission? Though the Smart Cities Mission has been initiated by the Government of India under the Ministry of Urban Development, there is no clarity about the involvement of the Ministry's associated offices and departments.

      How are the Smart Cities being selected?

      The 100 cities were to be selected through a two-stage Smart Cities Challenge.[4] Stage I involved intra-state city selection on objective criteria to identify cities to compete in Stage II. In August 2015, the Ministry of Urban Development announced 100 'shortlisted' smart cities,[5] evaluated on parameters such as service levels, financial and institutional capacity, and past track record. The shortlisted cities are now competing in Stage II of the challenge, an all-India competition. For this crucial stage, each of the potential 100 smart cities is required to prepare a Smart City Proposal (SCP) stating the model chosen (retrofitting, redevelopment, greenfield development or a mix), along with a pan-city dimension with smart solutions. The proposal must also include suggestions collected through consultations with city residents and other stakeholders, along with a financing plan for the smart city, including the revenue model to attract private participation. The country saw wide participation from citizens voicing their aspirations and concerns regarding the smart city. The deadline for submission of the SCP is 15th December 2015, and proposals must be in consonance with evaluation criteria set by the MoUD on the basis of professional advice. [6] On this basis, 20 cities will be selected for the first year. According to the latest reports, the Centre is planning to fund only 10 cities in the first phase in case the proposals sent by the states do not match the expected quality standards or the states are unable to submit complete area-development plans by the deadline of 15th December 2015. [7]

      Questions unanswered

      • Who would be undertaking the task of evaluating and selecting the cities for this project?
      • What are the criteria for selection of a city to qualify in the first 20 (or 10, depending on the Central Government) for the first phase of implementation?

      How are the smart cities going to be funded?

      The Smart Cities Mission will be operated as a Centrally Sponsored Scheme (CSS), with the Central Government proposing financial support of Rs. 48,000 crore over five years, i.e. on average Rs. 100 crore per city per year. [8] Additional resources will have to be mobilized by the states/ULBs from external and internal sources. According to the scheme, once the list of shortlisted smart cities is finalized, Rs. 2 crore is to be disbursed to each city for proposal preparation.[9]
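      The headline allocation can be sanity-checked with simple arithmetic; the exact per-city average works out to Rs. 96 crore per year, which the guidelines round to "an average Rs. 100 crore":

```python
# Back-of-the-envelope check of the Mission's central allocation,
# using only the figures quoted above: Rs. 48,000 crore, 100 cities, 5 years.
TOTAL_CRORE = 48_000
CITIES = 100
YEARS = 5

per_city_per_year = TOTAL_CRORE / CITIES / YEARS
print(per_city_per_year)  # 96.0, i.e. roughly the "Rs. 100 crore per city per year" cited
```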

      According to Central Government estimates, around Rs. 4 lakh crore, accounting for 80% of the total spending on the mission, will be infused mainly through private investments and loans from multilateral institutions, among other sources. [10] For this purpose, the Government will approach the World Bank and the Asian Development Bank (ADB) for loans of £500 million and £1 billion respectively for 2015-20. If ADB approves the loan, it will be the bank's highest funding to India's urban sector so far.[11] Foreign Direct Investment regulations have been relaxed to invite foreign capital into the Smart Cities Mission. [12]

      Questions unanswered

      • The Government's notes on financing of the project mention PPPs for private funding and the leveraging of resources from internal and external sources. There is a lack of clarity on which external resources the Government has approached or will approach, and on the varied PPP agreements the Government has entered or plans to enter into for the purpose of private investment in the smart cities.

      How is the scheme being implemented?

      Under this scheme, each city is required to establish a Special Purpose Vehicle (SPV) having flexibility regarding planning, implementation, management and operations. The body will be headed by a full-time CEO, with nominees of Central Government, State Government and ULB on its Board. The SPV will be a limited company incorporated under the Companies Act, 2013 at the city-level, in which the State/UT and the Urban Local Body (ULB) will be the promoters having equity shareholding in the ratio 50:50. The private sector or financial institutions could be considered for taking equity stake in the SPV, provided the shareholding pattern of 50:50 of the State/UT and the ULB is maintained and the State/UT and the ULB together have majority shareholding and control of the SPV. Funds provided by the Government of India in the Smart Cities Mission to the SPV will be in the form of tied grant and kept in a separate Grant Fund.[13]

      For the purpose of implementation and monitoring of the projects, the MoUD has also established an Apex Committee and National Mission Directorate for National Level Monitoring[14], a State Level High Powered Steering Committee (HPSC) for State Level Monitoring[15] and a Smart City Advisory Forum at the City Level [16].

      Also, several consulting firms[17] have been assigned to the 100 cities to help them prepare action plans.[18] Some of them include CRISIL, KPMG, McKinsey, etc. [19]

      Questions unanswered

      • What policies and regulations have been put in place to govern the smart cities, apart from policies addressing issues of security, privacy, etc.?
      • What international and national standards will be adopted during development of the smart cities? Though the Bureau of Indian Standards is in the process of formulating standardized guidelines for smart cities in India,[20] there is a lack of clarity on the adoption of these national standards, as well as on the role of international standards such as those formulated by ISO.

      What is the role of Foreign Governments and bodies in the Smart cities mission?

      Ever since the Government's ambitious project was announced and cities were shortlisted, many countries across the globe have shown keen interest in helping specific shortlisted cities to become smart cities and are willing to invest financially. Countries like Sweden, Malaysia, the UAE and the USA have agreed to partner with India for the mission.[21] For example, the UK has partnered with the Government to develop three Indian cities: Pune, Amravati and Indore.[22] Israel's start-up city Tel Aviv has also entered into an agreement to help with urban transformation in the Indian cities of Pune, Nagpur and Nashik, to foster innovation and share its technical know-how.[23] France has expressed interest in Nagpur and Puducherry, while the United States is interested in Ajmer, Vizag and Allahabad. Spain's Barcelona Regional Agency has expressed interest in exchanging technology with Delhi. Apart from foreign governments, many organizations and multilateral agencies are also keen to partner with the Indian government and have offered financial assistance by way of loans. These include the UK government-owned Department for International Development, the German government's KfW development bank, the Japan International Cooperation Agency, the US Trade and Development Agency, the United Nations Industrial Development Organization and the United Nations Human Settlements Programme. [24]

      Questions unanswered

      • Do these governments or organizations have influence on any other component of the smart cities?
      • How much are the foreign governments and multilateral bodies spending on the respective cities?
      • What kind of technical know-how is being shared with the Indian government and cities?

      What is the way ahead?

      On the basis of the SCPs, the MoUD will evaluate the proposals, assess their credibility and select 20 smart cities out of the shortlisted ones for execution of the plan in the first phase. Each selected city will set up an SPV and receive funding from the Government.

      Questions unanswered

      • Will the deadline of submission of the Smart Cities Proposal be pushed back?
      • After the SCP is submitted on the basis of consultation with the citizens and public, will they be further involved in the implementation of the project and what will be their role?
      • How will the MoUD and other associated organizations and actors address the implementation realities of the project, such as land displacement, rehabilitation of slum dwellers, etc.?
      • How are ICT based systems going to be utilized to make the cities and the infrastructure "smart"?
      • How is the MoUD going to respond to the concerns and criticism emerging from various sections of the society, as being reflected in the news items?
      • How will the smart cities impact and integrate the existing laws, regulations and policies? Does the Government intend to use the existing legislations in entirety, or update and amend the laws for implementation of the Smart Cities Mission?


      [1] Smart Cities, Mission Statement and Guidelines, Ministry of Urban Development, Government of India, June 2015, Available at : http://smartcities.gov.in/writereaddata/SmartCityGuidelines.pdf

      [2] http://articles.economictimes.indiatimes.com/2015-08-27/news/65929187_1_jammu-and-kashmir-12-cities-urban-development-venkaiah-naidu

      [3] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

      [4] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

      [5] Full list : http://www.scribd.com/doc/276467963/Smart-Cities-Full-List

      [6] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

      [7] http://www.ibtimes.co.in/modi-govt-select-only-10-cities-under-smart-city-project-this-year-report-658888

      [8] http://smartcities.gov.in/writereaddata/Financing%20of%20Smart%20Cities.pdf

      [9] Smart Cities presentation by MoUD : http://smartcities.gov.in/writereaddata/Presentation%20on%20Smart%20Cities%20Mission.pdf

      [10] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/

      [11] http://indianexpress.com/article/india/india-others/funding-for-smart-cities-key-to-coffer-lies-outside-india/#sthash.5lnW9Jsq.dpuf

      [12] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

      [13] http://smartcities.gov.in/writereaddata/SPVs.pdf

      [14] http://smartcities.gov.in/writereaddata/National%20Level%20Monitoring.pdf

      [15] http://smartcities.gov.in/writereaddata/State%20Level%20Monitoring.pdf

      [16] http://smartcities.gov.in/writereaddata/City%20Level%20Monitoring.pdf

      [17] http://smartcities.gov.in/writereaddata/List_of_Consulting_Firms.pdf

      [18] http://pib.nic.in/newsite/PrintRelease.aspx?relid=128457

      [20] http://www.business-standard.com/article/economy-policy/in-a-first-bis-to-come-up-with-standards-for-smart-cities-115060400931_1.html

      [21] http://accommodationtimes.com/foreign-countries-have-keen-interest-in-development-of-smart-cities/

      [22] http://articles.economictimes.indiatimes.com/2015-11-20/news/68440402_1_uk-trade-three-smart-cities-british-deputy-high-commissioner

      [23] http://www.jpost.com/Business-and-Innovation/Tech/Tel-Aviv-to-help-India-build-smart-cities-435161?utm_campaign=shareaholic&utm_medium=twitter&utm_source=socialnetwork

      [24] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/#sthash.nCMxEKkc.dpuf

      ISO/IEC JTC 1/SC 27 Working Groups Meeting, Jaipur

      by Vanya Rakesh last modified Dec 21, 2015 02:38 AM
      I attended this event held from October 26 to 30, 2015 in Jaipur.

      The Bureau of Indian Standards (BIS), in collaboration with the Data Security Council of India (DSCI), hosted the global standards meeting, the ISO/IEC JTC 1/SC 27 Working Groups Meeting, at Hotel Marriott in Jaipur, Rajasthan from 26th to 30th October 2015, followed by a half-day conference on Friday, 30th October, on the importance of standards in the domain. The event witnessed experts from across the globe deliberating on forging international standards on privacy, security and risk management in IoT, cloud computing and many other contemporary technologies, along with updating existing standards. Under SC 27, five working groups held parallel meetings on their respective projects and study periods. The five working groups are as follows:

      1. WG 1: Information Security Management Systems;
      2. WG 2: Cryptography and Security Mechanisms;
      3. WG 3: Security Evaluation, Testing and Specification;
      4. WG 4: Security Controls and Services; and
      5. WG 5: Identity Management and Privacy Technologies.

      This key set of Working Groups (WGs) met in India for the first time. Professionals discussed and debated the development of international standards under each working group to address issues regarding security, identity management and privacy.

      CIS had the opportunity to attend meetings under Working Group 5, which held parallel sessions on several topics, namely:

      • Privacy-enhancing data de-identification techniques (ISO/IEC NWIP 20889): Data de-identification techniques are important when processing PII, enabling the benefits of data processing while maintaining compliance with regulatory requirements and the relevant ISO/IEC 29100 privacy principles. The selection, design, use and assessment of these techniques need to be performed appropriately in order to effectively address the risks of re-identification in a given context. There is thus a need to classify known de-identification techniques using standardized terminology, and to describe their characteristics, including the underlying technologies, the applicability of each technique to reducing the risk of re-identification, and the usability of the de-identified data. This is the main goal of this International Standard. Meetings were conducted to resolve comments sent by organisations across the world, review draft documents and agree on next steps.
      • A study period on a privacy engineering framework: This session deliberated upon contributions and terms of reference, and discussed the scope of the emerging field of privacy engineering. The session also reviewed important terms to be included in the standard and identified possible improvements to existing privacy impact assessment and management standards. It was identified that the goal of this standard is to integrate privacy into systems as part of the systems engineering process. Another concern raised was that the framework must be consistent with the privacy framework under ISO 29100 and the HL7 privacy and security standards.
      • A study period on user-friendly online privacy notices and consent: The basic purpose of this New Work Item Proposal is to assess, within WG 5, the viability of producing a guideline for PII controllers on providing easy-to-understand notices and consent procedures to PII principals. At the meeting, a brief overview of the contributions received was given, along with an assessment of the liaison with ISO/IEC JTC 1/SC 35 and other entities. This International Standard gives guidelines for the content and structure of online privacy notices, as well as of documents asking for consent to collect and process personally identifiable information (PII) from PII principals online, and is applicable to all situations where a PII controller or any other entity processing PII informs PII principals in an online context.
      • Other sessions under Working Group 5 covered Privacy Impact Assessment (ISO/IEC 29134), standardization in the area of biometrics and biometric information protection, and the Code of Practice for the protection of personally identifiable information, among others.
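      To make the de-identification discussion above concrete, here is a minimal Python sketch of two such techniques; the salt, field names and helper functions are invented for illustration, and as the draft itself stresses, real deployments must assess re-identification risk in context.

```python
import hashlib

# Illustrative sketch (not text from the standard) of two de-identification
# techniques of the kind ISO/IEC 20889 aims to classify: pseudonymization
# (replacing a direct identifier with a salted hash) and generalization
# (coarsening a quasi-identifier). The record fields and salt are invented.

SALT = b"per-dataset-secret-salt"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a truncated, salted SHA-256 digest."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

def generalize_age(age: int, bucket: int = 10) -> str:
    """Coarsen an exact age into a range to lower re-identification risk."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

record = {"name": "A. Example", "age": 34, "city": "Jaipur"}
deidentified = {
    "pseudonym": pseudonymize(record["name"]),
    "age_range": generalize_age(record["age"]),  # '30-39'
    "city": record["city"],
}
print(deidentified)
```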

      ISO/IEC JTC 1/SC 27 is a subcommittee on IT security techniques of the joint technical committee of the international standards bodies ISO and IEC, and conducts regular meetings across the world. JTC 1 has over 2,600 published standards developed under the broad umbrella of the committee and its 20 subcommittees. Draft International Standards adopted by the joint technical committee are circulated to the national bodies for voting; publication as an International Standard requires approval by at least 75% of the national bodies casting a vote. In India, the Bureau of Indian Standards (BIS) is the national standards body. Standards are formulated keeping in view national priorities, industrial development, technical needs, export promotion, health, safety, etc., and are harmonized with ISO/IEC standards (wherever they exist) to the extent possible, in order to facilitate the adoption of ISO/IEC standards by all segments of industry and business. BIS has been actively participating in the technical committee work of ISO/IEC: it is currently a participating member in 417 ISO and 74 IEC technical committees/subcommittees, an observer member in 248 ISO and 79 IEC technical committees/subcommittees, and holds secretarial responsibility for 2 technical committees and 6 subcommittees of ISO.

      The previous meeting was held in May 2015 in Malaysia, followed by this meeting in Jaipur in October 2015. 51 countries, India among them, play an active role as 'participating members', while a few countries are observing members. Participating countries also have the right to vote in all official ballots related to standards. Country representatives work on the preparation and development of the International Standards and provide feedback to their national organizations.

      There was an additional study group meeting on IoT to discuss comments on the previous drafts, suggest changes, review responses and identify gaps in SC 27's standards.

      On 30 October 2015, BIS and DSCI hosted a half-day international conference on cyber security and privacy standards, comprising keynotes and panel discussions and bringing together national and international experts to share experience and exchange views on cyber security techniques, the protection of data and privacy in international standards, and their growing importance in society. The conference looked at themes such as the role of standards in smart cities and responding to the challenges of investigating cyber crimes through standards. It was emphasised that in an increasingly digital world there is universal agreement on the need for cyber security: because the infrastructure is globally connected, cyber threats are also distributed and are not restricted by geographical boundaries. Hence the need for technical and policy solutions, along with standards, was highlighted for the future protection of a digital world now deeply embedded in life, business and government. Standards will help in setting up crucial infrastructure for data security and in building associated infrastructure along these lines.

      The importance of standards in the context of smart cities was also discussed by experts. Harmonization of regulations with standards must be examined, primarily by creating standards to which regulators can refer. Broadly, the challenges faced by smart cities are data security, privacy and the digital resilience of infrastructure; it was suggested that standards development for smart cities should begin with these areas. The ISO/IEC also has a working group and a strategic group focusing on smart cities. The risks of digitisation, networks, identity management, etc. must be examined in creating the standards.

      The next meeting has been scheduled for April 2016 in Tampa (USA).

      This meeting was a good opportunity to interact with experts from various parts of the world and to understand the working of ISO meetings, which are held two or three times a year. The Centre for Internet and Society will continue work in this area and become involved in the standard-setting process at future working group meetings.

      RTI PDF

      by Prasad Krishna last modified Dec 22, 2015 02:54 AM

      RTI.pdf — PDF document, 412 kB (422252 bytes)

      RTI response regarding the UIDAI

      by Vanya Rakesh last modified Dec 22, 2015 02:57 AM
      This is a response to the RTI filed regarding UIDAI

      The Supreme Court of India, by an order dated 11th August 2015, directed the Government to widely publicize in electronic and print media, including radio and television networks, that obtaining an Aadhaar card is not mandatory for citizens to avail of welfare schemes of the Government (until the matter is resolved). CIS filed an RTI to get information about the steps taken by the Government in this regard, the initiatives taken, and details of the expenditure incurred to publicize and inform the public that Aadhaar is not mandatory for availing of the Government's welfare schemes.

      Response: UIDAI informed us that an advisory was issued by UIDAI headquarters to all regional offices to comply with the order, along with several advertisement campaigns. The total cost incurred so far by UIDAI for this is Rs. 317.30 lakh.


      Download the Response

      Benefits and Harms of "Big Data"

      by Scott Mason — last modified Dec 30, 2015 02:48 AM
      Today the quantity of data being generated is expanding at an exponential rate. From smartphones and televisions, trains and airplanes, sensor-equipped buildings and even the infrastructures of our cities, data now streams constantly from almost every sector and function of daily life.

      Introduction

      In 2011 it was estimated that the quantity of data produced globally would surpass 1.8 zettabytes[1]. By 2013 that figure had grown to 4 zettabytes[2], and with the nascent development of the so-called 'Internet of Things' gathering pace, these trends are likely to continue. This expansion in the volume, velocity, and variety of data available[3], together with the development of innovative forms of statistical analytics, is generally referred to as "Big Data", though there is no single agreed-upon definition of the term. Although still in its initial stages, Big Data promises to provide new insights and solutions across a wide range of sectors, many of which would have been unimaginable even 10 years ago.

      Despite enormous optimism about the scope and variety of Big Data's potential applications, however, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits[4]. Most notable among these are concerns about the threats to privacy associated with the generation, collection and use of large quantities of data[5]. However, concerns have also been raised regarding, for example, the lack of transparency around the design of the algorithms used to process the data, over-reliance on Big Data analytics as opposed to traditional forms of analysis, and the creation of new digital divides, to name just a few.

      The existing literature on Big Data is vast, however many of the benefits and harms identified by researchers tend to relate to sector specific applications of Big Data analytics, such as predictive policing, or targeted marketing. Whilst these examples can be useful in demonstrating the diversity of Big Data's possible applications, it can nevertheless be difficult to gain an overall perspective of the broader impacts of Big Data as a whole. As such this article will seek to disaggregate the potential benefits and harms of Big Data, organising them into several broad categories, which are reflective of the existing scholarly literature.

      What are the potential benefits of Big Data?

      From politicians to business leaders, recent years have seen Big Data confidently proclaimed as a potential solution to a diverse range of problems, from world hunger and disease to government budget deficits and corruption. But if we look beyond the hyperbole and headlines, what do we really know about the advantages of Big Data? Given the current buzz surrounding it, the existing literature on Big Data is perhaps unsurprisingly vast, providing innumerable examples of its potential applications from agriculture to policing. Rather than try (and fail) to list the many possible applications of Big Data analytics across all sectors and industries, this article instead attempts to distil the various advantages of Big Data discussed within the literature into five broad categories: decision-making, efficiency and productivity, research and development, personalisation, and transparency, each of which is discussed separately below.

      Decision-Making

      Whilst data analytics has always been used to improve the quality and efficiency of decision-making processes, the advent of Big Data means that the areas of our lives in which data-driven decision-making plays a role are expanding dramatically, as businesses and governments become better able to exploit new data flows. Furthermore, the real-time and predictive nature of the decision-making made possible by Big Data increasingly allows these decisions to be automated. As a result, Big Data provides governments and businesses with unprecedented opportunities to create new insights and solutions, becoming more responsive to new opportunities and better able to act quickly - and in some cases preemptively - to deal with emerging threats.

      This ability of Big Data to speed up and improve decision-making processes can be applied across all sectors, from transport to healthcare, and is often cited within the literature as one of the key advantages of Big Data. Joh, for example, highlights the increased use of data-driven predictive analysis by police forces to forecast the times and geographical locations in which crimes are most likely to occur. This allows forces to redistribute their officers and resources according to anticipated need, and in certain cities has been highly effective in reducing crime rates[6]. Raghupathi meanwhile cites the case of healthcare, where predictive modelling driven by Big Data is being used to proactively identify patients who could benefit from preventative care or lifestyle changes[7].

      One area in particular where the decision-making capabilities of Big Data are having a significant impact is in the field of risk management [8]. For instance, Big Data can allow companies to map their entire data landscape to help detect sensitive information, such as 16 digit numbers - potentially credit card data - which are not being stored according to regulatory requirements and intervene accordingly. Similarly, detailed analysis of data held about suppliers and customers can help companies to identify those in financial trouble, allowing them to act quickly to minimize their exposure to any potential default[9].
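      The "16 digit numbers" scan described above can be sketched in a few lines; the function names and sample text below are invented for illustration, and production data-discovery tools are of course far more elaborate:

```python
import re

# A minimal sketch of the kind of scan described above: flagging 16-digit
# sequences in free text and using the Luhn checksum to filter out numbers
# that cannot be valid card numbers.

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_pans(text: str) -> list[str]:
    """Return 16-digit runs in the text that also pass the Luhn check."""
    return [m for m in re.findall(r"\b\d{16}\b", text) if luhn_valid(m)]

sample = "order 1234567812345678 paid with 4111111111111111"
print(find_candidate_pans(sample))  # ['4111111111111111']
```

The checksum filter matters because most random 16-digit strings (order numbers, timestamps) fail it, drastically reducing false positives.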

      Efficiency and Productivity

      In an era when many governments and businesses are facing enormous pressures on their budgets, the desire to reduce waste and inefficiency has never been greater. By providing the information and analysis needed for organisations to better manage and coordinate their operations, Big Data can help to alleviate such problems, leading to the better utilization of scarce resources and a more productive workforce [10].

      Within the literature such efficiency savings are most commonly discussed in relation to reductions in energy consumption [11]. For example, a report published by Cisco notes how the city of Oslo has managed to reduce the energy consumption of street-lighting by 62 percent through the use of smart solutions driven by Big Data[12]. Increasingly, however, statistical models generated by Big Data analytics are also being utilized to identify potential efficiencies in sourcing, scheduling and routing in a wide range of sectors from agriculture to transport. For example, Newell observes how many local governments are generating large databases of scanned license plates through the use of automated license plate recognition (ALPR) systems, which government agencies can then use to help improve local traffic management and ease congestion[13].

      Commonly, these efficiency savings are made possible only by the often counter-intuitive insights generated by Big Data models. For example, whilst a human analyst planning a truck route would always tend to avoid 'drive-bys' - bypassing one stop to reach a third before doubling back - Big Data insights can sometimes show such routes to be more efficient. Savings of this kind would in all likelihood have gone unrecognised by a human analyst not trained to look for such patterns[14].

      Research, Development, and Innovation

      Perhaps one of the most intriguing benefits of Big Data is its potential use in the research and development of new products and services. As is highlighted throughout the literature, Big Data can help businesses to gain an understanding of how others perceive their products or identify customer demand and adapt their marketing or indeed the design of their products accordingly[15]. Analysis of social media data, for instance, can provide valuable insights into customers' sentiments towards existing products as well as discover demands for new products and services, allowing businesses to respond more quickly to changes in customer behaviour[16].

      In addition to market research, Big Data can also be used during the design and development stage of new products, for example by helping to test thousands of different variations of computer-aided designs in an expedient and cost-effective manner. In doing so, businesses and designers are better able to assess how minor changes to a product's design may affect its cost and performance, thereby improving the cost-effectiveness of the production process and increasing profitability.

      Personalisation

      For many consumers, perhaps the most familiar application of Big Data is its ability to help tailor products and services to meet their individual preferences. This phenomenon is most immediately noticeable in online services such as Netflix, where data about users' activities and preferences is collated and analysed to provide a personalised service, for example by suggesting films or television shows the user may enjoy based upon their previous viewing history[17]. By enabling companies to generate in-depth profiles of their customers, Big Data allows businesses to move past the 'one size fits all' approach to product and service design, and instead quickly and cost-effectively adapt their services to better meet customer demand.
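      As a toy illustration of this kind of profile-based personalisation (not Netflix's actual algorithm; the users and titles below are invented), one can recommend unseen titles weighted by how much other users' viewing histories overlap with the target user's:

```python
from collections import Counter

# Invented viewing histories for three hypothetical users.
histories = {
    "alice": {"Drama A", "Thriller B", "Comedy C"},
    "bob":   {"Thriller B", "Comedy C", "Sci-Fi D"},
    "carol": {"Drama A", "Documentary E"},
}

def recommend(user: str, k: int = 2) -> list[str]:
    """Suggest up to k unseen titles, scored by overlap with other users."""
    seen = histories[user]
    scores = Counter()
    for other, titles in histories.items():
        if other == user:
            continue
        overlap = len(seen & titles)   # similarity = number of shared titles
        for title in titles - seen:    # only recommend what the user hasn't seen
            scores[title] += overlap
    return [title for title, _ in scores.most_common(k)]

print(recommend("alice"))  # ['Sci-Fi D', 'Documentary E']
```

Real systems replace the overlap count with learned similarity measures over millions of users, but the principle of scoring unseen items by neighbour similarity is the same.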

      In addition to service personalisation, similar profiling techniques are increasingly being utilized in sectors such as healthcare. Here data about a patient's medical history, lifestyle, and even their gene expression patterns are collated, generating a detailed medical profile which can then be used to tailor treatments to meet their specific needs[18]. Targeted care of this sort can help to reduce costs, for example by avoiding over-prescription, and may also improve the effectiveness of treatments and so ultimately their outcomes.

      Transparency

      If 'knowledge is power', then - so say Big Data enthusiasts - advances in data analytics and the quantity of data available can give consumers and citizens the knowledge to hold governments and businesses to account, as well as to make more informed choices about the products and services they use. Nevertheless, data (even lots of it) does not necessarily equal knowledge. In order for citizens and consumers to fully utilize the vast quantities of data available to them, they must first have some way to make sense of it. For some, Big Data analytics provides just such a solution, allowing users to easily search, compare and analyze available data, thereby helping to challenge existing information asymmetries and make business and government more transparent[19].

      In the private sector, Big Data enthusiasts have claimed that Big Data holds the potential to ensure complete transparency of supply chains, enabling concerned consumers to trace the source of their products, for example to ensure that they have been sourced ethically[20]. Furthermore, Big Data is now making accessible information which was previously unavailable to average consumers, challenging companies whose business models rely on the maintenance of information asymmetries. The real-estate industry, for example, relies heavily upon its ability to acquire and control proprietary information, such as transaction data, as a competitive asset. In recent years, however, many online services have allowed consumers to effectively bypass agents, by providing alternative sources of real-estate data and enabling prospective buyers and sellers to communicate directly with each other[21]. By providing consumers with access to large quantities of actionable data, Big Data can thus help to eliminate established information asymmetries, allowing consumers to make better and more informed decisions about the products they buy and the services they enlist.

      This potential to harness the power of Big Data to improve transparency and accountability can also be seen in the public sector, with many scholars suggesting that greater access to government data could help to stem corruption and make politics more accountable. This view was recently endorsed by the UN, which highlighted the potential uses of Big Data to improve policymaking and accountability in a report published by the Independent Expert Advisory Group on the "Data Revolution for Sustainable Development". In the report, the experts emphasize the potential of what they term the 'data revolution' to help achieve sustainable development goals, for example by helping civil society groups and individuals to 'develop data literacy and help communities and individuals to generate and use data, to ensure accountability and make better decisions for themselves'[22].

      What are the potential harms of Big Data?

      Whilst it is often easy to be seduced by the utopian visions of Big Data evangelists, in order to ensure that Big Data can deliver the types of far-reaching benefits its proponents promise, it is vital that we are also sensitive to its potential harms. Within the existing literature, discussions about the potential harms of Big Data are, perhaps understandably, dominated by concerns about privacy. Yet as Big Data has begun to play an increasingly central role in our daily lives, a broad range of new threats has begun to emerge, including issues related to security and scientific epistemology, as well as problems of marginalisation, discrimination and transparency; each of these is discussed separately below.

      Privacy

      By far the biggest concern raised by researchers in relation to Big Data is its risk to privacy. Given that by its very nature Big Data requires extensive and unprecedented access to large quantities of data, it is hardly surprising that many of the benefits outlined above exist, in one way or another, in tension with considerations of privacy. Although many scholars have called for a broader debate on the effects of Big Data on ethical best practice[23], a comprehensive exploration of the complex debates surrounding the ethical implications of Big Data goes far beyond the scope of this article. Instead we will simply attempt to highlight some of the major areas of concern expressed in the literature, including Big Data's effects on established principles of privacy and its implications for the suitability of existing regulatory frameworks governing privacy and data protection.

      1. Re-identification

      Traditionally many Big Data enthusiasts have used de-identification - the process of anonymising data by removing personally identifiable information (PII) - as a way of justifying mass collection and use of personal data. By claiming that such measures are sufficient to ensure the privacy of users, data brokers, companies and governments have sought to deflect concerns about the privacy implications of Big Data, and suggest that it can be compliant with existing regulatory and legal frameworks on data protection.

      However, many scholars remain concerned about the limits of anonymisation. As Tene and Polonetsky observe, 'Once data - such as a clickstream or a cookie number - are linked to an identified individual, they become difficult to disentangle'[24]. They cite the example of University of Texas researchers Narayanan and Shmatikov, who were able to successfully re-identify anonymised Netflix user data by cross-referencing it with data stored in a publicly accessible online database. As Narayanan and Shmatikov themselves explained, 'once any piece of data has been linked to a person's real identity, any association between this data and a virtual identity breaks anonymity of the latter'[25]. The quantity and variety of datasets which Big Data analytics has made associable with individuals is therefore expanding the scope of the types of data that can be considered PII, as well as undermining claims that de-identification alone is sufficient to ensure privacy for users.
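      The mechanics of such a linkage attack can be sketched in a few lines. The sketch below is a simplified illustration of the general approach, not the researchers' actual method: all records are synthetic, and the field names (zip, birth, sex as quasi-identifiers) are assumptions chosen for illustration. Joining an 'anonymised' dataset to a public auxiliary dataset on shared quasi-identifiers is enough to restore names.

```python
# Illustrative linkage attack on "anonymised" data. Synthetic records only;
# the quasi-identifier fields are hypothetical.

# "Anonymised" dataset: direct identifiers removed, quasi-identifiers retained.
anonymised = [
    {"zip": "02139", "birth": "1985-04-12", "sex": "F", "ratings": ["Film A", "Film B"]},
    {"zip": "90210", "birth": "1972-11-03", "sex": "M", "ratings": ["Film C"]},
]

# Auxiliary public dataset (e.g. a voter roll) that still contains names.
public = [
    {"name": "Alice Smith", "zip": "02139", "birth": "1985-04-12", "sex": "F"},
    {"name": "Bob Jones", "zip": "90210", "birth": "1972-11-03", "sex": "M"},
]

def reidentify(anon_rows, public_rows, keys=("zip", "birth", "sex")):
    """Join the two datasets on shared quasi-identifiers to recover names."""
    index = {tuple(p[k] for k in keys): p["name"] for p in public_rows}
    return [
        {"name": index[tuple(a[k] for k in keys)], **a}
        for a in anon_rows
        if tuple(a[k] for k in keys) in index
    ]

matches = reidentify(anonymised, public)
for m in matches:
    print(m["name"], "->", m["ratings"])
```

      In practice the auxiliary data need not match exactly; approximate matching on sparse attributes (such as movie ratings and dates) suffices, which is what makes de-identification so fragile.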

      2. Privacy Frameworks Obsolete?

      In recent decades, privacy and data protection frameworks based upon a number of so-called 'privacy principles' have formed the basis of most attempts to encourage greater consideration of privacy issues online[26]. For many, however, the emergence of Big Data has raised questions about the extent to which these 'principles of privacy' remain workable in an era of ubiquitous data collection.

      Collection Limitation and Data Minimization : Big Data by its very nature requires the collection and processing of very large and very diverse data sets. Unlike other forms of scientific research and analysis, which use sampling techniques to identify and target the types of data most useful to the research question, Big Data seeks to gather as much data as possible in order to achieve full resolution of the phenomenon being studied - a task made much easier in recent years by the proliferation of internet-enabled devices and the growth of the Internet of Things. This goal of comprehensive coverage exists in tension, however, with the key privacy principles of collection limitation and data minimization, which seek to limit both the quantity and variety of data collected about an individual to the absolute minimum[27].

      Purpose Limitation: Since the utility of a given dataset is often not easily identifiable at the time of collection, datasets are increasingly being processed several times for a variety of different purposes. Such practices have significant implications for the principle of purpose limitation, which aims to ensure that organizations are open about their reasons for collecting data, and that they use and process the data for no other purpose than those initially specified [28].

      Notice and Consent: The principles of notice and consent have formed the cornerstones of attempts to protect privacy for decades. Nevertheless, in an era of ubiquitous data collection, the notion that an individual must provide explicit consent for the collection and processing of their data seems increasingly antiquated, a relic of an age when it was possible to keep track of one's personal data relationships and transactions. Today, as data streams become more complex, some have begun to question the suitability of consent as a mechanism to protect privacy. In particular, commentators have noted that, given the complexity of data flows in the digital ecosystem, most individuals are not well placed to make truly informed decisions about the management of their data[29]. In one study, researchers demonstrated that creating a mere perception of control made users more likely to share their personal information, regardless of whether they had actually gained any control[30]. As such, for many, the garnering of consent is increasingly becoming a symbolic box-ticking exercise which achieves little more than to irritate and inconvenience customers whilst imposing a burden on companies and a hindrance to growth and innovation[31].

      Access and Correction: The principle of 'access and correction' refers to the rights of individuals to obtain personal information being held about them, as well as the right to erase, rectify, complete or otherwise amend that data. Aside from the well-documented problems with privacy self-management, for many the real-time nature of data generation and analysis in an era of Big Data poses a number of structural challenges to this principle. As one commentator observes, 'a good amount of data is not pre-processed in a similar fashion as traditional data warehouses. This creates a number of potential compliance problems such as difficulty erasing, retrieving or correcting data. A typical big data system is not built for interactivity, but for batch processing. This also makes the application of changes on a (presumably) static data set difficult'[32].

      Opt In-Out: The notion that the provision of data should be a matter of personal choice, and that the individual can, if they choose, decide to 'opt out' of data collection, for example by ceasing use of a particular service, is an important component of privacy and data protection frameworks. The proliferation of internet-enabled devices, their integration into the built environment, and the real-time nature of data collection and analysis, however, are beginning to undermine this concept. For many critics of Big Data, the ubiquity of data collection points, as well as the compulsory provision of data as a prerequisite for access to many key online services, is making opting out of data collection not only impractical but in some cases impossible[33].

      3. "Chilling Effects"

      For many scholars, the normalization of large-scale data collection is steadily producing a widespread perception of ubiquitous surveillance amongst users. Drawing upon Foucault's analysis of Jeremy Bentham's panopticon and the disciplinary effects of surveillance, they argue that this perception of permanent visibility can cause users to sub-consciously 'discipline' and self-regulate their own behavior, fearful of being targeted or identified as 'abnormal'[34]. As a result, the pervasive nature of Big Data risks generating a 'chilling effect' on user behavior and free speech.

      Although the notion of "chilling effects" is quite prevalent throughout the academic literature on surveillance and security, the difficulty of quantifying the perception and effects of surveillance on online behavior means that there have been only a limited number of empirical studies of this phenomenon, and none directly related to the chilling effects of Big Data. One study, conducted by researchers at MIT, sought to assess the impact of Edward Snowden's revelations about NSA surveillance programs on Google search trends. Nearly 6,000 participants were asked to individually rate certain keywords for their perceived degree of privacy sensitivity along multiple dimensions. Using Google's own publicly available search data, the researchers then analyzed search patterns for these terms before and after the Snowden revelations. In doing so they were able to demonstrate a reduction of around 2.2% in searches for those terms deemed to be most sensitive in nature. According to the researchers, the results 'suggest that there is a chilling effect on search behaviour from government surveillance on the Internet'[35]. Although this study focussed on the effects of government surveillance, for many privacy advocates the growing pervasiveness of Big Data risks generating similar results[36].

      4. Dignitary Harms of Predictive Decision-Making

      In addition to its potentially chilling effects on free speech, the automated nature of Big Data analytics also possesses the potential to inflict so-called 'dignitary harms' on individuals, by revealing insights about them that they would have preferred to keep private[37].

      In an infamous example, following a shopping trip to the retail chain Target, a young girl began to receive mail at her father's house advertising products for babies including diapers, clothing, and cribs. In response, her father complained to the management of the company, incensed by what he perceived to be the company's attempts to "encourage" pregnancy in teens. A few days later, however, the father was forced to contact the store again to apologise, after his daughter confessed to him that she was indeed pregnant. It was later revealed that Target regularly analyzed the sale of key products such as supplements or unscented lotions in order to generate "pregnancy prediction" scores, which could be used to assess the likelihood that a customer was pregnant and to target them with relevant offers[38]. Such cases, though anecdotal, illustrate how Big Data, if not adopted sensitively, can lead to potentially embarrassing information about users being made public.

      Security

      In relation to cybersecurity, Big Data can be viewed to a certain extent as a double-edged sword. On the one hand, the unique capabilities of Big Data analytics can provide organizations with new and innovative methods of enhancing their cybersecurity systems. On the other, however, the sheer quantity and diversity of data emanating from a variety of sources creates its own security risks.

      5. "Honey-Pot"

      The larger the quantity of confidential information stored by companies in their databases, the more attractive those databases become to potential hackers.

      6. Data Redundancy and Dispersion

      Inherent to Big Data systems is the duplication of data across many locations in order to optimize query processing. Data is dispersed across a wide range of repositories on different servers, in different parts of the world. As a result, it may be difficult for organizations to accurately locate and secure every item of personal information.

      Epistemological and Methodological Implications

      In 2008 Chris Anderson infamously proclaimed the 'end of theory'. Writing for Wired Magazine, Anderson predicted that the coming age of Big Data would create a 'deluge of data' so large that the scientific methods of hypothesis, sampling and testing would be rendered 'obsolete' [39]. 'There is now a better way' Anderson insisted, 'Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot'[40].

      In spite of these bold claims, however, many theorists remain skeptical of Big Data's methodological benefits and have expressed concern about its potential implications for conventional scientific epistemologies. For them, the increased prominence of Big Data analytics in science does not signal a paradigmatic transition to a more enlightened data-driven age, but a hollowing out of the scientific method and an abandonment of causal knowledge in favor of shallow correlative analysis[41].

      7. Obfuscation

      Although Big Data analytics can be utilized to study almost any phenomenon for which enough data exists, many theorists have warned that simply because Big Data analytics can be used does not necessarily mean that they should be[42]. Bigger is not always better, and indeed the sheer quantity of data made available to users may in fact obscure certain insights. Whereas traditional scientific methods use sampling techniques to identify the most important and relevant data, Big Data by contrast encourages the collection and use of as much data as possible, in an attempt to attain full resolution of the phenomenon being studied. However, not all data is equally useful, and simply inputting as much data as possible into an algorithm is unlikely to produce accurate results and may instead obscure key insights.

      Indeed, whilst the promise of automation is central to a large part of Big Data's appeal, researchers observe that most Big Data analysis still requires an element of human judgement to filter out the 'good' data from the 'bad', and to decide what aspects of the data are relevant to the research objectives. As Boyd and Crawford observe, 'in the case of social media data, there is a "data cleaning" process: making decisions about what attributes and variables will be counted, and which will be ignored. This process is inherently subjective'[43].

      Google's Flu Trends project provides an illustrative example of how Big Data's tendency to maximise data inputs can produce misleading results. Designed to accurately track flu outbreaks based upon data collected from Google searches, the project was initially proclaimed a great success. Gradually, however, it became apparent that the results being produced were not reflective of the reality on the ground. It was later discovered that the algorithms used by the project to interpret search terms were insufficiently accurate to filter out anomalies in searches, such as those related to the 2009 H1N1 flu pandemic. As such, despite the great promise of Big Data, scholars insist it remains critical to be mindful of its limitations, remain selective about the types of data included in the analysis, and exercise caution and intuition whenever interpreting its results[44].

      8. "Apophenia"

      In complete contrast to the problem of obfuscation, Boyd and Crawford observe how Big Data may also lead to the practice of 'apophenia', a phenomenon whereby analysts interpret patterns where none exist, 'simply because enormous quantities of data can offer connections that radiate in all directions'[45]. David Leinweber, for example, demonstrated that data mining techniques could show strong but ultimately spurious correlations between changes in the S&P 500 stock index and butter production in Bangladesh[46]. Such spurious correlations between disparate and unconnected phenomena are a common feature of Big Data analytics and risk leading to unfounded conclusions being drawn from the data.

      Although Leinweber's primary focus of analysis was the use of Data-Mining technologies, his observations are equally applicable to Big Data. Indeed the tendency amongst Big Data analysts to marginalise the types of domain specific expertise capable of differentiating between relevant and irrelevant correlations in favour of algorithmic automation can in many ways be seen to exacerbate many of the problems Leinweber identified.
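      The statistical core of apophenia is easy to reproduce. The self-contained sketch below (synthetic data only, not Leinweber's actual datasets) generates fifty independent random walks and reports the strongest pairwise Pearson correlation among them; despite there being no connection whatsoever between the series, some pair will typically correlate strongly by chance alone.

```python
# A minimal sketch of 'apophenia': given enough unrelated series, some pair
# will correlate strongly by pure chance. Standard library only.
import random
from itertools import combinations

random.seed(0)  # fixed seed so the run is reproducible

def random_walk(n=100):
    """Generate one random walk: cumulative sum of Gaussian steps."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

series = [random_walk() for _ in range(50)]  # 50 mutually independent walks
best = max(abs(pearson(a, b)) for a, b in combinations(series, 2))
print(f"strongest 'correlation' among unrelated series: {best:.2f}")
```

      With 50 series there are 1,225 pairs to compare, so the maximum correlation is almost always high - a toy version of the multiple-comparisons problem that domain expertise and pre-registered hypotheses are meant to guard against.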

      9. From Causation to Correlation

      Closely related to the problem of apophenia is the concern that Big Data's emphasis on correlative analysis risks leading to an abandonment of the pursuit of causal knowledge in favour of shallow descriptive accounts of scientific phenomena[47].

      For many Big Data enthusiasts, 'correlation is enough', producing inherently meaningful results interpretable by anyone without the need for pre-existing theory or hypothesis. Whilst proponents of Big Data claim that such an approach allows them to produce objective knowledge, cleansed of any philosophical or ideological commitment, others argue that by neglecting the knowledge of domain experts, Big Data risks generating a shallow type of analysis, since it fails to adequately embed observations within a pre-existing body of knowledge.

      This commitment to an empiricist epistemology and methodological monism is particularly problematic in the context of studies of human behaviour, where actions cannot be calculated and anticipated using quantifiable data alone. In such instances, a certain degree of qualitative analysis of social, historical and cultural variables may be required in order to make the data meaningful by embedding it within a broader body of knowledge. The abstract and intangible nature of these variables requires a great deal of expert knowledge and interpretive skill to comprehend. It is therefore vital that the knowledge of domain specific experts is properly utilized to help 'evaluate the inputs, guide the process, and evaluate the end products within the context of value and validity'[48].

      As such, although Big Data can provide unrivalled accounts of "what" people do, it fundamentally fails to deliver robust explanations of "why" people do it. This problem is especially critical in the case of public policy-making since without any indication of the motivations of individuals, policy-makers can have no basis upon which to intervene to incentivise more positive outcomes.

      Digital Divides and Marginalisation

      Today data is a highly valuable commodity. The market for data in and of itself has been steadily growing in recent years, with the business models of many online services now formulated around the strategy of harvesting data from users[49]. As with the commodification of anything, however, inequalities can easily emerge between the haves and have-nots. Whilst the quantity of data currently generated on a daily basis is many times greater than at any other point in human history, the vast majority of this data is owned and tightly controlled by a very small number of technology companies and data brokers. Although in some instances limited access may be granted to university researchers or to those willing and able to pay a fee, in many cases data remains jealously guarded by data brokers, who view it as an important competitive asset. As a result, these data brokers and companies risk becoming the gatekeepers of the Big Data revolution, adjudicating not only over who can benefit from Big Data, but also in what context and under what terms. For many, such inconsistencies and inequalities in access to data raise serious doubts about just how widely distributed the benefits of Big Data will be. Others go even further, claiming that far from helping to alleviate inequalities, the advent of Big Data risks exacerbating already significant digital divides as well as creating new ones[50].

      10. Anti-Competitive Practices

      As a result of the reluctance of large companies to share their data, there increasingly exists a divide in access between small start-up companies and their larger and more established competitors. New entrants to the marketplace may thus be at a competitive disadvantage, unable to harness the analytical power of the vast quantities of data available to large companies by virtue of their privileged market position. Since the performance of many online services is today often intimately connected with the collation and use of user data, some researchers have suggested that this inequity in access to data could lead to a reduction in competition in the online marketplace, and ultimately therefore to less innovation and choice for consumers[51].

      As a result, researchers including Nathan Newman of New York University have called for a reassessment and reorientation of anti-trust investigations, and of regulatory approaches more generally, 'to focus on how control of personal data by corporations can entrench monopoly power and harm consumer welfare in an economy shaped increasingly by the power of "big data"'[52]. Similarly, a report produced by the European Data Protection Supervisor concluded that 'The scope for abuse of market dominance and harm to the consumer through refusal of access to personal information and opaque or misleading privacy policies may justify a new concept of consumer harm for competition enforcement in digital economy'[53].

      11. Research

      From a research perspective, barriers to access caused by proprietary control of datasets are problematic, since certain types of research could become restricted to those privileged enough to be granted access to data. Meanwhile, those denied access are left not only incapable of conducting similar research projects, but also unable to test, verify or reproduce the findings of those who do. The existence of such gatekeepers may also lead to reluctance on the part of researchers to undertake research critical of the companies upon whom they rely for access, leading to a chilling effect on the types of research conducted[54].

      12. Inequality

      Whilst bold claims are regularly made about the potential of Big Data to deliver economic development and generate new innovations, some critics remain concerned about how equally the benefits of Big Data will be distributed, and about the effects this could have on already established digital divides[55].

      Firstly, whilst the power of Big Data is already being utilized effectively by most economically developed nations, the same cannot necessarily be said for many developing countries. A combination of lower levels of connectivity, poor information infrastructure, underinvestment in information technologies and a lack of skills and trained personnel make it far more difficult for the developing world to fully reap the rewards of Big Data. As a consequence the Big Data revolution risks deepening global economic inequality as developing countries find themselves unable to compete with data rich nations whose governments can more easily exploit the vast quantities of information generated by their technically literate and connected citizens.

      Likewise, to the extent that Big Data analytics is playing a greater role in public policy-making, the capacity of individuals to generate large quantities of data could affect the extent to which they can provide inputs into the policy-making process. In a country such as India, for example, where there exist high levels of inequality in access to information and communication technologies and the internet, there remain large discrepancies in the quantities of data produced by individuals. As a result, there is a risk that those who lack access to the means of producing data will be disenfranchised, as policy-making processes become configured to accommodate the needs and interests of a privileged minority[56].

      Discrimination

      13. Injudicious or Discriminatory Outcomes

      Big Data presents the opportunity for governments, businesses and individuals to make better, more informed decisions at a much faster pace. Whilst this can evidently provide innumerable opportunities to increase efficiency and mitigate risk, by removing human intervention and oversight from the decision-making process, Big Data analysts run the risk of becoming blind to unfair or injudicious results generated by skewed or discriminatory programming of the algorithms.

      There currently exists a large number of automated decision-making algorithms in operation across a broad range of sectors, including perhaps most notably those used to assess an individual's suitability for insurance or credit. In either case, faults in the programming or discriminatory assessment criteria can have potentially damaging implications for the individual, who may as a result be unable to attain credit or insurance. This concern with the potentially discriminatory aspects of Big Data is prevalent throughout the literature, and real-life examples have been identified by researchers in a large number of major sectors in which Big Data is currently being used[57].

      Yu, for instance, cites the case of the insurance company Progressive, which required its customers to install 'Snapshot' - a small monitoring device - in their cars in order to receive its best rates. The device tracked and reported the customer's driving habits, and offered discounts to those drivers who drove infrequently, braked smoothly, and avoided driving at night - behaviors that correlate with a lower risk of future accidents. Although this form of price differentiation provided incentives for customers to drive more carefully, it also had the unintended consequence of unfairly penalizing late-night shift workers. As Yu observes, 'for late night shift-workers, who are disproportionately poorer and from minority groups, this differential pricing provides no benefit at all. It categorizes them as similar to late-night party-goers, forcing them to carry more of the cost of the intoxicated and other irresponsible driving that happens disproportionately at night'[58].

      In another example, it is noted how Big Data is increasingly being used to evaluate applicants for entry-level service jobs. One method of evaluating applicants is by the length of their commute - the rationale being that employees with shorter commutes are statistically more likely to remain in the job longer. However, since most service jobs are typically located in town centers and since poorer neighborhoods tend to be those on the outskirts of town, such criteria can have the effect of unfairly disadvantaging those living in economically deprived areas. Such metrics of evaluation can therefore also unintentionally act to reinforce existing social inequalities by making it more difficult for economically disadvantaged communities to work their way out of poverty[59].
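      How a facially neutral screening rule can produce a disparate impact can be illustrated with a toy calculation. The sketch below uses entirely invented applicant data and a hypothetical 45-minute commute cutoff: the rule never references group membership, yet because the two groups' commute distributions differ, their selection rates diverge sharply.

```python
# Hypothetical sketch of proxy discrimination: a screening rule rejecting
# long commutes, applied to synthetic applicants. Group B lives
# disproportionately on the outskirts, so the facially neutral rule
# rejects its members at a far higher rate.

applicants = (
    [{"group": "A", "commute_min": c} for c in [10, 20, 25, 30, 35, 40, 50, 55]]
    + [{"group": "B", "commute_min": c} for c in [15, 40, 50, 55, 60, 65, 70, 75]]
)

def passes_screen(applicant, cutoff=45):
    """The rule itself never looks at group membership."""
    return applicant["commute_min"] <= cutoff

def selection_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(passes_screen(a) for a in members) / len(members)

for g in ("A", "B"):
    print(f"group {g}: selection rate {selection_rate(g):.0%}")
# Group A passes at 75%, group B at only 25% under the same neutral rule.
```

      Auditing for outcomes like this - comparing selection rates across groups rather than inspecting the rule's inputs - is one practical response to the accountability concerns raised below.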

      14. Lack of Algorithmic Transparency

      If data is indeed the 'oil of the 21st century'[60], then algorithms are very much the engines driving innovation and economic development. For many companies, the quality of their algorithms is often a crucial factor in providing a market advantage over their competitors. Given their importance, the secrets behind the programming of algorithms are often closely guarded by companies, typically classified as trade secrets and protected by intellectual property rights. Whilst companies may claim that such secrecy is necessary to encourage market competition and innovation, many scholars are becoming increasingly concerned about the lack of transparency surrounding the design of these most crucial tools.

      In particular, there is a growing sentiment amongst many researchers that there currently exists a chronic lack of accountability and transparency in terms of how Big Data algorithms are programmed and what criteria are used to determine outcomes[61]. As Frank Pasquale observed,

      'hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy. Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy'[62].

      As such, without increased transparency in algorithmic design, instances of Big Data discrimination may go unnoticed, as analysts are unable to access the information necessary to identify them.

      Conclusion

      Today, Big Data presents us with as many challenges as it does benefits. Whilst Big Data analytics can offer incredible opportunities to reduce inefficiency, improve decision-making, and increase transparency, concerns remain about the effects of these new technologies on issues such as privacy, equality and discrimination. The tensions between the competing demands of Big Data's advocates and its critics may appear irreconcilable, but only by highlighting these points of contestation can we hope to begin asking the important and difficult questions necessary to resolve them: how can we reconcile Big Data's need for massive inputs of personal information with core privacy principles such as data minimization and collection limitation? What processes and procedures need to be put in place during the design and implementation of Big Data models and algorithms to provide sufficient transparency and accountability to avoid instances of discrimination? What measures can help close digital divides and ensure that the benefits of Big Data are shared equitably? Questions such as these are today only just beginning to be addressed; each, however, will require careful consideration and reasoned debate if Big Data is to deliver on its promises and truly fulfil its 'revolutionary' potential.


      [1] Gantz, J., & Reinsel, D., Extracting Value from Chaos, IDC, (2011), available at: http://www.emc.com/collateral/analyst-reports/idc-extracting-value-from-chaos-ar.pdf

      [2] Meeker, M. & Yu, L. Internet Trends, Kleiner Perkins Caulfield Byers, (2013), http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013 .

      [4] Boyd, D., and Crawford, K., 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878; Tene, O., & Polonetsky, J., Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [5] Ibid.

      [6] Joh. E, 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 85: 35, (2014) https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

      [7] Raghupathi, W., & Raghupathi, V., Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

      [8] Anderson, R., & Roberts, D. 'Big Data: Strategic Risks and Opportunities, Crowe Horwarth Global Risk Consulting Limited, (2012) https://www.crowehorwath.net/uploadedfiles/crowe-horwath-global/tabbed_content/big%20data%20strategic%20risks%20and%20opportunities%20white%20paper_risk13905.pdf

      [9] Ibid.

      [10] Kshetri, N., 'The Emerging role of Big Data in Key development issues: Opportunities, challenges, and concerns'. Big Data & Society (2014) http://bds.sagepub.com/content/1/2/2053951714564227.abstract

      [11] Tene, O., & Polonetsky, J., Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [12] Cisco, 'IoE-Driven Smart Street Lighting Project Allows Oslo to Reduce Costs, Save Energy, Provide Better Service', Cisco, (2014) Available at: http://www.cisco.com/c/dam/m/en_us/ioe/public_sector/pdfs/jurisdictions/Oslo_Jurisdiction_Profile_051214REV.pdf

      [13] Newell, B, C. Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information. University of Washington - the Information School, (2013) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2341182

      [14] Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/

      [15] Tucker, Darren S., & Wellford, Hill B., Big Mistakes Regarding Big Data, Antitrust Source, American Bar Association, (2014). Available at SSRN: http://ssrn.com/abstract=2549044

      [16] Davenport, T., Barth., Bean, R. How is Big Data Different, MITSloan Management Review, Fall (2012), Available at, http://sloanreview.mit.edu/article/how-big-data-is-different/

      [17] Tucker, Darren S., & Wellford, Hill B., Big Mistakes Regarding Big Data, Antitrust Source, American Bar Association, (2014). Available at SSRN: http://ssrn.com/abstract=2549044

      [18] Raghupathi, W., & Raghupathi, V., Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

      [19] Brown, B., Chui, M., Manyika, J. 'Are you Ready for the Era of Big Data?', McKinsey Quarterly, (2011), Available at, http://www.t-systems.com/solutions/download-mckinsey-quarterly-/1148544_1/blobBinary/Study-McKinsey-Big-data.pdf ; Benady, D., 'Radical transparency will be unlocked by technology and big data', Guardian (2014) Available at: http://www.theguardian.com/sustainable-business/radical-transparency-unlocked-technology-big-data

      [20] Ibid.

      [21] Ibid.

      [22] United Nations, A World That Counts: Mobilising the Data Revolution for Sustainable Development, Report prepared at the request of the United Nations Secretary-General by the Independent Expert Advisory Group on a Data Revolution for Sustainable Development, (2014), pg. 18; see also Hilbert, M., Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

      [23] Greenleaf, G., Abandon All Hope? Foreword for Issue 37(2) of the UNSW Law Journal on 'Communications Surveillance, Big Data, and the Law', (2014) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2490425; Boyd, D., and Crawford, K., 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [24] Tene, O., & Polonetsky, J., Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [25] Narayanan and Shmatikov, quoted in Ibid.

      [26] OECD, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, The Organization for Economic Co-Operation and Development, (1999); The European Parliament and the Council of the European Union, EU Data Protection Directive, "Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data," (1995)

      [27] Barocas, S., & Selbst, A. D., Big Data's Disparate Impact, California Law Review, Vol. 104, (2015). Available at SSRN: http://ssrn.com/abstract=2477899

      [28] Article 29 Working Group., Opinion 03/2013 on purpose limitation, Article 29 Data Protection Working Party, (2013) available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf

      [29] Solove, D, J. Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013), Available at: http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [30] Brandimarte, L., Acquisti, A., & Loewenstein, G., Misplaced Confidences: Privacy and the Control Paradox, Ninth Annual Workshop on the Economics of Information Security (WEIS), June 7-8 2010, Harvard University, Cambridge, MA, (2010), available at: https://fpf.org/wp-content/uploads/2010/07/Misplaced-Confidences-acquisti-FPF.pdf

      [31] Solove, D, J., Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013), Available at: http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [32] Yu, W. E., Data Privacy and Big Data - Compliance Issues and Considerations, ISACA Journal, Vol. 3, (2014), available at: http://www.isaca.org/Journal/archives/2014/Volume-3/Pages/Data-Privacy-and-Big-Data-Compliance-Issues-and-Considerations.aspx

      [33] Ramirez, E., Brill, J., Ohlhausen, M., Wright, J., & McSweeny, T., Data Brokers: A Call for Transparency and Accountability, Federal Trade Commission (2014) https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

      [34] Michel Foucault, Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan, London: Allen Lane, Penguin, (1977)

      [35] Marthews, A., & Tucker, C., Government Surveillance and Internet Search Behavior (2015), available at SSRN: http://ssrn.com/abstract=2412564

      [36] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012)

      [37] Hirsch, D., That's Unfair! Or is it? Big Data, Discrimination and the FTC's Unfairness Authority, Kentucky Law Journal, Vol. 103, available at: http://www.kentuckylawjournal.org/wp-content/uploads/2015/02/103KyLJ345.pdf

      [38] Hill, K., How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did, Forbes, February 16 2012, http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

      [39] Anderson, C (2008) "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete", WIRED, June 23 2008, www.wired.com/2008/06/pb-theory/

      [40] Ibid.

      [41] Kitchin, R. (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [42] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679

      [43] Ibid.

      [44] Lazer, D., Kennedy, R., King, G., & Vespignani, A., 'The Parable of Google Flu: Traps in Big Data Analysis', Science 343 (2014): 1203-1205. Copy at http://j.mp/1ii4ETo

      [45] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [46] Leinweber, D. (2007) 'Stupid data miner tricks: overfitting the S&P 500', The Journal of Investing, vol. 16, no. 1, pp. 15-22. http://m.shookrun.com/documents/stupidmining.pdf

      [47] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679

      [48] McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

      [49] De Zwart, M. J., Humphreys, S., & Van Dissel, B., Surveillance, big data and democracy: lessons for Australia from the US and UK, UNSW Law Journal, Vol. 37, No. 2, (2014), http://www.unswlawjournal.unsw.edu.au/issue/volume-37-No-2. Retrieved from https://digital.library.adelaide.edu.au/dspace/handle/2440/90048

      [50] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878; Newman, N., Search, Antitrust and the Economics of the Control of User Data, 31 YALE J. ON REG. 401 (2014)

      [51] Newman, N., The Cost of Lost Privacy: Search, Antitrust and the Economics of the Control of User Data (2013). Available at SSRN: http://ssrn.com/abstract=2265026, Newman, N. ,Search, Antitrust and the Economics of the Control of User Data, 31 YALE J. ON REG. 401 (2014)

      [52] Ibid.

      [53] European Data Protection Supervisor, Privacy and competitiveness in the age of big data: The interplay between data protection, competition law and consumer protection in the Digital Economy, (2014), available at: https://secure.edps.europa.eu/EDPSWEB/webdav/shared/Documents/Consultation/Opinions/2014/14-03-26_competitition_law_big_data_EN.pdf

      [54] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [55] Schradie, J., Big Data Not Big Enough? How the Digital Divide Leaves People Out, MediaShift, 31 July 2013, (2013), available at: http://mediashift.org/2013/07/big-data-not-big-enough-how-digital-divide-leaves-people-out/

      [56] Crawford, K., The Hidden Biases in Big Data, Harvard Business Review, 1 April 2013 (2013), available at: https://hbr.org/2013/04/the-hidden-biases-in-big-data

      [57] Robinson, D., Yu, H., Civil Rights, Big Data, and Our Algorithmic Future, (2014) http://bigdata.fairness.io/introduction/

      [58] Ibid.

      [59] Ibid.

      [60] Rotella, P., Is Data The New Oil? Forbes, 2 April 2012, (2012), available at: http://www.forbes.com/sites/perryrotella/2012/04/02/is-data-the-new-oil/

      [61] Barocas, S., & Selbst, A. D., Big Data's Disparate Impact, California Law Review, Vol. 104, (2015). Available at SSRN: http://ssrn.com/abstract=2477899; Kshetri, N., 'The Emerging role of Big Data in Key development issues: Opportunities, challenges, and concerns'. Big Data & Society (2014) http://bds.sagepub.com/content/1/2/2053951714564227.abstract

      [62] Pasquale, F., The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press, (2015)

      Eight Key Privacy Events in India in the Year 2015

      by Amber Sinha — last modified Jan 03, 2016 05:43 AM
      As the year draws to a close, we are enumerating some of the key privacy related events in India that transpired in 2015. Much like the last few years, this year, too, was an eventful one in the context of privacy.

      While we did not witness, as one had hoped, any progress in the passage of a privacy law, the year saw significant developments with respect to the ongoing Aadhaar case. The statement by the Attorney General, India's foremost law officer, that there is a lack of clarity over whether the right to privacy is a fundamental right, and the fact that the matter is yet unresolved, was a huge setback to the jurisprudence on privacy.[1] However, the court has recognised purpose limitation as applicable to the Aadhaar scheme, limiting the sharing of any information collected during the enrollment of residents in the UID. A draft Encryption Policy was released and almost immediately withdrawn in the face of severe public backlash, and an updated Human DNA Profiling Bill was made available for comments. Prime Minister Narendra Modi's much publicised "Digital India" project was in the news throughout the year, and attracted its fair share of criticism in light of the lack of privacy safeguards it offered. Internationally, a lawsuit brought by Maximilian Schrems, an Austrian privacy activist, dealt a body blow to the fifteen-year-old Safe Harbour Framework in place for data transfers between the EU and the USA. Below, we look at what were, according to us, the eight most important privacy events in India in 2015.

      1. August 11, 2015 order on Aadhaar not being compulsory

      In 2012, a writ petition was filed by Justice K. S. Puttaswamy (Retd.) challenging the government's attempt to enroll all residents of India in the UID project and to link the Aadhaar card with various government services. A number of other petitions filed against the Aadhaar scheme have been tagged with this petition, and the court has been hearing them together. On August 11, 2015, the Supreme Court reiterated its position in earlier orders of September 23, 2013 and March 24, 2014 that the Aadhaar card shall not be made compulsory for any government services.[2] Building on its earlier position, the court passed the following orders:

      a) The government must give wide publicity in the media that it is not mandatory for a resident to obtain an Aadhaar card;

      b) The production of an Aadhaar card would not be a condition for obtaining any benefits otherwise due to a citizen;

      c) The Aadhaar card would not be used for any purpose other than the PDS Scheme, for the distribution of foodgrains and cooking fuel such as kerosene, and for the LPG distribution scheme; and

      d) The information about an individual obtained by the UIDAI while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a Court for the purpose of criminal investigation.[3]

      Despite this being the fifth Supreme Court order[4] stating that the Aadhaar card cannot be a mandatory requirement for access to government services or subsidies, repeated violations continue. One widely reported violation is the continued requirement of an Aadhaar number to set up a Digital Locker account, which led activist Sudhir Yadav to file a petition in the Supreme Court.[5]

      2. No Right to Privacy - Attorney General to SC

      The Attorney General, Mukul Rohatgi, argued before the Supreme Court in the Aadhaar case that the Constitution of India does not provide for a fundamental right to privacy.[6] He referred to the body of case law in the Supreme Court dealing with this issue and made a reference to the 1954 case MP Sharma v. Satish Chandra,[7] stating that there was "clear divergence of opinion" on the right to privacy and terming it "a classic case of unclear position of law." He also referred to the discussion of this matter in the Constituent Assembly Debates and pointed to the fact that the framers of the Constitution did not intend for this to be a fundamental right. He said the matter needed to be referred to a nine-judge Constitution bench.[8] This raises serious questions over the jurisprudence on the right to privacy developed by the Supreme Court over the last five decades. The matter is currently pending resolution by a larger bench, which is yet to be constituted by the Chief Justice of India.

      3. Shreya Singhal judgment and Section 69A, IT Act

      In the much celebrated judgment Shreya Singhal v. Union of India, in March 2015, the Supreme Court struck down Section 66A of the Information Technology Act, 2000 as unconstitutional and laid down guidelines for online takedowns under the Internet intermediary rules. Significantly, however, the court also upheld Section 69A and the blocking rules framed under that provision, holding it to be a narrowly-drawn provision with adequate safeguards. The rules prescribe a procedure for blocking which involves receipt of a blocking request, examination of the request by a Committee, and a review committee which performs oversight functions. However, commentators have pointed to the opacity of the process under these rules: while the rules mandate that a hearing be given to the originator of the content, this safeguard is widely disregarded. The judgment did not discuss Section 69 of the Information Technology Act, 2000, which deals with the decryption of electronic communication; however, the Department of Electronics and Information Technology subsequently brought up this issue through a draft Encryption Policy, discussed below.

      4. Circulation and recall of Draft Encryption Policy

      In September 2015, the Department of Electronics and Information Technology (DeitY) released for public comment a draft National Encryption Policy. The draft received an immediate and severe backlash from commentators, and was withdrawn by September 22, 2015.[9] The government blamed a junior official for the poor drafting of the document and noted that it had been released without review by the Telecom Minister, Ravi Shankar Prasad, and other senior officials.[10] The main areas of contention were: a requirement that individuals store plain-text versions of all encrypted communication for a period of 90 days, to be made available to law enforcement agencies on demand; the government's right to prescribe key-strength, algorithms and ciphers; and the restriction that only government-notified encryption products, from vendors registered with the government, could be used for encryption.[11] The purport of the above was to limit the ways in which citizens could encrypt electronic communication, and to allow adequate access to law enforcement agencies. The requirement to keep all encrypted information in plain-text format for 90 days garnered particular criticism, as it would allow for the creation of a 'honeypot' of unencrypted data, which could attract theft and attacks.[12] The withdrawal of the draft policy is not the final chapter in this story, as the Telecom Minister has promised that the Department will come back with a revised policy.[13] This attempt to place restrictions on the use of encryption technologies is not only in line with a host of surveillance initiatives that have mushroomed in India in the last few years,[14] but also resonates with a global trend that has seen various governments and law enforcement organisations argue against encryption.[15]

      5. Privacy concerns raised about Digital India

      The Digital India initiative includes over thirty Mission Mode Projects in various stages of implementation.[16] All of these projects entail the collection of vast quantities of personally identifiable information about citizens. However, most of these initiatives do not have clearly laid down privacy policies.[17] There is also a lack of properly articulated access control mechanisms, and doubts over important issues such as data ownership, since most projects involve public-private partnerships in which private organisations collect, process and retain large amounts of data.[18] Ahead of Prime Minister Modi's visit to the US, over 100 prominent US-based academics released a statement raising concerns about the "lack of safeguards about privacy of information, and thus its potential for abuse" in the Digital India project.[19] It has been pointed out that the initiatives could enable a "cradle-to-grave digital identity that is unique, lifelong, and authenticable, and it plans to widely use the already mired in controversy Aadhaar program as the identification system."[20]

      6. Issues with Human DNA Profiling Bill, 2015

      The Human DNA Profiling Bill, 2015 envisions the creation of national and regional DNA databases comprising DNA profiles of the categories of persons specified in the Bill.[21] The categories include offenders, suspects, missing persons, unknown deceased persons, volunteers, and such other categories as may be specified by the DNA Profiling Board, which has oversight over these banks. The Bill grants wide discretionary powers to the Board to introduce new DNA indices and to make DNA profiles available for new purposes it may deem fit.[22] These powers, and the lack of proper safeguards surrounding issues like consent, retention and collection, pose serious privacy risks if the Bill becomes law. Significantly, there is no element of purpose limitation in the proposed law, which would allow DNA samples to be re-used for unspecified purposes.[23]

      7. Impact of the Schrems ruling on India

      In Schrems v. Data Protection Commissioner, the Court of Justice of the European Union (CJEU) annulled Commission Decision 2000/520, under which US data protection rules were deemed sufficient to satisfy EU privacy rules, enabling transfers of personal data from the EU to the US - otherwise known as the 'Safe Harbour' framework. The court ruled that the broad formulations of derogations on grounds of national security, public interest and law enforcement in place in the US go beyond the test of proportionality and necessity under the Data Protection rules.[24] This judgment could also have implications for the data processing industry in India. For a few years now, a framework similar to the Safe Harbour has been under discussion for the transfer of data between India and the EU; the lack of a privacy legislation has been among the significant hurdles in arriving at such a framework.[25] In the absence of a Safe Harbour framework, companies in India rely on alternate mechanisms such as Binding Corporate Rules (BCR) or Model Contractual Clauses. These contracts impose obligations on data exporters and importers to ensure that an 'adequate level of data protection' is provided. The Schrems judgment makes it clear that an 'adequate level of data protection' entails a regime that is 'essentially equivalent' to that envisioned under Directive 95/46.[26] This means that any new framework of protection between the EU and other countries like the US or India will necessarily have to meet this test of essential equivalence. The PRISM programme in the US, and the host of surveillance programmes initiated by the government in India in the last few years, could pose problems in satisfying this test, as they do not conform to the proportionality and necessity principles.

      8. The definition of "unfair trade practices" in the Consumer Protection Bill, 2015

      The Consumer Protection Bill, 2015, tabled in Parliament towards the end of the monsoon session,[27] introduces an expansive definition of the term "unfair trade practices." The definition in the Bill includes the disclosure "to any other person any personal information given in confidence by the consumer."[28] The clause excludes from the scope of unfair trade practices disclosures made under provisions of any law in force or in the public interest. This provision could have a significant impact on personal data protection law in India. Currently, the only rules governing data protection are the Reasonable security practices and procedures and sensitive personal data or information Rules, 2011,[29] prescribed under Section 43A of the Information Technology Act, 2000. Under these rules, sensitive personal data or information is protected in that its disclosure requires prior permission from the data subject.[30] For other kinds of personal information not categorized as sensitive personal data or information, the only recourse of data subjects is to claim breach of the terms of the privacy policy, which constitutes a lawful contract.[31] The Consumer Protection Bill, 2015, if enacted into law, could significantly expand the scope of protection available to data subjects. First, unlike the Section 43A rules, the provisions of the Bill would be applicable to physical as well as electronic collection of personal information. Second, disclosure to a third party of personal information other than sensitive personal data or information could also be subject to a similar 'prior permission' criterion under the Bill, if it can be shown that the information was shared by the consumer in confidence.

      What we see above are events largely built around a few trends that we have been witnessing in the context of privacy in India in particular, and across the world in general. First, the lack of privacy safeguards in initiatives like the Aadhaar project and Digital India is symptomatic of policies that are not comprehensive in their scope, and that consequently fail to address key concerns. Dr Usha Ramanathan has called these "powerpoint based policies", implemented on the basis of proposals that are superficial in their scope and do not give due regard to their impact on a host of issues.[32] Second, the privacy concerns posed by the draft Encryption Policy and the Human DNA Profiling Bill point to a motive of surveillance, in line with other projects introduced with the intent to protect and preserve national security.[33] Third, the incidents that championed the cause of privacy, like the Schrems judgment, have largely been initiated by activists and civil society actors, and have typically entailed the involvement of the judiciary, often the sole recourse in the campaign for the protection of civil rights. It must be noted that jurisprudence on the right to privacy in India has not moved beyond the guidelines set forth by the Supreme Court in PUCL v. Union of India.[34] However, new mass surveillance programmes and the massive collection of personal data by both public and private parties through various schemes mandate a re-look at the standards laid down twenty years ago. The privacy issue pending resolution by a larger bench in the Aadhaar case affords an opportunity to revisit those principles in light of how surveillance has changed in the last two decades, and to strengthen privacy and data protection.


      [1] Right to Privacy not a fundamental right, cannot be invoked to scrap Aadhar: Centre tells Supreme Court, available at http://articles.economictimes.indiatimes.com/2015-07-23/news/64773078_1_fundamental-right-attorney-general-mukul-rohatgi-privacy


      Free Basics: Negating net parity

      by Sunil Abraham last modified Jan 03, 2016 05:58 AM
Researchers funded by Facebook were apparently told by 92 per cent of the Indians they surveyed, all from large cities, with Internet connections and college degrees, that the Internet "is a human right and that Free Basics can help bring Internet to all of India." What a strange way to frame the question, given that the Internet is not a human right in most jurisdictions.

      The article was published in the Deccan Herald on January 3, 2016.


Free Basics is a gratis service offered by Facebook in partnership with telcos in 37 countries. It is a mobile app that features fewer than 100 of the roughly 1 billion websites currently available on the WWW, which is itself only a subset of the Internet. Free Basics violates Net Neutrality because it introduces an unnecessary gatekeeper who gets to decide "who is in" and "who is out". Services like Free Basics could permanently alienate the poor from the full choice of the Internet because they create price-discrimination hurdles that discourage those who want to leave the walled garden.

Inika Charles and Arhant Madhyala, two interns at the Centre for Internet and Society (CIS), surveyed 1/100th of the Facebook sample, that is, 30 persons, with the very same question at a café near our office in Bengaluru. Seventy per cent agreed with Facebook that the Internet was a human right, but only 26 per cent thought Free Basics would achieve universal connectivity. My real point here is that numbers don't matter. At least not in the typical way they do. Facebook dismissed Amba Kak's independent, unfunded, qualitative research in Delhi in their second public rebuttal, saying the sample size was only 20.

That was truly ironic. The whole point of her research was the importance of small numbers. Kak says, "For some, it was the idea of an 'emergency' which made all-access plans valuable." A respondent stated: "But maybe once or twice a month, I need some information which only Google can give me... like the other day my sister needed to know results to her entrance exams." If you consider that too mundane, take a moment to picture yourself stranded in the recent Chennai flood. The statistical rarity of a Black Swan does not reduce its importance. A more neutral network is usually a more resilient network. When we do have our next national disaster, do we want to be one of the few countries on the planet that, thanks to flawed regulation, have ended up with a splinternet?

Telecom Regulatory Authority of India (Trai) chairman R S Sharma rightly expressed some scepticism around numbers when he said "the consultation paper is not an opinion poll." He elaborated: "The issue here is some sites are being offered to one person free of cost while another is paying for it. Is this a good thing and can operators have such powers?" Had he instead asked "Is this the best option?" my answer would be "no". Given the way he has formulated the question, our answer is a lawyerly "it depends". The CIS believes that differential pricing should generally be prohibited. However, it can be allowed in exceptional cases, when it is done in a manner the regulator can justify against four axes of sometimes orthogonal policy objectives: increased access, enhanced competition, increased user choice and contribution to openness. For example, a permanent ban on Free Basics makes sense in the Netherlands, but regulation may be sufficient for India.

      Gatekeeping powers

To the second and more important part of the Trai chairman's question, on the gatekeeping powers of operators, our answer is a simple "no". But then, do we have any evidence that gatekeeping powers have been abused to the detriment of consumer and public interest? No. What do we do when we cannot, like Russell's chicken, use induction to predict our future? Prof Simon Wren-Lewis says, "If Bertrand Russell's chicken had been an economist ...(it would have)... asked a crucial additional question: Why is the farmer doing this? What is in it for him?" There were five serious problems with Free Basics that Facebook has at least partially fixed, thanks mostly to criticism from consumers in India and Brazil: one, exclusivity with an access provider; two, exclusivity with a set of web services; three, lack of transparency regarding retention of personal information; four, misrepresentation through the name of the service, Internet.org; and five, lack of support for encrypted traffic. But how do we know these problems will stay fixed? Emerging markets guru Jan Chipchase tweeted, asking: "Do you trust Facebook? Today? Tomorrow? When its share price is under pressure and it wants to wring more $$$ from the platform?"

Zero. Facebook pays telecom operators zero. The operators pay Facebook zero. The consumers pay zero. Why do we need to regulate philanthropy? Because these freebies are not purely the fruit of private capital. They are only possible thanks to an artificial state-supported oligopoly dependent on public resources like spectrum and wires (over and under public property). Therefore, these oligopolies must serve the public interest and also ensure that users are treated in a non-discriminatory fashion.

Also, the provision of a free service should not allow powerful corporations to escape regulation: in jurisdictions like Brazil, it is clear that Facebook has to comply with consumer protection law even if users are not paying for the service. Given that big data is the new oil, Facebook could pay the access provider in advertisements, in manipulation of public discourse, or by tweaking software defaults such as autoplay for videos, which could increase the bills of paying consumers quite dramatically.

India needs a Net Neutrality regime that allows for business models and technological innovation as long as they don't discriminate between users and competitors. The Trai should begin regulating based on principles, as it has rightly done with the pre-emptive temporary ban. But there is a need to bring "numbers we can trust" to the regulatory debate. We as citizens need to establish a peer-to-peer Internet monitoring infrastructure across mobile and fixed lines in India that we can use to crowdsource data.

      (The writer is Executive Director, Centre for Internet and Society, Bengaluru. He says CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world)

      Ground Zero Summit

      by Amber Sinha — last modified Jan 03, 2016 06:06 AM
The Ground Zero Summit, which claims to be the largest collaborative platform in Asia for cyber-security, was held in New Delhi from 5th to 8th November. The conference was organised by the Indian Infosec Consortium (IIC), a not-for-profit organisation backed by the Government of India. Cyber security experts, hackers, senior officials from the government and defence establishments, senior professionals from the industry, and policymakers attended the event.

      Keynote Address

The Union Home Minister, Mr. Rajnath Singh, inaugurated the conference. Mr Singh described the barriers that governments face in ensuring cyber-security. Calling cyberspace the fifth dimension of security, in addition to land, air, water and space, Mr Singh emphasised the need to curb cyber-crime in India, which grew by 70% in 2014 over 2013. He highlighted the fact that changes in location, jurisdiction and language make cybercrime particularly difficult to address. Continuing in the same vein, Mr. Rajnath Singh also mentioned cyber-terrorism as one of the big dangers in the time to come. With a number of government initiatives like Digital India, Smart Cities and Make in India leveraging technology, the Home Minister said that the success of these projects would depend on having robust cyber-security systems in place.

The Home Minister outlined some initiatives that the Government of India is planning in order to address concerns around cyber security, such as plans to finalise a new national cyber policy. Significantly, he referred to a committee headed by Dr. Gulshan Rai, the National Cyber Security Coordinator, mandated to suggest a roadmap for effectively tackling cybercrime in India. This committee has recommended the setting up of an Indian Cyber Crime Coordination Centre (I-4C). The centre is meant to engage in capacity building with key stakeholders to enable them to address cyber crimes, and to work with law enforcement agencies. Earlier reports about the recommendation suggest that the I-4C will likely be placed under the National Crime Records Bureau and align with state police departments through the Crime and Criminal Tracking Network and Systems (CCTNS). The I-4C is to comprise high-quality technical and R&D experts engaged in developing cyber investigation tools.

      Other keynote speakers included Alok Joshi, Chairman, NTRO; Dr Gulshan Rai, National Cyber Security Coordinator; Dr. Arvind Gupta, Head of IT Cell, BJP and Air Marshal S B Dep, Chief of the Western Air Command.

      Technical Speakers

There were a number of technical speakers who presented on an array of subjects. The first session was by Jiten Jain, a cyber security analyst, who spoke on cyber espionage conducted by actors in Pakistan targeting defence personnel in India. Jiten Jain talked about how the Indian Infosec Consortium had discovered these attacks in 2014. Most of the websites and mobile apps involved posed as defence news outlets and carried malware and viruses. An investigation conducted by the IIC revealed that the domains were registered in Pakistan. In another session, Shesh Sarangdhar, the CEO of Seclabs, an application security company, spoke about the Darknet and ways to break anonymity on it. Sarangdhar mentioned that anonymity on the Darknet depends on every element of the communication maintaining a specific state. He discussed techniques like using audio files, cross-domain attacks on Tor, and Sybil attacks as methods of deanonymisation. Dr. Triveni Singh, Assistant Superintendent of Police, Special Task Force, UP Police, made a presentation on trends in cyber crime. Dr. Singh emphasised the amount of uncertainty with regard to the purpose of a computer intrusion. He discussed real-life case studies, such as data theft, credit card fraud and share trading fraud, from the perspective of law enforcement agencies.

Anirudh Anand, CTO of Infosec Labs, discussed how web applications are heavily reliant on filters or escaping methods. His talk focused on XSS (cross-site scripting) and bypassing regular expression filters. He also announced the release of XSS Labs, an XSS test bed for security professionals and developers that includes filter evasion techniques like b-services, weak cryptographic design and cross-site request forgery. Jan Seidl, an authority on SCADA, presented on Tor tricks which may be used by bots, shells and other tools to better use the Tor network and I2P. His presentation dealt with using obfuscated bridges, Hidden Service-based HTTP, multiple C&C addresses and the use of OTP. Aneesha, an intern with the Kerala Police, spoke about elliptic curve cryptography and its features, such as low processing overhead. As it requires mapping messages onto elliptic curve points, efficient encoding and decoding techniques need to be developed. Aneesha spoke about an algorithm called Generator-Inverse for encoding and decoding a message using a Single Sign-On mechanism. Other subjects presented included vulnerabilities that remain despite using TLS/SSL, deception technology and the cyber kill-chain, credit card fraud, post-quantum cryptosystems, and popular Android malware.
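The specific filter bypasses presented at the summit were not published, but the general weakness of regex-based XSS filtering can be sketched with a toy example. The filter and payload below are invented for illustration; real sanitizers must not work this way.

```python
import re

def naive_filter(payload: str) -> str:
    """A deliberately weak filter that strips <script> tags in one pass."""
    return re.sub(r"(?i)<script.*?>|</script>", "", payload)

# Classic evasion: nest the forbidden token so that stripping it once
# reassembles the attack string.
attack = "<scr<script>ipt>alert(1)</scr</script>ipt>"
print(naive_filter(attack))  # the pieces reassemble into a live <script> tag
```

Because the filter removes each forbidden token only once, the surviving fragments join back into `<script>alert(1)</script>`, which is why robust defenses rely on context-aware output encoding rather than blacklist regexes.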

      Panels

There were also two panels organised at the conference. Samir Saran, Vice President of the Observer Research Foundation, moderated the first panel, on Cyber Arms Control. The panel included Lt. General A K Sahni from the South Western Air Command; Lt. General A S Lamba, retired Vice Chief of the Indian Army; Alok Vijayant, Director of Cyber Security Operations at NTRO; and Captain Raghuraman from Reliance Industries. The panel debated the virtues of cyber arms control treaties. It acknowledged the need to frame rules and create a governance mechanism for wars in cyberspace. However, this would be effective only if governments were the primary actors with the capability to build cyber-warfare know-how and tools. The reality is that most cyber weapons involve non-state actors from the hacker community, in light of which cyber arms control treaties would lose most of their effectiveness.

The second panel was on the 'Make for India' initiative. Dinesh Bareja, the CEO of Open Security Alliance and Pyramid Cyber Security, was the moderator for this panel, which also included Nandakumar Saravade, CEO of the Data Security Council of India; Sachin Burman, Director of NCIIPC; Dr. B J Srinath, Director General of ICERT; and Amit Sharma, Joint Director of DRDO. The focus of this session was on 'Make in India' opportunities in the domain of cyber security. The panelists discussed the role the government and industry could play in creating an ecosystem that supports entrepreneurs in skill development. Among the approaches discussed were involving actors in knowledge-sharing and mentoring chapters, which could be backed by organisations like NASSCOM, and bringing together industry and government experts in events like the Ground Zero Summit to provide knowledge and training on cyber-security issues.

      Exhibitions

The conference was accompanied by exhibitions showcasing indigenous cybersecurity products. The exhibitors included Smokescreen Technologies, Sempersol Consultancy, Ninja Hackon, Octogence Technologies, Secfence, Amity, Cisco Academy, Robotics Embedded Education Services Pvt. Ltd., Defence Research and Development Organisation (DRDO), Skin Angel, Aksit, Alqimi, Seclabs and Systems, Forensic Guru, Esecforte Technologies, Gade Autonomous Systems, National Critical Information Infrastructure Protection Centre (NCIIPC), Indian Infosec Consortium (IIC), INNEFU, Event Social, National Internet Exchange of India (NIXI) and Robotic Zone.

The conference also witnessed events such as Drone Wars, in which selected participants had to navigate a drone, a Hacker Fashion Show, and the official launch of Ground Zero's music album.

      Understanding the Freedom of Expression Online and Offline

      by Prasad Krishna last modified Jan 03, 2016 10:24 AM

      PDF document icon PROVISIONAL PROGRAMME AGENDA_.pdf — PDF document, 542 kB (555783 bytes)

      ICFI Workshop

      by Prasad Krishna last modified Jan 03, 2016 10:33 AM

      PDF document icon ICFI Workshop note 10thDec2015.pdf — PDF document, 664 kB (680175 bytes)

      Facebook Free Basics: Gatekeeping Powers Extend to Manipulating Public Discourse

      by Vidushi Marda last modified Jan 09, 2016 01:43 PM
      15 million people have come online through Free Basics, Facebook's zero rated walled garden, in the past year. "If we accept that everyone deserves access to the internet, then we must surely support free basic internet services. Who could possibly be against this?" asks Facebook founder Mark Zuckerberg, in a recent op-ed defending Free Basics.

The article was published in Catchnews on January 6, 2016. For more info click here.


This rhetorical question, however, has elicited a plethora of answers. The network neutrality debate has accelerated over the past few weeks, with the Telecom Regulatory Authority of India (TRAI) releasing a consultation paper on differential pricing.

      While notifications to "Save Free Basics in India" prompt you on Facebook, an enormous backlash against this zero rated service has erupted in India.

      Free Basics

      The policy objectives that must guide regulating net neutrality are consumer choice, competition, access and openness. Facebook claims that Free Basics is a transition to the full internet and digital equality. However, by acting as a gatekeeper, Facebook gives itself the distinct advantage of deciding what services people can access for free by virtue of them being "basic", thereby violating net neutrality.

Amidst this debate, it's important to think of the impact Facebook can have on manipulating public discourse. In the past, Facebook has used its powerful News Feed algorithm to significantly shape our consumption of information online.

In July 2014, Facebook researchers revealed that, for a week in January 2012, the company had altered the news feeds of 689,003 randomly selected Facebook users to control how many positive and negative posts they saw. This was done without their consent, as part of a study to test how social media could be used to spread emotions online.

      Their research showed that emotions were in fact easily manipulated. Users tended to write posts that were aligned with the mood of their timeline.

Another worrying indication of Facebook's ability to alter discourse came during the ALS Ice Bucket Challenge in July and August 2014. Users' News Feeds were flooded with videos of individuals pouring buckets of ice over their heads to raise awareness for a charitable cause, though the challenge did not spread entirely on its own merits.

The challenge was Facebook's method of boosting its native video feature, which was launched at around the same time. Meanwhile, the News Feed was mostly devoid of news surrounding the riots in Ferguson, Missouri, which happened to be a trending topic on Twitter at the time.

      Each day, the news feed algorithm has to choose roughly 300 posts out of a possible 1500 for each user, which involves much more than just a random selection. The posts you view when you log into Facebook are carefully curated keeping thousands of factors in mind. Each like and comment is a signal to the algorithm about your preferences and interests.

The amount of time you spend on each post is logged and then used to determine which post you are most likely to stop to read. Facebook even takes into account text that is typed but not posted, and makes algorithmic decisions based on it.

It also differentiates between likes: if you like a post before reading it, the news feed assumes that your interest is much fainter than if you like it after spending 10 minutes reading it.

Facebook believes that this is in the best interest of the user, and that these factors help users see what they will most likely want to engage with. However, it keeps us at the mercy of a gatekeeper who shapes the diversity of information we consume, more often than not without explicit consent. Transparency is key.
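Facebook's real ranking model is proprietary, so the mechanics described above can only be illustrated hypothetically. The toy scorer below is an invented sketch: the fields and weights are made up, and only the general idea (weighted engagement signals deciding which few posts out of many are shown) reflects the text.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often the user interacts with this author
    likes: int
    comments: int
    dwell_seconds: float    # estimated time the user lingers on similar posts

def score(p: Post) -> float:
    # Weighted sum of engagement signals; the weights are invented.
    return (3.0 * p.author_affinity + 0.5 * p.likes
            + 1.5 * p.comments + 0.2 * p.dwell_seconds)

candidates = [
    Post(author_affinity=0.9, likes=10, comments=2, dwell_seconds=30),
    Post(author_affinity=0.1, likes=200, comments=0, dwell_seconds=2),
    Post(author_affinity=0.5, likes=5, comments=8, dwell_seconds=45),
]
# Show only the top-k candidates (in reality, roughly 300 out of 1500).
feed = sorted(candidates, key=score, reverse=True)[:2]
```

The point of the sketch is that whoever sets the weights decides what the user sees; the same candidates reordered under different weights produce a very different feed.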


      (Vidushi Marda is a programme officer at the Centre for Internet and Society)

      Human Rights in the Age of Digital Technology: A Conference to Discuss the Evolution of Privacy and Surveillance

      by Amber Sinha — last modified Jan 11, 2016 02:12 AM
The Centre for Internet and Society organised a conference in roundtable format called 'Human Rights in the Age of Digital Technology: A Conference to Discuss the Evolution of Privacy and Surveillance'. The conference was held at the India Habitat Centre on October 30, 2015. It was designed to be a forum for discussion, knowledge exchange and agenda building, to draw a shared road map for the coming months.

In India, the Right to Privacy has been interpreted to mean an individual's right to be left alone. In an age of massive use of Information and Communications Technology, it has become imperative to protect this right. The Supreme Court has held in a number of its decisions that the right to privacy is implicit in the fundamental right to life and personal liberty under Article 21 of the Indian Constitution, though Part III does not explicitly mention this right. The Supreme Court has identified the right to privacy most often in the context of state surveillance, and has introduced the standards of compelling state interest, targeted surveillance and oversight mechanisms, which have been incorporated in the form of rules under the Indian Telegraph Act, 1885. Of late, privacy concerns have gained importance in India as national programmes like the UID Scheme, DNA Profiling and the National Encryption Policy have attracted criticism for their impact on the right to privacy. To add to the growing concerns, the Attorney General, Mukul Rohatgi, has argued in the ongoing Aadhaar case that the judicial position on whether the right to privacy is a fundamental right is unclear, questioning the entire body of jurisprudence on the right to privacy built up over the last few decades.

      Participation

The roundtable saw participation from various civil society organisations, such as the Centre for Communication Governance and The Internet Democracy Project, as well as individual researchers like Dr. Usha Ramanathan and Colonel Mathew.

      Introductions

Vipul Kharbanda, Consultant, CIS, made the introductions and laid down the agenda for the day. Vipul presented a brief overview of the work CIS is engaged in around privacy and surveillance, in areas including, among others, the Human DNA Profiling Bill, 2014, the Aadhaar Project, the Privacy Bill and surveillance laws in India. It was also highlighted that CIS was engaged in work on Big Data, in light of the growing calls to use Big Data in the Smart Cities projects and elsewhere; one of the questions was whether the nine Privacy Principles would still be valid in a Big Data and IoT paradigm.

      The Aadhaar Case

Dr. Usha Ramanathan began by calling the Aadhaar project an identification project as opposed to an identity project. She brought up various aspects of the project, ranging from the myth of voluntariness and the strong and often misleading marketing that has driven the project, to the lack of a mandate to collect biometric data and the problems with the technology itself. She highlighted the inconsistencies, irrationalities and lack of process that have characterised the Aadhaar project since its inception. A common theme she identified was the ad-hoc nature of many important decisions taken on a national scale, including the migration from existing systems to the Aadhaar framework. She particularly highlighted the acute lack of credible information faced by civil society actors trying to make sense of the project. In that respect, she termed it a 'PowerPoint-driven project', with a focus on information collection but little information available about the project itself. Another issue Dr. Ramanathan brought up was the lack of concern most people have exhibited in sharing their biometric information without being aware of what it would be used for, which is in some ways symptomatic of the way we have begun to interact with technology, willingly giving information about ourselves with little thought. Dr Ramanathan's presentation detailed the response to the project from various quarters in the form of petitions in different high courts in India, how the cases were received by the courts, and the contradictory responses of the government at various stages. Alongside, she also sought to place the Aadhaar case in the context of various debates and issues, like its conflict with the National Population Register, exclusion, ownership of the data collected, national security implications, and the impact on privacy and surveillance. Aside from the above issues, Dr. Ramanathan also posited that the flat idea of identity envisaged by projects like Aadhaar is problematic in that it adversely impacts how people can live, act and define themselves. In summation, she termed the government's behaviour irresponsible for the manner in which it has changed its stand on issues to suit the expediency of the moment, and was particularly severe on the Attorney General for raising questions about the existence of a fundamental right to privacy and casually putting in peril jurisprudence on civil liberties that has evolved over decades.

Colonel Mathew concurred with Dr. Ramanathan that the Aadhaar Project was not about identity but about identification. Prasanna developed this further, saying that while identity is a right unto the individual, identification is something done to you by others. Colonel Mathew then presented a brief history of the Aadhaar case and how the significant developments over the last few years have played out in the courts. One of the important questions Colonel Mathew addressed was the claim of uniqueness made by the UID project. He pointed to research by Hans Varghese Mathews, which analysed the data on biometric collection and processing released by the UIDAI and demonstrated a clear probability of a duplication in 1 out of every 97 enrolments. He also questioned the oft-repeated claim that UID would give identification to those without it and allow them to access welfare schemes. In this context, he pointed to the failures of the introducer system and the fact that only 0.03% of those registered have been enrolled through it. Colonel Mathew also questioned the change in stance by the ruling party, the BJP, which had earlier declared that the UID project should be scrapped as a threat to national security. According to him, the prime movers of the scheme were corporate interests outside the country, interested in the data being collected; this, he claimed, created very serious risks to national security. Prasanna added that while, on the face of it, some of the claims of threats to national security may sound alarmist, if one critically studies the manner in which the data has been collected for this project, the concerns appear justified.
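The scale implied by that duplication figure can be checked with simple arithmetic. The enrolment count below is an assumed round number for illustration, not an official statistic; only the 1-in-97 rate comes from the research cited above.

```python
# Back-of-the-envelope: expected duplicates at a 1-in-97 duplication rate.
dup_rate = 1 / 97
enrolments = 900_000_000  # assumed round figure, for illustration only

expected_duplicates = enrolments * dup_rate
print(f"~{expected_duplicates / 1e6:.1f} million expected duplicates")
```

Even under this rough assumption, the rate translates into millions of potential duplicate enrolments, which is why the uniqueness claim drew scrutiny.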

      The Draft Encryption Policy

Amber Sinha, Policy Officer at CIS, made a presentation on the brief appearance of the Draft Encryption Policy, which was released in September this year and withdrawn by the government within a day. Amber provided an overview of the policy, emphasising the clauses limiting the kinds of encryption algorithms and key sizes that individuals and organisations could use, and the ill-advised procedures that needed to be followed. After the presentation, the topic was opened for discussion. The initial part of the discussion focussed on specific clauses that threatened privacy and could enable greater surveillance of the electronic communications of individuals and organisations, most notably the exhaustive list of permitted encryption algorithms and the requirement to keep all encrypted communication in plain text format for a period of 90 days. We also attempted to locate the draft policy in the context of privacy debates in India as well as the global response to encryption. Amber emphasised that while mandating minimum standards of encryption for communication between government agencies may be an honourable motive, as it concerns matters of national security, extending the policy to private parties and imposing upper limits on the kinds of encryption they can use stems from the motive of surveillance. Nayantara, of The Internet Democracy Project, pointed out that there has been a global push back against encryption by governments in countries like the US, Russia, China, Pakistan, Israel, the UK, Tunisia and Morocco. In India too, the IT Act places limits on encryption. Her point stands further buttressed by the calls against encryption in the aftermath of the terrorist attacks in Paris last month.

We had also intended to have a session on the Human DNA Profiling Bill led by Dr. Menaka Guruswamy. However, due to scheduling issues and paucity of time, we were not able to hold the session.

      Questions Raised

On Aadhaar, some of the questions raised included the applicability of the rules under Section 43A of the IT Act to the private parties involved in the process. The issue of whether Aadhaar can be a tool against corruption was raised by Vipul. However, Colonel Mathew demonstrated through his research that issues like corruption in the TPDS system and MNREGA, which Aadhaar is supposed to solve, are not effectively addressed by it, and that there are simpler solutions to these problems.

Ranjit raised questions about the different contexts of privacy, referring to the work of Helen Nissenbaum. He spoke about the history of freely providing biometric information in India, initially for property documents, and how it has gradually come to be used for surveillance. He argued that, due to this tradition, many people in India do not view sharing of biometric information as infringing on their privacy. Dipesh Jain, a student at Jindal Global Law School, pointed to challenges like how individual privacy is perceived in India, its various contexts, and people resorting to the oft-quoted dictum of 'why do you want privacy if you have nothing to hide'. In this context, it is pertinent to mention the response of Edward Snowden to this question: "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say." Aakash Solanki, researcher

Vipul and Amber also touched upon the new challenges upon us in a world of Big Data, where traditional ways of ensuring data protection, such as the data minimisation principle and methods like anonymisation, may not work. With advances in computer science and mathematics threatening to re-identify anonymised datasets, growing reliance on secondary uses of data, and the inadequacy of the idea of informed consent, a significant paradigm shift may be required in how we view privacy laws.
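Re-identification of the kind referred to here is typically demonstrated through linkage attacks, where quasi-identifiers left in an "anonymised" dataset are joined against a public record that carries names. A minimal sketch with entirely invented data:

```python
# Invented example data: an "anonymised" record set that still carries
# quasi-identifiers, and a public register that maps them to names.
anonymised_health = [
    {"zip": "560001", "birth_year": 1980, "gender": "F", "diagnosis": "X"},
    {"zip": "560042", "birth_year": 1975, "gender": "M", "diagnosis": "Y"},
]
public_register = [
    {"name": "A. Sharma", "zip": "560001", "birth_year": 1980, "gender": "F"},
    {"name": "B. Rao", "zip": "560042", "birth_year": 1975, "gender": "M"},
]

def reidentify(anon_rows, public_rows):
    """Join the two datasets on the shared quasi-identifiers."""
    keys = ("zip", "birth_year", "gender")
    index = {tuple(r[k] for k in keys): r["name"] for r in public_rows}
    return [
        {**row, "name": index[tuple(row[k] for k in keys)]}
        for row in anon_rows
        if tuple(row[k] for k in keys) in index
    ]

linked = reidentify(anonymised_health, public_register)
```

When the combination of quasi-identifiers is unique within the population, stripping names alone offers no protection, which is the core of the argument that data minimisation and anonymisation by themselves may not suffice.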

      A number of action items going forward were also discussed, with different individuals volunteering to lead research on issues like the UBCC set up by the UIDAI; the GSTN, the first national data utility; and the recourse available to individuals whose data is held by parties outside India’s jurisdiction.

      A Critique of Consent in Information Privacy

      by Amber Sinha and Scott Mason — last modified Jan 18, 2016 02:20 AM
      The idea of informed consent in privacy law is supposed to ensure the autonomy of an individual in any exercise which involves sharing of the individual's personal information. Consent is usually taken through a document, a privacy notice, signed or otherwise agreed to by the participant.

      Notice and Consent as cornerstone of privacy law
      The privacy notice, which is the primary subject of this article, conveys all pertinent information, including risks and benefits, to the participant, who can then make an informed choice about whether or not to participate.

      Most modern laws and data privacy principles seek to focus on individual control. In this context, the definition by the late Alan Westin, former Professor of Public Law & Government Emeritus, Columbia University, which characterises privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others," [1] is most apt. The idea of privacy as control is what finds articulation in data protection policies across jurisdictions, beginning with the Fair Information Practice Principles (FIPPs) from the United States. [2] Paul Schwartz, the Jefferson E. Peyser Professor at UC Berkeley School of Law and a Director of the Berkeley Center for Law and Technology, called the FIPPs the building blocks of modern information privacy law. [3] These principles trace their history to a report called 'Records, Computers and the Rights of Citizens'[4] prepared by an Advisory Committee appointed by the US Department of Health, Education and Welfare in 1973, in response to the increasing automation of data systems containing information about individuals. The Committee's mandate was to "explore the impact of computers on record keeping about individuals and, in addition, to inquire into, and make recommendations regarding, the use of the Social Security number."[5] The most important legacy of this report was the articulation of five principles which would not only play a significant role in privacy laws in the US but also inform data protection law in most privacy regimes internationally,[6] such as the OECD Privacy Guidelines, the EU Data Protection Principles, the FTC Privacy Principles, the APEC Framework, and the nine National Privacy Principles articulated by the Justice A P Shah Committee Report, which are reflected in the Privacy Bill, 2014 in India. Fred Cate, the C. Ben Dutton Professor of Law at the Indiana University Maurer School of Law, effectively summarises the import of all of these privacy regimes as follows:

      "All of these data protection instruments reflect the same approach: tell individuals what data you wish to collect or use, give them a choice, grant them access, secure those data with appropriate technologies and procedures, and be subject to third-party enforcement if you fail to comply with these requirements or individuals' expressed preferences."[7]

      This empowers the individual and allows them to weigh their own interests in exercising their consent. The allure of this paradigm is that in one elegant stroke, it seeks to "ensure that consent is informed and free and thereby also to implement an acceptable tradeoff between privacy and competing concerns."[8] This system was originally intended to be only one of multiple ways in which data processing would be governed, alongside other substantive principles such as data quality; however, it soon became the dominant and often the only mechanism.[9] In recent years, however, the emergence of Big Data and the nascent development of the Internet of Things have led many commentators to question the workability of consent as a principle of privacy. [10] In this article we will look closely at some of the issues with the concept of informed consent, and how these problems have become more acute in recent years. Following an analysis of these issues, we will conclude by arguing that consent as the cornerstone of privacy law may in fact be counter-productive today, and that a rethinking of the principle-based approach to privacy may be necessary.

      Problems with Consent

      To a certain extent, some cognitive problems have always existed with informed consent, such as long and difficult-to-understand privacy notices,[11] although in the recent past these problems have become much more aggravated. Fred Cate points out that the FIPPs at their inception were broad principles which included both substantive and procedural aspects. However, as they were translated into national laws, the emphasis remained on the procedural aspect of notice and consent. From the idea of individual or societal welfare as the goal of privacy, the focus shifted to individual control.[12] With data collection occurring with every use of online services, and complex data sets being created, it is humanly impossible to exercise rational decision-making about the choice to allow someone to use our personal data. The thrust of Big Data technologies is that the value of data resides not in its primary purposes but in its numerous secondary purposes, where data is re-used many times over. [13] In that sense, the very idea of Big Data conflicts with the data minimization principle:[14] the idea is to retain as much data as possible for secondary uses. Since these secondary uses are, by their nature, unanticipated, this runs counter to the very idea of the purpose limitation principle. [15] The notice and consent requirement has simply led to a proliferation of long and complex privacy notices which are seldom read and even more rarely understood. We will articulate some issues with privacy notices which have always existed, and have only been exacerbated in the context of Big Data and the Internet of Things.

      1. Failure to read/access privacy notices

      The notice and consent principle relies on the ability of the individual to make an informed choice after reading the privacy notice. The purpose of a privacy notice is to act as a public announcement of the internal practices on collection, processing, retention and sharing of information and make the user aware of the same.[16] However, in order to do so the individual must first be able to access the privacy notices in an intelligible format and read them. Privacy notices come in various forms, ranging from documents posted as privacy policies on a website, to click through notices in a mobile app, to signs posted in public spaces informing about the presence of CCTV cameras. [17]

      In order for the principle of notice and consent to work, privacy notices need to be made available in a language understood by the user. As per estimates, about 840 million people (11% of the world population) can speak or understand English; however, most privacy notices online are not available in the local languages of different regions.[18] Further, with the ubiquity of smartphones and the advent of the Internet of Things, constrained interfaces on mobile screens and wearables make privacy notices extremely difficult to read. It must be remembered that privacy notices often run into several pages, and smaller screens effectively ensure that most users do not read through them. Further, connected wearable devices often have "little or no interfaces that readily permit choices." [19] As more and more devices are connected, this problem will only get more pronounced. Imagine a world where refrigerators act as intermediaries disclosing information to your doctor or supermarket: at what point does the data subject step in and exercise consent?[20]

      Another aspect that needs to be understood is that unlike earlier, when data collectors were few and far between and the user could theoretically make a rational choice taking into account the purpose of data collection, in the world of Big Data consent often needs to be provided while the user is trying to access services. In that context, click-through privacy notices, such as those required to access online applications, are treated simply as an impediment that must be crossed in order to get access to services. The fact that consent needs to be given in real time almost always results in disregarding what the privacy notices say.[21]

      Finally, some scholars have argued that while individual control over data may be appealing in theory, it merely gives an illusion of enhanced privacy, not the reality of meaningful choice.[22] Research demonstrates that the presence of the term 'privacy policy' leads people to the false assumption that if a company has a privacy policy in place, it automatically means the presence of substantive and responsible limits on how data is handled.[23] Joseph Turow, the Robert Lewis Shayon Professor of Communication at the Annenberg School for Communication, and his team, for example, have demonstrated how "[w]hen consumers see the term 'privacy policy,' they believe that their personal information will be protected in specific ways; in particular, they assume that a website that advertises a privacy policy will not share their personal information."[24] In reality, however, privacy policies are more likely to serve as liability disclaimers for companies than as any kind of guarantee of privacy for consumers. Most people tend to ignore privacy policies.[25] Cass Sunstein states that our cognitive capacity to make choices and take decisions is limited: when faced with an overwhelming number of choices to make, most of us do not read privacy notices and resort to default options.[26] The requirement to make choices, sometimes several times in a day, imposes a significant burden on consumers as well as the businesses seeking such consent. [27]

      2. Failure to understand privacy notices

      FTC Chairperson Edith Ramirez stated: "In my mind, the question is not whether consumers should be given a say over unexpected uses of their data; rather, the question is how to provide simplified notice and choice."[28] Privacy notices often come in the form of long legal documents, much to the detriment of readers' ability to understand them. These policies are "long, complicated, full of jargon and change frequently."[29] Kent Walker lists five problems that privacy notices typically suffer from: a) overkill - long and repetitive text in small print; b) irrelevance - describing situations of little concern to most consumers; c) opacity - broad terms that reflect the truth that it is impossible to track and control all the information collected and stored; d) non-comparability - the simplification required to achieve comparability compromises accuracy; and e) inflexibility - failure to keep pace with new business models.[30] Erik Sherman reviewed twenty-three corporate privacy notices and mapped them against three indices which give the approximate level of education necessary to understand a text on a first read. His results show that most of the policies can only be understood on a first read by people at a grade level of 15 or above. [31] Then FTC Chairman Timothy Muris summed up the problem with long privacy notices when he said, "Acres of trees died to produce a blizzard of barely comprehensible privacy notices." [32]
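      Grade-level findings of the kind Sherman reports can be illustrated with a short sketch. The following Python snippet is an illustration only, not Sherman's actual methodology: it computes the Flesch-Kincaid grade level, one commonly used readability index, with a crude vowel-group syllable counter; the two sample texts are invented.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: number of vowel groups, minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Invented sample texts: plain language vs. privacy-policy legalese.
simple = "We collect your name. We share it with ad partners."
legalese = ("Notwithstanding the aforementioned provisions, personally "
            "identifiable information may be disseminated to affiliated "
            "third-party entities for legitimate operational purposes.")

print(round(flesch_kincaid_grade(simple), 1))
print(round(flesch_kincaid_grade(legalese), 1))
```

      On texts like these, the legalese variant scores far above the plain one, which is the gap Sherman's grade-15 finding points to: notices drafted in this register demand post-secondary education to parse on a first read.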

      Margaret Jane Radin, the former Henry King Ransom Professor of Law Emerita at the University of Michigan, provides a good definition of free consent: it "involves a knowing understanding of what one is doing in a context in which it is actually possible for one to do otherwise, and an affirmative action in doing something, rather than a merely passive acquiescence in accepting something."[33] There have been various proposals advocating a more succinct and simpler standard for privacy notices,[34] multi-layered notices,[35] or representing the information in the form of a table. [36] However, studies show only an insignificant improvement in consumer understanding when privacy policies are represented in graphic formats like tables and labels. [37] It has also been pointed out that it is impossible to convey complex data policies in simple and clear language.[38]

      3. Failure to anticipate/comprehend the consequences of consent

      Today's infinitely complex and labyrinthine data ecosystem is beyond the comprehension of most ordinary users. Despite a growing willingness to share information online, most have no understanding of what happens to their data once they have uploaded it: where it goes, whom it is held by, under what conditions, for what purpose, or how it might be used, aggregated, hacked, or leaked in the future. For the most part, these operations are "invisible, managed at distant centers, from behind the scenes, by unmanned powers."[39]

      The perceived opportunities and benefits of Big Data have led to an acceptance of the indiscriminate collection of as much data as possible, as well as the retention of that data for unspecified future analysis. For many advocates, such practices are absolutely essential if Big Data is to deliver on its promises. Experts have argued that key privacy principles, particularly those of collection limitation, data minimization and purpose limitation, should not be applied to Big Data processing.[40] As mentioned above, in the case of Big Data, the value of the data collected often comes not from its primary purpose but from its secondary uses. Deriving value from datasets involves amalgamating diverse datasets and executing speculative and exploratory kinds of analysis in order to discover hidden insights and correlations that might previously have gone unnoticed.[41] As such, organizations today routinely reprocess data collected from individuals for purposes not directly related to the services they provide to the customer. These secondary uses of data are becoming increasingly valuable sources of revenue for companies as the value of data in and of itself continues to rise. [42]

      Purpose Limitation

      The principle of purpose limitation has served as a key component of data protection for decades. The purposes for which users' data will be processed should be stated at the time of collection and consent, and should be "specified, explicit and legitimate". In practice, however, the reasons given typically include phrases such as 'for marketing purposes' or 'to improve the user experience' that are vague and open to interpretation. [43]

      Some commentators, whilst conceding that purpose limitation in the era of Big Data may not be possible, have instead attempted to emphasise the notion of 'compatible use' requirements. In the view of the Working Party on the protection of individuals with regard to the processing of personal data, for example, use of data for a purpose other than that originally stated at the point of collection should be subject to a case-by-case review of whether or not further processing for a different purpose is justifiable - i.e., compatible with the original purpose. Such a review may take into account, for example, the context in which the data was originally collected, the nature or sensitivity of the data involved, and the existence of relevant safeguards to ensure fair processing of the data and prevent undue harm to the data subject.[44]

      On the other hand, Big Data advocates have argued that an assessment of legitimate interest, rather than compatibility with the initial purpose, is far better suited to Big Data processing.[45] They argue that the notion of purpose limitation has become outdated. Previously, data was collected largely as a by-product of the service for which it was provided: if, for example, we opted to use a service, the information we provided was for the most part necessary to enable the provision of that service. Today, however, the utility of data is no longer restricted to the primary purpose for which it is collected, but can be used to provide all kinds of secondary services and resources, reduce waste, increase efficiency and improve decision-making.[46] These kinds of positive externalities, Big Data advocates insist, are only made possible by the reprocessing of data.

      Unfortunately for the notion of consent, the nature of these secondary purposes is rarely evident at the time of collection. Instead, the true value of the data can often only be revealed when it is amalgamated with other diverse datasets and subjected to various forms of analysis to help reveal hidden and non-obvious correlations and insights.[47] The uncertain and speculative value of data therefore means that it is impossible to provide "specific, explicit, and legitimate" details about how a given data set will be used or how it might be aggregated in future. Without this crucial information, data subjects have no basis upon which to make an informed decision about whether or not to provide consent. Robert Sloan and Richard Warner argue that it is impossible for a privacy notice to contain enough information to enable free consent: current data collection practices are highly complex, involving the collection of information at one stage for one purpose, which is then retained, analyzed, and distributed for a variety of other purposes in unpredictable ways. [48] Helen Nissenbaum points to the ever-changing nature of data flows and the cognitive challenges they pose: "Even if, for a given moment, a snapshot of the information flows could be grasped, the realm is in constant flux, with new firms entering the picture, new analytics, and new back end contracts forged: in other words, we are dealing with a recursive capacity that is indefinitely extensible." [49]

      Scale and Aggregation

      Today the quantity of data being generated is expanding at an exponential rate. From smartphones and televisions, trains and airplanes, sensor-equipped buildings and even the infrastructures of our cities, data now streams constantly from almost every sector and function of daily life, 'creating countless new digital puddles, lakes, tributaries and oceans of information'.[50] In 2011 it was estimated that the quantity of data produced globally would surpass 1.8 zettabytes; by 2013 that had grown to 4 zettabytes, and with the nascent development of the Internet of Things gathering pace, these trends are set to continue. [51] Big Data by its very nature requires the collection and processing of very large and very diverse data sets. Unlike other forms of scientific research and analysis, which utilize various sampling techniques to identify and target the types of data most useful to the research questions, Big Data instead seeks to gather as much data as possible in order to achieve full resolution of the phenomenon being studied - a task made much easier in recent years as a result of the proliferation of internet-enabled devices and the growth of the Internet of Things. This goal of attaining comprehensive coverage exists in tension, however, with the key privacy principles of collection limitation and data minimization, which seek to limit both the quantity and variety of data collected about an individual to the absolute minimum. [52]

      The dilution of the purpose limitation principle means that even those who understand privacy notices and are capable of making rational choices about them cannot conceptualize how their data will be aggregated and possibly used or re-used. Seemingly innocuous bits of data revealed at different stages can be combined to reveal sensitive information about the individual. Daniel Solove, the John Marshall Harlan Research Professor of Law at the George Washington University Law School, in his book "The Digital Person", calls this the aggregation effect. He argues that the ingenuity of data mining techniques, and the insights and predictions that can be made with them, render ineffectual any cost-benefit analysis that an individual could make. [53]
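      The aggregation effect can be sketched in a few lines of code. The example below is a hypothetical illustration in the style of Latanya Sweeney's well-known re-identification work: all names, records and field names are invented. Two datasets that are individually innocuous are joined on shared quasi-identifiers (pin code, birth year, sex) to recover a sensitive attribute.

```python
# "Anonymised" health records: names removed, quasi-identifiers retained.
health_records = [
    {"pin": "560001", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
    {"pin": "560034", "birth_year": 1991, "sex": "M", "diagnosis": "asthma"},
]

# Public voter roll: names attached to the same quasi-identifiers.
voter_roll = [
    {"name": "A. Kumar", "pin": "560034", "birth_year": 1991, "sex": "M"},
    {"name": "S. Rao", "pin": "560001", "birth_year": 1984, "sex": "F"},
]

def reidentify(health, voters):
    """Join the two datasets on their shared quasi-identifiers."""
    index = {(v["pin"], v["birth_year"], v["sex"]): v["name"] for v in voters}
    linked = {}
    for record in health:
        key = (record["pin"], record["birth_year"], record["sex"])
        if key in index:
            linked[index[key]] = record["diagnosis"]
    return linked

print(reidentify(health_records, voter_roll))
# → {'S. Rao': 'diabetes', 'A. Kumar': 'asthma'}
```

      Neither dataset alone ties a name to a diagnosis; combined, both do. This is precisely why an individual consenting to each collection separately cannot anticipate the joint disclosure, and why a per-collection cost-benefit analysis is ineffectual.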

      4. Failure to opt-out

      The traditional choice against the collection of personal data that users have had access to, at least in theory, is the option to 'opt out' of certain services. This draws from the free-market theory that individuals exercise their free will when they use services and always have the option of opting out - an argument against regulation that relies on the collective wisdom of the market to weed out harms. The notion that the provision of data should be a matter of personal choice on the part of the individual, and that the individual can, if they so choose, decide to 'opt out' of data collection, for example by ceasing use of a particular service, is an important component of privacy and data protection frameworks. The proliferation of internet-enabled devices, their integration into the built environment, and the real-time nature of data collection and analysis, however, are beginning to undermine this concept. For many critics of Big Data, the ubiquity of data collection points, as well as the compulsory provision of data as a prerequisite for the access and use of many key online services, is making opting out of data collection not only impractical but in some cases impossible. [54]

      Whilst sceptics may object that individuals are still free to stop using services that require data, as online connectivity becomes increasingly important to participation in modern life, the choice to withdraw completely is becoming less of a genuine choice. [55] Information flows not only from the individuals it is about but also from what other people say about them. Financial transactions made online or via debit/credit cards can be analysed to derive further information about the individual. If opting out makes you look anti-social, criminal, or unethical, the claim that we are exercising free will seems murky, and leads one to wonder whether we are dealing with coercive technologies.

      Another issue with the consent and opt-out paradigm is the binary nature of the choice, which makes a mockery of the notion that consent can function as an effective tool of personal data management. What it effectively means is that one can either agree to the long privacy notices, or abandon the desired service. "This binary choice is not what the privacy architects envisioned four decades ago when they imagined empowered individuals making informed decisions about the processing of their personal data. In practice, it certainly is not the optimal mechanism to ensure that either information privacy or the free flow of information is being protected." [56]

      Conclusion: 'Notice and Consent' is counter-productive

      There continues to be an unwillingness amongst many privacy advocates to concede that the concept of consent is fundamentally broken; as Simon Davies, a privacy advocate based in London, comments, 'to do so could be seen as giving ground to the data vultures', and risks further weakening an already dangerously fragile privacy framework.[57] Nevertheless, as we begin to transition into an era of ubiquitous data collection, the evidence is becoming stronger that consent is not simply ineffective, but may in some instances be counter-productive to the goals of privacy and data protection.

      As already noted, the notion that privacy agreements produce anything like truly informed consent has long since been discredited; given this fact, one may ask for whose benefit such agreements are created. One may justifiably argue that far from being for the benefit and protection of users, privacy agreements may in fact be fundamentally to the benefit of data brokers, who, having gained the consent of users, can act with near impunity in their use of the data collected. Thus, an overly narrow focus on the necessity of consent at the point of collection risks diverting our attention from the arguably more important issue of how our data is stored, analysed and distributed by data brokers following its collection. [58]

      Furthermore, given the often complicated and cumbersome processes involved in gathering consent from users, some have raised concerns that the mechanisms put in place to garner consent could themselves morph into surveillance mechanisms. Davies, for example, cites the case of the EU Cookie Directive, which required websites to gain consent for the collection of cookies. Davies observes how 'a proper audit and compliance element in the system could require the processing of even more data than the original unregulated web traffic. Even if it was possible for consumers to use some kind of gateway intermediary to manage the consent requests, the resulting data collection would be overwhelming'. Thus, in many instances there exists a fundamental tension between the requirement placed on companies to gather consent and the equally important principle of data minimization. [59]

      Given the above issues with notice and informed consent in the context of information privacy, and the fact that it is counter-productive to the larger goals of privacy law, it is important to revisit the principle- or rights-based approach to data protection, and to consider a paradigm shift towards a risk-based approach that takes into account the actual threats of sharing data, rather than relying on what has proved to be an ineffectual system of individual control. We will deal with some of these issues in a follow-up to this article.


      [1] Alan Westin, Privacy and Freedom, Atheneum, New York, 2015.

      [2] FTC Fair Information Practice Principles (FIPP) available at https://www.it.cornell.edu/policies/infoprivacy/principles.cfm.

      [3] Paul M. Schwartz, "Privacy and Democracy in Cyberspace," 52 Vanderbilt Law Review 1607, 1614 (1999).

      [4] US Secretary's Advisory Committee on Automated Personal Data Systems, Records, Computers and the Rights of Citizens, available at http://www.justice.gov/opcl/docs/rec-com-rights.pdf

      [6] Marc Rotenberg, "Fair Information Practices and the Architecture of Privacy: What Larry Doesn't Get," available at https://journals.law.stanford.edu/sites/default/files/stanford-technology-law-review/online/rotenberg-fair-info-practices.pdf

      [7] Fred Cate, The Failure of Information Practice Principles, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1156972

      [8] Robert Sloan and Richard Warner, Beyond Notice and Choice: Privacy, Norms and Consent, 2014, available at https://www.suffolk.edu/documents/jhtl_publications/SloanWarner.pdf

      [9] Fred Cate and Viktor Mayer-Schoenberger, Notice and Consent in a World of Big Data, available at http://idpl.oxfordjournals.org/content/3/2/67.abstract

      [10] Daniel Solove, Privacy Self-Management and the Consent Dilemma, 2013, available at http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [11] Ben Campbell, Informed consent in developing countries: Myth or Reality, available at https://www.dartmouth.edu/~ethics/docs/Campbell_informedconsent.pdf ;

      [12] Supra Note 7.

      [13] Viktor Mayer Schoenberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work and Think, John Murray, London, 2013, at 153.

      [14] The Data Minimization principle requires organizations to limit the collection of personal data to the minimum extent necessary to obtain their legitimate purpose and to delete data no longer required.

      [15] Omer Tene and Jules Polonetsky, "Big Data for All: Privacy and User Control in the Age of Analytics," SSRN Scholarly Paper, available at http://papers.ssrn.com/abstract=2149364

      [16] Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

      [17] Daniel Solove, The Digital Person: Technology and Privacy in the Information Age, NYU Press, 2006.

      [19] Opening Remarks of FTC Chairperson Edith Ramirez Privacy and the IoT: Navigating Policy Issues International Consumer Electronics Show Las Vegas, Nevada January 6, 2015 available at https://www.ftc.gov/system/files/documents/public_statements/617191/150106cesspeech.pdf

      [21] Supra Note 10.

      [22] Supra Note 7.

      [23] Chris Jay Hoofnagle & Jennifer King, Research Report: What Californians Understand About Privacy Online, available at http://ssrn.com/abstract=1262130

      [24] Joseph Turow, Michael Hennessy, Nora Draper, The Tradeoff Fallacy, available at https://www.asc.upenn.edu/sites/default/files/TradeoffFallacy_1.pdf

      [25] Saul Hansell, "Compressed Data: The Big Yahoo Privacy Storm That Wasn't," New York Times, May 13, 2002 available at http://www.nytimes.com/2002/05/13/business/compressed-data-the-big-yahoo-privacy-storm-that-wasn-t.html?_r=0

      [26] Cass Sunstein, Choosing not to choose: Understanding the Value of Choice, Oxford University Press, 2015.

      [28] Opening Remarks of FTC Chairperson Edith Ramirez Privacy and the IoT: Navigating Policy Issues International Consumer Electronics Show Las Vegas, Nevada January 6, 2015 available at https://www.ftc.gov/system/files/documents/public_statements/617191/150106cesspeech.pdf

      [29] L. F. Cranor. Necessary but not sufficient: Standardized mechanisms for privacy notice and choice. Journal on Telecommunications and High Technology Law, 10:273, 2012, available at http://jthtl.org/content/articles/V10I2/JTHTLv10i2_Cranor.PDF

      [30] Kent Walker, The Costs of Privacy, 2001 available at https://www.questia.com/library/journal/1G1-84436409/the-costs-of-privacy

      [31] Erik Sherman, "Privacy Policies are great - for Phds", CBS News, available at http://www.cbsnews.com/news/privacy-policies-are-great-for-phds/

      [32] Timothy J. Muris, Protecting Consumers' Privacy: 2002 and Beyond, available at http://www.ftc.gov/speeches/muris/privisp1002.htm

      [33] Margaret Jane Radin, Humans, Computers, and Binding Commitment, 1999 available at http://www.repository.law.indiana.edu/ilj/vol75/iss4/1/

      [34] Annie I. Anton et al., Financial Privacy Policies and the Need for Standardization, 2004 available at https://ssl.lu.usi.ch/entityws/Allegati/pdf_pub1430.pdf; Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

      [35] The Center for Information Policy Leadership, Hunton & Williams LLP, "Ten Steps To Develop A Multi-Layered Privacy Notice" available at https://www.informationpolicycentre.com/files/Uploads/Documents/Centre/Ten_Steps_whitepaper.pdf

      [36] Allen Levy and Manoj Hastak, Consumer Comprehension of Financial Privacy Notices, Interagency Notice Project, available at https://www.sec.gov/comments/s7-09-07/s70907-21-levy.pdf

      [37] Patrick Gage Kelly et al., Standardizing Privacy Notices: An Online Study of the Nutrition Label Approach available at https://www.ftc.gov/sites/default/files/documents/public_comments/privacy-roundtables-comment-project-no.p095416-544506-00037/544506-00037.pdf

      [39] Jonathan Obar, Big Data and the Phantom Public: Walter Lippmann and the fallacy of data privacy self management, Big Data and Society, 2015, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2239188

      [40] Viktor Mayer Schoenberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work and Think, John Murray, London, 2013.

      [41] Supra Note 15.

      [42] Supra Note 40.

      [43] Article 29 Working Party, (2013) Opinion 03/2013 on Purpose Limitation, Article 29, available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf

      [44] Ibid.

      [45] It remains unclear however whose interest would be accounted, existing EU legislation would allow commercial/data broker/third party interests to trump those of the user, effectively allowing re-processing of personal data irrespective of whether that processing would be in the interest of the user.

      [46] Supra Note 40.

      [47] Supra Note 10.

      [48] Robert Sloan and Richard Warner, Beyong Notice and Choice: Privacy, Norms and Consent, 2014, available at https://www.suffolk.edu/documents/jhtl_publications/SloanWarner.pdf

      [49] Helen Nissenbaum, A Contextual Approach to Privacy Online, available at http://www.amacad.org/publications/daedalus/11_fall_nissenbaum.pdf

      [50] D Bollier, The Promise and Peril of Big Data. The Aspen Institute, 2010, available at: http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_Promise_and_Peril_of_Big_Data.pdf

      [51] Meeker, M. & Yu, L. Internet Trends, Kleiner Perkins Caulfield Byers, (2013), http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013 .

      [52] Supra Note 40.

      [53] Supra Note 17.

      [54] Janet Vertasi, My Experiment Opting Out of Big Data Made Me Look Like a Criminal, 2014, available at http://time.com/83200/privacy-internet-big-data-opt-out/

      [55] Ibid.

      [57] Simon Davies, Why the idea of consent for data processing is becoming meaningless and dangerous, available at http://www.privacysurgeon.org/blog/incision/why-the-idea-of-consent-for-data-processing-is-becoming-meaningless-and-dangerous/

      [58] Supra Note 10.

      [59] Simon Davies, Why the idea of consent for data processing is becoming meaningless and dangerous, available at http://www.privacysurgeon.org/blog/incision/why-the-idea-of-consent-for-data-processing-is-becoming-meaningless-and-dangerous/

      UID Ad

      by Prasad Krishna last modified Jan 13, 2016 02:28 AM

Times of India 29_08_2015.pdf — PDF document, 1155 kB

      Reply to RTI Application under RTI Act of 2005 from Vanya Rakesh

      by Vanya Rakesh last modified Jan 13, 2016 02:40 AM
      Unique Identification Authority of India replied to the RTI application filed by Vanya Rakesh.

      Madam,

      1. Please refer to your RTI application dated 3.12.2015 received in the Division on 10.12.2015 on the subject mentioned above requesting to provide the information in electronic form via the email address [email protected], copies of the artwork in print media released by UIDAI to create awareness about use of Aadhaar not being mandatory.
      2. I am directed to furnish herewith in electronic form, copy of the artwork in print media released / published in the epapers edition of the Times of India and Dainik Jagran in their respective editions of dated 29.8.2015 in a soft copy, about obtaining of Aadhaar not being mandatory for a citizen, as desired.
      3. In case, you want to go for an appeal in connection with the information provided, you may appeal to the Appellate Authority indicated below within thirty days from the date of receipt of this letter.
        Shri Harish Lal Verma,
        Deputy Director (Media),
        Unique Identification Authority of India
  3rd Floor, Tower – II, Jeevan Bharati Building,
        New Delhi – 110001.


      Yours faithfully,

      (T Gou Khangin)
      Section Officer & CPIO Media Division

      Copy for information to: Deputy Director (Establishment) & Nodal CPIO


      Below scanned copies:

RTI Reply
Coverage in Dainik Jagran

      Download the coverage in the Times of India here. Read the earlier blog entry here.

      Background Note Big Data

      by Prasad Krishna last modified Jan 17, 2016 01:55 AM

Background Note-BigDataandandGovernanceinIndia.pdf — PDF document, 131 kB

      Network Neutrality across South Asia

      by Prasad Krishna last modified Jan 17, 2016 02:37 AM

Network Neutrality Agenda Information_1.14..2016.pdf — PDF document, 411 kB

      NASSCOM-DSCI Annual Information Security Summit 2015 - Notes

      by Sumandro Chattapadhyay last modified Jan 19, 2016 07:58 AM
      NASSCOM-DSCI organised the 10th Annual Information Security Summit (AISS) 2015 in Delhi during December 16-17. Sumandro Chattapadhyay participated in this engaging Summit. He shares a collection of his notes and various tweets from the event.

Annual Information Security Summit (AISS) 2015

Details about the Summit

Event page: https://www.dsci.in/events/about/2261

Agenda: https://www.dsci.in/sites/default/files/Agenda-AISS-2015.pdf

Notes from the Summit

Mr. G. K. Pillai, Chairman of the Data Security Council of India (DSCI), set the tone of the Summit in the very first hour by noting that 1) state and private industries in India are working in silos when it comes to preventing cybercrime, 2) there is a lot of skill among young technologists and entrepreneurs, and the state and the private sectors are often unaware of this, and 3) there is a serious lack of (cyber-)capacity among law enforcement agencies.

      In his Inaugural Address, Dr. Arvind Gupta (Deputy National Security Advisor and Secretary, NSCS), provided a detailed overview of the emerging challenges and framework of cybersecurity in India. He focused on the following points:

      • Security is a key problem in the present era of ICTs as it is not in-built. In the upcoming IoT era, security must be built into ICT systems.
• Of the next billion additions to the internet population, 50% will be from India. Hence cybersecurity is a big concern for India.
      • ICTs will play a catalytic role in achieving SDGs. Growth of internet is part of the sustainable development agenda.
      • We need a broad range of critical security services - big data analytics, identity management, etc.
      • The e-governance initiatives launched by the Indian government are critically dependent on a safe and secure internet.
• The dark web is a key facilitator of cybercrime. Globally, there is growing concern regarding the security of cyberspace.
• On the other hand, there exists a deep divide in access to ICTs, and also in the availability of content in local languages.
• The Indian government has initiated bilateral cybersecurity dialogues with various countries.
• The Indian government is contemplating setting up centres of excellence in cryptography. It has already partnered with NASSCOM to develop cybersecurity guidelines for smart cities.
      • While India is a large global market for security technology, it also needs to be self-reliant. Indian private sector should make use of government policies and bilateral trust enjoyed by India with various developing countries in Africa and south America to develop security technology solutions, create meaningful jobs in India, and export services and software to other developing countries.
      • Strong research and development, and manufacturing base are absolutely necessary for India to be self-reliant in cybersecurity. DSCI should work with private sector, academia, and government to coordinate and realise this agenda.
• Along the lines of the Climate Change Fund, we should create a cybersecurity fund, since cybersecurity is a global problem.
      • Silos are our bane in general. Bringing government agencies together is crucial. Trust issues (between government, private sector, and users) remain, and can only be resolved over time.
      • The demand for cybersecurity solutions in India is so large, that there is space for everyone.
      • The national cybersecurity centre is being set up.
      • Thinktanks can play a crucial role in helping the government to develop strategies for global cybersecurity negotiations. Indian negotiators are often capacity constrained.

      Rajendra Pawar, Chair of the NASSCOM Cyber Security Task Force, NASSCOM Cybersecurity Initiative, provided glimpses of the emerging business opportunity around cybersecurity in India:

• In the next 10 years, the IT economy in India will be USD 350 bn, and 10% of that will be the cybersecurity pie. This means a million jobs in the cybersecurity space alone.
      • Academic institutes are key to creation of new ideas and hence entrepreneurs. Government and private sectors should work closely with academic institutes.
      • Globally, cybersecurity innovation and industries happen in clusters. Cities and states must come forward to create such clusters.
• Two-thirds of the cybersecurity market is the provision of services. This is where India has a great advantage, and it should build on that to become a global brand in cybersecurity services.
      • Everyday digital security literacy and cultures need to be created.
      • Publication of cybersecurity best practices among private companies is a necessity.
      • Dedicated cybersecurity spending should be made part of the e-governance budget of central and state governments.
• DSCI should function as a clearing house of cybersecurity case studies. At present, thought leadership in cybersecurity comes from the criminals. By serving as a use-case clearing house, DSCI will inform interested researchers about potential challenges for which solutions need to be created.

Manish Tiwary of Microsoft informed the audience that India ranks among the top 3 countries globally in terms of malware proliferation, which makes India a big focus for Microsoft in its global war against malware. Microsoft India looks forward to working closely with CERT-In and other government agencies.

      The session on Catching Fraudsters had two insightful presentations from Dr. Triveni Singh, Additional SP of Special Task Force of UP Police, and Mr. Manoj Kaushik, IAS, Additional Director of FIU.

Dr. Singh noted that a key challenge faced by police today is that nobody comes to them with a case of online fraud. Most fraud businesses are run by young groups operating BPOs that steal details from individuals. There exists a huge black market for financial and personal data, often collected from financial institutions and job search sites; almost any personal data can be bought in such markets. Further, SIM cards under fake names are very easy to buy. The fraudsters operate effectively under entirely fake identities, using operational infrastructure outsourced from legitimate vendors under fake names. Without a central database of all bank customers, it is very difficult for the police to track people across the financial sector. It becomes even more difficult for Indian police to get access to personal data of potential fraudsters when it is stored on a foreign server, which is often the case with common web services and apps. Many Indian ISPs do not keep IP history data systematically, or do not have the technical expertise to share it in a structured and time-sensitive way.

Mr. Kaushik explained that no financial fraud is committed uniquely via the internet. Many frauds begin on the internet but eventually involve physical fraudulent money transactions. Credit/debit card frauds all involve card data theft via various internet-based and physical methods. However, cybercrime continues to be mistakenly seen as fraud undertaken completely online. Further, mobile-based frauds are yet another category. Almost all apps we use are compromised, or store transaction history in an insecure way that exposes such data to hackers. FIU is targeting the bank accounts to which fraud money flows, and closing them down. Catching the people behind these bank accounts is much more difficult, as account loaning has become a common practice, where valid accounts are loaned out for a small amount of money to fraudsters who return the account after taking out the fraudulent money. Better information sharing between the private sector and government will make catching fraudsters easier.

The session on Smart Cities focused on the actual cities coming up in India, and the security challenges they highlight. There was a presentation on Mahindra World City being built near Jaipur. Presenters talked about the need to stabilise, standardise, and secure the unique identities of machines and sensors in a smart city context, so as to enable secure machine-to-machine communication. Since 'smartness' comes from connecting various applications and data silos together, the governance of proprietary technology and ensuring interoperable data standards are crucial in the smart city.

As Special Purpose Vehicles are being planned to realise the smart cities, the presenters warned that finding the right CEOs for these entities will be critical to their success. Legacy processes and infrastructures (and labour unions) are a big challenge when realising smart cities. Hence, the first step towards smart cities must be taken through connected enforcement of law, order, and social norms.

      Privacy-by-design and security-by-design are necessary criteria for smart cities technologies. Along with that regular and automatic software/middleware updating of distributed systems and devices should be ensured, as well as the physical security of the actual devices and cables.

In terms of standards, security service compliance standards and protocol standards need to be established for the internet-of-things sector in India. On the other hand, there is significant interest among international vendors in serving the Indian market. All global data and cloud storage players, including Microsoft Azure, are moving into India and are working on substantial and complete data localisation efforts.

      Mr. R. Chandrasekhar, President of NASSCOM, foregrounded the recommendations made by the Cybersecurity Special Task Force of NASSCOM, in his Special Address on the second day. He noted:

      • There is a great opportunity to brand India as a global security R&D and services hub. Other countries are also quite interested in India becoming such a hub.
      • The government should set up a cybersecurity startup and innovation fund, in coordination with and working in parallel with the centres of excellence in internet-of-things (being led by DeitY) and the data science/analytics initiative (being led by DST).
      • There is an immediate need to create a capable workforce for the cybersecurity industry.
• Cybersecurity affects everyone, but there is almost no public disclosure. This leads to low public awareness and undervaluation of the costs of cybersecurity failures. The government should instruct the Ministry of Corporate Affairs to get corporates to disclose (publicly or directly to the Ministry) security breaches.
• With Digital India and everyone going online, cyberspace will increasingly be prone to attacks of various kinds, with an increasing scale of potential loss. Cybersecurity, hence, must be part of the core national development agenda.
      • The cybersecurity market in India is big enough and under-served enough for everyone to come and contribute to it.

The Keynote Address by Mr. Rajiv Singh, MD – South Asia of Entrust Datacard, and Mr. Saurabh Airi, Technical Sales Consultant of Entrust Datacard, focused on the trustworthiness and security of online identities for financial transactions. They argued that all kinds of transactions require a common form factor, which can be a card or a mobile phone. The key challenge is to make the form factor unique, verified, and secure. While no programme is completely secure, it is necessary to build security into the form factor, of both the physical and digital kind, from the substrates of the card to the encryption algorithms. Entrust and Datacard merged in the recent past to align their identity management and security transaction workflows, from physical cards to software systems for transactions. The advantages of this joint expertise have allowed them to successfully develop the National Population Register cards of India. Now, with the mobile phone emerging as a key financial transaction form factor, the challenge across the cybersecurity industry is to offer the same level of physical, digital, and network security for the mobile phone as is provided for ATM cards and cash machines.

The following Keynote Address by Dr. Jared Ragland, Director - Policy of BSA, focused on the cybersecurity investment landscape in India and the neighbouring region. BSA, he explained, is a global trade body of software companies; all major global software companies are members. Recently, BSA produced a study on the cybersecurity industry across 10 markets in the Asia Pacific region, titled Asia Pacific Cybersecurity Dashboard. The study provides an overview of cybersecurity policy developments in these countries, and sector-specific opportunities in the region. Dr. Ragland mentioned the following as the key building blocks of cybersecurity policy: legal foundations, establishment of operational entities, building trust and partnerships (PPP), addressing sector-specific requirements, and education and awareness. As for India, he argued that while steady steps have been taken in the cybersecurity policy space by the government, a lot remains to be done. Operationalisation of the policy is especially lacking. PPPs are happening, but there is a general lack of persistent formal engagement with the private sector, especially with global software companies. There is almost no sector-specific strategy. Further, the requirement for India-specific testing of technologies, according to domestic rather than global standards, is creating entry barriers for global companies and export barriers for Indian companies. Having said that, Dr. Ragland pointed out that India's cybersecurity experience is quite representative of the Asia Pacific region. He noted the following as major stumbling blocks from an international industry perspective: unnecessary and unreasonable testing requirements, setting of domestic standards, and data localisation rules.

      One of the final sessions of the Summit was the Public Policy Dialogue between Prof. M.V. Rajeev Gowda, Member of Parliament, Rajya Sabha, and Mr. Arvind Gupta, Head of IT Cell, BJP.

      Prof. Gowda focused on the following concerns:

      • We often freely give up our information and rights over to owners of websites and applications on the web. We need to ask questions regarding the ownership, storage, and usage of such data.
• While Section 66A of the Information Technology Act started as an anti-spam rule, it has actually been used to harass people, instead of protecting them from online harassment.
      • The bill on DNA profiling has raised crucial privacy concerns related to this most personal data. The complexity around the issue is created by the possibility of data leakage and usage for various commercial interests.
      • We need to ask if western notions of privacy will work in the Indian context.
      • We need to move towards a cashless economy, which will not only formalise the existing informal economy but also speed up transactions nationally. We need to keep in mind that this will put a substantial demand burden on the communication infrastructure, as all transactions will happen through these.

      Mr. Gupta shared his keen insights about the key public policy issues in digital India:

• The journey to establish the digital as a key political agenda and strategy within the BJP took him more than 6 years. He has been an entrepreneur, and will always remain one; he approached his political journey as an entrepreneur.
      • While we are producing numerous digitally literate citizens, the companies offering services on the internet often unknowingly acquire data about these citizens, store them, and sometimes even expose them. India perhaps produces the greatest volume of digital exhaust globally.
      • BJP inherited the Aadhaar national identity management platform from UPA, and has decided to integrate it deeply into its digital India architecture.
• Financial and administrative transactions, especially ones undertaken by and with governments, are all becoming digital and mostly Aadhaar-linked. We are not sure where all such data is going, and who has access to it.
      • Right now there is an ongoing debate about using biometric system for identification. The debate on privacy is much needed, and a privacy policy is essential to strengthen Aadhaar. We must remember that the benefits of Aadhaar clearly outweigh the risks. Greatest privacy threats today come from many other places, including simple mobile torch apps.
• India is rethinking its cybersecurity capacities in a serious manner. After the Paris attacks, it has become obvious that the state should be allowed to look into electronic communication under reasonable guidelines. The challenge is finding the fine balance between consumers' interests on one hand, and national interest and security concerns on the other. Unfortunately, the concerns of a few are often amplified in popular media.
• MyGov platform should be used much more effectively for public policy debates. Social media networks, like Twitter, are not the correct platforms for such debates.

Transparency in Surveillance

      by Vipul Kharbanda last modified Jan 23, 2016 03:11 PM
      Transparency is an essential need for any democracy to function effectively. It may not be the only requirement for the effective functioning of a democracy, but it is one of the most important principles which need to be adhered to in a democratic state.

      Introduction

A democracy involves the state machinery being accountable to the citizens it is supposed to serve, and for citizens to be able to hold their state machinery accountable, they need accurate and adequate information regarding the activities of those who seek to govern them. However, in modern democracies it is often seen that those in governance try to circumvent legal requirements of transparency and pay only lip service to this principle, while keeping their own functioning as opaque as possible.

This tendency to withhold information is especially evident in the departments of the government concerned with surveillance. There is merit in the argument that the government's clandestine surveillance activities cannot all be transparent, since otherwise they would cease to be "clandestine" and hence be rendered ineffective. However, this argument is often misused as a shield by government agencies to block the disclosure of all types of information about their activities, some of which may be essential to determine whether the current surveillance regime is working in an effective, ethical, and legal manner. It is this exploitation of the argument, often couched in the language of or coupled with concerns of national security, that this paper seeks to address while voicing the need for greater transparency in surveillance activities and structures.

      In the first section the paper examines the need for transparency, and specifically deals with the requirement for transparency in surveillance. In the next part, the paper discusses the regulations governing telecom surveillance in India. The final part of the paper discusses possible steps that may be taken by the government in order to increase transparency in telecom surveillance while keeping in mind that the disclosure of such information should not make future surveillance ineffective.

      Need for Transparency

In today's age where technology is all-pervasive, the term "surveillance" has developed slightly sinister overtones, especially in the backdrop of the Edward Snowden revelations. Indeed, there have been several independent scandals involving mass surveillance of people in general as well as illegal surveillance of specific individuals. The fear that the term surveillance now invokes, especially amongst those social and political activists who seek to challenge the status quo, is in part due to the secrecy surrounding the entire surveillance regime. Leaving aside what surveillance is carried out, upon whom, and when, state actors are seldom willing to talk openly about how surveillance is carried out, how decisions on whom and how to target are reached, how agency budgets are allocated and spent, how effective surveillance actions were, and so on. While there may be justified security-based arguments against disclosing the full extent of the state's surveillance activities, this cloak of secrecy may also be used illegally and in an unauthorized manner to achieve ends more harmful to citizens' rights than the maintenance of security and order in society.

      Surveillance and interception/collection of communications data can take place under different legal processes in different countries, ranging from court-ordered requests of specified data from telecommunications companies to broad executive requests sent under regimes or regulatory frameworks requiring the disclosure of information by telecom companies on a pro-active basis. However, it is an open secret that data collection often takes place without due process or under non-legal circumstances.

It is widely believed that transparency is a critical step towards creating mechanisms for increased accountability in how law enforcement and government agencies access communications data. It is the first step towards an informed public debate regarding how the state undertakes surveillance, monitoring and interception of communications and data. Since 2010, a large number of ICT companies have begun to publish transparency reports on the extent to which governments request their user data as well as require the removal of content. However, governments themselves have not been very forthcoming in providing the detailed information on surveillance programmes that is necessary for an informed debate on this issue.[1] Some countries currently report limited information on their surveillance activities: e.g., the U.S. Department of Justice publishes an annual Wiretap Report (U.S. Courts, 2013a), and the United Kingdom publishes the Interception of Communications Commissioner's Annual Report (May, 2013). These do not present a complete picture, and even such limited measures are unheard of in a country such as India.

It is obvious that governments can provide a greater level of transparency regarding the limits placed on the freedom of expression and privacy than transparency reports by individual companies can. Company transparency reports can only illuminate the extent to which any one company receives requests and how that company responds to them. By contrast, government transparency reports can provide a much broader perspective on laws that can potentially restrict the freedom of expression or impact privacy, by illustrating the full extent to which requests are made across the ICT industry.[2]

In India, the courts and the laws have traditionally recognized the need for transparency, deriving it from the fundamental right to freedom of speech and expression guaranteed in our Constitution. This need, coupled with a sustained campaign by various organizations, finally fructified into the passage of the Right to Information Act, 2005 (RTI Act), which amongst other things places an obligation on the state to put its documents and records online so that they may be freely available to the public. In light of this law guaranteeing the right to information, the citizens of India have the fundamental right to know what the Government is doing in their name. The free flow of information and ideas informs political growth, and the freedom of speech and expression is the lifeblood of a healthy democracy; it acts as a safety valve. People are more ready to accept decisions that go against them if they can in principle seek to influence them. The Supreme Court of India is of the view that imparting information about the working of the government on the one hand, and about its decisions affecting domestic and international trade and other activities on the other, is necessary, and has imposed an obligation upon the authorities to disclose information.[3]

      The Supreme Court, in Namit Sharma v. Union of India,[4] while discussing the importance of transparency and the right to information has held:

"The Right to Information was harnessed as a tool for promoting development; strengthening the democratic governance and effective delivery of socio-economic services. Acquisition of information and knowledge and its application have intense and pervasive impact on the process of taking informed decision, resulting in overall productivity gains.

……..

Government procedures and regulations shrouded in the veil of secrecy do not allow the litigants to know how their cases are being handled. They shy away from questioning the officers handling their cases because of the latter's snobbish attitude. Right to information should be guaranteed and needs to be given real substance. In this regard, the Government must assume a major responsibility and mobilize skills to ensure flow of information to citizens. The traditional insistence on secrecy should be discarded."

Although these statements were made in the context of the RTI Act, the principle they illustrate is equally applicable to the field of state-sponsored surveillance. Though Indian intelligence agencies are exempt from the RTI Act, it can be used to gain limited insight into the scope of governmental surveillance. This was demonstrated by the Software Freedom Law Centre, which discovered via RTI requests that approximately 7,500 - 9,000 interception orders are issued on a monthly basis.[5]

While it is true that transparency alone will not eliminate the barriers to freedom of expression or the harm to privacy resulting from overly broad surveillance, it provides a window into the scope of current practices. Additional measures are needed, such as oversight and mechanisms for redress in cases of unlawful surveillance. Transparency offers a necessary first step, a foundation on which to examine current practices and contribute to a debate on human security and freedom.[6]

It is no secret that the current framework of surveillance in India is rife with malpractices of mass surveillance and instances of illegal surveillance. There have been a number of instances of illegal and/or unauthorised surveillance in the past; the most scandalous, and thus best known, is the incident in which a woman IAS officer was placed under surveillance at the behest of Mr. Amit Shah, currently the president of the ruling party in India, purportedly on the instructions of the current prime minister, Mr. Narendra Modi.[7] There are also a number of instances of private individuals indulging in illegal interception and surveillance: in 2005, it was reported that Anurag Singh, a private detective, along with some associates, intercepted the telephonic conversations of former Samajwadi Party leader Amar Singh. They allegedly contacted political leaders and media houses to sell the tapped telephonic conversation records. The interception was allegedly carried out by stealing genuine government letters and forging and fabricating them to obtain permission to tap Amar Singh's telephonic conversations.[8] The same individual was also implicated in tapping the telephone of the current finance minister, Mr. Arun Jaitley.[9]

      It is therefore obvious that the status quo of the surveillance mechanism in India needs to change. This change, however, has to be brought about in a manner that makes state surveillance more accountable without compromising its effectiveness or ignoring legitimate security concerns. Such changes cannot be brought about without an informed debate involving all stakeholders and actors associated with surveillance, and the basic minimum requirement for an "informed" debate is accurate and sufficient information about its subject matter. This information is severely lacking in the public domain when it comes to state surveillance activities, with most data points coming from news items or leaked information. Unless the state becomes more transparent and provides information about its surveillance activities and processes, an informed debate to challenge and strengthen the status quo for the betterment of all parties cannot begin.

      Current State of Affairs

      Surveillance laws in India are extremely varied and have existed since colonial times, remnants of which are still utilized by the various State police forces. However, in this age of technology the most important tools for surveillance exist in the digital space, and it is for this reason that this paper focuses on surveillance through the interception of telecommunications traffic, whether voice calls or data. Such interception takes place under two different statutes: the Telegraph Act, 1885 (which deals with interception of calls) and the Information Technology Act, 2000 (which deals with interception of data).

      Currently, telecom surveillance is conducted as per the procedure prescribed in the Rules framed under the relevant sections of the two statutes mentioned above, viz. Rule 419A of the Indian Telegraph Rules, 1951 for surveillance under the Telegraph Act, 1885 and the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 for surveillance under the Information Technology Act, 2000. These Rules put in place various checks and balances and try to ensure that there is a paper trail for every interception request.[10] The assumption is that the generation of a paper trail will reduce the number of unauthorized interception orders, thus ensuring that the powers of interception are not misused. However, even though these checks and balances exist on paper, there is not enough information in the public domain about the interception mechanism as a whole for anyone to judge whether the system is working.

      As mentioned earlier, the only sources of information on interception currently available in the public domain are news reports and a handful of RTI requests filed by various activists.[11] The only other institutionalized sources of information on surveillance in India are the transparency reports brought out by companies such as Google, Yahoo, Facebook, etc.

      Indeed, Google was the first major corporation to publish a transparency report, in 2010, and has been updating it ever since. The latest available data for Google covers the period from January to June 2015, during which Google and YouTube together received 3,087 requests from the Indian Government for data on 4,829 user accounts. Google supplied information for only 44% of these requests.[12] Although Google claims that it "review[s] each request to make sure that it complies with both the spirit and the letter of the law, and we may refuse to produce information or try to narrow the request in some cases", it is not clear why Google did not produce data for the remaining 56% of requests. It may also be noted that India's requests were the fifth highest among all the countries covered in the Transparency Report, after the USA, Germany, France and the UK.

      Facebook's transparency report for January to June 2015 reveals that Facebook received 5,115 requests from the Indian Government concerning 6,268 user accounts, and produced data in 45.32% of cases.[13] Facebook's report claims that it responds to requests relating to criminal cases and that "Each and every request we receive is checked for legal sufficiency and we reject or require greater specificity on requests that are overly broad or vague." However, even in Facebook's report it is unclear why the remaining 54.68% of requests went unfulfilled.

      The Yahoo transparency report also covers the period from January 1 to June 30, 2015 and reveals that Yahoo received 831 requests from the Indian Government, relating to 1,184 user accounts. The Yahoo report is a little more detailed: it reveals that 360 of the 831 requests were rejected, though no details are given as to why. The report also specifies that in 63 cases no data was found, in 249 cases only non-content data[14] was disclosed, and in 159 cases content[15] was disclosed. The Yahoo report further claims that "We carefully scrutinize each request to make sure that it complies with the law, and we push back on those requests that don't satisfy our rigorous standards."
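      The figures quoted from the three reports can be cross-checked with simple arithmetic. A minimal sketch follows; the numbers are those published by the companies, while the field names and structure are illustrative only:

```python
# Cross-checking the compliance figures quoted from the three
# transparency reports (all figures are for January-June 2015).
reports = {
    "Google":   {"requests": 3087, "accounts": 4829, "complied_pct": 44.0},
    "Facebook": {"requests": 5115, "accounts": 6268, "complied_pct": 45.32},
    "Yahoo":    {"requests": 831,  "accounts": 1184},
}

# Yahoo breaks its 831 requests down by outcome; the categories
# should (and do) sum to the total.
yahoo_outcomes = {"rejected": 360, "no_data": 63,
                  "non_content_disclosed": 249, "content_disclosed": 159}
assert sum(yahoo_outcomes.values()) == reports["Yahoo"]["requests"]

# Share of Yahoo requests that were not rejected outright.
yahoo_complied_pct = 100 * (831 - 360) / 831  # roughly 56.7%
print(f"Yahoo acted on {yahoo_complied_pct:.1f}% of requests")

# For Google and Facebook, the share for which no data was produced
# is the complement of the published compliance rate.
for name in ("Google", "Facebook"):
    pct = reports[name]["complied_pct"]
    print(f"{name}: no data produced for {100 - pct:.2f}% of requests")
```

Such a check also makes the inconsistency risk visible: the complement of Facebook's 45.32% compliance rate is 54.68%, not any other figure.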

      While the Vodafone Transparency Report gives information about government requests for data in other jurisdictions,[16] it gives none for India. This is because Vodafone interprets Rule 25(4) of the IT (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, Rule 11 of the IT (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009 and Rule 419A(19) of the Indian Telegraph Rules, 1951, which require service providers to maintain confidentiality/secrecy in matters relating to interception, as legally prohibiting it from revealing such information.

      Apart from the four major companies discussed above, a large number of private corporations have published transparency reports in order to build trust with their customers. In fact, the Ranking Digital Rights Project has ranked some of the biggest companies in the world on their commitment to accountability, bringing out the Ranking Digital Rights 2015 Corporate Accountability Index, which analyses a representative group of 16 companies "that collectively hold the power to shape the digital lives of billions of people across the globe".

      Suggestions on Transparency

      It is clear from the discussion above, as well as from a general overview of news reports on the subject, that telecom surveillance in India is shrouded in secrecy, and it appears that a large amount of illegal and unauthorized surveillance takes place behind this veil. If the status quo continues, it is unlikely that any meaningful reforms will take place to bring greater accountability to telecom surveillance. For any change towards greater accountability, it is imperative that we have enough information about what exactly is happening, and for that we need greater transparency, since transparency is the first step towards accountability.

      Transparency Reports

      In the simplest terms, transparency in any area is best achieved by making as much information about it public as possible. It would, however, be naïve to suggest that all information about interception activities can be made public in the name of transparency, but that does not mean there should be no information at all. One internationally accepted method of bringing transparency to interception mechanisms, increasingly adopted by both the private sector and governments, is to publish transparency reports giving various details of interception while keeping security concerns in mind. The two types of transparency reports that we require in India, and what each would entail, are briefly discussed below:

      By the Government

      The problem with India's current regime for interception is that the entire mechanism appears more or less adequate on paper, with enough checks and balances to prevent misuse of the allotted powers. However, because the entire process is veiled in secrecy, nobody knows exactly how well or how badly the system functions, or whether it achieves its intended purposes. It is clear that the current system of interception and surveillance has flaws, as can be gathered from frequent news articles about incidents of illegal surveillance. Without any official or more reliable sources of information on surveillance activities, these anecdotal pieces of evidence are all we have to shape the debate on surveillance in India. It is only logical that a debate informed by such sketchy and unreliable news reports will be biased against the current mechanism, since newspapers are mainly interested in reporting scandalous and extraordinary incidents. For example, some argue that the government undertakes mass surveillance, while others argue that India carries out only targeted surveillance, but there is not enough publicly available information for a third party to support or refute either claim. It is therefore necessary and highly recommended that the government start releasing a transparency report such as the ones brought out by the United States and the UK, as mentioned above.

      There is no need for a separate department or authority just to prepare the transparency report; this task could probably be performed in-house by any department, but considering the sector involved, it would perhaps be best if the Department of Telecommunications were given the responsibility. These transparency reports should contain a certain minimum of data to be an effective tool in informing public discourse and debate on surveillance and interception. The report needs to strike a balance: enough information for an informed analysis of the effectiveness of the surveillance regime, but not so much as to render surveillance activities ineffective. Below is a list of suggestions as to what kind of data/information such reports should contain:

      • Reports should contain data on the number of interception orders passed. This statistic would be extremely useful in determining how extensively and how frequently the state engages in interception. This information would be easily available, since all interception orders have to be sent to the Review Committee set up under Rule 419A of the Indian Telegraph Rules, 1951.
      • The report should contain information on the procedural aspects of surveillance, including the delegation of powers to different authorities and individuals, information on new surveillance schemes, etc. This information would be available with the Ministry of Home Affairs, since it is a Secretary or Joint Secretary level officer in that Ministry who is supposed to authorize every interception order.
      • The report should contain an aggregated list of the reasons given by the authorities for ordering interception. This information would reveal whether the authorities are actually ensuring legal justification before issuing interception orders or merely paying lip service to the rules to ensure a proper paper trail. Since every interception order has to be in writing, the main reasons can easily be gleaned from a perusal of the orders.
      • It should also reveal the percentage of cases in which interception actually found evidence of culpability or succeeded in preventing criminal activity. This one statistic would itself give a very good measure of the effectiveness of the interception regime. Granted, this information may not be easily obtainable, but it can be gathered with proper coordination with the police and other law enforcement agencies.
      • The report should also reveal the percentage of orders struck down by the Review Committee for not following the process envisaged under the various Rules. This would give a sense of how often the Rules are flouted when interception orders are issued. This information can easily be obtained from the papers and minutes of the Review Committee's meetings.
      • The report should also state the number of times the Review Committee met in the period reported upon. The Review Committee is an important check on the misuse of powers by the authorities, and it is therefore important that it carries out its activities diligently.
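      As an illustration, the minimum contents proposed above could be captured in a simple record structure. The schema and the figures below are purely hypothetical sketches by way of example, not a prescribed format or real data:

```python
# A hypothetical schema for the minimum contents of a government
# transparency report, following the suggestions listed above.
from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    period: str                          # reporting period covered
    interception_orders_issued: int      # total orders passed
    orders_struck_down: int              # orders rejected by the Review Committee
    review_committee_meetings: int       # Committee sittings in the period
    reasons_for_interception: dict = field(default_factory=dict)  # reason -> count
    successful_interceptions_pct: float = 0.0  # share yielding evidence/prevention

    def strike_down_rate(self) -> float:
        """Share of orders the Review Committee found non-compliant."""
        if self.interception_orders_issued == 0:
            return 0.0
        return 100 * self.orders_struck_down / self.interception_orders_issued

# Example with made-up numbers, purely to show the derived statistic.
report = TransparencyReport(
    period="Jan-Jun 2015",
    interception_orders_issued=9000,
    orders_struck_down=450,
    review_committee_meetings=3,
    reasons_for_interception={"public safety": 6000, "public order": 3000},
)
print(f"{report.strike_down_rate():.1f}% of orders struck down")
```

Publishing only such aggregates would preserve operational secrecy while still allowing the strike-down rate and meeting frequency to be scrutinized.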

      It may be noted here that some provisions of the Indian Telegraph Rules, 1951, especially sub-Rules 17 and 18 of Rule 419A, as well as Rules 22, 23(1) and 25 of the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, may need to be amended to make them compliant with the reporting mechanism proposed above.

      By the Private Sector

      We have already discussed above the transparency reports published by certain private companies. Suffice it to say that reports from private companies should give as much of the information discussed under government reports as possible and/or applicable; they may not have much of the information sought in government reports, such as whether an interception was successful or the reasons for it. It is important to have ISPs provide such transparency reports, as this would provide two different data points on interception, and the very existence of these private reports may act as a check on the veracity of the government's transparency reports.

      As in the case of government reports, for the transparency reports of the private sector to be effective, certain provisions of the Indian Telegraph Rules, 1951 and the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, viz. sub-Rules 14, 15 and 19 of Rule 419A of the former and Rules 20, 21, 23(1) and 25 of the latter, may need to be amended.

      Overhaul of the Review Committee

      The Review Committee, which acts as a check on the misuse of powers by the competent authorities, is a very important cog in the entire process. However, it is staffed entirely by the executive and has no members of any other background. While it is probably impractical to have civilian members on a Review Committee with access to potentially sensitive information, it is essential that the Committee have wider representation from other sectors, especially the judiciary. One or two judicial members would provide a greater check on the workings of the Committee, bringing in the judicial arm of the State so that the Review Committee does not remain a body manned purely by the executive branch. This could go some way towards ensuring that the Committee does not simply "rubber stamp" the interception orders issued by the various competent authorities.

      Conclusion

      It is not in dispute that greater transparency in the government's surveillance activities is needed to address the problems associated with illegal and unauthorised interception. This paper does not claim that greater transparency in and of itself will solve the problems of the government's current interception and surveillance regime; however, it is not possible to address any problem unless we know its real extent. It is essential for an informed debate that the participants are in fact "informed", i.e. that they have accurate and adequate information on the issues being discussed. The current debate on interception is rife with individuals using illustrative and anecdotal evidence which, in the absence of any other evidence, they assume to be the norm.

      A more transparent and forthcoming state machinery that regularly keeps its citizens abreast of the state of its surveillance regime would be likely to receive better suggestions, and perhaps less criticism, if it turns out that the checks and balances imposed in the regulations are actually working to check unauthorized interception. If they are not, then it is the right of citizens to know this and demand reforms.


      [1] James Losey, "Surveillance of Communications: A Legitimization Crisis and the Need for Transparency", International Journal of Communication 9(2015), Feature 3450-3459, 2015.

      [2] Id.

      [4] http://www.judis.nic.in/supremecourt/imgs1.aspx?filename=39566 . Although the judgment was overturned on review, the observation quoted above still holds, as it was not specifically overturned.

      [6] James Losey, "Surveillance of Communications: A Legitimization Crisis and the Need for Transparency", International Journal of Communication 9 (2015), Feature 3450-3459, 2015.

      [10] For a detailed discussion of the Rules of interception please see Policy Paper on Surveillance in India, by Vipul Kharbanda, http://cis-india.org/internet-governance/blog/policy-paper-on-surveillance-in-india .

      [14] Non-content data (NCD) such as basic subscriber information including the information captured at the time of registration such as an alternate e-mail address, name, location, and IP address, login details, billing information, and other transactional information (e.g., "to," "from," and "date" fields from email headers).

      [15] Data that users create, communicate, and store on or through Yahoo. This could include words in a communication (e.g., Mail or Messenger), photos on Flickr, files uploaded, Yahoo Address Book entries, Yahoo Calendar event details, thoughts recorded in Yahoo Notepad or comments or posts on Yahoo Answers or any other Yahoo property.

      Big Data in the Global South - An Analysis

      by Tanvi Mani last modified Jan 24, 2016 02:54 AM

      I. Introduction

      "The period that we have embarked upon is unprecedented in history in terms of our ability to learn about human behavior." [1]

      The world we live in today is facing a slow but deliberate metamorphosis of decisive information: from the erstwhile monopoly of world leaders and captains of industry, obtained through regulated means, it has transformed into a relatively undervalued currency of knowledge collected from individual digital expressions over a vast network of interconnected electrical impulses.[2] This seemingly random deluge of binary numbers, when interpreted, represents an intricately woven tapestry of the choices that define everyday life, made over virtual platforms. The machines we once employed for menial tasks have become sensorial observers of our desires, wants and needs, so much so that they might now predict the course of our future choices and decisions.[3] The patterns of human behaviour reflected within this data inform policy makers in both public and private contexts. The collective data obtained from our digital shadows thus forms a rapidly expanding storehouse of memory, which interested parties can draw upon to resolve problems and enable a more efficient functioning of foundational institutions such as markets, regulators and governments.[4]

      Big Data is the term used to describe a large volume of collected data, structured as well as unstructured. Such data requires niche technology, outside of traditional software databases, to process, simply because of its exponential growth in a relatively short period of time. Big Data is usually identified using a "three V" characterization: larger volume, greater variety and distinguishably high velocity.[5] This is exemplified in the diverse sources from which the data is obtained: mobile phone records, climate sensors, social media content, GPS satellite identifications and patterns of employment, to name a few. Big data analytics refers to the tools and methodologies that aim to transform large quantities of raw data into "interpretable data", in order to study and discern it so that causal relationships between events can be conclusively established.[6] Such analysis could help amplify the positive effects of such data and mitigate its negative outcomes.

      This paper seeks to map out the practices of different governments, civil society, and the private sector with respect to the collection, interpretation and analysis of big data in the global south, illustrated across a background of significant events surrounding the use of big data in relevant contexts. This will be combined with an articulation of potential opportunities to use big data analytics within both the public and private spheres and an identification of the contextual challenges that may obstruct the efficient use of this data. The objective of this study is to deliberate upon how significant obstructions to the achievement of developmental goals within the global south can be overcome through an accurate recognition, interpretation and analysis of big data collected from diverse sources.

      II. Uses of Big Data in Global Development

      Big Data for development is the process through which raw, unstructured and imperfect data is analyzed, interpreted and transformed into information that can be acted upon by governments and policy makers in various capacities. The amount of digital data available in the world grew from 150 exabytes in 2005 to 1,200 exabytes in 2010.[7] This figure is predicted to increase by 40% annually over the next few years,[8] close to 40 times the growth rate of the world's population.[9] The implication is that the share of the world's available data that is less than a minute old is increasing at an exponential rate. Moreover, an increasing percentage of this data is produced and created in real time.
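      As a rough arithmetic check on the growth figures quoted above (the five-year projection below, compounding the predicted 40% forward from the 2010 figure, is this author's illustration, not a figure from the cited sources):

```python
# Rough arithmetic on the quoted growth figures: 150 EB in 2005,
# 1,200 EB in 2010, and a predicted ~40% annual growth thereafter.
data_2005_eb = 150
data_2010_eb = 1200

# Implied compound annual growth rate over 2005-2010
# (an eightfold increase over five years).
cagr = (data_2010_eb / data_2005_eb) ** (1 / 5) - 1
print(f"Implied 2005-2010 CAGR: {cagr:.1%}")  # roughly 52% per year

# Projecting the predicted 40% annual growth five years forward.
projected = data_2010_eb * 1.40 ** 5
print(f"Projected stock after five more years: {projected:.0f} EB")
```

The implied historical rate (about 52% per year) is thus even steeper than the 40% predicted going forward, which is consistent with the text's claim of exponential growth.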

      The data revolution that is upon us is characterized by a rapidly accumulating and continuously evolving stock of data, prevalent in both industrialized and developing countries. This data is extracted from technological services that act as sensors, reflecting the behaviour of individuals in relation to their socio-economic circumstances.

      For many global south countries, this data is generated through mobile phone technology. This trend is evident in Sub Saharan Africa, where mobile phone technology has been used as an effective substitute for often weak and unstructured State mechanisms such as faulty infrastructure, underdeveloped systems of banking and inferior telecommunication networks.[10]

      For example, a recent study presented at the Data for Development session at the NetMob Conference at MIT used mobile phone data to analyze the impact of opening a new toll highway in Dakar, Senegal on human mobility, particularly how people commute to work in the metropolitan area.[11] A huge investment, the improved infrastructure is expected to result in a significant increase in the movement of people and essential goods in and out of Dakar. This would initiate rural development in the areas outside Dakar and boost the value of land within the region.[12] The impact of the newly constructed highway can, however, only be analyzed effectively and accurately through the collection of mobile phone data from actual commuters, on a real-time basis.

      Mobile phone technology is no longer used just for personal communication but has become an effective tool to secure employment opportunities, transfer money, determine stock options and assess the prices of various commodities.[13] This generates vast amounts of data about individuals and their interactions with the government and private sector companies. Internet traffic is predicted to grow by 25 to 30% in the next few years in North America, Western Europe and Japan, but in Latin America, the Middle East and Africa this figure is expected to approach 50%.[14] The bulk of this internet traffic can be traced back to mobile devices.

      The potential applicability of Big Data for development, at the most general level, lies in its ability to provide an overview of the well-being of a given population at a particular period of time.[15] This overcomes the relatively long time lag of most traditional forms of data collection. The analysis of this data has helped, to a large extent, uncover "digital smoke signals", i.e. inherent changes in the usage patterns of technological services by individuals within communities.[16] These may act as an indicator of changes in the underlying well-being of the community as a whole. This information, derived from a community's usage of technology, provides significantly relevant feedback to policy makers on the success or failure of particular schemes and can pinpoint changes that need to be made to the status quo.[17] The hope is that this feedback, delivered in real time, would in turn lead to a more flexible and accessible system of international development, thus securing more measurable and sustained outcomes.[18]

      The analysis of big data involves the use of advanced computational technology that can aid in the determination of trends, patterns and correlations within unstructured data so as to transform it into actionable information. It is hoped that this, combined with the human perspective and experience brought to the process, could enable decision makers to rely upon reliable and up-to-date information in formulating durable and self-sustaining development policies.

      The availability of raw data has to be adequately complemented by the intent and capacity to use it effectively. To this effect, an emerging body of literature characterizes the primary sources of this Big Data as sharing certain easily distinguishable features. First, it is digitally generated and can be stored in binary format, making it amenable to the manipulation required by computers attempting its interpretation. Second, it is passively produced as a by-product of digital interaction and can be automatically extracted for continuous analysis. Third, it is geographically traceable within a predetermined time period. It is important to note, however, that "real time" does not necessarily mean occurring instantly; it reflects the relatively short time in which the information is produced and made available, keeping it relevant within the requisite timeframe. This allows efficient responsive action to be taken in a short span of time, creating a feedback loop.[19]

      In most cases the granularity of the data is preferably expanded over a larger spatial context, such as a village or a community, as opposed to an individual, simply because this affords an adequate recognition of privacy concerns and of the lack of definitive consent of the individuals from whom the data is extracted. To ease the process of characterizing this data, the UN Global Pulse has developed a taxonomy of sorts to assess the types of data sources relevant to using this information for development purposes.[20] These include the following sources:

      Data Exhaust or the digital footprint left behind by individuals' use of technology for service oriented tasks such as web purchases, mobile phone transactions and real time information collected by UN agencies to monitor their projects such as levels of food grains in storage units, attendance in schools etc.

      Online Information which includes user generated content on the internet such as news, blog entries and social media interactions which may be used to identify trends in human desires, perceptions and needs.

      Physical sensors such as satellite or infrared imagery of infrastructural development, traffic patterns, light emissions and topographical changes, thus enabling the remote sensing of changes in human activity over a period of time.

      Citizen reporting or crowd-sourced data, which includes information produced on hotlines, mobile-based surveys, customer-generated maps, etc. Although a passive source of data collection, this is a key instrument in assessing the efficacy of action-oriented plans taken by decision makers.

      The capacity to analyze this big data hinges upon technologically advanced processes, such as powerful algorithms that can synthesize the abundance of raw data and break down the information, enabling the identification of patterns and correlations. This process would rely on advanced visualization techniques such as "sense-making tools".[21]

      The identification of patterns within this data is carried out by instituting a common framework for its analysis. This requires the creation of a specific lexicon to help tag and sort the collected data. The lexicon would specify what type of information is collected and who it is collected and interpreted by (the observer or the reporter). It would also aid in determining how the data was acquired and its qualitative and quantitative nature. Finally, the spatial context of the data and the time frame within which it was collected, constituting the aspects of where and when, would be taken into consideration. The data would then be analyzed through a process of Filtering, Summarizing and Categorizing, transforming it into an appropriate collection of relevant indicators for a particular population demographic.[22]
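      The Filtering, Summarizing and Categorizing steps described above can be sketched as a simple pipeline. The record fields and tags below are hypothetical illustrations of such a lexicon, not an actual UN Global Pulse format:

```python
from collections import Counter

# Hypothetical raw records tagged with the lexicon's what/who/where/when fields.
records = [
    {"what": "mobile_topup", "who": "observer", "where": "Dakar", "when": "2015-06", "amount": 5},
    {"what": "mobile_topup", "who": "observer", "where": "Dakar", "when": "2015-06", "amount": 2},
    {"what": "food_price",   "who": "reporter", "where": "Thies", "when": "2015-06", "amount": 30},
]

def filter_records(records, region):
    """Filtering: keep only records from the spatial context of interest."""
    return [r for r in records if r["where"] == region]

def summarize(records):
    """Summarizing: reduce raw records to a single aggregate indicator."""
    amounts = [r["amount"] for r in records]
    return sum(amounts) / len(amounts) if amounts else 0.0

def categorize(records):
    """Categorizing: count records per 'what' tag in the lexicon."""
    return Counter(r["what"] for r in records)

dakar = filter_records(records, "Dakar")
print(summarize(dakar))      # mean top-up amount for the Dakar records
print(categorize(records))   # record counts per lexicon tag
```

The output of such a pipeline is no longer individual-level data but a set of aggregate indicators per spatial context, which is also how the privacy concern noted earlier is addressed.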

      The intensive mining of predominantly socioeconomic data is known as "reality mining",[23] and it can shed light on the processes and interactions reflected within the data. This is carried out via a tested three-fold process. First, "Continuous Analysis over the streaming of the data", which involves monitoring and analyzing high-frequency data streams to extract often uncertain raw data; for example, the systematic gathering of the prices of products sold online over a period of time. Second, "Online digestion of semi-structured and unstructured data", which includes news articles, reviews of services and products, and opinion polls on social media that aid in determining public perception, trends and contemporary events generating interest across the globe. Third, "Real-time correlation of streaming data with slowly accessible historical data repositories", which refers to the "mechanisms used for correlating and integrating data in real-time with historical records."[24] The purpose of this stage is to derive a contextualized perception of personalized information, adding value to the data by providing a historical context. Big Data for development purposes would make use of a combination of these, depending on context and need.

      (i) Policy Formulation

      The world today has become increasingly volatile: the decisions of certain countries now have an impact on vulnerable communities within entirely different nations. Our global economy has become far more susceptible to fluctuating conditions, primarily because of an interconnectivity hinged upon transnational interdependence. The primary instigators of most of these changes, affecting the nature of harvests, prices of essential commodities, employment structures and capital flows, have been financial and environmental disruptions. [25] According to the OECD, "Disruptive shocks to the global economy are likely to become more frequent and cause greater economic and social hardship. The economic spillover effects of events like the financial crisis or a potential pandemic will grow due to the increasing interconnectivity of the global economy and the speed with which people, goods and data travel."[26]

      The local impacts of these fluctuations may not be easily visible or even traceable, but could very well be severe and long lasting. A vibrant literature on the vulnerability of communities has highlighted the impacts of these shocks, which often cause children to drop out of school, families to sell their productive assets, and communities to place a greater reliance on state rations.[27] These vulnerabilities cannot be definitively discerned through traditional systems of monitoring and information collection. Evidence of the effects of these shocks often takes too long to reach decision makers, who are unable to formulate effective policies without ascertaining the nature and extent of the hardships suffered by these communities in a given context. The existing early warning systems do help raise flags and draw attention to the problem, but their reach is limited and their veracity compromised by the time it takes to extract and collate this information through traditional means. These traditional systems of information collection are difficult to implement within rural impoverished areas, and the data collected is not always reliable due to the significant time gap between its collection and subsequent interpretation. Data collected from surveys does provide an insight into the state of affairs of communities across demographics, but it requires time to be collected, processed, verified and eventually published. Further, the expenses incurred in this process often prove difficult to offset.

      The digital revolution therefore provides a significant opportunity to gain a richer and deeper insight into the very nature and evolution of the human experience itself, affording a more legitimate platform upon which policy deliberations can be articulated. This data-driven decision making, once the monopoly of private institutions such as the World Economic Forum and the McKinsey Institute,[28] has now emerged at the forefront of public policy discourse. Civil society has also expressed an eagerness to be more actively involved in the collection of real-time data, having perceived its benefits. This is evidenced by the emergence of 'crowd sourcing'[29] and other 'participatory sensing'[30] efforts that are founded upon the commonalities shared by like-minded communities of individuals, using easily accessible platforms such as mobile phone interfaces, hand-held radio devices and geospatial technologies. [31]

      The predictive nature of patterns identifiable from big data is extremely relevant for the purpose of developing socio-economic policies that seek to bridge problem-solution gaps and create a conducive environment for growth and development. Mobile phone technology has been able to quantify human behavior on an unprecedented scale.[32] This includes being able to detect changes in standard commuting patterns of individuals based on their employment status[33] and estimating a country's GDP in real-time by measuring the nature and extent of light emissions through remote sensing. [34]

      A recent research study has concluded that "due to the relative frequency of certain queries being highly correlated with the percentage of physician visits in which individuals present influenza symptoms, it has been possible to accurately estimate the levels of influenza activity in each region of the United States, with a reporting lag of just a day." Online data has thus been used as part of syndromic surveillance efforts, also known as infodemiology. [35] The US Centers for Disease Control has concluded that mining vast quantities of data through online health-related queries can help detect disease outbreaks "before they have been confirmed through a diagnosis or a laboratory confirmation." [36] Google Trends works in a similar way.
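      The statistical idea behind such syndromic surveillance is a simple correlation between a fast signal (query frequency) and a slow one (physician-visit statistics). A minimal sketch, with invented illustrative numbers rather than real surveillance data:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative weekly data (invented): relative query frequency vs. % of
# physician visits presenting influenza symptoms.
queries = [0.8, 1.2, 2.5, 3.9, 3.1, 1.5]
visits = [1.0, 1.4, 2.8, 4.2, 3.5, 1.7]

r = pearson(queries, visits)
# A high r justifies using the fast query stream as a same-day proxy
# for the slow official surveillance statistic.
```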

      Another public health monitoring system known as the Healthmap project compiles seemingly fragmented data from news articles, social media, eye-witness reports and expert discussions based on validated studies to "achieve a unified and comprehensive view of the current global state of infectious diseases" that may be visualized on a map. [37]

      Big Data used for development purposes can reduce the reliance on human inputs, narrowing the room for error and improving the accuracy of the information upon which policy makers base their decisions.

      (ii) Advocacy and Social Change

      Because Big Data can provide an unprecedented depth of detail on particular issues, it has often been used as a vehicle of advocacy to highlight those issues. This makes it possible to give citizens a far more participative experience, capturing their attention and hence better communicating these problems. Numerous websites have been able to use this method of crowd sourcing to broadcast socially relevant issues.[38] Moreover, the massive increase in access to the internet has dramatically improved the scope for activism through the use of volunteered data, so that advocates can now collect data from volunteers more effectively and present these issues in various forums. Websites like Ushahidi[39] and the Black Monday Movement[40] are prime examples. These platforms have championed various causes, consistently exposing significant social crises that would otherwise go unnoticed.

      The Ushahidi application used crowd sourcing mechanisms in the aftermath of the Haiti earthquake to set up a centralized messaging system that allowed mobile phone users to provide information on injured and trapped people.[41] An analysis of the data showed that the concentration of text messages was correlated with the areas where there was an increased concentration of damaged buildings. [42] Patrick Meier of Ushahidi noted "These results were evidence of the system's ability to predict, with surprising accuracy and statistical significance, the location and extent of structural damage post the earthquake." [43]

      Another problem that data advocacy hopes to tackle, however, is that of too much exposure, with advocates providing information to various parties to help ensure that there is no unwarranted digital surveillance and that sensitive advocacy tools and information are not used inappropriately. An interesting illustration is the Tactical Technology Collective,[44] which hopes to improve the use of technology by activists and various other political actors. Through media such as films and events, the organization trains human rights activists in data protection and privacy awareness and skills. Tactical Technology also helps ensure that information is presented by human rights activists in an appealing and relevant manner, and works on capacity building for data advocacy.

      Observed data, such as mobile phone records generated through network operators and through the use of social media, is beginning to play a prominent role in academic research. This is due to the ability of such data to provide microcosms of information at finer granularity and over larger public spaces. In the wake of natural disasters, this can be extremely useful, as reflected by the work of Flowminder after the 2010 Haiti earthquake.[45] A similar string of interpretive analysis can be carried out in instances of conflict and crises over varying spans of time. Flowminder used the geospatial locations of 1.9 million subscriber identity modules in Haiti, beginning 42 days before the earthquake and continuing until 158 days after it. This information allowed researchers to empirically determine the migration patterns of the population after the earthquake and enabled a subsequent UNFPA household survey.[46] In a similar capacity, the UN Global Pulse is seeking to assist the process of consultation and deliberation on the specific targets of the millennium development goals through a framework of visual analytics that represents online the big data procured on each of the topics proposed for the post-2015 agenda.[47]
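      The before/after analysis Flowminder performed can be illustrated, in highly simplified form, by comparing each subscriber's most frequent location before and after the event. The data layout below is an assumption made for the sketch, not Flowminder's actual pipeline.

```python
from collections import Counter

def modal_location(locations):
    """Most frequently observed tower/region for one subscriber."""
    return Counter(locations).most_common(1)[0][0]

def migration_matrix(trajectories, event_day):
    """trajectories: {sim_id: [(day, location), ...]}.

    Counts subscribers by (modal location before event_day,
    modal location on/after event_day) - a crude migration matrix."""
    moves = Counter()
    for sim, points in trajectories.items():
        before = [loc for day, loc in points if day < event_day]
        after = [loc for day, loc in points if day >= event_day]
        if before and after:  # need sightings on both sides of the event
            moves[(modal_location(before), modal_location(after))] += 1
    return moves
```

Cells off the diagonal of the resulting matrix indicate displaced populations, which is the kind of signal that informed the UNFPA household survey.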

      A recently announced collaboration between RTI International, a non-profit research organization, and IBM's research lab looks promising in its initiative to utilize big data analytics in schools within Mombasa County, Kenya.[48] The partnership seeks to develop testing systems that would capture data to assist governments, non-profit organizations and private enterprises in making more informed decisions regarding the development of education and human resources within the region. As observed by Dr. Kamal Bhattacharya, Vice President of IBM Research, "A significant lack of data on Africa in the past has led to misunderstandings regarding the history, economic performance and potential of the government." The project seeks to improve transparency and accountability within the schooling system in more than 100 institutions across the county. Teachers would be equipped with tablet devices to collate data about students, classrooms and resources, allowing an analysis of the correlation between the three aspects and thus enabling better policy formulation and a more focused approach to bettering the school system. [49] This is a part of the United States Agency for International Development's Education Data for Decision Making (EdData II) project. According to Dr. Kommy Weldemariam, Research Scientist, IBM Research, "… there has been a significant struggle in making informed decisions as to how to invest in and improve the quality and content of education within Sub-Saharan Africa. The Project would create a school census hub which would enable the collection of accurate data regarding performance, attendance and resources at schools. This would provide valuable insight into the building of childhood development programs that would significantly impact the development of an efficient human capital pool in the near future."[50]

      A similar initiative has been undertaken by Apple and IBM in the development of the "Student Achievement App", which seeks to use this data for "content analysis of student learning". The application works as a teaching tool that analyses the data provided to develop actionable intelligence on a per-student basis.[51] This would give educators a deeper understanding of the outcome of teaching methodologies and subsequently enable better learning. The impact of this would be a significant restructuring of how education is delivered. At a recent IBM-sponsored workshop on education held in India last year, Katharine Frase, IBM CTO of Public Sector, predicted that "classrooms will look significantly different within a decade than they have looked over the last 200 years."[52]

      (iii) Access and the exchange of information

      Big data used for development serves as an important information intermediary, allowing for the creation of a unified space within which unstructured heterogeneous data can be efficiently organized into a collaborative system of information. New interactive platforms enable this exchange of information through an internal vetting and curation that ensures access to reliable and accurate information. This encourages active citizen participation in the articulation of demands from the government, thus enabling the actualization of the role of the electorate in determining specific policy decisions.

      The Grameen Foundation's AppLab in Kampala aids in the development of tools that can use information from clients' micro-financing transactions to identify financial plans and instruments that would be more suitable to their needs.[53] By working within a community, this technology connects its clients in a web of information sharing that they both contribute to and access, after the source of the information has been made anonymous. This allows individual members of the community to benefit from a common pool of knowledge. The AppLab was able to identify the emergence of a new crop pest from an increase in online searches for an unusual string of search terms within a particular region. Using this as an early warning signal, Grameen sent extension officers to the location to check the crops, and the pest contamination was dealt with effectively before it could spread any further.[54]
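      An AppLab-style early warning signal amounts to flagging a search term whose current frequency is far above its historical baseline. A minimal sketch of such an anomaly check, using a simple z-score test (the threshold value is an illustrative assumption):

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag a search term whose current count is far above its history.

    history: past per-period counts for the term; current: this period's count.
    Returns True when the count sits more than `threshold` standard
    deviations above the historical mean."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)
    if sd == 0:  # flat history: any increase at all is unusual
        return current > mean
    return (current - mean) / sd > threshold
```

Running this over every regional search term each day would surface unusual strings, like the pest-related queries in the Grameen example, for human follow-up.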

      (iv) Accountability and Transparency

      Big data enables participatory contributions from the electorate in existing functions such as budgeting and communication, creating connections between citizens, power brokers and elites. The extraction of information and increasing transparency around data networks is also integral to building a self-sustaining system of data collection and analysis. However, it is important that the information collected is analyzed responsibly. Checking the veracity of the information collected and facilitating individual accountability would encourage more enthusiastic responses from the general populace, creating an environment conducive to eliciting the requisite information. The effectiveness of the policies formulated by relying on this information rests on its accuracy.

      An example of this is Chequeado, a non-profit Argentinean media outlet that specializes in fact-checking. It works on a model of crowd sourcing information, on the basis of which it has fact-checked everything from live presidential speeches to congressional debates that have been made open to the public. [55] In 2014 it established a user-friendly public database, DatoCHQ, which allowed its followers to participate in live fact-checks by sending in data, including references, facts, articles and questions, through Twitter. [56] This allowed citizens to corroborate the promises made by their leaders and instilled a sense of trust in the government.

      III. Big Data and Smart Cities in the Global South

      Smart cities have become a buzzword in South Asia, especially after the Indian government led by Prime Minister Narendra Modi made a commitment to build 100 smart cities in India.[57] A smart city is essentially designed as a hub where information and communication technologies (ICT) are used to create feedback loops with minimal time lag. In traditional contexts, surveys carried out through a state-sponsored census were the only source of systematic data collection. However, these surveys are long-drawn-out processes that often drain State resources. Additionally, the information obtained is not always accurate, and policy makers are often hesitant to base their decisions on it. The collection of data can however be extremely useful in improving the functionality of the city, in terms of both the 'hard' or physical aspects of the infrastructural environment and the 'soft' services it provides to citizens. One model of enabling this data collection is a centrally structured framework of sensors that can determine movements and behaviors in real-time, from which the data obtained can be subsequently analyzed. For example, sensors placed under parking spaces at intersections can relay such information in short spans of time. South Korea has implemented such a structure within its smart city, Songdo.[58]

      Another approach to the smart city model is using crowd sourced information through apps, developed either by volunteers or by private conglomerates. These allow specific problems to be resolved by organizing raw data into sets of information that are attuned to the needs of the public in a cohesive manner. However, this system requires a highly structured format of data sets, without which significantly transformational results would be difficult to achieve.[59]

      There does however exist a middle ground, which allows the beneficiaries of this network, the citizens, to take on the role of primary sensors of information. This method is cost effective and allows for an experimentation process within which an appropriate measure of the success or failure of the model is discernible in a timely manner. It is especially relevant in fast-growing cities that suffer congestion and breakdown of infrastructure due to unprecedented population growth. This population is now afforded the opportunity to become a part of the solution.

      The principal challenge associated with extracting this Big Data is its restricted access. Most organizations able to collect big data efficiently are private conglomerates and business enterprises, who use this data to gain a competitive edge in the market by efficiently identifying the needs and wants of their clientele. These organizations are reluctant to release information and statistics because they fear losing that competitive edge, and with it the opportunity to benefit monetarily from the data collected. Data leaks would also significantly damage a company's reputation. Despite individual anonymity, the transaction costs incurred in protecting the data of individual customers are often substantial. In addition, there is a definite human capital gap resulting from the significant lack of scientists and analysts able to interpret raw data transmitted across various channels.

      (i) Big Data in Urban Planning

      Urban planning requires data that reflects the land use patterns of communities, combined with their travel descriptions and housing preferences. The mobility of individuals depends on their economic conditions and can be determined through an analysis of their purchases, either via online transactions or from the data accumulated by prominent stores. The primary source of this data is, however, mobile phones, which seem to have transcended economic barriers. Secondary sources include cards used on public transport, such as the Oyster card in London and the similar Octopus card used in Hong Kong. In most developing countries, however, such cards are not available for public transport systems, and mobile network data therefore forms the backbone of data analytics. An excessive reliance on data collected through smartphones could nevertheless be detrimental, especially in developing countries, simply because usage would most likely be concentrated amongst more economically stable demographics and the findings from this data could potentially marginalize the poor.[60]

      Mobile network big data (MNBD) is generated by all phones and includes CDRs (call detail records), generated by calls or texts sent or received, internet usage and the topping up of prepaid value, as well as VLR (Visitor Location Register) data, which is generated whenever the phone in question has power and essentially communicates to the Base Transceiver Stations (BTSs) that the phone is in the coverage area. A CDR includes records of calls made, the duration of the call and information about the device, and is therefore stored for a longer period of time. VLR data is larger in volume and can be written over. Both VLR and CDR data can provide invaluable information for urban planning strategies. [61] LIRNEasia, a regional policy and regulation think-tank, has carried out an extensive study demonstrating the value of MNBD in Sri Lanka.[62] This data has been used to understand, and sometimes even monitor, land use patterns, travel patterns during peak and off seasons, and the congregation of communities across regions. The study was only undertaken after the data had been suitably pseudonymised.[63] It revealed that MNBD is incredibly valuable in generating information usable by policy formulators and decision makers, because of two primary characteristics. First, it comes close to comprehensive coverage of the demographic within developing countries, using mobile phones as sensors to generate useful data. Second, people using mobile phones across vast geographic areas reveal important information regarding their patterns of travel and movement. [64]

      MNBD allows for the tracking and mapping of changes in population densities on a daily basis, identifying 'home' and 'work' locations and informing policy makers of population congestion so that they may formulate policies to ease it. According to Rohan Samarajiva, founding chair of LIRNEasia, "This allows for real-time insights on the geo-spatial distribution of population, which may be used by urban planners to create more efficient traffic management systems."[65] This can also inform development-oriented economic policies. For example, the northern region of Colombo, inhabited by low-income families, shows a lower population density on weekdays, reflecting the large numbers travelling to southern Colombo for employment. [66] Similarly, patterns of land use can be ascertained by analyzing the loading patterns of base stations. Building on the success of the mobile data analysis project in Sri Lanka, LIRNEasia plans to collaborate with partners in India and Bangladesh to assimilate real-time information about the behavioral tendencies of citizens, from which policy makers may be able to make informed decisions. When this data is combined with user-friendly virtual platforms such as smartphone apps or web portals, it can also help citizens make informed choices about their day-to-day activities and potentially beneficial long-term decisions. [67]
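      A simplified sketch of how 'home' and 'work' locations might be inferred from CDR-like records: the most frequent night-time cell approximates home, and the most frequent working-hours cell approximates work. The record fields and hour windows below are illustrative assumptions, not LIRNEasia's actual method.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CDR:
    """Minimal call-detail record (illustrative fields only)."""
    sim_id: str
    hour: int    # local hour of day when the event occurred, 0-23
    cell: str    # serving base-station identifier

def home_and_work(cdrs, sim_id):
    """Modal night-time cell ~ 'home'; modal working-hours cell ~ 'work'."""
    night = Counter(c.cell for c in cdrs
                    if c.sim_id == sim_id and (c.hour >= 21 or c.hour < 6))
    day = Counter(c.cell for c in cdrs
                  if c.sim_id == sim_id and 9 <= c.hour < 17)
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work
```

Aggregating such per-subscriber labels over pseudonymised records yields the weekday/weekend density maps described above, without examining any individual.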

      Challenges of using Mobile Network Data

      Mobile networks invest significant sums of money in obtaining information regarding usage patterns of their services, and may use this data to develop location-based advertising. In this context, there is a greater reluctance to share data for public purposes. Allowing one operator access to another's big data could have significant implications for the latter's competitive advantage. A plausible solution to this conundrum is the accumulation of data from multiple sources without separating or organizing it according to the source it originates from; there is then a lesser chance of one company's sensitive information being used by another. Even so, operators have concerns about how the data would be handled before this "mashing up" occurs, and whether it might be leaked by the research organization itself. LIRNEasia used comprehensive non-disclosure agreements to ensure that the researchers who worked with the data were aware of the substantial financial penalties that could be imposed on them for data breaches. Access to the data was also restricted. [68]
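      The "mashing up" idea, pooling records from several operators so that no record can be attributed to a particular network, can be sketched as follows (assuming, as the study did, that the records themselves have already been pseudonymised):

```python
import random

def mash_up(*operator_datasets):
    """Pool records from several operators and drop the source label.

    Each positional argument is one operator's list of (already
    pseudonymised) records; the output carries no indication of which
    operator contributed which record."""
    pooled = [record for dataset in operator_datasets for record in dataset]
    random.shuffle(pooled)  # remove ordering that could reveal the source
    return pooled
```

In a real deployment the pooling would happen inside a trusted environment under non-disclosure agreements, so that no single researcher ever holds one operator's labelled dataset.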

      Another line of argument advocates the open sharing of data. A recent article in The Economist articulated this in the context of the Ebola outbreak in West Africa: "Releasing the data, though, is not just a matter for firms since people's privacy is involved. It requires governmental action as well. Regulators in each affected country would have to order operators to make their records accessible to selected researchers, who through legal agreements would only be allowed to use the data in a specific manner. For example, Orange, a major mobile phone network operator has made millions of CDRs from Senegal and the Ivory Coast available for researchers for their use under its Data Development Initiative. However the political will amongst regulators and network operators to do this seems to be lacking."[69]

      It would therefore be beneficial for companies to collaborate with the customers who create the data and the researchers who want to use it to extract important insights. This, however, would require the creation of, and subsequent adherence to, self-regulatory codes of conduct. [70] In addition, cooperation between network operators will assist in facilitating the transfer of their customers' data to research organizations. Sri Lanka is an outstanding example of this model of cooperation, which has enabled various operators across spectrums to participate in the mobile-money enterprise.[71]

      (ii) Big Data and Government Delivery of Services and Functions

      The analysis of data procured in real time has proven integral to the formulation of policies, plans and executive decisions. Especially in an Asian context, big data can be instrumental in urban development, planning and the allocation of resources in a manner that allows the government to keep up with the rapidly growing demands of an empowered population whose numbers are rising exponentially. Researchers have been able to use data from mobile networks to engage in effective planning and management of infrastructure, services and resources. If, for example, a particular road or highway has been blocked for some time, an alternative route can be established before traffic begins to build up, simply through an analysis of information collected from traffic lights, mobile networks and GPS systems.[72]
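      The rerouting step described above is, at its core, a shortest-path computation over a road graph from which the blocked segments have been removed. A minimal sketch using Dijkstra's algorithm (the graph format is an illustrative assumption):

```python
import heapq

def shortest_route(graph, start, goal, blocked=frozenset()):
    """Dijkstra over a road graph, skipping blocked road segments.

    graph: {node: [(neighbour, travel_time), ...]}
    blocked: set of (from_node, to_node) segments currently closed.
    Returns (total_time, path) or None if the goal is unreachable."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in graph.get(node, []):
            if (node, nxt) not in blocked and nxt not in seen:
                heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return None
```

Fed with live travel times from traffic lights, mobile networks and GPS traces, rerunning this query with the blocked segment excluded yields the alternative route before congestion builds.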

      There is also an emerging trend of using big data for state-controlled services such as the military. The South Korean Defense Minister Han Min Koo, in a recent briefing to President Park Geun-hye, reflected on the importance of innovative technologies such as big data solutions. [73]

      The Chinese government has expressed concerns regarding data breaches and information leakages, which would be extremely dangerous given the increasing reliance of governments on big data. A security report by Qihoo 360, China's largest software security provider, established that 2,424 of 17,875 web security loopholes were on government websites. Considering the blurring line between government websites and external networks, it has become all the more essential for authorities to boost their cyber security protections.[74]

      The Japanese government has considered investing resources in training more data scientists who can analyze the raw data obtained from various sources and apply the requisite techniques to develop an accurate analysis. The Internal Affairs and Communications Ministry planned to launch a free online course on big data, targeted at corporate workers as well as government officials.[75]

      Data analytics is emerging as an efficient technique for monitoring public transport management systems within Singapore. A recent collaboration between IBM, StarHub, the Land Transport Authority and SMRT initiated a research study to observe the movement of commuters across regions. [76] This has been instrumental in revamping the data collection systems already in place and has allowed for the procurement of additional systems of monitoring.[77] The idea is essentially to institute a "black box" of information for every operational unit, relaying real-time information from sources as varied as power switches, tunnel sensors and train wheels, by assessing patterns of noise and vibration. [78]

      In addition, numerous projects are in place that seek to utilize Big Data to improve city life. According to Carlo Ratti, Director of the MIT Senseable City Lab, "We are now able to analyze the pulse of a city from moment to moment. Over the past decade, digital technologies have begun to blanket our cities, forming the backbone of a large, intelligent infrastructure." [79] Gerhard Schmitt, professor of Information Architecture and Founding Director of the Singapore-ETH Centre, has observed that "the local weather has a major impact on the behavior of a population." In this respect, the centre is developing a range of visual platforms to inform citizens on factors such as air quality, enabling individuals to make everyday choices such as what route to take when planning a walk, or to anticipate a traffic jam. [80] Schmitt's team has also been able to identify a pattern connecting the demand for taxis with the city's climate: the amalgamation of taxi location data with rainfall data has helped locals hail taxis during a storm. This form of data can be used in multiple ways, such as visualizing temperature hotspots based on the "heat island" effect, where buildings, cars and cooling units cause a rise in temperature. [81]
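      The taxi-and-rainfall finding rests on amalgamating two time-aligned streams and summarizing one against the other. A minimal sketch, with an invented data layout, that averages taxi demand per rainfall bucket:

```python
from collections import defaultdict

def demand_by_rainfall(observations, bucket_mm=5):
    """Average taxi demand per rainfall bucket.

    observations: [(rain_mm, taxi_requests), ...], one pair per time slot,
    produced by joining the taxi-location stream with the rainfall stream.
    Returns {bucket_lower_bound_mm: mean_requests}."""
    buckets = defaultdict(list)
    for rain, demand in observations:
        buckets[int(rain // bucket_mm)].append(demand)
    return {b * bucket_mm: sum(v) / len(v) for b, v in sorted(buckets.items())}
```

A monotonically rising profile across buckets is the pattern that lets a dispatch platform pre-position taxis as a storm approaches.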

      Microsoft has recently entered into a partnership with the Federal University of Minas Gerais, one of the largest universities in Brazil, to undertake a research project that could potentially predict traffic jams up to an hour in advance. [82] The project attempts to analyze information from transport departments, road traffic cameras and drivers' social network profiles to identify patterns that could help predict traffic jams approximately 15 to 60 minutes before they actually happen.[83]

      In anticipation of the increasing demand for professionals trained in data science, the Malaysian government plans to increase the number of local data scientists from the present 80 to 1,500 by 2020, with the support of universities within the country.

      IV. Big Data and the Private Sector in the Global South

      Essential considerations in the operation of Big Data in the private sector in the Asia Pacific region were extracted through a comprehensive survey carried out by the Economist Intelligence Unit.[84] Over 500 executives across the Asia Pacific region were surveyed, from across industries and representing a diverse range of functions. 69% of these companies had an annual turnover of over US $500m. The respondents were senior managers responsible for key decisions regarding investment strategies and the utilization of big data for the same.

      The results of the survey conclusively determined that firms in the Asia Pacific region have had limited success in implementing big data practices. A third of the respondents claimed to have an advanced knowledge of the utilization of big data, while more than half claimed to have made limited progress in this regard. Only 9% of the firms surveyed cited internal barriers to implementing big data practices, including a significant difficulty in enabling the sharing of information across boundaries. Approximately 40% of the respondents claimed they were unaware of big data strategies even where these were in fact in place, simply because the strategies had been poorly communicated to them. Almost half of the firms, however, believed that big data plays an important role in the success of the firm and that it can contribute to increasing revenue by 25% or more.

      Respondents cited numerous obstacles to the adoption of big data. These include the lack of suitable software to interpret the data and the lack of in-house skills to analyze it appropriately. In addition, the unwillingness of various departments to share their data, for fear of a breach or leak, was thought to be a major hindrance. Combined with poor communication between departments and exceedingly complicated reports that cannot be analyzed with limited resources and too few qualified staff, this has resulted in the indefinite postponement of policies promoting the adoption of big data practices.

      Over 59% of the firms surveyed agreed that collaboration is integral to innovation and that information silos are a serious hindrance in a knowledge-based economy. There is also a direct correlation between the size of a company and its progress in adopting big data, with larger firms adopting comprehensive strategies more frequently than smaller ones. A major reason is that large firms, with substantially greater resources, are able to realize the benefits of big data analytics more efficiently than firms with smaller revenues. Businesses with advanced policies outlining their big data strategies are also more likely to communicate those strategies to their employees, ensuring greater clarity in the process.

      The use of big data was recently voted the "best management practice" of the past year in a cumulative ranking published on 13 January 2015 in Beijing by Chief Executive China Magazine, a trade journal published by Global Sources. The major benefit cited was real-time information sourced from customers, which allows for direct feedback from clients when making decisions about changes in products or services.[85]

      A significant contributor to the inadequate use of data analytics is the belief that a PhD is a prerequisite for entering the field of data science. This misconception was pointed out by Richard Jones, vice president of Cloudera for Australia, New Zealand and the ASEAN region. Cloudera provides businesses with the professional services they may need to use Big Data effectively, combining the necessary manpower, technology and consultancy services.[86] Deepak Ramanathan, chief technology officer of SAS Asia Pacific, believes this skills gap can be addressed by forming data science teams within both governments and private enterprises. Such teams could comprise members with statistical, coding and business skills, working collaboratively to address the problem at hand.[87] SAS is an enterprise software giant that creates tools tailored to business users to help them interpret big data. Eddie Toh, planning and marketing manager of Intel's data center platform, believes that businesses do not necessarily need data scientists to benefit from big data analytics and can instead outsource the technical aspects of interpreting this data as and when required.[88]

      The analytics team at Dell has forged a partnership with Brazilian public universities to facilitate the development of a local talent pool in the field of data analytics. The Instituto of Data Science (IDS) will provide training methodologies for in-person and web-based classes.[89] The project is being undertaken by StatSoft, a subsidiary that Dell acquired last year.[90]

      V. Conclusion

      Numerous challenges have emerged in the analysis and interpretation of Big Data. While it presents an extremely engaging opportunity, with the potential to transform the lives of millions of individuals, inform the private sector and influence government, the actualization of this potential requires the creation of a sustainable foundational framework: one able to mitigate the various challenges that present themselves in this context.

      A colossal increase in the rate of digitization has resulted in an unprecedented increase in the amount of Big Data available, especially through the rapid diffusion of cellular technology. The importance of mobile phones as a source of data, especially among low-income demographics, cannot be overstated. Such data can be used to understand the needs and behaviors of large populations, providing in-depth insight into the context within which valuable assessments of the competence, suitability and feasibility of various policy mechanisms and legal instruments can be made. However, this explosion of data has a lasting impact on how individuals and organizations interact with each other, which may not be reflected in the interpretation of raw data without a contextual understanding of the demographic. It is therefore vital to employ appropriate expertise in assessing and interpreting this data. The significant lack of human capital able to analyze this information accurately poses a definite challenge to its effective utilization in the Global South.

      The legal and technological implications of using Big Data are best conceptualized within the deliberations on protecting the privacy of the contributors of this data. The primary producers of this information, across platforms, are often unaware that they are in fact consenting to the subsequent use of the data for purposes other than those intended. For example, people routinely accept the terms and conditions of popular applications without understanding where or how the data they inadvertently provide will be used.[91] This is especially true of media generated on social networks, which are increasingly available on more accessible platforms such as mobile phones and tablets. Privacy has been and will remain an integral pillar of democracy. It is therefore essential that policy makers and legislators respond effectively to possible compromises of privacy in the collection and interpretation of this data by instituting adequate safeguards.

      Another challenge is access to and sharing of this data. Private corporations have been reluctant to share data out of concern that potential competitors could access and utilize it. Legal considerations also prevent the sharing of data collected from their customers or users of their services. The technical challenges of storing and interpreting this data adequately are further significant impediments to its collection. It is therefore important that adequate legal agreements be formulated to facilitate reliable access to streams of data, as well as access to data storage facilities to accommodate retrospective analysis and interpretation.

      For the use of Big Data to gain traction, these challenges must be addressed efficiently, with durable and self-sustaining mechanisms for resolving significant obstructions. The debates and deliberations shaping the articulation of privacy concerns and access to such data must be supported with adequate tools and mechanisms to ensure a system of "privacy-preserving analysis." The UN Global Pulse has put forth the concept of data philanthropy to attempt to resolve these issues, wherein "corporations [would] take the initiative to anonymize (strip out all personal information) their data sets and provide this data to social innovators to mine the data for insights, patterns and trends in realtime or near realtime."[92]
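      The "anonymize before sharing" step described above can be sketched in a few lines of code. This is a minimal, purely illustrative example, not UN Global Pulse's actual pipeline: the field names (name, phone, user_id, district) and the salted-hash scheme are hypothetical. Note also that stripping direct identifiers and hashing an ID yields pseudonymization rather than full anonymization; real releases would additionally require aggregation and re-identification risk review.

```python
# Illustrative sketch of pre-release record cleaning for data philanthropy.
# Field names and the salt are hypothetical, chosen for this example only.
import hashlib

PII_FIELDS = {"name", "phone", "address"}  # assumed direct identifiers

def anonymize(record: dict, salt: str = "example-salt") -> dict:
    """Drop direct identifiers and replace the record id with a pseudonym."""
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if "user_id" in cleaned:
        # One-way salted hash: stable across records, not reversible by recipients.
        digest = hashlib.sha256((salt + str(cleaned["user_id"])).encode())
        cleaned["user_id"] = digest.hexdigest()[:12]
    return cleaned

raw = {"user_id": 42, "name": "A. Sharma", "phone": "98xxxxxx",
       "district": "Pune", "topups_per_week": 3}
shared = anonymize(raw)
# 'shared' retains only behavioral/aggregate fields plus a pseudonymous id.
```

      The design point is that the shared record keeps the behavioral signal analysts need (here, district and top-ups per week) while the fields that identify a person never leave the corporation.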

      The concept of data philanthropy highlights particular challenges and avenues for future deliberation that may result in specific refinements to the process.

      One of the primary uses of Big Data, especially in developing countries, is to address important developmental issues such as the availability of clean water, food security, human health and the conservation of natural resources. Effective disaster management has also emerged as one of the key applications of Big Data. It therefore becomes all the more important for organizations to assess the information supply chains pertaining to specific data sources in order to identify and prioritize issues of data management.[93] Data emerging from different contexts and sources may appear in varied compositions and differ significantly across economic demographics. Big Data generated in certain contexts may be deficient owing to the unavailability of data within certain regions, and studies affecting policy decisions should take this discrepancy into account. Such data unavailability has produced a digital divide that is especially prevalent in the global south.[94]

      Appropriate analysis of the Big Data generated would provide valuable insight into key areas and inform policy makers' decisions. However, it is necessary to ensure that the quality of this data meets a specific standard and that appropriate methodological processes have been undertaken to interpret and analyze it. The government is a key actor that can shape the ecosystem surrounding the generation, analysis and interpretation of big data. It is therefore essential that governments across the global south recognize the need to collaborate with civic organizations as well as technical experts in order to create appropriate legal frameworks for the effective utilization of this data.


      [1] Onnela, Jukka-Pekka. "Social Networks and Collective Human Behavior." UN Global Pulse. 10 Nov. 2011. <http://www.unglobalpulse.org/node/14539>

      [2] http://www.business2community.com/big-data/evaluating-big-data-predictive-analytics-01277835

      [3] Ibid

      [4] http://unglobalpulse.org/sites/default/files/BigDataforDevelopment-UNGlobalPulseJune2012.pdf

      [5] Ibid, p.13, pp.5

      [6] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

      [7] Helbing, Dirk , and Stefano Balietti. "From Social Data Mining to Forecasting Socio-Economic Crises." Arxiv (2011) 1-66. 26 Jul 2011 http://arxiv.org/pdf/1012.0178v5.pdf.

      [8] Manyika, James, Michael Chui, Brad Brown, Jacques Bughin, Richard Dobbs, Charles Roxburgh and Angela H. Byers. "Big data: The next frontier for innovation, competition, and productivity." McKinsey Global Institute (2011): 1-137. May 2011.

      [9] "World Population Prospects, the 2010 Revision." United Nations Development Programme. <http://esa.un.org/unpd/wpp/unpp/panel_population.htm>

      [10] Mobile phone penetration, measured by Google, from the number of mobile phones per 100 habitants, was 96% in Botswana, 63% in Ghana, 66% in Mauritania, 49% in Kenya, 47% in Nigeria, 44% in Angola, 40% in Tanzania (Source: Google Fusion Tables)

      [11] http://www.brookings.edu/blogs/africa-in-focus/posts/2015/04/23-big-data-mobile-phone-highway-sy

      [12] Ibid

      [13] <http://www.google.com/fusiontables/Home/>

      [14] "Global Internet Usage by 2015 [Infographic]." Alltop. <http://holykaw.alltop.com/global-internet-usage-by-2015-infographic?tu3=1>

      [15] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011 <http://www.unglobalpulse.org/blog/digital-smoke-signals>

      [16] Ibid

      [17] Ibid

      [18] Ibid

      [19] Goetz, Thomas. "Harnessing the Power of Feedback Loops." Wired.com. Conde Nast Digital, 19 June 2011. <http://www.wired.com/magazine/2011/06/ff_feedbackloop/all/1>.

      [20] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

      [21] Bollier, David. The Promise and Peril of Big Data. The Aspen Institute, 2010. <http://www.aspeninstitute.org/publications/promise-peril-big-data>

      [22] Ibid

      [23] Eagle, Nathan and Alex (Sandy) Pentland. "Reality Mining: Sensing Complex Social Systems", Personal and Ubiquitous Computing, 10.4 (2006): 255-268.

      [24] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

      [25] OECD, Future Global Shocks, Improving Risk Governance, 2011

      [26] "Economy: Global Shocks to Become More Frequent, Says OECD." Organisation for Economic Co-operation and Development. 27 June 2011.

      [27] Friedman, Jed, and Norbert Schady. How Many More Infants Are Likely to Die in Africa as a Result of the Global Financial Crisis? Rep. The World Bank <http://siteresources.worldbank.org/INTAFRICA/Resources/AfricaIMR_FriedmanSchady_060209.pdf>

      [28] Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute, June 2011. <http://www.mckinsey.com/mgi/publications/big_data/pdfs/MGI_big_data_full_report.pdf>

      [29] The word "crowdsourcing" refers to the use of non-official actors ("the crowd") as (free) sources of information, knowledge and services, in reference and opposition to the commercial practice of outsourcing.

      [30] Burke, J., D. Estrin, M. Hansen, A. Parker, N. Ramanthan, S. Reddy and M.B. Srivastava. Participatory Sensing. Rep. Escholarship, University of California, 2006. <http://escholarship.org/uc/item/19h777qd>.

      [31] "Crisis Mappers Net-The international Network of Crisis Mappers." <http://crisismappers.net>, http://haiti.ushahidi.com and Goldman et al., 2009

      [32] Alex Pentland cited in "When There's No Such Thing As Too Much Information". The New York Times. 23 Apr. 2011. <http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r=1&src=tptw>.

      [33] Nathan Eagle also cited in "When There's No Such Thing As Too Much Information". The New York Times. 23 Apr. 2011. <http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r=1&src=tptw>.

      [34] Helbing and Balietti. "From Social Data Mining to Forecasting Socio-Economic Crisis."

      [35] Eysenbach G. Infodemiology: tracking flu-related searches on the Web for syndromic surveillance. AMIA (2006). <http://yi.com/home/EysenbachGunther/publications/2006/eysenbach2006cinfodemiologyamiaproc.pdf>

      [36] "Syndromic Surveillance (SS)." Centers for Disease Control and Prevention. 06 Mar. 2012. <http://www.cdc.gov/ehrmeaningfuluse/Syndromic.html>.

      [37] Health Map <http://healthmap.org/en/>

      [39] www.ushahidi.com

      [41] Ushahidi is a nonprofit tech company that was developed to map reports of violence in Kenya following the 2007 post-election fallout. Ushahidi specializes in developing "free and open source software for information collection, visualization and interactive mapping." <http://ushahidi.com>

      [42] Conducted by the European Commission's Joint Research Center against data on damaged buildings collected by the World Bank and the UN from satellite images through spatial statistical techniques.

      [43] www.ushahidi.com

      [44] See https://tacticaltech.org/

      [45] See www.flowminder.org

      [46] Ibid

      [48] http://allafrica.com/stories/201507151726.html

      [49] Ibid

      [50] Ibid

      [51] http://www.computerworld.com/article/2948226/big-data/opinion-apple-and-ibm-have-big-data-plans-for-education.html

      [52] Ibid

      [53] http://www.grameenfoundation.org/where-we-work/sub-saharan-africa/uganda

      [54] Ibid

      [55] http://chequeado.com/

      [56] http://datochq.chequeado.com/

      [57] Times of India (2015): "Chandigarh May Become India's First Smart City," 12 January, http://timesofindia.indiatimes.com/india/Chandigarh-may-become-Indias-first-smart-city/articleshow/45857738.cms

      [58] http://www.cisco.com/web/strategy/docs/scc/ioe_citizen_svcs_white_paper_idc_2013.pdf

      [59] Townsend, Anthony M (2013): Smart Cities: Big Data, Civic Hackers and the Quest for a New Utopia, New York: WW Norton.

      [60] See "Street Bump: Help Improve Your Streets" on Boston's mobile app to collect data on road conditions, http://www.cityofboston.gov/DoIT/apps/streetbump.asp

      [61] Mayer-Schonberger, V and K Cukier (2013): Big Data: A Revolution That Will Transform How We Live, Work, and Think, London: John Murray.

      [62] http://www.epw.in/review-urban-affairs/big-data-improve-urban-planning.html

      [63] Ibid

      [64] Newman, M E J and M Girvan (2004): "Finding and Evaluating Community Structure in Networks," Physical Review E, American Physical Society, Vol 69, No 2.

      [65] http://www.sundaytimes.lk/150412/sunday-times-2/big-data-can-make-south-asian-cities-smarter-144237.html

      [66] Ibid

      [67] Ibid

      [68] http://www.epw.in/review-urban-affairs/big-data-improve-urban-planning.html

      [69] GSMA (2014): "GSMA Guidelines on Use of Mobile Data for Responding to Ebola," October, http://www.gsma.com/mobilefordevelopment/wp-content/uploads/2014/11/GSMA-Guidelines-on-protecting-privacy-in-the-use-of-mobile-phone-data-for-responding-to-the-Ebola-outbreak-_October-2014.pdf

      [70] An example of the early-stage development of a self-regulatory code may be found at http://lirneasia.net/2014/08/what-does-big-data-say-about-sri-lanka/

      [71] See "Sri Lanka's Mobile Money Collaboration Recognized at MWC 2015," http://lirneasia.net/2015/03/sri-lankas-mobile-money-colloboration-recognized-at-mwc-2015/

      [72] http://www.thedailystar.net/big-data-for-urban-planning-57593

      [74] http://www.news.cn/, 25/11/2014

      [76] http://www.todayonline.com/singapore/can-big-data-help-tackle-mrt-woes

      [77] Ibid

      [78] Ibid

      [79] http://edition.cnn.com/2015/06/24/tech/big-data-urban-life-singapore/

      [80] Ibid

      [81] Ibid

      [82] http://venturebeat.com/2015/04/03/how-microsofts-using-big-data-to-predict-traffic-jams-up-to-an-hour-in-advance/

      [83] Ibid

      [84] https://www.hds.com/assets/pdf/the-hype-and-the-hope-summary.pdf

      [85] http://www.news.cn , 14/01/2015

      [86] http://www.techgoondu.com/2015/06/29/plugging-the-big-data-skills-gap/

      [87] Ibid

      [88] Ibid

      [89] http://www.zdnet.com/article/dell-to-create-big-data-skills-in-brazil/

      [90] Ibid

      [91] Efrati, Amir. "'Like' Button Follows Web Users." The Wall Street Journal. 18 May 2011. <http://online.wsj.com/article/SB10001424052748704281504576329441432995616.html>

      [92] Kirkpatrick, Robert. "Data Philanthropy: Public and Private Sector Data Sharing for Global Resilience." UN Global Pulse. 16 Sept. 2011. <http://www.unglobalpulse.org/blog/data-philanthropy-public-private-sector-data-sharing-global-resilience>

      [93] Laney D (2001) 3D data management: Controlling data volume, velocity and variety. Available at: http://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-Volume-Velocity-and-Variety.pdf

      [94] Boyd D and Crawford K (2012) Critical questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication, & Society 15(5): 662-679.

      The Creation of a Network for the Global South - A Literature Review

      by Tanvi Mani last modified Feb 04, 2016 01:13 PM

      I. Introduction

      The organization of societies and states is increasingly predicated on the development of information technology, which has begun to enable the construction of specialized networks. These networks aid in the mobilization of resources on a global platform.[1] There is a need for governance structures that embody this globalized thinking and adopt superior information technology to bridge gaps in the operation of, and participation in, not only political functions but also economic processes and operations.[2] Currently, public institutions fall short of an optimal level of functioning simply because they lack the information, know-how and resources to respond effectively to this newly globalized and economically liberalized world order. Civil society is beginning to seek a greater participatory voice in both policy making and ideation, which requires public institutions to find a way of allowing this participation while retaining the crux of their functions and processes. The network society thus requires, as argued by Castells, a new methodology of social structuring, one amalgamating the analysis of social structure and social action within the same overarching framework.[3] The network presents itself as a 'dynamic, self-evolving structure, which, powered by information technology and communicating with the same digital language, can grow, and include all social expressions, compatible with each network's goals. Networks increase their value exponentially through their contribution to human resources, markets, raw materials and other such components of production and distribution.'[4]

      As noted by Kevin Kelly, 'The Atom is the past. The symbol of science for the next century is the dynamical Net. … Whereas the Atom represents clean simplicity, the Net channels the messy power of complexity. The only organization capable of nonprejudiced growth or unguided learning is a network. All other topologies limit what can happen. A network swarm is all edges and therefore open ended any way you come at it. Indeed the network is the least structured organization that can be said to have any structure at all. … In fact a plurality of truly divergent components can only remain coherent in a network. No other arrangement - chain, pyramid, tree, circle, hub - can contain true diversity working as a whole.'[5]

      A network is therefore integral to the facilitation, coordination and advocacy of different agendas within a singular framework, one that seeks to formulate suitable responses to a wide range of problems across regions. An ideal model of a network would thus be one reflective of the interconnectivity between relationships, strengthened by effective communication and based on a strong foundation of trust.

      The most powerful element of a network, however, is the idea of a common purpose. The pursuit is towards similar ends, and the interconnected web of support it offers is therefore in realization of a singular goal.

      II. Evolution of the Network

      There are certain norms that must be incorporated for a network to work at its best. Robert Chambers, in his book Whose Reality Counts?, identifies these norms and postulates their extension to every form of network, in order to capture its creative spirit and aid in the realization of its goals.[6] A network should therefore ideally foster four fundamental elements in order to inculcate an environment of trust, encouragement and the overall actualization of its purpose. These elements are: diversity, or the encouragement of a multitude of narratives from diverse sources; dynamism, or the ability of participants to retain their individual identities while maintaining a facilitative structure; democracy, or an equitable system of decision making to enable the efficient working of the net; and finally, decentralization, or the feasibility of enjoying local specifics on a global platform.[7]

      Attaining these ideal elements requires strengthening certain aspects of practice through specific and focused functions. These include ensuring a clear, broad consensus, which secures the joining of a common purpose. Additionally, centralization, in the form of an overarching set of rules, must be kept to a minimum in order to allow greater flexibility while still providing the necessary support structure. The building of trust and solid relationships between participants is prioritized to enhance creative ideation in a supportive environment. Joint activities, more than being output oriented, are seen as the knots that tie together the entire web of support. Input and participation are the foremost objectives of the network, in keeping with the understanding that "contribution brings gain".[8]

      Significant management issues that plague networks include the practical aspects of bringing the network into operation through efficient leadership and the consolidation of a common vision. A balanced approach would entail common consultation on the goals of the network, the sources of funding and an agreed-upon structure within which the network would operate. It is also important to create alliances outside the sector of familiarity and to ensure an inclusive environment for members across regions, allowing them to retain their localized individuality while affording them a global platform.[9]

      III. Structure

      The structural informality of a network is essential to its sustenance. Networks must therefore embody a non-hierarchical structure, devoid of bureaucratic interference and insulated from a centralized system of control and supervision. This requires an internal system of checks and balances, consisting of periodic reviews and assessments. Networks must therefore limit the supervisory powers of the secretariat. The secretariat should coordinate the network's activities and allocate appropriate areas of engagement according to the relative strengths of the participating members.

      One form of network structure, postulated within a particular research study, is the threads, knots and nets model.[10] It consists of members within a network bound together by threads of relationship, communication and trust. These threads represent the commonality that binds together the participants of the particular network. The threads are established through common ideas and voluntary participation in the process of communication and conflict resolution.[11]

      The knots represent the combined activities in which the participants engage, with the common goal of realizing a singular purpose. These knots signify an optimum level of activity, wherein members of the network are able to support, inspire and confer tangible benefits on each other. The net represents the entire structure of the network, constructed through a confluence of relationships and common activities.[12] The structure is autonomous in nature and allows participants to contribute without losing their individual identities. It is also dynamic and flexible, incorporating new elements with relative ease. It is therefore a collaboration that affords its members the opportunity to expand without losing its purpose. The maintenance of such a structure requires constant review and repair, with adequate awareness of weak links or "threads" and the capability and willingness to knot them together with new participants, thereby extending the net.

      For example, the Global Alliance for Vaccines and Immunization used a system of organizational "milestones" to monitor the progress of the network and keep it focused. It requires a sustained institutional effort to fulfill its mandate of "the right of every child to be protected against vaccine-preventable diseases" and brings together international organizations, civil society and private industry.[13] As postulated in the Critical Choices research study of the United Nations, clearly defined milestones are integral to sustaining an effective support mechanism for donors and to ensuring that all relevant participants are on board.[14] They also allow donors to be made aware of the tangible outcomes the network has achieved. Interim goals achievable within a short span of time also confer a sense of legitimacy on the network, allowing it to deliver on its mandate early on. Setting milestones requires in-depth focus and a nuanced understanding of specific aspects of larger problems; delivering early results on these problems builds a foundational base of trust on which a possibly long-drawn-out consultative process can rest.[15]

      A network might often find alliances outside its sector of operation. For example, Greenpeace was able to make its voice heard in international climate change negotiations by engaging with private insurance companies and enlisting their support.[16] The organization looked towards the private sector to mobilize resources and enlist the requisite expertise for its various projects.[17]

      A. Funding

      The financial support a network receives is essential to its sustenance. The initial seed money may be obtained from a single source; however, cross-sectoral financing is necessary to build consensus on issues that may form part of the network's mandate. The World Commission on Dams (WCD), for example, obtains funding from multiple sources in order to retain its credibility. The WCD's funders include government agencies, multilateral organizations, business associations and NGOs, with no single donor contributing more than 10% of its total funding.[18] The difficulty with this model, however, is the relative complexity of assimilating a number of smaller contributions, which may detract from the network's capacity to expand its reach and enhance the scope of its work. Cross-sectoral funding is less of a fundamental requirement for networks whose primary mandate is implementation, such as the Global Environment Facility (GEF), whose legitimacy derives from intergovernmental treaties and which is therefore funded only by governments.[19] The GEF has only recently broadened its sources of funding to include external contributions from the private sector.

      A network can also be funded through the objective it seeks to achieve through the course of its activities. For example, Rugmark, an international initiative that seeks to eliminate the use of child labor in South Asia, uses an external on-site monitoring system to verify and provide labels certifying that carpets were produced without the use of child labor.[20] The monitors of this system are trained by Rugmark, and carpet producers must sign a binding agreement undertaking not to employ children below the age of 14 in order to receive the certification. The funds generated from these carpets, for which American and European importers pay 1% of the import value, are used to provide rehabilitation and education facilities for children in affected areas. The use of these funds is reported regularly.[21]

      Funding must be sustained over several years, which is a difficult task for networks that require an overall consensus of participants. The greatest outcomes of a network are not tangible solutions to the problem but the facilitation of an environment that allows stakeholders to derive a tangible solution. Thus, the elements of trust, communication and collaboration are integral to the efficient functioning of the network. However, the lack of tangible outcomes exposes funders to financial risk. The best way to reduce such risk is to institute a firm time limit for the initiative, within which it must achieve tangible results or solutions that can be implemented. A less stringent approach would be to incorporate a system of periodic review and assessment of the network's accomplishments, following which recommendations may be made for the further course of action.[22]

      B. Relationships

      A three-year study conducted by Newell & Swan drew definitive conclusions with respect to inter-organizational collaboration between participants within a network. The study identified three types of trust: companion trust, the trust that exists in the goodwill and friendship between participants; competence trust, whereby the competence of other participants to carry out the tasks assigned to them is accepted; and commitment trust, which is predicated on contractual or inter-institutional agreements.[23] While companion and competence trust are easily identifiable, commitment trust is more subjective, as it is determined by agreement surrounding core values and overall identifiable aims. Sheppard & Tuchinsky refer to an identification-based trust, founded on a collective understanding of shared values. Such trust requires significant investment, but they argue, "The rewards are commensurably greater and the benefits go beyond quantity, efficiency and flexibility."[24] Powell postulates, "Trust and other forms of social capital are moral resources that operate in a fundamentally different manner than physical capital. The supply of trust increases, rather than decreases, with use: indeed, trust can be depleted if not used."[25]

      Karl Weick endorses the "maintenance of tight control values and beliefs which allow for local adaptation within centralized systems." [26] The autonomy that participants within a network enjoy is therefore considered close to sacred, allowing them to engage with each other on an equitable footing while still maintaining their individual identities. Freedman and Reynders believe that networks place a so-called 'premium' on "the autonomy of those linked through the network… networks provide a structure through which different groups - each with their own organizational styles, substantive priorities, and political strategies - can join together for common purposes that fill needs felt by each."[27] Consequently, the lower the level of centralized control within a network, the greater the requirement of trust. Allen Nan echoes this idea in her review of coordinating conflict-resolution NGOs. She believes that these NGOs are most effective when "beginning with a loose voluntary association which grows through relationship building, gradually building more structure and authority as it develops. No NGO wants to give away its authority until it trusts a networking body of people that it knows." [28]

      C. Communication and Collaboration

      The binding force of any network is the relationships between its participants and their interactions with organizations outside the network. Research has shown that face-to-face interaction works best: although email may be practical, face-to-face meetings at regular intervals build a level of trust amongst participants. [29] It is, however, important to prevent networks from turning into 'self-selecting oligarchies'; to do so, a balance needs to be drawn between goodwill and trust in others' competence, along with a common understanding of differently hierarchized values. [30]

      There is also a pressing need to develop a relationship vocabulary, as suggested by Taylor, which would be of particular use within transnational networks and afford a deeper understanding of cross-cultural relationships.[31]

      D. Participation

      A significant issue that networks today have to address is how to inculcate and subsequently maintain participation in the activities of the network. This includes providing incentives to participants, encouraging diversity and enabling greater creative inflow across sectors to generate innovative output. Participation involves three fundamental elements: action, which includes active contribution in the form of talking, listening, commenting, responding and sharing information; process, which aids in an equitable system of decision-making and the construction of relationships; and the values underpinning these two elements, which include spreading equality, inculcating openness and including previously excluded communities or individuals. [32] Participation itself admits a three-levelled definition: participation as contribution, where people offer a tangible input; participation as an organizational process, where people organize themselves to influence certain pre-existing processes; and participation as a form of empowerment, where people seek to gain power and authority from participating.

      In order to create an autonomous system of evaluating and monitoring the nature and context of participation, a network would have to systematically incorporate a few fundamental processes, such as:

      • enabling an understanding of the dynamism of the network through established criteria for monitoring the levels of participation of the members;
      • creating an explicit checklist of the qualifications of this participation, such as the contributions of the participants, the limits of commitment and the available resources that must be shared and distributed;
      • acknowledging the importance of relationships as fundamental to the success of any network;
      • building a capacity for facilitative and shared leadership; and
      • tracing the changes that occur when the advocacy and lobbying activities of individuals are linked, and using these individuals as participants who have the power to influence policy and development at various levels.[33]

      Finally, the recognition that utilizing the combined faculties of the network would aid in effecting further change is vital to sustaining active participation in the network.[34] It is common for networks to stagnate simply because of a lack of clarity on what a network really is or what it entails. Significant misconceptions about the activities of a network - such as the idea that a network "works solely as a resource center, to provide information, material and papers, rather than as forums for two way exchanges of information and experiences" - contribute to the misunderstanding regarding its participation requirements.[35] To facilitate an active, participatory function of learning, a network needs to be more than a resource center that seeks to meet the needs of beneficiaries. While meeting these needs is essential, development projects tend to obfuscate the benefit/input relationship within a network, thus significantly depleting its dynamism quotient. [36]

      One method of moving away from the needs-based model is to create a tripartite framework, as was done within a particular research study. [37] This involves a Contributions Assessment, a Weaver's Triangle for Networks and an identification of circles of participation.

      A Contributions Assessment is an analysis of what the participants within a network are willing to contribute. It enables the network to assess what resources it has access to and how those resources may be distributed amongst the participants, multiplied or exchanged. [38] This system is predicated on assessing what participants have to offer, as opposed to what they need. It challenges the long-held notion that an evaluation is required to identify problems and make recommendations to address them; instead, it seeks to focus on moments of excellence and enable a discussion of the factors that contributed to these moments. [39] It thus places a value on the best of "what is" as opposed to trying to find a plausible "what ought to be". This approach allows participants to recognize that they are in fact the real "resource centre" of the network, and encourages them to act accordingly.

      A Contributions Assessment may be incorporated in practice through a few steps. It must be focused on the contributions, after a discussion on who the contributors may be. The aims of the network must be clarified, along with a specification of the contributions required - such as newsletters, a conference or policy analysis. The members of the network must be clear on what they would like to contribute to the network and how such contributions might be delivered. Finally, the secretariat must be able to ideate or innovate on how it can enable more contributions from the members in a more effective manner. [40]

      The Weaver's Triangle has been adapted to be applied within networks and enables participants to understand what the aims and activities of the network are. It identifies the overall aim of the network and the change the network seeks to bring about to the status quo. It then lays out the objectives of the network in the form of specific statements about the differences that the network seeks to bring about. Finally, the network must explain why a particular activity has been chosen. [41] The base of the triangle reflects the specific activities that the network seeks to engage in to achieve these objectives. The triangle is further divided in two, to ensure that action aims and process aims have equal weightage; this facilitates exchange and connection between the members of the network. [42]

      The Circles of Participation is an idea put forth by the Latin American and Caribbean Women's Health Network (LACWHN). [43] This network has three differentiated categories of membership, which it uses to determine the degree of commitment of an organization to the network: R refers to members who receive the women's health journal; P refers to members who actively participate in events and campaigns and who act as advisors on specific topics; and PP refers to the permanent participants in the network at national and international levels, who also receive the journal. This categorization allows the network to assess its dynamism and growth, with members moving through the categories depending on their levels of participation. [44]
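      The tiered membership described above lends itself to a simple data model. The sketch below is a minimal illustration in Python; the member names, the activity fields and the thresholds for each circle are hypothetical, not LACWHN's actual criteria. It shows how a secretariat might assign members to the R, P and PP circles and count upward movement between two snapshots as a rough measure of dynamism:

```python
from dataclasses import dataclass

# Circles described by LACWHN, ordered from least to most engaged:
# R  - receives the journal
# P  - participates in events/campaigns, advises on specific topics
# PP - permanent participant at national and international levels
CIRCLES = ["R", "P", "PP"]

@dataclass
class Member:
    name: str
    events_attended: int = 0   # hypothetical activity measures
    advisory_roles: int = 0
    permanent_roles: int = 0   # e.g. standing national/international posts

def circle(member: Member) -> str:
    """Assign a member to a circle (thresholds are illustrative only)."""
    if member.permanent_roles > 0:
        return "PP"
    if member.events_attended > 0 or member.advisory_roles > 0:
        return "P"
    return "R"  # every member at least receives the journal

def dynamism(before: dict, after: dict) -> int:
    """Count members who moved to a more engaged circle between snapshots."""
    return sum(
        1 for name in before
        if name in after
        and CIRCLES.index(after[name]) > CIRCLES.index(before[name])
    )

members = [Member("org-a"), Member("org-b", events_attended=2),
           Member("org-c", permanent_roles=1)]
snapshot = {m.name: circle(m) for m in members}
print(snapshot)  # {'org-a': 'R', 'org-b': 'P', 'org-c': 'PP'}
```

      Comparing such snapshots over time gives the kind of growth assessment the text describes, with members visibly moving through the categories as their participation changes.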

      An important space for contributions to the network is the newsletter. This can be facilitated by allowing contributions from various sources, provided they meet established quality checks; ensuring a balance between the regions of origin of the members of the network; ensuring a balance between the policy and programme activities of the members; and keeping the centralized editorial process to a minimum. This is in keeping with the ideal of a decentralized system of expression that allows each member to retain its individuality while still contributing to the aims of the network. The Women's Global Network for Reproductive Rights (WGNRR) sought to create a similar system of publication to measure the success of its linkages, the levels of empowerment amongst members - in terms of strategizing and enabling localized action - and the allocation of space in a fair and equitable manner. [45] Another network, Creative Exchange, customizes the information flow within the network so that each member receives only the information it has expressed interest in.[46] This prevents members from being overburdened with unnecessary information.
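      The interest-customized information flow mentioned above can be pictured as a simple topic filter. The sketch below is a hypothetical illustration in Python (the member names, topics and tagging scheme are invented, not Creative Exchange's actual system): each newsletter item is tagged with topics, and a member receives an item only if the tags overlap its declared interests:

```python
# Hypothetical interest registry: member organization -> topics it opted into
interests = {
    "org-a": {"health", "gender"},
    "org-b": {"policy"},
    "org-c": {"health", "policy"},
}

# Newsletter items tagged by topic (illustrative titles)
items = [
    {"title": "Regional health campaign report", "topics": {"health"}},
    {"title": "Draft policy briefing", "topics": {"policy", "gender"}},
]

def route(interests, items):
    """Keep, per member, only the items whose topics overlap its interests."""
    return {
        member: [item["title"] for item in items if item["topics"] & topics]
        for member, topics in interests.items()
    }

for member, titles in route(interests, items).items():
    print(member, "->", titles)
```

      The design choice is that filtering happens centrally but against member-declared interests, so the editorial process stays minimal while each member's inbox reflects its own priorities.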

      The activities of the network which do not pass directly through the secretariat or the coordinator can be monitored efficiently by keeping in close contact with new entrants to the network and capturing the essence of the activities that occur on its fringes. This allows an assessment of the diversity of the network. For example, Creative Exchange sends out short follow-up emails to determine the number and nature of contacts made subsequent to a particular item in the newsletter. The UK Conflict, Development and Peace Network (CODEP) records the newest subscribers to the network after every issue of its newsletter, and ABColombia sends out weekly news summaries electronically, available for free to recipients who provide details of their professional engagements and why or how they wish to use the summaries. [47] This enables a mapping of the type of recipients the information reaches.

      E. Leadership and Coordination

      Sarason and Lorentz postulate four distinguishing characteristics that capture the creativity and expertise required of individuals leading and coordinating networks.[48] Knowledge of the territory - a broad understanding of the type of members, the resources available and the needs of the members - is extremely important to facilitate an environment of mutual trust and open dialogue. Scanning the network for fluidity, assessing openings, making connections and innovating solutions make for an efficient leadership that contributes to the overall dynamism of the network. In addition, perceiving strengths and building on the assets of existing resources allows the network to capitalize on its strengths. Finally, the coordinators of a network must be a resource to all members of the network, enabling them to create better and more efficient systems; they must therefore exercise their personal influence over members wherever required for the overall benefit of the network. In practice, beneficial leadership also requires an inventive approach, providing fresh and interesting solutions to immediate problems. A sense of clarity, transparency and accountability likewise encourages members of the network to participate more and engage with each other. It is important for the leadership within a network to deliver on expectations while building consensus amongst its members.

      A shared objective, a collaborative setting and a constant review of strategies are important to maintain linkages within a network. Responsible relationships, underpinned by values and supported by flows of relevant information, enable those engaged within the network to do the relevant work effectively. In addition, respect for the autonomy of the network is essential.

      F. Inclusion

      Public policy networks are more often than not saturated with the economic and social elite of the developed world. A network across the Global South would have to change this norm and extend its ambit of membership to grassroots organizations, which might not otherwise have the resources or the opportunity to be part of a network.[49] Networks can achieve their long-term goals only if they are driven by a willingness to include organizations from across economic demographics. This ensures that their output is the result of a collaborative process that takes into account cross-cultural norms and differentials across economic demographics.

      The participation of diverse actors reflects a policy-making process that has given due regard to on-the-ground realities and is sensitive to the concerns of differently placed interest groups. Networks have been accused of catering only to the needs of industrialized countries and subscribing to the values of the Global North, thus stunting local development and enforcing double standards. This tarnishes the legitimacy of the processes within the network itself. It is therefore all the more essential that a network focused on the Global South have a diverse collection of members from across backgrounds and economic contexts. Additionally, the accountability of the network to civil society depends on the nature of the links it maintains with the public. Inclusion thus fosters a sense of legitimacy and accountability. The inclusion of local institutions from the beginning also increases the chances of the solutions provided by the network being effectively implemented. Local inclusion affords a sense of responsibility and ensures that the network remains sustainable in the long run. Allowing local stakeholders to take ownership of the network, participate in the formulation of policies, engage in planning and facilitate participation enables significant public policy issues to be addressed efficiently. [50] Networks would thus need to create avenues for local institutions and civil society to participate in a democratic form of decision-making.

      III. Evaluation

      The evaluation of a network is most efficiently carried out through a checklist, such as the one formulated within a research study for the purpose of evaluating its own network. [51]

      This checklist enumerates the various elements that have to be taken into consideration while evaluating the success of a network, as follows:

      FIG 1.[52]

      1. What is a network?

      'Networks are energising and depend crucially on the motivation of members'

      (Networks for Development, 2000:35)

      This definition is one that is broadly shared across the literature, although it is more detailed than some.


      A network has:

      • A common purpose derived from a shared perceived need for action
      • Clear objectives and focus
      • A non-hierarchical structure

      A network encourages:

      • Voluntary participation and commitment
      • The input of resources by members for the benefit of all

      A network provides:

      • Benefit derived from participation and linking


      2. What does a network do?

      • Facilitate a shared space for exchange, learning and development - the capacity-building aspect
      • Act for change in areas where none of the members is working in a systematic way - the advocacy, lobbying and campaigning aspect
      • Include a range of stakeholders - the diversity/broad-reach aspect


      3. What are the guiding principles and values?

      • Collaborative action
      • Respect for diversity
      • Enabling marginalised voices to be heard
      • Acknowledgement of power differences, and commitment to equality

      4. How do we do what we do, in accordance with our principles and values?

      Building Participation

      • Knowing the membership, what each can put in, and what each seeks to gain
      • Valuing what people can put in
      • Making it possible for them to do so
      • Seeking commitment to a minimum contribution
      • Ensuring membership is appropriate to the purpose and tasks
      • Encouraging members to be realistic about what they can give
      • Ensuring access to decision-making and opportunities to reflect on achievements
      • Keeping internal structural and governance requirements to a necessary minimum.


      Building Relationships and Trust

      • Spending time on members getting to know each other, especially face-to-face
      • Coordination point/secretariat has relationship-building as vital part of work
      • Members/secretariat build relations with others outside network - strategic individuals and institutions


      Facilitative Leadership (may be one person, or rotating, or a team)

      • Emphasis on quality of input rather than control
      • Knowledgeable about issues, context and opportunities
      • Enabling members to contribute and participate
      • Defining a vision and articulating aims
      • Balancing the creation of forward momentum and action with generating consensus
      • Understanding the dynamics of conflict and how to transform relations
      • Promoting regular monitoring and participatory evaluation
      • Have the minimum structure and rules necessary to do the work. Ensure governance is light, not strangling. Give members space to be dynamic
      • Encourage all those who can make a contribution to the overall goal to do so, even if it is small

      Working toward decentralised and democratic governance

      • At the centre, make only the decisions that are vital to continued functioning. Push decision-making outwards.
      • Ensure that those with least resources and power have the opportunity to participate in a meaningful way.


      Building Capacity

      • Encourage all to share the expertise they have to offer. Seek out additional expertise that is missing.


      5. What are the evaluation questions that we can ask about these generic qualities? How does each contribute to the achievement of your aims and objectives?

      Participation

      • What are the differing levels or layers of participation across the network?
      • Are people participating as much as they are able to and would like?
      • Is the membership still appropriate to the work of the network? Purpose and membership may have evolved over time
      • Are opportunities provided for participation in decision-making and reflection?
      • What are the obstacles to participation that the network can do something about?

      Trust

      • What is the level of trust between members? Between members and secretariat?
      • What is the level of trust between non-governing and governing members?
      • How do members perceive levels of trust to have changed over time?
      • How does this differ in relation to different issues?
      • What mechanisms are in place to enable trust to flourish? How might these be strengthened?


      Leadership

      • Where is leadership located?
      • Is there a good balance between consensus-building and action?
      • Is there sufficient knowledge and analytical skill for the task?
      • What kind of mechanism is in place to facilitate the resolution of conflicts?


      Structure and control

      • How is the structure felt and experienced? Too loose, too tight, facilitating, strangling?
      • Is the structure appropriate for the work of the network?
      • How much decision-making goes on?
      • Where are most decisions taken? Locally, centrally, not taken?
      • How easy is it for change in the structure to take place?


      Diversity and dynamism

      • How easy is it for members to contribute their ideas and follow-through on them?
      • If you map the scope of the network through the membership, how far does it reach? Is this as broad as intended? Is it too broad for the work you are trying to do?

      Democracy

      • What are the power relationships within the network? How do the powerful and less powerful interrelate? Who sets the objectives, has access to the resources, participates in the governance?

      Factors to bear in mind when assessing sustainability

      • Change in key actors, internally or externally; succession planning is vital for those in central roles
      • Achievement of lobbying targets, or significant change in context, leading to a natural decline in energy
      • Burn-out and a declining sense of the added value of the network over and above everyday work
      • Membership in networks tends to be fluid. A small core group can be a worry if it does not change and renew itself over time, but snapshots of moments in a network's life can be misleading. In a flexible, responsive environment members will fade in and out depending on the 'fit' with their own priorities. Such changes may indicate dynamism rather than lack of focus.
      • Decision-making and participation will be affected by the priorities and decision-making processes of members' own organisations.
      • Over-reaching or generating unrealistic expectations may drive people away
      • Asking the same core people to do more may diminish reach, reduce diversity and encourage burn-out

      V. Learning and Recommendations

      In order to facilitate the optimum working of a network, several factors need to be taken into consideration and certain specific processes have to be incorporated into its regular functioning. These include, for example:

      • Ensuring that the evaluation of the network occurs at periodic intervals, with the requisite level of attention to detail and efficiency, to enable an in-depth recalibration of the functions and processes of the network. To this effect, evaluation specialists must be engaged not just at times of crisis or instability but as accompaniments to the various processes undertaken by the network. This would enable the holistic development of the network.
      • It is also important to understand the underlying values that define the unique nature of the network. The coordination of the network, its functions and its activities are intrinsically linked to these values and recognition of this element of the network would enable a greater functionality in the overall operation of the network.
      • A strong relationship between the members of the network, predicated on trust and open dialogue is essential for its efficient functioning. This would allow the accumulation of innovative ideas and dynamic thought to direct the future activities of the network.
      • The secretariat or coordinator of the network must be able to engage the members in monitoring and evaluating the progress of the network. One method of enabling this coordination is through the institution of 'participant observer' methods at international conferences or meetings, which allow the members of the network to report back on the work they have done which is linked to the work of other members.
      • The autonomy of a network and its decentralized mechanism of functioning are integral to retain the individuality of its members, who seek to pursue institutional objectives. The members seek to facilitate creative thinking and share ideas and this must be supported by financial resources. A strong bond of trust between the members of a network is therefore essential to enable long term commitments and the flourishing of interpersonal communication between members.
      • It is important that the subject area of operation of the network be comprehensively defined before the network comes into existence.
      • As seen with the experience of Canadian knowledge networks, it is beneficial to be selective in inviting participants to the network; following a rigorous process of review and selection ensures that only the best candidates are selected, facilitating effective partnerships with other networks as a result of demonstrable expertise within a particular field.
      • The management of a network must be disciplined, with clearly demarcated project deadlines and an optimum level of transparency and accountability. At the helm of leadership of every successful network, there has been intelligent, decisive and facilitative exchange, which is essential in securing a durable and potentially expandable space for the network to operate in.

      A. Canadian Perspectives

      A study of Canadian experiences was conducted by examining the Centers of Excellence and the Networks of Centers of Excellence (NCEs), which were funded through three federal granting councils.[53] An initial observation made in the course of this study was that each network is intrinsically different and no uniform description would fit all of them. The objectives of the Networks of Centers of Excellence Program are, broadly: to encourage fundamental and applied research in fields critical to the economic development of Canada; to encourage the development and retention of world-class scientists and engineers specializing in essential technologies; to manage multidisciplinary, cross-sectoral national research programs which integrate stakeholder priorities through established partnerships; and finally, to accelerate the exchange of research results within networks through technology transfers made to users for social and economic development. [54] Extensive interviews carried out in the course of the research conducted by the ARA Consulting Group Inc. drew particularly relevant conclusions with respect to the NCEs.

      Firstly, they have been able to produce significant "cultural shifts" among the researchers associated with the networks. This is attributed to the networks facilitating a collaborative effort amongst researchers, as opposed to their previous practice of working largely in isolation. The benefits of this collaboration include providing innovative ideas and leading the research itself in unprecedented directions, equipping Canada with the capability to compete globally in its research endeavors. The cultural shift has also made researchers more aware of the problems that plague industry and has instigated more in-depth research into the development of the industrial sector. Government initiatives that have attempted to cohesively apply academic research to industry have had limited success. The NCEs, however, have managed to break down the barriers between these two seemingly disparate fields. This has resulted in a faster and more effective system of knowledge dissemination, producing durable and self-sustaining economic development at a faster rate. The NCEs have also been able to contribute to healthcare, wellness and overall sustainable development through their cross-sectoral research approach, a model that can be used worldwide.

      Another tangible effect has been that the relationship between industry and academic research is evolving into a positive and collaborative exchange, as opposed to its previous state, which was largely isolationist, bordering on confrontational.[55] A possible cause is the increased representation of companies in the establishment of networks, allowing them to influence the course of research. This has not been met with any resistance from academic researchers, who are driven by the imperative of open publication. [56] Besides influencing the style of management, industrial representation has also brought about an increase in the level of private-sector financial contributions made to NCEs. It is believed that these NCEs may even be able to support themselves in the next 7-8 years through the funding they receive from the commercialization of their research.

      A third benefit is the faster rate of production of new knowledge and innovative thinking. This is the result of collaborative techniques made more efficient through the use of modern technology. The increasing number of multi-authored, cross-institutional scholarly publications produced by the NCEs is evidence of this trend. The rate and quantity of technology transfers has also increased exponentially as a result. Knowledge networks also facilitate the mobilization of human resources and address cross-disciplinary problems, resulting in efficient and synergistic solutions. Their low-cost, fast-paced approach has been instrumental in constructing an understanding of, and capacity to engage in, sustainable development.

      Significant contributions to sustainable development include those of the Canadian Genetic Diseases Network, which has discovered two specific genes that cause early-onset Alzheimer's disease. The Sustainable Forest Management Network has claimed that its research has a considerable level of influence on the industrial approach to sustainability. The Canadian Bacterial Diseases Network conducts research on bacterially caused diseases, mostly prevalent in developing countries, with a view to producing antibiotics and vaccines that may successfully combat these diseases. TeleLearning, another such network, is working on the creation of software environments which will form the basis of technology-based education in the future. [57] The greatest advantage of these knowledge networks is that they have been able to surpass traditional disciplinary barriers and have emerged at the forefront of interdisciplinary articulation, which is emerging as the path to future breakthroughs in the applied sciences and technology. The NCEs have also been able to provide diverse working environments for graduate students, who have been able to work under scientists with different specializations and across different departments. Students have also been able to interact with government and industry representatives, giving them far greater exposure to the field and equipping them to access a wide range of employment opportunities.

      The corporate style of management incorporated within the NCEs encourages a sense of discipline and an enthusiasm for innovation. The Board of Directors at each NCE functions much like a typical corporate board. Researchers are therefore required to provide regular reports and meet deadlines to achieve predetermined goals that have been agreed upon. The new paradigm of sustainable development and the fluid transfer of knowledge requires this structure of management, even within a previously strictly academically oriented environment. NCEs have been incorporated as non-profit corporations largely for legal reasons, such as the ownership of intellectual property.

      Participation in these networks is restricted and open only by invitation, in the form of a submission of project proposals under a particular theme, with the final selection made subject to a rigorous process of evaluation. This encourages the participants of the network to embody a degree of discipline and carry out their activities in a constructive, time-bound manner.

      B. Perceived Challenges

These knowledge networks, although extremely beneficial in the long run, do have certain specific issues that need to be addressed. Firstly, most formal knowledge networks do not have a formalized communication strategy. While they make use of various forms of telecommunication, this communication is in no way formally directed or specific. Although some networks have managed to set up a directed communications strategy, supplemented by the involvement of specifically communications-based networks (such as CANARIE), there is still a long way to go in this area.

As is evident from most academic endeavors in recent years, efficient and sustained development, both in terms of economy and of self-sustenance, requires a smooth transition to close collaboration with industry. Although the NCEs have made progress in this area, a lesson that can be learnt is that knowledge networks require a collaborative arrangement between researchers, industry and the financial sector. [58] The nature of this collaboration cannot be predicted before tangible research outputs are developed that reflect the relevance of academia to the industrial and financial sectors. One network, PENCE, has mandated that its board of directors include a representative of the financial sector. This is a step forward in opening the doors to greater collaboration and mutually assured growth and sustainable development in academia as well as in the industrial and financial sectors.

As with all knowledge networks, there is a continuous need to expand the focus areas to cover more fields and to instigate research in neglected areas. The largest number of networks has been in healthcare and health-associated work. However, there is a pressing need for networks to be established in other fields as well, such as those related to environmental issues, social dynamics and the general quality of life. [59]

The Canadian experience has resulted in a nuanced understanding of the specific actions needed to strengthen knowledge networks across the spectrum. Firstly, there is a pressing need to build new knowledge networks, which would in turn strengthen the institutions upon which the networks are based. These include universities and research institutions, which have been weakened both financially and academically over the past few years. The NCE Program, on the face of it, seems to strengthen universities by attracting funding for research endeavors that would otherwise not be available to them. While this may be true, it tends to obfuscate the true nature of a university as an intellectual community, by portraying it as a funding source for research and equipment.[60] At the same time, the competition posed by the NCEs threatens the university's stature as a multi-disciplinary and graduate institution, and its deteriorating role in fostering research and laying the foundation of an intellectual community needs to be reversed. Another aspect that needs to be considered is the role of knowledge networks in fostering sustainable development not only on a national or regional scale but at the global level. This can be effectuated by allowing the amalgamation of academia and industry through ample representation, a model that has proven effective within the NCEs. This is all the more relevant today, when multinational corporations hold considerable sway over the global economy, so much so that the role of governments in regulating this economy is gradually decreasing. Multilateral investment treaties and agreements are reflective of this.

The final issue is the long-standing debate between public good and proprietary knowledge. Canadian knowledge networks are of the opinion that knowledge must be freely disseminated. However, certain networks, including the NCEs, grant the exclusive right to develop and apply this knowledge to specific industry affiliates. On the one hand, this facilitates further investment in the research, which creates better products, new jobs and further social development. But this is predicated on a fine balance: allowing such development without widening the already disparate socio-economic gaps between developed and developing countries. Thus the balance between public good and proprietary knowledge must be managed effectively through the regulatory role discharged by governments and the decision-making faculties of these knowledge networks. [61]

Establishing international linkages across networks based in different regions of the world would also be an effective means of building partnerships and creating a new, self-sustaining structure. This would bring new prospects of funding into sustainable development activities and engage industrial affiliates with international development activities.

      C. Donor Perspectives

The International Development Research Centre (IDRC), based in Canada, has also been instrumental in setting up support structures for networks. The IDRC has consistently emphasized networks as mechanisms for linking scientists engaged in similar problems across the globe, rather than as mechanisms to fund research in particular countries. This has afforded the IDRC greater flexibility in responding to the needs of developing countries, as well as to the financial pressures within Canada to deliver superior technical support with lower overheads. The IDRC sees networking as an indispensable aspect of scientific pursuit and of technological adaptation in the most effective manner. It currently supports four specific types of networks: horizontal networks, which link together institutions with similar areas of specialization; vertical networks, which work on disparate aspects of the same problem or on different but interrelated problems; information networks, which provide a centralized information service to members, enabling them to exchange information as necessary; and training networks, which provide supervisory services to independent participants within the network.[62]

      (I) Internal Evaluations

There is an outstanding need for monitoring visits, undertaken by the coordinator or by specific representatives of the members or donors, as applicable. These would expedite the process of identifying problems and aid in deriving tangible solutions efficiently. The criteria for assessment would vary depending on the goals of the organization. Donors may pose questions about the cost-effectiveness of a particular pattern of research and may seek a formal report on this aspect. A more extensive model of donor evaluation may even include assessments of the monitoring and coordination of specific functions.

      (II) External Evaluations

A system of external evaluation would be useful in assessing data on the operations of programs and their objectives. It would also engage new participants by injecting fresh ideas and insights into the management and scope of the network. The most extensive method of network evaluation is the one postulated by Valverde [63] and reviewed by Faris [64]. It aimed to analyse the particular constraints and specific elements that influence the execution of network programs. This method identifies a list of threats, opportunities, strengths and weaknesses to inform future recommendations. The Valverde method makes use of both formal and informal data, varied depending on the type of network and the management structure it employs.[65]

      (III) Financial Viability

A network almost always requires external resources to aid in the setting up and coordination of its activities. Donor agencies must recognize the long-term commitment required in this respect. It is therefore essential that the period for which funding will be made available be clarified at the outset, to leave agencies with ample time to plan for the possibility of cessation of external financial support. [66] As the findings of the research study conclude, although most networks are offered external support, it is primarily technology-transfer and information networks that have been able to generate the bulk of their funding in this way. They have been able to obtain this financial assistance from a variety of sources, including participating organizations as well as governments. [67] Funding for purely research networks, however, is inconsistent, and such networks have to plan in advance for a possible cessation of financial support.[68]

      (IV) Adaptability

From the perspective of donors, the degree of adaptability and responsiveness of a network is especially relevant in assessing its coordination, control and leadership. A network plagued by ineffective leadership and a lack of coordination is unable to adapt to changing circumstances and meet the needs of its participants. A combination of collaborative effort, a localized approach and far-sighted leadership instills in the participants of the network a sense of comfort in its processes, and in the donors a faith in its ability to address topical issues and remain relevant.

      (V) The Exchange of Information

As noted by Akhtar, a network is created in response to the growing need to improve channels of information exchange and communication. [69] Information needs to be tailored to suit its users and must be disseminated accordingly. The study concluded that information networks engaged in the transfer of technology are inefficient at disseminating internally derived information and at recognizing the needs of their users.[70] Given that these networks are especially user-oriented, this systemic failure is extremely problematic. There is also a need to review the mechanism of transferring strategic research techniques and the approaches employed in dealing with developing countries. Special attention must be paid to the beneficiaries of a particular network, so that the research conducted is directed towards that particular demographic. This is especially relevant for information networks, which, from the evaluation, appear to be generating data without considering who would be using these services.[71]

      (VI) Capacity Building

Facilitating the training of individuals, both formally and informally, has led to an enhanced level of research and reporting, as well as better-designed projects. There is, however, a need to tailor this training to the needs of the participants of a particular network. Networks that have been able to provide inputs not ordinarily available locally have instigated the establishment of national and regional institutions. [72]

      (VII) Cost Effectiveness

It is important to note, however, that networks need to employ the most cost-effective mechanism of delivering support services to national programs. A network must work in a manner that allows enough individual enterprise but at the same time follows a collaborative model, to generate more effective and relevant research within a short span of time and with minimal resources. The Caribbean Technology Consultancy Services (CTCS), for example, was found to be far more cost-effective, in fact 50% cheaper, than the services of the United Nations Industrial Development Organization. [73] Similarly, the evaluators of the LAAN found that funding a network was significantly cheaper than funding individual research projects.[74]


[1] Castells, Manuel (2000), "Toward a Sociology of the Network Society", Contemporary Sociology, Vol. 29 (5), pp. 693-699

[2] Reinicke, Wolfgang H. & Francis Deng, et al. (2000), Critical Choices: The United Nations, Networks and the Future of Global Governance, IDRC, Ottawa

[3] Supra n. 1, p. 697

[4] Ibid

[5] Supra n. 1, p. 61

[6] Chambers, Robert (1997), Whose Reality Counts? Putting the First Last, Intermediate Technology Publications, London

[7] Ibid

[8] Chisholm, Rupert F. (1998), Developing Network Organizations: Learning from Practice and Theory, Addison Wesley

[9] Brown, L. David (1993), "Development Bridging Organizations and Strategic Management for Social Change", Advances in Strategic Management 9

[10] Church, Madeline et al. (2002), Participation, Relationships and Dynamic Change: New Thinking on Evaluating the Work of International Networks, Development Planning Unit, University College London, p. 16

[11] Ibid

[12] Ibid

[13] Reinicke, Wolfgang H. & Francis Deng, et al. (2000), Critical Choices: The United Nations, Networks and the Future of Global Governance, IDRC, Ottawa, p. 61

[14] Ibid

[15] Ibid

[16] Supra n. 13, p. 65

[17] Ibid

[18] Supra n. 13, p. 62

[19] Ibid

[20] Supra n. 13, p. 63

[21] Ibid

[22] Supra n. 13, p. 64

[23] Newell, Sue & Jacky Swan (2000), "Trust and Inter-organizational Networking", Human Relations, Vol. 53 (10)

[24] Sheppard, Blair H. & Marla Tuchinsky (1996), "Micro-OB and the Network Organisation", in Kramer, R. & Tyler, T. (eds), Trust in Organisations, Sage

[25] Powell, Walter W. (1996), "Trust-based Forms of Governance", in Kramer, R. & Tyler, T. (eds), Trust in Organisations, Sage

[26] Stern, Elliot (2001), "Evaluating Partnerships: Developing a Theory Based Framework", paper for the European Evaluation Society Conference 2001, Tavistock Institute

[27] Freedman, Lynn & Jan Reynders (1999), "Developing New Criteria for Evaluating Networks", in Karl, M. (ed), Measuring the Immeasurable: Planning, Monitoring and Evaluation of Networks, WFS

[28] Allen Nan, Susan (1999), "Effective Networking for Conflict Transformation", draft paper for the International Alert/UNHCR Working Group on Conflict Management and Prevention

[29] Supra n. 10, p. 20

[30] Ibid

[31] Taylor, James (2000), "So Now They Are Going To Measure Empowerment!", paper for the INTRAC 4th International Workshop on the Evaluation of Social Development, Oxford, April

[32] Karl, Marilee (2000), Monitoring and Evaluating Stakeholder Participation in Agriculture and Rural Development Projects: A Literature Review, FAO

[33] Supra n. 10, p. 25

[34] Ibid

[35] Supra n. 10, p. 26

[36] Ibid

[37] Supra n. 10, p. 27

[38] Ludema, James D., David L. Cooperrider & Frank J. Barrett (2001), "Appreciative Inquiry: The Power of the Unconditional Positive Question", in Reason, P. & Bradbury, H. (eds), Handbook of Action Research, Sage

[39] Ibid

[40] Supra n. 10, p. 29

[41] Ibid

[42] Ibid

[43] Sida (2000), Webs Women Weave, Sweden, pp. 131-135

[44] Ibid

[45] Dutting, Gisela & Martha de la Fuente (1999), "Contextualising our Experiences: Monitoring and Evaluation in the Women's Global Network for Reproductive Rights", in Karl, M. (ed), Measuring the Immeasurable: Planning, Monitoring and Evaluation of Networks, WFS

[46] Supra n. 10, p. 30

[47] Supra n. 10, p. 32

[48] Allen Nan, Susan (1999), "Effective Networking for Conflict Transformation", draft paper for the International Alert/UNHCR Working Group on Conflict Management and Prevention

[49] Supra n. 13, p. 67

[50] Supra n. 13, p. 68

[51] Supra n. 10, p. 36

[52] See Church, Madeline et al. (2002), Participation, Relationships and Dynamic Change: New Thinking on Evaluating the Work of International Networks, Development Planning Unit, University College London, pp. 36-37

[53] The three granting councils are the Natural Sciences and Engineering Research Council (NSERC), the Social Sciences and Humanities Research Council (SSHRC), and the Medical Research Council (MRC).

[54] Clark, Howard C. (1998), Formal Knowledge Networks: A Study of Canadian Experiences, International Institute for Sustainable Development, p. 16

[55] Ibid, p. 18

[56] Ibid, p. 18

[57] Ibid, p. 19

[58] Ibid, p. 21

[59] Ibid, p. 22

[60] Ibid, p. 31

[61] Ibid

[62] Smutylo, Terry & Saidou Koala, Research Networks: Evolution and Evaluation from a Donor's Perspective, p. 232

[63] Valverde, C. (1988), Agricultural Research Networking: Development and Evaluation, International Services for National Agricultural Research, The Hague, Netherlands, Staff Notes (18-26 November 1988)

[64] Faris, D.G. (1991), Agricultural Research Networks as Development Tools: Views of a Network Coordinator, IDRC, Ottawa, Canada, and International Crops Research Institute for the Semi-Arid Tropics, Patancheru, Andhra Pradesh, India

[65] Supra n. 62

[66] Smutylo, Terry & Saidou Koala, Research Networks: Evolution and Evaluation from a Donor's Perspective, p. 233

[67] Ibid

[68] Ibid

[69] Akhtar, S. (1990), "Regional Information Networks: Some Lessons from Latin America", Information Development 6 (1): 35-42

[70] Ibid, p. 242

[71] Ibid, p. 242

[72] Ibid, p. 243

[73] Stanley, J.L. & Elwela, S.S.B. (1988), Evaluation Report for the Caribbean Technology Consultancy Services (CTCS), CTCS Network Project (1985-1988), IDRC, Ottawa, Canada

[74] Moreau, L. (1991), Evaluation of Latin American Aquaculture Network, IDRC, Ottawa, Canada

      Summary of the Public Consultation by Vigyan Foundation, Oxfam India and G.B. Pant Institute, Allahabad

      by Vipul Kharbanda last modified Jan 28, 2016 03:22 PM
On December 22nd and 23rd a public consultation was organized by the Vigyan Foundation, Oxfam India and G.B. Pant Institute, Allahabad at the GB Pant Social Science Institute, Allahabad to discuss the issues related to making Allahabad into a Smart City under the Smart City scheme of the Central Government. An agenda for the same is attached herewith.

The Centre for Internet and Society, Bangalore (CIS) is researching the 100 Smart City Scheme from the perspective of Big Data, seeking to understand the role of Big Data in smart cities in India as well as the impact of the generation and use of such data. CIS is also examining whether the current legal framework is adequate to deal with these new technologies. It was against this background that CIS attended a part of the workshop.

At the outset, the organizers noted that there would be no discussion of technology and its adoption in this particular workshop. The format involved a speaker providing his or her viewpoint on the topic concerned, and the discussion revolved mainly around problems relating to traffic, parking, roads, drainage, etc.; there was no discussion of technology or how to utilise it to solve these problems. From discussions CIS has had with people closely involved with these public consultations, our impression is that the solutions to these problems are not very complicated and require only some intent and execution, which would go a long way in improving the infrastructure of the city. This perspective raises the question of whether India needs 'Smart Cities' to improve the lives of residents, or whether basic urban solutions are adequate and in fact needed to lay the foundation for any potential smart city that might be established in the future.

It is quite interesting to see the difference in the levels at which the debate on smart cities is happening: when the central government talks about smart cities, it highlights technology and aspects such as smart meters and smart grids, while the discussion on the ground in the actual cities is currently at a much more basic stage. For example, the government website for the smart city project, while describing a smart city, mentions a number of “smart solutions” such as “electronic service delivery”, “smart meters” for water, “smart meters” for electricity, “smart parking”, “Intelligent Traffic Management”, “Tele-medicine”, etc. Even in the major public service announcements on the smart city project, the government's effort seems to be to focus on these “smart solutions”, projecting technology as the answer to urban problems. However, those in the cities themselves appear to be more concerned with adequate parking, adequate water supply, proper roads, waste disposal, etc. This difference in approach is representative of the yawning gap between the mindspace of those who conceive and market these schemes on the one hand, and, on the other, those who are tasked with implementing them and the realities of what cities in India need to address their problems of infrastructure and functioning. The silver lining in this scenario, at least on a personal level, is that the people on the ground are not blindly turning to technology to solve their problems but are actually trying to look for the best solutions, whether technology-based or not.


      CIS's Comments on the CCWG-Accountability Draft Proposal

      by Pranesh Prakash last modified Jan 29, 2016 03:17 PM
      The Centre for Internet & Society (CIS) gave its comments on the failures of the CCWG-Accountability draft proposal as well as the processes that it has followed.

We at the Centre for Internet and Society wish to express our dismay at the consistent way in which the CCWG-Accountability has failed to take critical inputs from organizations like ours (and others, some instances of which have been highlighted in Richard Hill’s submission) into account, and has failed even to capture our concerns and misgivings about the process — as expressed in our submission on the CCWG-Accountability’s 2nd Draft Proposal on Work Stream 1 Recommendations — in any document prepared by the CCWG. We cannot support the proposal in its current form.

      Time for Comments

We believe, firstly, that the 21-day comment period was itself too short and will effectively prevent many groups or categories of people from participating meaningfully in the process, which flies in the face of the values that ICANN claims to uphold. This extremely short period amounts to procedural unsoundness and restrains educated discussion on the way forward, especially given that the draft has altered quite drastically in the aftermath of ICANN55.

      Capture of ICANN and CCWG Process

Participation in the accountability-cross-community mailing list clearly shows that the process is dominated by developed countries (of the top 30 non-staff posters to the list, 26 were from the ‘WEOG’ UN grouping, 14 of them from the USA, with only 1 from Asia-Pacific, 2 from Africa, and 1 from Latin America), by males (27 of the 30 non-staff posters), and by industry/commercial interests (17 of the top 30 non-staff posters). If this isn’t “capture”, what is? There is no stress test that overcomes this reality of capture of ICANN by Western industry interests. The global community is only nominally multistakeholder, while actually being grossly under-representative of developing nations, of women and minority genders, and of communities that are neither business nor technical communities. For instance, of the 1,010 ICANN-accredited registrars, 624 are from the United States, and only 7 from the 54 countries of Africa.

      Culling statistics from the accountability-cross-community mailing list, we find that of the top 30 posters (excluding ICANN staff):

• 57% were, as far as one could ascertain from public records, from a single country: the United States of America.
• 87% were, as far as one could ascertain from public records, from countries in the WEOG UN grouping (which includes Western Europe, the US, Canada, Israel, Australia and New Zealand, and comprises only developed countries). None of those who participated substantively were from the Eastern European group, only 1 was from Asia-Pacific, and only 1 from GRULAC (the Latin American and Caribbean Group).
• 90% (27 of the 30) were male and 3 were female, as far as one could ascertain from public records.
• 57% were identifiable, as far as one could ascertain from public records, as primarily from industry or the technical community, with only 2 (7%) readily identifiable as representing governments.
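The percentages above are simple proportions of the 30-poster sample. A minimal sketch of the arithmetic, using the WEOG, gender and industry counts quoted in the text (the labels in the dictionary are illustrative, not part of the underlying dataset):

```python
# Shares of the top 30 non-staff posters, computed from the counts
# given in the text above (26 WEOG, 27 male, 17 industry/technical).
TOP_POSTERS = 30

counts = {
    "WEOG countries": 26,
    "male": 27,
    "industry/technical": 17,
}

for group, n in counts.items():
    share = round(100 * n / TOP_POSTERS)
    print(f"{group}: {n}/{TOP_POSTERS} = {share}%")
```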

      This lack of global multistakeholder representation greatly damages the credibility of the entire process, since it gains its legitimacy by claiming to represent the global multistakeholder Internet community.

      Bogey of Governmental Capture

With respect to Stress Test 18, dealing with the GAC, the report proposes that the ICANN Bylaws, specifically Article XI, Section 2, be amended to create a provision whereby a two-thirds vote of the Board can reject full-consensus GAC advice. This amendment is not connected to the fear of government capture, or the fear that ICANN will become a government-led body; given that GAC advice is non-binding, that is not a possibility. Given the state of affairs described above, it is clear that for much of the world, governments are the only avenue through which people can effectively engage within the ICANN ecosystem. Therefore, nullifying the effectiveness of GAC advice harms the interests of fostering a multistakeholder ecosystem, and contributes to the strengthening of the kind of industry capture described above.

      Jurisdiction

All discussions of the Sole Designator Model seem predicated on the unflinching certainty that ICANN’s jurisdiction will remain in California, as the legal basis of that model is drawn from Californian corporate law. To quote the draft report itself, Annexe 12 states:

      "Jurisdiction directly influences the way ICANN’s accountability processes are structured and operationalized. The fact that ICANN today operates under the legislation of the U.S. state of California grants the corporation certain rights and implies the existence of certain accountability mechanisms. It also imposes some limits with respect to the accountability mechanisms it can adopt. The topic of jurisdiction is, as a consequence, very relevant for the CCWG-Accountability. ICANN is a public benefit corporation incorporated in California and subject to California state laws, applicable U.S. federal laws and both state and federal court jurisdiction."

Jurisdiction has been placed within the mandate of WS2, to be dealt with after the transition. However, there is no analysis in the 3rd Draft of how the Sole Designator Model would continue to be upheld if future Work Stream 2 discussions led to a consensus that there needed to be a shift in the jurisdiction of ICANN. In the event that ICANN shifts to, say, Delaware or Geneva, would there be a basis for the Sole Designator Model in the law? This is an issue that needs to be addressed before the model is adopted; otherwise there is a risk of the model either being rendered infructuous in the future, or foreclosing open debate and discussion in Work Stream 2.

      Right of Inspection

We strongly support the incorporation of the right of inspection under this model, as per Section 6333 of the California Corporations Code, as a fundamental bylaw. As there is a severe gap between the claims that ICANN makes about its own transparency and the actual transparency it upholds, we opine that the right of inspection needs to be provided to each member of the ICANN community.

      Timeline for WS2 Reforms

We support the CCWG’s commitment to the review of the DIDP Process, which it has committed to enhancing in WS2. Our research on this matter indicates that ICANN has in practice been able to deflect most requests for information. It has regularly invoked the clauses on internal processes and discussions with stakeholders, as well as the clauses on protecting the financial interests of third parties (over 50% of the total non-disclosure clauses ever invoked - see chart below), to avoid providing information on pertinent matters such as its compliance audits and reports of abuse to registrars. We believe that even if ICANN is legally a private entity, and not at the same level as a state, it nonetheless plays the role of regulating an enormous public good, namely the Internet. Therefore, there is a great onus on ICANN to be far more open with the information it provides. Finally, it is extremely disturbing that ICANN has extended full disclosure to only 12% of the requests it receives; an astonishing 88% of requests have been denied, partly or wholly. See "Peering behind the veil of ICANN's DIDP (II)".

      In the present format, there has been little analysis on the timeline of WS2; the report itself merely states that:

      "The CCWG-Accountability expects to begin refining the scope of Work Stream 2 during the upcoming ICANN 55 Meeting in March 2016. It is intended that Work Stream 2 will be completed by the end of 2016."

      Without further clarity and specification of the WS2 timeline, meaningful reform cannot be initiated. Therefore we urge the CCWG to come up with a clear timeline for transparency processes.

      The Internet Has a New Standard for Censorship

      by Jyoti Panday last modified Jan 30, 2016 09:17 AM
      The introduction of the new 451 HTTP Error Status Code for blocked websites is a big step forward in cataloguing online censorship, especially in a country like India where access to information is routinely restricted.

      Featured image credit: span112/Flickr, CC BY 2.0.

      The article was published in the Wire on January 29, 2016. The original can be read here.


      Ray Bradbury’s dystopian novel Fahrenheit 451 opens with the declaration, “It was a pleasure to burn.” The six unassuming words offer a glimpse into the mindset of the novel’s protagonist, ‘the fireman’ Guy Montag, who burns books. Montag occupies a world of totalitarian state control over the media where learning is suppressed and censorship prevails. The title alludes to the ‘temperature at which book paper catches fire and burns,’ an apt reference to the act of violence committed against citizens through the systematic destruction of literature. It is tempting to think about the novel solely as a story of censorship. It certainly is. But it is also a story about the value of intellectual freedom and the importance of information.

Published in 1953, Bradbury’s story predates home computers, the Internet, Twitter and Facebook, and yet it anticipates the evolution of these technologies as tools for censorship. When a state seeks to censor speech, it uses the most effective and easiest mechanisms available. In Bradbury’s dystopian world, burning books did the trick; in today’s world, governments achieve this by blocking access to information online. The majority of the world’s Internet users encounter censorship, even if the contours of control vary depending on a country’s policies and infrastructure.
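The new standard referred to in the title is HTTP status code 451, "Unavailable For Legal Reasons", standardised in RFC 7725 as a way for servers to say explicitly that content has been blocked on legal grounds rather than merely "not found". A minimal sketch of a server emitting it (the blocked path and the Link URI below are illustrative placeholders, not taken from any real deployment):

```python
# Minimal sketch of a web server answering requests for a blocked
# resource with HTTP 451 "Unavailable For Legal Reasons" (RFC 7725).
# The blocked path and the Link URI are illustrative placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_PATHS = {"/banned-page"}

class CensorshipAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in BLOCKED_PATHS:
            body = b"Unavailable For Legal Reasons\n"
            self.send_response(451)
            # RFC 7725 suggests a "blocked-by" Link header identifying
            # the entity implementing the legal demand (placeholder URI).
            self.send_header(
                "Link",
                '<https://example.org/legal-demand>; rel="blocked-by"')
        else:
            body = b"OK\n"
            self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To serve on a fixed port:
#   HTTPServer(("127.0.0.1", 8451), CensorshipAwareHandler).serve_forever()
```

The point of the explicit code, as the article goes on to argue, is that it makes censorship catalogable: a crawler can distinguish a legal block from an ordinary 404.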

      Online censorship in India

      In India, information access blockades have become commonplace and are increasingly enforced across the country to maintain political stability, for economic reasons, in defence of national security, or to preserve social values. Last week, the Maharashtra Anti-Terrorism Squad blocked 94 websites that were allegedly radicalising the youth to join the militant group ISIS. Memorably, in 2015 the NDA government’s ham-fisted attempts at enforcing a ban on online pornography resulted in widespread public outrage. Instead of revoking the ban, the government issued yet another vaguely worded and in many senses astonishing order. As reported by Medianama, the revised order delegates to private intermediaries the responsibility of determining whether banned websites should remain unavailable.

      The state’s shifting reasons for blocking access to information reflect its tendentious attitude towards speech and expression. Free speech in India is messily contested, and the judiciary normally acts as a check on the executive’s proclivity for banning. For instance, in 2010 the Supreme Court upheld the Bombay High Court’s decision to revoke the ban on the book on Shivaji by American author James Laine, which, according to the state government, contained material promoting social enmity. However, in the context of communications technology, the traditional role of the courts is increasingly being passed on to private intermediaries.

      This delegation of authority is evident in the revised order issued to deal with websites blocked as a result of the crackdown on pornography, which notifies intermediaries to proactively filter content for ‘child pornography’. Such screening and filtering requires intermediaries to make determinations on the legality of content in order to avoid direct liability. As international best practices such as the Manila Principles on Intermediary Liability point out, such screening is slow and costly, and intermediaries are incentivised to simply limit access to information.

      Blocking procedures and secrecy

      The constitutional validity of Section 69A of the Information Technology Act, 2008, which grants the executive power to block access to information unchecked and in secrecy, was challenged in Shreya Singhal v. Union of India. Curiously, the Supreme Court upheld S69A, reasoning that the provisions were narrowly drawn with adequate safeguards, and noted that any procedural inconsistencies may be challenged through writ petitions under Article 226 of the Constitution. Unfortunately, as past instances of blocking under S69A reveal, the provisions are littered with procedural deficiencies, amplified manifold by the authorities responsible for interpreting and implementing the orders.

      Problematically, an opaque confidentiality requirement built into the blocking rules mandates secrecy in requests and recommendations for blocking, and places written orders outside the purview of public scrutiny. As there is no comprehensive list of blocked websites or of the legal orders, the public has to rely on ISPs leaking orders, or on media reports, to understand the censorship regime in India. RTI applications requesting further information on the implementation of these safeguards have at best yielded incomplete information.

      Historically, the courts in India have held that Article 19(1)(a) of the Constitution of India is as much about the right to receive information as it is about the right to disseminate it, and that a chilling effect on speech also violates the right to receive information. Therefore, if a website is blocked, citizens have a constitutional right to know the legal grounds on which access is being restricted. Just as the government announces and clarifies the grounds when banning a book, users have a right to know the grounds for restrictions on their speech online.

      Unfortunately, under the present blocking regime in India there is no easy way for a service provider to comply with a blocking order while also notifying users that censorship has taken place. The ‘Blocking Rules’ require notice to the “person or intermediary”, thus implying that notice may be sent to either the originator or the intermediary. Further, the confidentiality clause raises the presumption that nobody beyond the intermediaries ought to know about a block.

      Naturally, intermediaries interested in self-preservation and in avoiding conflict with the government become complicit in maintaining the secrecy of blocking orders. As a result, it is often difficult to determine why content is inaccessible, and users often mistake censorship for a technical problem in accessing content. Consequently, pursuing legal recourse or trying to hold the government accountable for its censorious activity becomes a challenge. In failing to consider the constitutional merits of the confidentiality clause, the Supreme Court has shied away from addressing the over-broad reach of the executive.

      Secrecy in removing or blocking access is a global problem that limits the transparency expected of ISPs. Across many jurisdictions, intermediaries are legally prohibited from publicising filtering orders as well as information relating to content or service restrictions. For example, in the United Kingdom, ISPs are prohibited from revealing blocking orders related to terrorism and surveillance. In South Korea, the Korea Communications Standards Commission holds meetings that are open to the public; however, the sheer volume of censorship (close to 10,000 URLs a month) makes meaningful public oversight unwieldy.

      As the Manila Principles note, providing users with an explanation and reasons for restrictions on their speech and expression increases civic engagement. Transparency standards will empower citizens to demand more accountability on content regulation from the companies and governments they interact with. It is worth noting that for conduits, as opposed to content hosts, it may not always be technically feasible to provide notice when content is unavailable due to filtering. A new standard helps improve transparency for network-level intermediaries and for websites bound by confidentiality requirements. The recently introduced HTTP error code is a critical step forward in cataloguing censorship on the Internet.

      A standardised code for censorship

      On December 21, 2015, the Internet Engineering Steering Group (IESG), the body responsible for reviewing and approving the Internet’s operating standards, approved the publication of 451, ‘An HTTP Status Code to Report Legal Obstacles’. The code provides intermediaries a standardised way to notify users when a website is unavailable following a legal order. Publishing the code allows intermediaries to be transparent about their compliance with court and executive orders across jurisdictions, and is a huge step forward for capturing online censorship. HTTP code 451 was introduced by software engineer Tim Bray, and the code’s name is an homage to Bradbury’s novel Fahrenheit 451.
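      The specification (published as RFC 7725) also suggests that a response carrying the code may include a Link header with the relation "blocked-by", identifying the entity implementing the block. A minimal, illustrative response might look like the following (the hostname, order URL and body text are invented for the example):

```http
HTTP/1.1 451 Unavailable For Legal Reasons
Link: <https://legal-demands.example.org/order-1234>; rel="blocked-by"
Content-Type: text/html

<html>
  <head><title>Unavailable For Legal Reasons</title></head>
  <body>
    <p>This resource is blocked in your jurisdiction under a legal order.</p>
  </body>
</html>
```

      Because both the status line and the Link header are machine-readable, a browser, crawler or monitoring tool can recognise the block and its claimed source without any human inspection of the page.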

      Bray began developing the code after being inspired by a blog post by Terence Eden calling for a censorship error code. The code’s official status comes after two years of discussions within the technical community, and is the result of campaigning by transparency and civil society advocates who have been pushing for clearer labelling of Internet censorship. Initially, the code received pushback from within the technical community, for reasons enumerated by Mark Nottingham, chair of the IETF HTTP Working Group, on his blog. However, sites soon began using the code on an experimental and unsanctioned basis, and faced with increasing demand and feedback, the code was accepted.

      The HTTP code 451 works as a machine-readable flag and has immense potential as a tool for organisations and users who want to quantify and understand censorship on the internet. Cataloguing online censorship is a challenging, time-consuming and expensive task. The HTTP code 451 circumvents confidentiality obligations built into blocking or licensing regimes and reduces the cost of accessing blocking orders.

      The code distinguishes between websites blocked following a court or executive order and information that is inaccessible due to technical errors. If implemented widely, Bray’s new code will help prevent confusion around blocked sites. The code addresses ISPs’ misleading and inaccurate use of Error 403 ‘Forbidden’ (which indicates that the server received and understood the request but refuses to act on it) or 404 ‘Not Found’ (which indicates that the requested resource could not be found but may be available again in the future).
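      The distinction between these three codes can be sketched in a few lines of Python. This is purely an illustrative mapping in our own words; the function name and phrasing are not part of any standard or library:

```python
def classify_block(status_code):
    """Return a rough, human-readable interpretation of an HTTP status code,
    paraphrasing the three codes discussed in the text."""
    meanings = {
        403: "Forbidden: the server understood the request but refuses to act on it",
        404: "Not Found: the resource could not be found (it may exist again later)",
        451: "Unavailable For Legal Reasons: access denied due to a legal demand",
    }
    return meanings.get(status_code, "No censorship-related meaning discussed here")

# A monitoring tool seeing 451 can log a legal block explicitly,
# instead of guessing from an ambiguous 403 or 404.
print(classify_block(451))
```

      The point of the standard is exactly this disambiguation: a 451 is an affirmative, machine-readable admission of a legal block, where a 403 or 404 leaves the user guessing.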

      Adoption of the new standard is optional, though at present there are no laws in India that prevent intermediaries from adopting it. Implementing a standardised machine-readable flag for censorship will go a long way in bolstering the accountability of ISPs that have in the past targeted an entire domain instead of a specified URL. Adoption of the standard by ISPs will also improve understanding of the burden that censoring and filtering obligations impose on intermediaries, as presently there is no clarity on what constitutes compliance. Of course, censorious governments may prohibit the use of the code, for example by issuing an order that specifies not only that a page be blocked, but also precisely which HTTP return code should be used. Such sanctions, though, should be viewed as evidence of systematic rights violations by totalitarian regimes.

      In India, where access to software code repositories such as GitHub and SourceForge is routinely restricted, the need for such a code is obvious. Its use will improve confidence in blocking practices, allowing users to understand the grounds on which their right to information is being restricted. Improving transparency around censorship is the only way to build trust between the government and its citizens about the laws and policies applicable to Internet content.

      Nature of Knowledge

      by Scott Mason — last modified Jan 30, 2016 11:42 AM

      Introduction

      In 2008 Chris Anderson infamously proclaimed the 'end of theory'. Writing for Wired Magazine, Anderson predicted that the coming age of Big Data would create a 'deluge of data' so large that the scientific methods of hypothesis, sampling and testing would be rendered 'obsolete' [1]. For him and others, the hidden patterns and correlations revealed through Big Data analytics enable us to produce objective and actionable knowledge about complex phenomena not previously possible using traditional methodologies. As Anderson himself put it, 'there is now a better way. Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot' [2].

      In spite of harsh criticism of Anderson's article from across the academy, his uniquely (dis)utopian vision of the scientific utility of Big Data has since become increasingly mainstream, with regular interventions from politicians and business leaders evangelising about Big Data's potentially revolutionary applications. Nowhere is this bout of data-philia more apparent than in India, where the government recently announced the launch of 'Digital India', a multi-million dollar project which aims to harness the power of public data to increase the efficiency and accessibility of public services [3]. In spite of the ambitious promises associated with Big Data, however, many theorists remain sceptical about its practical benefits and express concern about its potential implications for conventional scientific epistemologies. For them, the increased prominence of Big Data analytics in science does not signal a paradigmatic transition to a more enlightened data-driven age, but a hollowing out of the scientific method and an abandonment of causal knowledge in favour of shallow correlative analysis. In response, they emphasise the continued importance of theory and specialist knowledge to science, and warn against what they see as the uncritical adoption of Big Data in public policy-making [4]. In this article I will examine the challenges posed by Big Data technologies to established scientific epistemologies, as well as the possible implications of these changes for public policy-making. Beginning with an exploration of some of the ways in which Big Data is changing our understanding of scientific research and knowledge, I will argue that claims that Big Data represents a new paradigm of scientific inquiry are predicated upon a number of implicit assumptions about the nature of knowledge. Through a critique of these assumptions I will highlight some of the potential risks that an over-reliance on Big Data analytics poses for public policy-making, before finally making the case for a more nuanced approach to Big Data, which emphasises the continued importance of theory to scientific research.

      Big Data: The Fourth Paradigm?

      "Revolutions in science have often been preceded by revolutions in measurement".

      In his book The Structure of Scientific Revolutions, Kuhn describes scientific paradigms as 'universally recognized scientific achievements that, for a time, provide model problems and solutions for a community of researchers'[5]. Paradigms as such designate a field of intelligibility within a given discipline, defining what kinds of empirical phenomena are to be observed and scrutinized, the types of questions which can be asked of those phenomena, how those questions are to be structured, as well as the theoretical frameworks within which the results can be analysed and interpreted. In short, they 'constitute an accepted way of interrogating the world and synthesizing knowledge common to a substantial proportion of researchers in a discipline at any one moment in time'[6]. Periodically, however, Kuhn argues, these paradigms can become destabilised by the development of new theories or the discovery of anomalies that cannot be explained through reference to the dominant paradigm. In such instances, Kuhn claims, the scientific discipline is thrown into a period of 'crisis', during which new ideas and theories are proposed and tested, until a new paradigm is established and gains acceptance from the community.

      More recently, computer scientist Jim Gray adopted and developed Kuhn's concept of the paradigm shift, charting the history of science through the evolution of four broad paradigms: experimental science, theoretical science, computational science and exploratory science [7]. Unlike Kuhn, however, who proposed that paradigm shifts occur as the result of anomalous empirical observations which scientists are unable to account for within the existing paradigm, Gray suggested that transitions in scientific practice are in fact primarily driven by advances and innovations in methods of data collection and analysis. The emergence of the experimental paradigm, according to Gray, can therefore be traced back to ancient Greece and China, when philosophers began to describe their empirical observations using natural rather than spiritual explanations. Likewise, the transition to the theoretical paradigm of science can be located in the 17th century, during which time scientists began to build theories and models which made generalizations based upon their empirical observations. Thirdly, Gray identifies the emergence of a computational paradigm in the latter part of the 20th century, in which advanced techniques of simulation and computational modelling were developed to help solve equations and explore fields of inquiry, such as climate modelling, which would have been impossible using experimental or theoretical methods. Finally, Gray proposed that we are today witnessing a transition to a 'fourth paradigm of science', which he termed the exploratory paradigm. Although it also utilises advanced computational methods, unlike the previous computational paradigm, which developed programs based upon established rules and theories, Gray suggested that within this new paradigm scientists begin with the data itself, designing programs to mine enormous databases in search of correlations and patterns; in effect, using the data to discover the rules [8].

      The implications of this shift are potentially significant for the nature of knowledge production, and are already beginning to be seen across a wide range of sectors. In the retail sector, for example, data mining and algorithmic analysis are already being used to help predict items that a customer may wish to purchase based upon previous shopping habits[9]. Here, unlike with traditional research methodologies, the analysis does not presuppose or hypothesise a relationship between items which it then attempts to prove through a process of experimentation; instead, the relationships are identified inductively through the processing and reprocessing of vast quantities of data alone. By starting with the data itself, Big Data analysts circumvent the need for predictions or hypotheses about what one is likely to find; as Dyche observes, 'mining Big Data reveals relationships and patterns that we didn't even know to look for'[10]. Similarly, by focussing primarily on the search for correlations and patterns as opposed to causation, Big Data analysts also reject the need for interpretive theory to frame the results; instead, researchers claim, the outcomes are inherently meaningful and interpretable by anyone, without the need for domain-specific or contextual knowledge. For example, Joh observes how Big Data is being used in policing and law enforcement to help make better decisions about the allocation of police resources. By looking for patterns in the crime data, analysts are able to make accurate predictions about the localities and times in which crimes are most likely to occur, and dispatch officers accordingly[11]. Such analysis, according to Big Data proponents, requires no knowledge of the cause of the crime, nor of the social or cultural context within which it is perpetrated; instead, predictions and assessments are made purely on the basis of patterns and correlations identified within the historical data by statistical modelling.

      In summary, then, Gray's exploratory paradigm represents a radical inversion of the deductive scientific method, allowing researchers to derive insights directly from the data itself without the use of hypothesis or theory. Thus, it is claimed, by enabling the collection and analysis of datasets of unprecedented scale and variety, Big Data allows analysts to 'let the data speak for itself'[12], providing exhaustive coverage of social phenomena and revealing correlations that are inherently meaningful and interpretable by anyone, without the need for specialised subject knowledge or theoretical frameworks.

      For Gray and others, this new paradigm is made possible only by the recent exponential increase in the generation and collection of data, as well as by the emergence of new forms of data science, known collectively as "Big Data". For them, the 'deluge of data' produced by the growing number of internet-enabled devices, as well as the nascent development of the internet of things, presents scientists and researchers with unprecedented opportunities to utilise data in new and innovative ways to develop insights across a wide range of sectors, many of which would have been unimaginable even ten years ago. Furthermore, advances in computational and statistical methods, as well as innovations in data visualization and methods of linking datasets, mean that scientists can now utilise the available data to its full potential; or, as Professor Gary King quipped, 'Big Data is nothing compared to a big algorithm'[13].

      These developments in statistical and computational analysis, combined with the velocity, variety and quantity of data available to analysts, have allowed scientists to pursue new types of research, generating new forms of knowledge and facilitating a radical shift in how we think about "science" itself. As Boyd and Crawford note, 'Big Data [creates] a profound change at the levels of epistemology and ethics. Big Data reframes key questions about the constitution of knowledge, the processes of research, how we should engage with information, and the nature and the categorization of reality . . . [and] stakes out new terrains of objects, methods of knowing, and definitions of social life'[14]. For many, these changes in the nature of knowledge production provide opportunities to improve decision-making, increase efficiency and encourage innovation across a broad range of sectors, from healthcare and policing to transport and international development[15]. For others, however, many of the claims of Big Data are premised upon questionable methodological and epistemological assumptions, some of which threaten to impoverish the scientific method and undermine scientific rigour [16].

      Assumptions of Big Data

      Given its bold claims, the allure of Big Data in both the public and private sectors is perhaps understandable. However, despite the radical and rapid changes to research practice and methodology, there has seemingly been a lack of reflexive and critical reflection concerning the epistemological implications of the research practices used in Big Data analytics. And yet implicit within this vision of the future of scientific inquiry lie a number of important and arguably problematic epistemological and ontological assumptions, most notably:

      - Big Data can provide comprehensive coverage of a phenomenon, capturing all relevant information.

      - Big Data does not require hypotheses, a priori theory, or models to direct the data collection or research questions.

      - Big Data analytics do not require theoretical framing in order to be interpretable. The data is inherently meaningful, transcending domain-specific knowledge, and can be understood by anyone.

      - Correlative knowledge is sufficient to make accurate predictions and guide policy decisions.

      For many, these assumptions are highly problematic and call into question the claims that Big Data makes about itself. I will now look at each in turn, before considering their possible implications for Big Data in policy-making.

      Firstly, whilst Big Data may appear to be exhaustive in its scope, it can only be considered so within the particular ontological and methodological framework chosen by the researcher. No data set, however large, can capture all information relevant to a given phenomenon. Indeed, even if it were somehow possible to capture all relevant quantifiable data within a specific domain, Big Data analytics would still be unable to fully account for the multifarious variables which are unquantifiable or undatafiable. As such, Big Data does not provide an all-seeing 'god's-eye view'; instead, much like any other scientific sample, it must be seen to provide the researcher with a singular and limited perspective from which he or she can observe a phenomenon and draw conclusions. It is important to recognise that this vantage point provides only one of many possible perspectives, and is shaped by the technologies and tools used to collect the data, as well as by the ontological assumptions of the researchers. Furthermore, as with any other scientific sample, it is subject to sampling bias and depends upon the researcher making subjective judgements about which variables are relevant to the phenomena being studied and which can be safely ignored.

      Secondly, claims by Big Data analysts to be able to generate insights directly from the data signal a worrying divergence from the deductive scientific methods which have been hegemonic within the natural sciences for centuries. For Big Data enthusiasts such as Prensky, 'scientists no longer have to make educated guesses, construct hypotheses and models, and test them with data-based experiments and examples. Instead, they can mine the complete set of data for patterns that reveal effects, producing scientific conclusions without further experimentation'[17]. Whereas deductive reasoning begins with general statements or hypotheses and then proceeds to observe relevant data, equipped with certain assumptions about what should be observed if the theory is valid, inductive reasoning conversely begins with empirical observations of specific examples from which it attempts to draw general conclusions. The more data collected, the greater the probability that the general conclusions generated will be accurate; however, regardless of the quantity of observations, no amount of data can ever conclusively prove causality between two variables, since it is always possible that one's conclusions may in future be falsified by an anomalous observation. For example, a researcher who had only ever observed white swans might reasonably draw the conclusion that 'all swans are white'; whilst they would be justified in making such a claim, it would nevertheless be comprehensively disproven the day a black swan was discovered. This is what David Hume called the 'problem of induction'[18], and it strikes at the foundation of Big Data's claims to provide explanatory and predictive analysis of complex phenomena, since any projections made rely upon the 'principle of the uniformity of nature', that is, the assumption that a sequence of events will always occur as it has in the past.
As a result, although Big Data may be well suited to providing detailed descriptive accounts of social phenomena, without theoretical grounding it remains unable to prove causal links between variables, and is therefore limited in its ability to provide robust explanatory conclusions or accurate predictions about future events.

      Finally, just as Big Data enthusiasts claim that theory and hypotheses are not needed to guide data collection, so too they insist that human interpretation or framing is no longer required for the processing and analysis of the data. Within this new paradigm, therefore, 'the data speaks for itself'[19], and specialised knowledge is not needed to interpret the results, which are now supposedly comprehensible to anyone with even a rudimentary grasp of statistics. Furthermore, the results, we are told, are inherently meaningful, transcending culture, history or social context, and providing pure objective facts uninhibited by philosophical or ideological commitments.

      Initially inherited from the natural sciences, this radical form of empiricism thus presupposes the existence of an objective social reality occupied by static and immutable entities whose properties are directly determinable through empirical investigation. In this way, Big Data reduces the role of social science to the perfunctory calculation and analysis of the mechanical processes of pre-formed subjects, in much the same way as one might calculate the movement of the planets or the interaction of balls on a billiard table. Whilst proponents of Big Data claim that such an approach allows them to produce objective knowledge, by cleansing the data of any kind of philosophical or ideological commitment, it nevertheless has the effect of restricting both the scope and character of social scientific inquiry; projecting onto the field of social research meta-theoretical commitments that have long been implicit in the positivist method, whilst marginalising those projects which do not meet the required levels of scientificity or erudition.

      This commitment to an empiricist epistemology and methodological monism is particularly problematic in the context of studies of human behaviour, where actions cannot be calculated and anticipated using quantifiable data alone. In such instances, a certain degree of qualitative analysis of social, historical and cultural variables may be required in order to make the data meaningful by embedding it within a broader body of knowledge. The abstract and intangible nature of these variables requires a great deal of expert knowledge and interpretive skill to comprehend. It is therefore vital that the knowledge of domain specific experts is properly utilized to help 'evaluate the inputs, guide the process, and evaluate the end products within the context of value and validity'[20].

      Despite these criticisms, Big Data is, perhaps unsurprisingly, becoming increasingly popular within the business community, lured by the promise of cheap and actionable scientific knowledge capable of making operations more efficient, reducing overheads and producing better, more competitive services. Perhaps most alarming from the perspective of Big Data's epistemological and methodological implications, however, is the increasingly prominent role Big Data is playing in public policy-making. As I will now demonstrate, whilst Big Data can offer useful inputs into public policy-making processes, the assumptions implicit within Big Data methodologies pose a number of risks to the effectiveness as well as the democratic legitimacy of public policy-making. Following an examination of these risks I will argue for a more reflexive and critical approach to Big Data in the public sector.

      Big Data and Policy-Making: Opportunities and Risks

      In recent years, Big Data has begun to play an increasingly important role in public policy-making. Across the globe, government-funded projects designed to harvest and utilise vast quantities of public data are being developed to help improve the efficiency and performance of public services, as well as to better inform policy-making processes. At first glance, Big Data would appear to be the holy grail for policy-makers: enabling truly evidence-based policy-making, based upon pure and objective facts, undistorted by political ideology or expedience. Furthermore, in an era of government debt and diminishing budgets, Big Data promises not only to produce more effective policy, but also to deliver on the seemingly impossible task of doing more with less: improving public services whilst simultaneously reducing expenditure.

      In the Indian context, the government's recently announced 'Digital India' project promises to harness the power of public data to help modernise India's digital infrastructure and increase access to public services. The use of Big Data is seen as central to the project's success. However, despite the commendable aspirations of Digital India, many commentators remain sceptical about the extent to which Big Data can truly deliver on its promises of better, more efficient public services, whilst others have warned of the risks to public policy of an uncritical and hasty adoption of Big Data analytics [21]. Here I argue that the epistemological and methodological assumptions implicit within the discourse around Big Data threaten to undermine the goal of evidence-based policy-making, and in the process widen already substantial digital divides.

      It has long been recognised that science and politics are deeply entwined. For many social scientists, the results of social research can never be entirely neutral, but are conditioned by the particular perspective of the researcher. As Sheila Jasanoff observed, 'Most thoughtful advisers have rejected the facile notion that giving scientific advice is simply a matter of speaking truth to power. It is well recognized that in thorny areas of public policy, where certain knowledge is difficult to come by, science advisers can offer at best educated guesses and reasoned judgments, not unvarnished truth' [22]. Nevertheless, 'unvarnished truth' is precisely what Big Data enthusiasts claim to be able to provide. For them, the capacity of Big Data to derive results and insights directly from the data, without any need for human framing, allows policy-makers to incorporate scientific knowledge directly into their decision-making processes without worrying about the 'philosophical baggage' usually associated with social scientific research.

      However, in order to be meaningful, all data requires a certain level of interpretative framing. As such, far from cleansing science of politics, Big Data simply shifts responsibility for the interpretation and contextualisation of results away from domain experts, who possess the requisite knowledge to make informed judgements regarding the significance of correlations, to bureaucrats and policy-makers, who are more likely to emphasise those results and correlations which support their own political agenda. Thus whilst the discourse around Big Data may promote the notion of evidence-based policy-making, in reality the vast quantities of correlations generated by Big Data analytics simply broaden the range of 'evidence' from which politicians can choose to support their arguments; giving new meaning to Mark Twain's witticism that there are 'lies, damned lies, and statistics'.

      Similarly, for many, an over-reliance on Big Data analytics for policy-making risks producing public policy which is blind to the unquantifiable and intangible. As discussed above, Big Data's neglect of theory and contextual knowledge in favour of strict empiricism marginalises qualitative studies which emphasise the importance of traditional social scientific categories such as race, gender and religion, in favour of a purely quantitative analysis of relational data. For many, however, consideration of issues such as gender, race and religious sensitivity can be just as important to good public policy-making as quantitative data, helping to contextualise the insights revealed in the data and to provide more explanatory accounts of social relations. They warn that neglect of such considerations in policy-making processes can have significant implications for the quality of the policies produced [23]. Firstly, although Big Data can provide unrivalled accounts of "what" people do, without a broader understanding of the social context in which they act, it fundamentally fails to deliver robust explanations of "why" they do it. This problem is especially acute in public policy-making since, without any indication of the motivations of individuals, policy-makers have no basis upon which to intervene to incentivise more positive outcomes. Secondly, whilst Big Data analytics can help decision-makers to design more cost-effective policy, for example by ensuring better use of scarce resources, efficiency and cost-effectiveness are not the only metrics by which good policy can be judged. Public policy, regardless of the sector, must consider and balance a broad range of issues during the policy process, including matters such as race, gender and community relations. Normative and qualitative considerations of this kind are not amenable to a simplistic 1-0 quantification, but instead require a great deal of contextual knowledge and insight to navigate successfully.

      Finally, to the extent that policy-makers are today attempting to harvest and utilise individual citizens' personal data as direct inputs to the policy-making process, Big Data-driven policy can, in a very narrow sense, be considered to offer a rudimentary form of direct democracy. At first glance this would appear to democratise political participation, allowing public services to become automatically optimised to better meet the needs and preferences of citizens without the need for direct political participation. In societies such as India, however, where there are high levels of inequality in access to information and communication technologies, there remain large discrepancies in the quantities of data produced by individuals. In a Big Data world in which every byte of data is collected, analysed and interpreted in order to make important decisions about public services, those who produce the greatest amounts of data are best placed to have their voices heard, whilst those who lack the means to produce data risk becoming disenfranchised, as policy-making processes become configured to accommodate the needs and interests of a privileged minority. Similarly, using user-generated data as the basis for policy decisions leaves systems vulnerable to coercive manipulation: once it becomes apparent that a system has been automated on the basis of user inputs, groups or individuals may change their behaviour in order to achieve a certain outcome. Given these problems, it is essential that in seeking to utilise new data resources for policy-making we avoid an uncritical adoption of Big Data techniques, and instead, as I argue below, encourage a more balanced and nuanced approach.

      Data-Driven Science: A More Nuanced Approach?

      Although an uncritical embrace of Big Data analytics is clearly problematic, it is not immediately obvious that a stubborn commitment to traditional knowledge-driven deductive methodologies would necessarily be preferable. Whilst deductive methods have formed the basis of scientific inquiry for centuries, the particular utility of this approach is largely derived from its ability to produce accurate and reliable results in situations where the quantities of data available are limited. In an era of ubiquitous data collection, however, an unwillingness to embrace new methodologies and forms of analysis which maximise the potential value of the volumes of data available would seem unwise.

      For Kitchin and others, however, it is possible to reap the benefits of Big Data without compromising scientific rigour or the pursuit of causal explanations. Challenging the 'either/or' propositions which favour either scientific modelling and hypothesis or data correlations, Kitchin instead proposes a hybrid approach which utilises the combined advantages of inductive, deductive and so-called 'abductive' reasoning to develop theories and hypotheses directly from the data [24]. As Patrick W. Gross commented, 'In practice, the theory and the data reinforce each other. It's not a question of data correlations versus theory. The use of data for correlations allows one to test theories and refine them' [25].

      Like the radical empiricism of Big Data, 'data-driven science', as Kitchin terms it, introduces an element of inductivism into the research design, seeking to develop hypotheses and insights 'born from the data' rather than 'born from theory'. Unlike the empiricist approach, however, the identification of patterns and correlations is not considered the ultimate goal of the research process. Instead, these correlations simply form the basis for a new kind of hypothesis generation, with more traditional deductive testing then used to assess the validity of the results. Put simply, rather than interpreting the data deluge as the 'end of theory', data-driven science attempts to harness its insights to develop new theories using alternative, data-intensive methods of theory generation.

      Furthermore, unlike the new empiricism, data is not collected indiscriminately from every available source in the hope that the sheer size of the dataset will unveil some hidden pattern or insight. Instead, in keeping with more conventional scientific methods, various sampling techniques are utilised, 'underpinned by theoretical and practical knowledge and experience as to whether technologies and their configurations will capture or produce appropriate and useful research material' [26]. Similarly, analysis of the data once collected does not take place in a theoretical vacuum, nor are all relationships deemed inherently meaningful; instead, existing theoretical frameworks and domain-specific knowledge are used to contextualise and refine the results, identifying those patterns that can be dismissed as well as those that require closer attention.
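      The two-step logic described above, exploratory correlation-mining followed by deductive testing on fresh data, can be illustrated with a minimal sketch. The dataset, variable names and thresholds below are invented for illustration and are not drawn from any of the works cited: one synthetic variable (`var_7`) genuinely tracks the outcome, while forty-nine others are pure noise.

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)

# Hypothetical dataset: 50 candidate variables, of which only "var_7"
# genuinely tracks the outcome; the rest are pure noise.
n = 200
outcome = [random.gauss(0, 1) for _ in range(n)]
variables = {
    f"var_{i}": [random.gauss(0, 1) for _ in range(n)] for i in range(50)
}
variables["var_7"] = [o * 0.8 + random.gauss(0, 0.6) for o in outcome]

# Step 1 (inductive): mine an exploratory half of the data for the
# strongest correlations -- hypotheses 'born from the data'.
half = n // 2
candidates = sorted(
    variables,
    key=lambda v: abs(pearson(variables[v][:half], outcome[:half])),
    reverse=True,
)[:3]

# Step 2 (deductive): test those candidate hypotheses on the held-out
# half. Noise variables that looked promising by chance tend to fall
# away, while the genuine relationship survives.
for v in candidates:
    r = pearson(variables[v][half:], outcome[half:])
    print(v, round(r, 2))
```

      Run on the exploratory half alone, several noise variables would typically show modest correlations purely by chance; it is the confirmatory test on held-out data that separates the genuine relationship from the spurious ones.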

      Thus for many, data-driven science provides a more nuanced approach to Big Data, allowing researchers to harness the power of new sources of data whilst maintaining the pursuit of explanatory knowledge. In doing so, it can help to avoid the risks of an uncritical adoption of Big Data analytics for policy-making, providing new insights while retaining the 'regulating force of philosophy'.

      Conclusion

      Since the publication of The Structure of Scientific Revolutions, Kuhn's notion of the paradigm has been widely criticised for producing a homogenous and overly smooth account of scientific progress, one which ignores the clunky and often accidental nature of scientific discovery and innovation. Indeed, the notion of the 'paradigm shift' is in many ways typical of a self-indulgent and somewhat egotistical tendency among historians and theorists to interpret events contemporaneous to themselves as being of great historical significance. Historians throughout the ages have perceived themselves as living through periods of great upheaval and transition. In actual fact, as many have noted, history, and the history of science in particular, rarely advances in a linear or predictable way, nor can progress, when it does occur, be so easily attributed to specific technological innovations or theoretical developments. As such, we should remain very sceptical of claims that Big Data represents a historic and paradigmatic shift in scientific practice. Such claims exhibit more than a hint of technological determinism and often ignore the substantial limitations of Big Data analytics. Technological advances alone do not drive scientific revolutions; the impact of Big Data will ultimately depend on how we decide to use it and on the types of questions we ask of it.

      Big Data holds the potential to augment and support existing scientific practices, creating new insights and helping to better inform public policy-making processes. However, contrary to the hyperbole surrounding its development, Big Data is not a silver bullet for intractable social problems, and if adopted uncritically and without consideration of its consequences, it risks not only diminishing scientific knowledge but also jeopardising our privacy and creating new digital divides. It is critical, therefore, that we see through the hyperbole and headlines and reflect critically on the epistemological consequences of Big Data as well as its implications for policy-making; a task which, unfortunately, in spite of the pace of technological change, is only just beginning.

      Bibliography

      Anderson C (2008) The end of theory: The data deluge makes the scientific method obsolete. Wired, 23 June 2008. Available at: http://www.wired.com/science/discoveries/magazine/16-07/pb_theory (accessed 31 October 2015).

      Bollier D (2010) The Promise and Peril of Big Data. The Aspen Institute. Available at: http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_Promise_and_Peril_of_Big_Data.pdf (accessed 19 October 2015).

      Bowker, G., (2014) The Theory-Data Thing, International Journal of Communication 8, 1795-1799

      Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679.

      Cukier K (2010) Data, data everywhere. The Economist, 25 February (accessed 5 November 2015).

      Department of Electronics and Information Technology (2015) Digital India, [ONLINE] Available at: http://www.digitalindia.gov.in/. [Accessed 13 December 15].

      Dyche J (2012) Big data 'Eurekas!' don't just happen, Harvard Business Review Blog. 20 November. Available at: http://blogs.hbr.org/cs/2012/11/eureka_doesnt_just_happen.html

      Hey, T., Tansley, S., and Tolle, K (eds)., (2009) The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond: Microsoft Research, pp. xvii-xxxi.

      Hilbert, M. Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

      Hume, D., (1748), Philosophical Essays Concerning Human Understanding (1 ed.). London: A. Millar.

      Jasanoff, S., (2013) Watching the Watchers: Lessons from the Science of Science Advice, Guardian 8 April 2013, available at: http://www.theguardian.com/science/political-science/2013/apr/08/lessons-science-advice

      Joh, E. (2014) 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 89: 35, https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

      Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      Kuhn T (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

      Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

      McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

      Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/

      Prensky M (2009) H. sapiens digital: From digital immigrants and digital natives to digital wisdom. Innovate 5(3), Available at: http://www.innovateonline.info/index.php?view=article&id=705

      Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

      Shaw, J., (2014) Why Big Data is a Big Deal, Harvard Magazine March-April 2014, available at: http://harvardmagazine.com/2014/03/why-big-data-is-a-big-deal



      [1] Anderson, C (2008) "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete", WIRED, June 23 2008, www.wired.com/2008/06/pb-theory/

      [2] Ibid.

      [3] Department of Electronics and Information Technology (2015) Digital India, [ONLINE] Available at: http://www.digitalindia.gov.in/. [Accessed 13 December 15].

      [4] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679; Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [5] Kuhn T (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

      [6] Ibid.

      [7] Hey, T., Tansley, S., and Tolle, K (eds)., (2009) The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond: Microsoft Research, pp. xvii-xxxi.

      [8] Ibid.

      [9] Dyche J (2012) Big data 'Eurekas!' don't just happen, Harvard Business Review Blog. 20 November. Available at: http://blogs.hbr.org/cs/2012/11/eureka_doesnt_just_happen.html

      [10] Ibid.

      [11] Joh, E. (2014) 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 89: 35, https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

      [12] Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

      [13] King quoted in Shaw, J., (2014) Why Big Data is a Big Deal, Harvard Magazine March-April 2014, available at: http://harvardmagazine.com/2014/03/why-big-data-is-a-big-deal

      [14] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679.

      [15] Joh, E. (2014) 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 89: 35, https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1; Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014); Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/; Hilbert, M. Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

      [16] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679; Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [17] Prensky M (2009) H. sapiens digital: From digital immigrants and digital natives to digital wisdom. Innovate 5(3), Available at: http://www.innovateonline.info/index.php?view=article&id=705

      [18] Hume, D., (1748), Philosophical Essays Concerning Human Understanding (1 ed.). London: A. Millar.

      [19] Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

      [20] McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

      [21] Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [22] Jasanoff, S., (2013) Watching the Watchers: Lessons from the Science of Science Advice, Guardian 8 April 2013, available at: http://www.theguardian.com/science/political-science/2013/apr/08/lessons-science-advice

      [23] Bowker, G., (2013) The Theory-Data Thing, International Journal of Communication 8 (2043), 1795-1799

      [24] Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [25] Gross quoted in Ibid.

      [26] Ibid.

      Facebook's Fall from Grace: Arab Spring to Indian Winter

      by Sunil Abraham last modified Feb 11, 2016 03:51 PM
      Facebook’s Free Basics has been permanently banned in India! The Indian telecom regulator, TRAI, has issued the world’s most stringent net neutrality regulation! To be more accurate, there is more to come from TRAI in terms of net neutrality regulations, especially on throttling and blocking, but if the discriminatory tariff regulation is anything to go by, we can expect quite a tough regulatory stance against other net neutrality violations as well.

      The article was published in First Post on February 9, 2016. It can be read here.


      Even the regulations it cites in the Explanatory Memorandum don’t go as far as it does. The Dutch regulation will have to be reformulated in light of the new EU regulations and the Chilean regulator has opened the discussion on an additional non-profit exception by allowing Wikipedia to zero-rate its content in partnership with telecom operators.

      Bravo to Nikhil Pahwa, Apar Gupta, Raman Chima, Kiran Jonnalagadda and the thousands of volunteers at Save The Internet and associated NGOs, movements, entrepreneurs and activists who mobilized millions of Indians to stand up and petition TRAI to preserve some of the foundational underpinnings of the Internet. And finally, bravo to Facebook for having completely undermined any claim to responsible stewardship of our information society through its relentless, shrill and manipulative campaign, filled with staggeringly preposterous lies. Having completely lost the trust of the Indian public and policy-makers, Facebook has only itself to blame for polarizing what was quite a nuanced debate in India through its hyperbole, and for setting the stage for this firm action by TRAI.

      And most importantly, bravo to RS Sharma and his team at TRAI for the notification of the “Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016”, aka the differential pricing regulations. The regulation exemplifies six regulatory best practices that I briefly explore below.

      Transparency and Agility: Two months from start to finish; what an amazing turnaround! TRAI was faced with unprecedented public outcry, as well as comments and counter-comments. Despite visible and invisible pressures, from the initial temporary ban on Free Basics onwards, RS Sharma’s calm, collected and clear interactions with different stakeholders helped him regain the credibility that was lost with the publication of the earlier consultation paper on the Regulatory Framework for Over-the-top (OTT) services. Despite being completely snowed under electronically by what Rohin Dharmakumar dubbed Facebook’s DDOS attack, he gave Facebook one last opportunity to do the right thing, which it of course spectacularly blew.

      Brevity and Clarity: The regulation fits onto three A4-sized pages and is a joy to read. Clarity is often a result of brevity, though not always. At the core of this regulation is a single sentence which prohibits discriminatory tariffs on the basis of content unless the service is a “data service over closed electronic communications network”. And unlike many other laws and regulations, this regulation has only one exemption from the prohibition on offering or charging discriminatory tariffs: for “emergency services” or during “grave public emergency”. Even the best lawyers will find it difficult to drive trucks through that one. Even if imaginative engineers architect a technical circumvention, TRAI says “if such a closed network is used for the purpose of evading these regulations, the prohibition will nonetheless apply”. Again, a clear signal that the spirit is more important than the letter of the regulation when it comes to enforcement.

      Certainty and Equity: Referencing the noted scholar Barbara van Schewick, TRAI explains that a case-by-case approach based on principles [standards] or rules would “fail to provide much needed certainty to industry participants … service providers may refrain from deploying network technology” and perversely “lead to further uncertainty as service providers undergoing [the] investigation would logically try to differentiate their case from earlier precedents”. Our submission from the Centre for Internet and Society had called for more exemptions, but TRAI went with a much cleaner solution, as it did not want to provide “a relative advantage to well-financed actors and will tilt the playing field against those who do not have the resources to pursue regulatory or legal actions”.

      What next? Hopefully the telecom operators and Facebook will have the grace to abide by the regulation without launching a legal challenge. And hopefully TRAI will issue equally clear regulations on throttling and blocking to conclude the “Regulatory Framework for Over-the-top Services” consultation process. Critically, TRAI must forbear from introducing any additional regulatory burdens on OTTs, a.k.a. Internet companies, based on unfounded allegations of regulatory arbitrage. There are some legitimate concerns around issues like taxation and liability, but those have to be addressed by other arms of the government. To address the digital divide, there are other issues outside net neutrality, such as shared spectrum, unlicensed spectrum and shared backhaul infrastructure, that TRAI must also prioritize for regulation and deregulation.

      Without doubt other regulators from the global south will be inspired by India’s example and will hopefully take firm steps to prevent the rise of additional and unnecessary gatekeepers and gatekeeping practices on the Internet. The democratic potential of the Internet must be preserved through enlightened and appropriate regulation informed by principles and evidence.


      The writer is Executive Director, Centre for Internet and Society, Bengaluru. He says CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world.

      Database on Big Data and Smart Cities International Standards

      by Vanya Rakesh last modified Feb 11, 2016 03:49 PM
      The Centre for Internet and Society is in the process of mapping international standards specifically around Big Data, IoT and Smart Cities. Here is a living document containing a database of some of these key globally accepted standards.

      1. International Organisation for Standardization: ISO/IEC JTC 1 Working Group on Big Data (WG 9)

      ● Background

      - The International Organization for Standardization /International Electrotechnical Commission (ISO/IEC) Joint Technical Committee (JTC) 1, Information Technology announced the creation of a Working Group (WG) focused on standardization in connection with big data.

      - JTC 1 is the standards development environment where experts come together to develop worldwide standards on Information and Communication Technology (ICT) for integrating diverse and complex ICT technologies.[1]

      - The American National Standards Institute (ANSI) holds the secretariat to JTC 1, and the ANSI-accredited U.S. Technical Advisory Group (TAG) Administrator to JTC 1 is the InterNational Committee for Information Technology Standards (INCITS) [2], an ANSI member and accredited standards developer (ASD). INCITS has established a technical committee on Big Data to serve as the US TAG to JTC 1/WG 9 on Big Data, pending approval of a New Work Item Proposal (NWIP). INCITS/Big Data will address standardization in the areas assigned to JTC 1/WG 9. [3]

      - Under U.S. leadership, WG 9 on Big Data will serve as the focus of JTC 1's big data standardization program.

      ● Objective

      - To identify standardization gaps.

      - Develop foundational standards for Big Data.

      - Develop and maintain liaisons with all relevant JTC 1 entities

      - Grow the awareness of and encourage engagement in JTC 1 Big Data standardization efforts within JTC 1. [4]

      ● Status

      - JTC 1 has appointed Mr. Wo Chang to serve as Convenor of the JTC 1 Working Group on Big Data.

      - The WG has set up a Study Group on Big Data.

      2. International Organisation for Standardization: ISO/IEC JTC 1 Study group on Big Data

      ● Background

      - The ISO/IEC JTC1 Study Group on Big Data (JTC1 SGBD) was created by Resolution 27 at the November 2013 JTC1 Plenary, at the request of the USA and other national bodies, for consideration of Big Data activities across all of JTC 1.

      - A Study Group (SG) is an ISO mechanism by which the convener of a Working Group (WG) under a sub-committee appoints a smaller group of experts to do focused work in a specific area, concentrating attention on a major topic and expanding the manpower of the committee.

      - The goal of an SG is to create a proposal suitable for consideration by the whole WG, and it is the WG that will then decide whether and how to progress the work.[5]

      ● Objective

      JTC 1 established the Study Group on Big Data for consideration of Big Data activities across all of JTC 1, with the following objectives:

      - Mapping the existing landscape: map the existing ICT landscape for key technologies and relevant standards/models/studies/use cases and scenarios for Big Data from JTC 1, ISO, IEC and other standards-setting organizations,

      - Identify key terms: identify key terms and definitions commonly used in the area of Big Data,

      - Assess status of big data standardization: assess the current status of Big Data standardization market requirements, identify standards gaps, and propose standardization priorities to serve as a basis for future JTC 1 work, and

      - Provide a report with recommendations and other potential deliverables to the 2014 JTC 1 Plenary. [6]

      ● Current Status

      - The study group released a preliminary report in 2014, which can be accessed here: http://www.iso.org/iso/big_data_report-jtc1.pdf.

      3. The National Institute of Standards and Technology Big Data Interoperability Framework

      ● Background

      - NIST is leading the development of a Big Data Technology Roadmap which aims to define and prioritize requirements for interoperability, portability, reusability, and extensibility for big data analytic techniques and technology infrastructure to support secure and effective adoption of Big Data.

      - To help develop the ideas in the Big Data Technology Roadmap, NIST created the NIST Big Data Public Working Group, which released the seven volumes of the Big Data Interoperability Framework on September 16, 2015.[7]

      ● Objective

      - To advance progress in Big Data, the NIST Big Data Public Working Group (NBD-PWG) is working to develop consensus on important, fundamental concepts related to Big Data.

      ● Status

      - The results are reported in the NIST Big Data Interoperability Framework series of volumes. Under the framework, seven volumes have been released by NIST, available here:

      http://bigdatawg.nist.gov/V1_output_docs.php

      4. IEEE Standards Association

      ● Background:

      - The IEEE Standards Association has introduced a number of standards related to big-data applications.

      ● Status:

      The following standard is under development:

      - IEEE P2413

      "IEEE Standard for an Architectural Framework for the Internet of Things (IoT)" defines the relationships among devices used in industries, including transportation and health care. It also provides a blueprint for data privacy, protection, safety, and security, as well as a means to document and mitigate architecture divergence.[8]

      5. ITU

      ● Background:

      - The International Telecommunication Union (ITU) has announced its first standard for big data services, entitled Recommendation ITU-T Y.3600, "Big data - cloud computing based requirements and capabilities", recognizing that the growth of big data calls for strong technical standards to ensure that processing tools are able to achieve powerful results in the areas of collection, analysis, visualization, and more.[9]

      ● Objective:

      - Recommendation Y.3600 provides requirements, capabilities and use cases of cloud computing based big data, as well as its system context. Cloud computing based big data provides the capabilities to collect, store, analyze, visualize and manage varieties of large volume datasets, which cannot be rapidly transferred and analysed using traditional technologies.[10]

      - It also outlines how cloud computing systems can be leveraged to provide big-data services.

      ● Status:

      - The standard was released in 2015 and is available here: http://www.itu.int/rec/T-REC-Y.3600-201511-I.

      Smart Cities

      1. ISO Standards on Smart Cities

      ● Background:

      - ISO, the International Organization for Standardization, established a strategic advisory group for smart cities in 2014, composed of a wide range of international experts, to advise ISO on how to coordinate current and future Smart City standardization activities, in cooperation with other international standards organizations, to benefit the market.[11]

      - Seven countries (China, Germany, the UK, France, Japan, Korea and the USA) are currently involved in the research.

      ● Objective:

      - The group's main aims are to formulate a definition of a Smart City

      - Identify current and future ISO standards projects relating to Smart Cities

      - Examine involvement of potential stakeholders, city requirements, potential interface problems. [12]

      ● Status:

      - ISO/TC 268, which is focused on sustainable development in communities, has one working group developing city indicators and another developing metrics for smart community infrastructures. In early 2016 this committee will be joined by another, the IEC systems committee. The first standard produced by ISO/TC 268 is ISO/TR 37150:2014.

      - ISO/TR 37150:2014 Smart community infrastructures -- Review of existing activities relevant to metrics: this standard provides a review of existing activities relevant to metrics for smart community infrastructures. The concept of smartness is addressed in terms of performance relevant to technologically implementable solutions, in accordance with sustainable development and resilience of communities, as defined in ISO/TC 268. ISO/TR 37150:2014 addresses community infrastructures such as energy, water, transportation, waste and information and communications technology (ICT). It focuses on the technical aspects of existing activities which have been published, implemented or discussed. Economic, political or societal aspects are not analyzed in ISO/TR 37150:2014.[13]

      - ISO 37120:2014 provides city leaders and citizens with a set of clearly defined city performance indicators and a standard approach for measuring each. Though some indicators will be more helpful for some cities than others, cities can now consistently apply these indicators and accurately benchmark their city services and quality of life against other cities.[14] This international standard was developed using the framework of the Global City Indicators Facility (GCIF), which has been extensively tested by more than 255 cities worldwide. It is a demand-led standard, driven and created by cities, for cities. ISO 37120 establishes definitions and methodologies for a set of indicators to steer and measure the performance of city services and quality of life. The standard includes a comprehensive set of 100 indicators, of which 46 are core, that measure a city's social, economic and environmental performance.[15]

      The GCIF global network supports the newly constituted World Council on City Data, a sister organization of the GCI/GCIF, which allows for independent, third-party verification of ISO 37120 data.[16]
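To make the benchmarking idea concrete, here is a minimal sketch (the indicator names are invented for illustration and are not drawn from ISO 37120 itself) of how a shared indicator set reduces cross-city comparison to comparing the same keys:

```python
# Minimal sketch: each city reports values for the same named indicators,
# so benchmarking is a key-by-key comparison. Indicator names below are
# hypothetical, chosen only to illustrate the mechanism.
city_a = {"pm2_5_annual_mean": 38.0, "transit_trips_per_capita": 210.0}
city_b = {"pm2_5_annual_mean": 22.0, "transit_trips_per_capita": 145.0}

def benchmark(a: dict, b: dict) -> dict:
    """For each indicator both cities report, return the difference (a minus b)."""
    return {k: a[k] - b[k] for k in a.keys() & b.keys()}

diff = benchmark(city_a, city_b)
```

The value of a standard like ISO 37120 is precisely that the keys and measurement methodologies are fixed in advance, so a comparison like this is meaningful across cities.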

      - ISO/TS 37151 and ISO/TR 37152 Smart community infrastructures -- Common framework for development & operation: these documents outline 14 categories of basic community needs (from the perspective of residents, city managers and the environment) against which to measure the performance of smart community infrastructures. These are typical community infrastructures, like energy, water, transportation, waste and information and communication technology systems, which have been optimized with sustainable development and resilience in mind.[17] The committee responsible is ISO/TC 268, Sustainable development in communities, Subcommittee SC 1, Smart community infrastructures. The objective is to develop international consensus on harmonised metrics to evaluate the smartness of key urban infrastructure.[18]

      - ISO 37101 Sustainable development of communities -- Management systems -- Requirements with guidance for resilience and smartness: by setting out requirements and guidance to attain sustainability with the support of methods and tools including smartness and resilience, it can help communities improve in a number of areas, such as: developing holistic and integrated approaches instead of working in silos (which can hinder sustainability); fostering social and environmental changes; improving health and wellbeing; encouraging responsible resource use; and achieving better governance.[19] The objective is to develop a management system requirements standard reflecting consensus on an integrated, cross-sector approach drawing on existing standards and best practices.

      - ISO 37102 Sustainable development & resilience of communities -- Vocabulary. The objective is to establish a common set of terms and definitions for standardization in sustainable development, resilience and smartness in communities, cities and territories, since there is a pressing need for harmonization and clarification. This would provide a common language for all interested parties and stakeholders at the national, regional and international levels, and would improve the ability to conduct benchmarks and to share experiences and best practices.

      - ISO/TR 37121 Inventory & review of existing indicators on sustainable development & resilience in cities: a common set of indicators usable by every city in the world, covering most issues related to sustainability, resilience and quality of life in cities.[20]

      - ISO/TR 12859:2009 gives general guidelines to developers of intelligent transport systems (ITS) standards and systems on data privacy aspects and associated legislative requirements for the development and revision of ITS standards and systems. [21]

      2. International Organisation for Standardization: ISO/IEC JTC 1 Working Group on Smart Cities (WG 11)

      ● Background:

      - WG 11 serves as the focus of, and proponent for, JTC 1's Smart Cities standardization programme, and works on the development of foundational standards for the use of ICT in Smart Cities, including the Smart City ICT Reference Framework and an Upper Level Ontology for Smart Cities, to guide Smart Cities efforts throughout JTC 1 and upon which other standards can be developed.[22]

      ● Objective:

      - To develop a set of ICT related indicators for Smart Cities in collaboration with ISO/TC 268.

      - Identify JTC 1 (and other organization) subgroups developing standards and related material that contribute to Smart Cities.

      - Grow the awareness of, and encourage engagement in, JTC 1 Smart Cities standardization efforts within JTC 1.

      ● Status:

      - Ms Yuan Yuan is the Convenor of this Working group.

      - The group was to provide a report with recommendations to the JTC 1 Plenary in 2014; a preliminary report was submitted.[23]

      3. International Organisation for Standardization: ISO/IEC JTC 1 Study Group (SG1) on Smart Cities

      ● Background:

      - The Study Group (SG) on Smart Cities was established in 2013.[24] SG 1 explicitly considers the work going on in the following committees: ISO/TMB/AG on Smart Cities, IEC/SEG 1, ITU-T/FG SSC and ISO/TC 268.[25]

      ● Objective:

      - To examine the needs and potentials for standardization in this area.

      ● Status:

      - SG 1 is paying particular attention to monitoring cloud computing activities, which it sees as the key element of Smart Cities infrastructure. DIN's Information Technology and Selected IT Applications Standards Committee (NIA, www.nia.din.de) is formally responsible for ISO/IEC JTC 1/SG 1, but an autonomous national mirror committee on Smart Cities does not yet exist and the work is being overseen by DIN's Smart Grid steering body.[26]

      - A preliminary report was released in 2014, available at http://www.iso.org/iso/smart_cities_report-jtc1.pdf

      4. ITU

      ● Background:

      - ITU members have established an ITU-T Study Group titled "ITU-T Study Group 20: IoT and its applications, including smart cities and communities" [27]

      - ITU-T has also established a Focus Group on Smart Sustainable Cities (FG-SSC).

      ● Objective:

      - The study group will address the standardization requirements of Internet of Things (IoT) technologies, with an initial focus on IoT applications in smart cities.

      - The focus group shall assess the standardization requirements of cities aiming to boost their social, economic and environmental sustainability through the integration of information and communication technologies (ICTs) in their infrastructures and operations.

      - The Focus Group will act as an open platform for smart-city stakeholders - such as municipalities; academic and research institutes; non-governmental organizations (NGOs); and ICT organizations, industry forums and consortia - to exchange knowledge in the interests of identifying the standardized frameworks needed to support the integration of ICT services in smart cities.[28]

      ● Status:

      - The study group will develop standards that leverage IoT technologies to address urban-development challenges.

      - The FG-SSC concluded its work in May 2015 by approving 21 Technical Specifications and Reports. [29]

      - So far, the ITU-T SG 5 FG-SSC has issued the following reports: a technical report, "An overview of smart sustainable cities and the role of information and communication technologies"; a technical report, "Smart sustainable cities: an analysis of definitions"; a technical report, "Electromagnetic field (EMF) considerations in smart sustainable cities"; technical specifications, "Overview of key performance indicators in smart sustainable cities"; and a technical report, "Smart water management in cities".[30]

      5. PRIPARE Project

      ● Background:

      - The 7001 PRIPARE Smart City Strategy aims to ensure that ICT solutions integrated in EIP smart cities will be compliant with future privacy regulation.

      - PRIPARE aims to develop a privacy and security-by-design software and systems engineering methodology, using the combined expertise of the research community and taking into account multiple viewpoints (advocacy, legal, engineering, business).

      ● Objective:

      - The mission of PRIPARE is to facilitate the application of a privacy- and security-by-design methodology that will contribute to unhindered usage of the Internet against disruptions, censorship and surveillance; to support its practice by the ICT research community in preparation for industry practice; and to foster a risk-management culture through educational material targeted at a diversity of stakeholders.

      ● Status:

      - Liaison with OASIS and ISO is currently ongoing so that the methodology becomes a standard.[31]

      6. BSI-UK

      ● Background:

      - In the UK, the British Standards Institution (BSI) has been commissioned by the UK Department of Business, Innovation and Skills (BIS) to conceive a Smart Cities Standards Strategy to identify vectors of smart city development where standards are needed.

      - The standards would be developed through a consensus-driven process under the BSI to ensure good practice is shared between all the actors.[32]

      ● Objective:

      The BSI launched the Cities Standards Institute to bring together cities and key industry leaders and innovators:

      - To identify the challenges facing cities,

      - To provide solutions to common problems, and

      - To define the future of smart city standards.[33]

      ● Status:

      The following standards and publications help address various issues for a city to become a smart city:

      - The development of a standard on Smart city terminology (PAS 180)

      - The development of a Smart city framework standard (PAS 181)

      - The development of a Data concept model for smart cities (PAS 182)

      - A Smart city overview document (PD 8100)

      - A Smart city planning guidelines document (PD 8101)

      - BS 8904 Guidance for community sustainable development provides a decision-making framework that helps set objectives in response to the needs and aspirations of city stakeholders

      - BS 11000 Collaborative relationship management

      - BSI BIP 2228:2013 Inclusive urban design - A guide to creating accessible public spaces.

      7. Spain

      ● Background:

      - AENOR, the Spanish standards developing organization (SDO), has issued two new standards on smart cities: the UNE 178303 and UNE-ISO 37120. These standards joined the already published UNE 178301.

      ● Objective:

      - The texts, prepared by the Technical Committee of Standardization of AENOR on Smart Cities (AEN / CTN 178) and sponsored by the SETSI (Secretary of State for Telecommunications and Information Society of the Ministry of Industry, Energy and Tourism), aim to encourage the development of a new model of urban services management based on efficiency and sustainability.

      ● Status:

      Some of the standards that have been developed are:

      - UNE 178301 on Open Data evaluates the maturity of open data created or held by the public sector so that its reuse is provided in the field of Smart Cities.

      - UNE 178303 establishes the requirements for proper management of municipal assets.

      - UNE-ISO 37120, which collects the international urban sustainability indicators.

      - Following the publication of these standards, 12 other draft standards on Smart Cities have just been made public, most of them corresponding to public services such as water, electricity and telecommunications, and multiservice city networks. [34]

      8. China

      ● Background:

      Several national standardization committees and consortia have started standardization work on Smart Cities, including:

      - China National IT Standardization TC (NITS),

      - China National CT Standardization TC,

      - China National Intelligent Transportation System Standardization TC,

      - China National TC on Digital Technique of Intelligent Building and Residence Community of Standardization Administration, and

      - China Strategic Alliance of Smart City Industrial Technology Innovation.[35]

      ● Objective:

      - In 2014, all the ministries involved in building smart cities in China joined with the Standardization Administration of China to create working groups to manage and standardize smart city development, though their activities have not been publicized.[36]

      ● Status:

      - China will continue to promote international standards in building smart cities and to improve the competitiveness of its related industries in the global market.

      - Also, China's Standardization Administration has joined hands with the National Development and Reform Commission, the Ministry of Housing and Urban-Rural Development and the Ministry of Industry and Information Technology in establishing and implementing standards for smart cities.

      - When building smart cities, the country will adhere to ISO 37120, and by 2020 China will establish 50 national standards on smart cities.[37]

      9. Germany

      ● Background:

      - As members of the European Innovation Partnership (EIP) for Smart Cities and Communities, DKE (German Commission for Electrical, Electronic & Information Technologies) and DIN (German Institute for Standardization) have developed a joint roadmap and Smart City recommendations for action in Germany.

      ● Objective:

      - Its purpose is to highlight the need for standards and to serve as a strategic template for national and international standardization work in the field of smart city technology.

      - The Standardization Roadmap highlights the main activities required to create smart cities. [38]

      ● Status:

      - An updated version of the standardization roadmap was released in 2015.[39]

      10. Poland

      ● Background:

      - A coordination group on Smart and Sustainable Cities and Communities (SSCC) was set up at the beginning of 2014 to monitor national standardization activities.

      ● Objective:

      - It was decided to put forward a proposal to form a group at the Polish Committee for Standardization (PKN) providing recommendations for smart sustainable city standardization in Poland.

      ● Status:

      It has two thematic groups:

      - GT 1-2 on terminology and Technical Bodies in PKN. Its scope covers a collection of English terms and their Polish equivalents related to smart and sustainable development of cities and communities, to allow better communication among various smart city stakeholders. This includes preparing the list of Technical Bodies (OT) in PKN involved in standardization activities related to specific aspects of smart and sustainable local development, and making proposals concerning the allocation of standardization work to the relevant OT in PKN.

      - GT 3 for gathering information and the development and implementation of a work programme. Its scope includes identifying stakeholders in Poland and gathering information on any national "smart city" initiatives having an impact on environment-friendly development, sustainability and liveability of a city. The group is also tasked with developing a work programme for GZ 1 based on identified priorities for Poland. Finally, it aims to communicate and disseminate activities to make the results of GZ 1 visible.[40]

      11. Europe

      ● Background:

      - In 2012, the European standardization organizations CEN and CENELEC founded the Smart and Sustainable Cities and Communities Coordination Group (SSCC-CG) to coordinate standardization activities and foster collaboration around standardization work.[41]

      ● Objective:

      - The aim of the CEN-CENELEC-ETSI (SSCC-CG) is to coordinate and promote European standardization activities relating to Smart Cities and to advise the CEN and CENELEC (Technical) and ETSI Boards on standardization activities in the field of Smart and Sustainable Cities and Communities.

      - The scope of the SSCC-CG is to advise on European interests and needs relating to standardization on Smart and Sustainable cities and communities.

      ● Status:

      - Originally conceived to be completed by the end of 2014, SSCC-CG's mandate has been extended by the European standards organizations CEN, CENELEC and ETSI by a further two years and will run until the end of 2016.[42]

      - The SSCC-CG does not develop standards, but reports directly to the management boards of the standardization organizations and plays an advisory role. Current members of the SSCC-CG include representatives of the relevant technical committees, the CEN/CENELEC secretariat, the European Commission, European associations and the national standardization organizations.[43]

      - CEN/CENELEC/ETSI Joint Working Group on Standards for Smart Grids: the group's report outlines the standardization requirements for implementing the European vision of smart grids, especially taking into account the initiatives of the Smart Grids Task Force of the European Commission. It provides an overview of standards, current activities, fields of action, international cooperation and strategic recommendations.[44]

      12. Singapore

      ● Background:

      - In 2015, SPRING Singapore, the Infocomm Development Authority of Singapore (IDA) and the Information Technology Standards Committee (ITSC), under the purview of the Singapore Standards Council (SSC), laid out an Internet of Things (IoT) Standards Outline in support of Singapore's Smart Nation initiative.

      ● Objective:

      - In light of Singapore's vision of becoming a Smart Nation, the outline recognises the importance of standards in laying the foundation for a nation empowered by big data, analytics technology and sensor networks.

      ● Status:

      Three types of standards - sensor network standards, IoT foundational standards and domain-specific standards - have been identified under the IoT Standards Outline. Singapore actively participates in the ISO Technical Committee (TC) working on smart city standards.[45]


      [1] ISO/IEC JTC 1, Information Technology, http://www.iso.org/iso/jtc1_home.html

      [2] The InterNational Committee for Information Technology Standards, JTC 1 Working Group on Big Data, http://www.incits.org/committees/big-data

      [3] ISO/IEC JTC 1 Forms Two Working Groups on Big Data and Internet of Things, 27th January 2015, https://www.ansi.org/news_publications/news_story.aspx?menuid=7&articleid=5b101d27-47b5-4540-bca3-657314402591

      [4] JTC 1 November 2014 Resolution 28 - Establishment of a Working Group on Big Data, and Call for Participation, 20th January 2015, http://jtc1sc32.org/doc/N2601-2650/32N2625-J1N12445_JTC1_Big_Data-call_for_participation.pdf

      [5] SD-3: Study Group Organizational Information, https://isocpp.org/std/standing-documents/sd-3-study-group-organizational-information

      [6] ISO/IEC JTC 1 Study Group on Big Data (BD-SG), http://jtc1bigdatasg.nist.gov/home.php

      [7] NIST Released V1.0 Seven Volumes of Big Data Interoperability Framework (September 16, 2015),http://bigdatawg.nist.gov/home.php

      [8] Standards That Support Big Data, Monica Rozenfeld, 8th September 2014, http://theinstitute.ieee.org/benefits/standards/standards-that-support-big-data

      [9] ITU releases first ever big data standards, Madolyn Smith, 21st December 2015, http://datadrivenjournalism.net/news_and_analysis/itu_releases_first_ever_big_data_standards#sthash.m3FBt63D.dpuf

      [10] ITU-T Y.3600 (11/2015) Big data - Cloud computing based requirements and capabilities, http://www.itu.int/itu-t/recommendations/rec.aspx?rec=12584

      [11] ISO Strategic Advisory Group on Smart Cities - Demand-side survey, March 2015, http://www.platform31.nl/uploads/media_item/media_item/41/62/Toelichting_ISO_Smart_cities_Survey-1429540845.pdf

      [12] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

      [13] ISO/TR 37150:2014 Smart community infrastructures -- Review of existing activities relevant to metrics, http://www.iso.org/iso/catalogue_detail?csnumber=62564

      [14] Dissecting ISO 37120: Why this new smart city standard is good news for cities, 30th July 2014, http://smartcitiescouncil.com/article/dissecting-iso-37120-why-new-smart-city-standard-good-news-cities

      [15] World Council for City Data, http://www.dataforcities.org/wccd/

      [16] Global City Indicators Facility, http://www.cityindicators.org/

      [17] How to measure the performance of smart cities, Maria Lazarte, 5th October 2015, http://www.iso.org/iso/home/news_index/news_archive/news.htm?refid=Ref2001

      [18] http://iet.jrc.ec.europa.eu/energyefficiency/sites/energyefficiency/files/files/documents/events/slideslairoctober2014.pdf

      [19] A standard for improving communities reaches final stage, Clare Naden, 12th February 2015, http://www.iso.org/iso/news.htm?refid=Ref1932

      [20] http://iet.jrc.ec.europa.eu/energyefficiency/sites/energyefficiency/files/files/documents/events/slideslairoctober2014.pdf

      [21] ISO/TR 12859:2009 Intelligent transport systems -- System architecture -- Privacy aspects in ITS standards and systems, http://www.iso.org/iso/catalogue_detail.htm?csnumber=52052

      [22] ISO/IEC JTC 1 Information technology, WG 11 Smart Cities, http://www.iec.ch/dyn/www/f?p=103:14:0::::FSP_ORG_ID,FSP_LANG_ID:12973,25

      [23] Work of ISO/IEC JTC1 Smart Cities Study Group, https://interact.innovateuk.org/documents/3158891/17680585/2+JTC1+Smart+Cities+Group/e639c7f6-4354-4184-99bf-31abc87b5760

      [24] JTC1 SAC - Meeting 13 , February 2015, http://www.finance.gov.au/blog/2015/08/05/jtc1-sac-meeting-13-february-2015/

      [25] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

      [26] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

      [27] ITU standards to integrate Internet of Things in Smart Cities, 10th June 2015, https://www.itu.int/net/pressoffice/press_releases/2015/22.aspx

      [28] ITU-T Focus Group Smart Sustainable Cities, https://www.itu.int/dms_pub/itu-t/oth/0b/04/T0B0400004F2C01PDFE.pdf

      [29] Focus Group on Smart Sustainable Cities, http://www.itu.int/en/ITU-T/focusgroups/ssc/Pages/default.aspx

      [30] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

      [31] 7001 - PRIPARE Smart City Strategy, https://eu-smartcities.eu/commitment/7001

      [32] Financing Tomorrow's Cities: How Standards Can Support the Development of Smart Cities, http://www.longfinance.net/groups7/viewdiscussion/72-financing-financing-tomorrow-s-cities-how-standards-can-support-the-development-of-smart-cities.html?groupid=3

      [33] BSI-Smart Cities, http://www.bsigroup.com/en-GB/smart-cities/

      [34] New Set of Smart Cities Standards in Spain, https://eu-smartcities.eu/content/new-set-smart-cities-standards-spain

      [35] Technical Report, M2M & ICT Enablement in Smart Cities, Telecommunication Engineering Centre, Department of Telecommunications, Ministry of Communications and Information Technology, Government of India, November 2015, http://tec.gov.in/pdf/M2M/ICT%20deployment%20and%20strategies%20for%20%20Smart%20Cities.pdf

      [36] Smart City Development in China, Don Johnson, 17th June 2014, http://www.chinabusinessreview.com/smart-city-development-in-china/

      [37] China to continue develop standards on smart cities, 17th December 2015, http://www.chinadaily.com.cn/world/2015wic/2015-12/17/content_22732897.htm

      [38] The German Standardization Roadmap Smart City, April 2014, https://www.dke.de/de/std/documents/nr_smart%20city_en_version%201.0.pdf

      [39] This version of the Smart City Standardization Roadmap, Version 1.1, is an incremental revision of Version 1.0. In Version 1.1, a special focus is placed on giving an overview of current standardization activities and interim results, thus illustrating German ambitions in this area.

      [40] SSCC-CG Final report Smart and Sustainable Cities and Communities Coordination Group, January 2015, https://www.etsi.org/images/files/SSCC-CG_Final_Report-recommendations_Jan_2015.pdf

      [41] Orchestrating infrastructure for sustainable Smart Cities , http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf

      [42] Urbanization- Why do we need standardization?, http://www.din.de/en/innovation-and-research/smart-cities-en

      [43] CEN-CENELEC-ETSI Coordination Group 'Smart and Sustainable Cities and Communities' (SSCC-CG), http://www.cencenelec.eu/standards/Sectors/SmartLiving/smartcities/Pages/SSCC-CG.aspx

      [44] Final report of the CEN/CENELEC/ETSI Joint Working Group on Standards for Smart Grids, https://www.etsi.org/WebSite/document/Report_CENCLCETSI_Standards_Smart%20Grids.pdf

      [45] SPRING Singapore Supported Close to 600 Companies in Standards Adoption, and Service Excellence Projects , 12th August 2015, http://www.spring.gov.sg/NewsEvents/PR/Pages/Internet-of-Things-(IoT)-Standards-Outline-to-Support-Smart-Nation-Initiative-Unveiled-20150812.aspx

      India Electronics Week 2016 & the IoT Show

      by Vanya Rakesh last modified Feb 12, 2016 03:12 AM
      The India Electronics Week 2016 was held at the Bangalore International Exhibition Centre from 11th-13th January 2016, along with Bangalore's biggest IoT Exhibition and Conference, bringing the global electronics industry together. The event also hosted the EFY Expo 2016, supported by the Department of Electronics and Information Technology & the Ministry of Communications and Information Technology, Government of India.

      Expo

      The show catered to manufacturers, developers and technology leaders interested in the domestic as well as global markets by displaying their products & services. EFY Expo was a catalyst for accelerated growth and value addition, acquisition of technology, and joint ventures between Indian and global players, to enable the growth of electronic manufacturing in the country.

      Conference
      CIS had the opportunity to attend the conference on Smart Cities on the 13th, with experts discussing Smart Governance, Risk Assessment of IoT, the Role of IoT in Smart Cities, and building Smart Cities with everything as a service.

      The session started with a talk on building secure and flexible IoT platforms, where the need to focus on risk and security was emphasised. Several issues requiring attention from the security perspective were raised. First, the focus must be on end-to-end security, with IoT being present everywhere. Second, there must be IoT-resilient standards addressing authentication and device management, and industry and Government must adopt open standards to make the ecosystem flexible. Also, platforms must be secured and must employ encryption to ensure trusted execution of software.
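As a hedged illustration of the device authentication the speakers called for (not any specific standard discussed at the session), one common building block is shared-secret challenge-response. The device IDs and keys below are hypothetical:

```python
import hashlib
import hmac
import os

# Hypothetical shared-secret registry; a real deployment would use a
# hardware-backed key store, not an in-memory dict.
DEVICE_KEYS = {"sensor-42": b"per-device-secret"}

def make_challenge() -> bytes:
    """Server side: generate a random nonce for the device to sign."""
    return os.urandom(16)

def sign(device_id: str, challenge: bytes) -> bytes:
    """Device side: sign the challenge with the shared key (HMAC-SHA256)."""
    return hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()

def verify(device_id: str, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the HMAC and compare in constant time."""
    expected = hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
assert verify("sensor-42", challenge, sign("sensor-42", challenge))
```

The constant-time comparison matters: naive byte-by-byte comparison can leak timing information, which is exactly the kind of detail an authentication standard would pin down.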

      This was followed by a session on Smart Governance, discussing the changing nature of society, where we see people moving from being connected with people to being connected to devices. From the perspective of smart governance, the talk was divided into segments: Government to Government, Government to Business, Government to Employees and Government to Citizens. For smart cities, several e-governance initiatives have been undertaken so far, apart from e-delivery of services. After the Smart Cities Mission was announced, the Central Government sent the State Governments several indicators of smart governance, such as telecare (for example, Karnataka had a telejob portal), smart parking and smart grids. From the business point of view, areas were suggested for companies to build in-house competence in order to build efficient and successful smart cities, among them smarter education, buildings, environment and transportation. It was suggested that smart governance can be ensured by regular measurement of outcomes, redefinition of gaps, and analysis of those gaps against clearly laid-out policies. The key challenges to the implementation of smart governance include:

      • The inherent IoT challenges
      • Government departments working in silos
      • Lack of clarity in objectives
      • Lack of transparency
      • No standardized platforms
      • Data privacy: the issue of personal data being stored in Government repositories
      • Scalable infrastructure
      • Growing population

      A survey of the success rate of e-governance projects in India found that 50% of them were complete failures, while 35% were partial failures. It therefore becomes important to ponder these challenges, which may create roadblocks to smart governance and raise concerns about projects like smart cities.

      RIOT (Risk Assessment of Internet of Things): a session to understand the security issues in IoT and to discuss secure IoT implementation. In smart cities, IoT has huge potential, which may face roadblocks due to the lack of open platforms, the lack of an ecosystem of sensors, gateways and platforms, and the challenges of integration with existing systems. IoT security issues, such as the absence of set standards, the lack of motivation for security, and little awareness of such issues, also need due attention. This requires checks at each level of the IoT surface: devices, the cloud and mobile. Another important area here is data privacy and security for IoT implementation.

      Everything as a service: an insight into what it takes to build a smart city with EaaS, the various components that go into it, how they interact and how it can be implemented. This session highlighted the importance of data in a city, as it becomes very useful for providing information about disasters (enabling the Government to plan and act accordingly), traffic in the city, waste levels, a city health map, and so on. With multiple actors using the same data, the use of such information in a smart city varies across sectors:

      • Smart Government - transparency, accountability and better decision-making
      • Smart Mobility - intelligent traffic management, safer roads
      • Smart Healthcare - health maps, better emergency services
      • Smart Living - safety and security, better quality of life
      • Smart Utilities - resource conservation, resilience
      • Smart Environment - better waste management, air-quality monitoring.

      Everything as a service is treated as an attribute of the city in which there is a nexus between users and the state. For this, information is drawn from data already captured, or new data is captured by opening up existing sources (telecom operators, machines, citizens, hospitals, etc.) or by installing new sensors. Here, the need for data privacy and government policy was emphasized. For EaaS, there is an urgent need to standardize the interface between the sensor network, data publishers, insight providers and service providers in a smart city.
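The standardized interface the session calls for can be sketched as a set of minimal Python protocols. Everything here (the names, the `Reading` fields, the average-as-insight) is an illustrative assumption, not any proposed standard:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Reading:
    """A single sensor observation; the fields are illustrative."""
    sensor_id: str
    metric: str       # e.g. "air_quality_pm25" (hypothetical metric name)
    value: float
    timestamp: float

class DataPublisher(Protocol):
    """Receives readings from the sensor network and makes them available."""
    def publish(self, reading: Reading) -> None: ...

class InsightProvider(Protocol):
    """Turns raw readings into information a service provider can act on."""
    def insight(self, metric: str) -> float: ...

class InMemoryHub:
    """Toy implementation of both roles, for demonstration only."""
    def __init__(self) -> None:
        self.readings: list[Reading] = []

    def publish(self, reading: Reading) -> None:
        self.readings.append(reading)

    def insight(self, metric: str) -> float:
        values = [r.value for r in self.readings if r.metric == metric]
        return sum(values) / len(values)  # average as a stand-in "insight"

hub = InMemoryHub()
hub.publish(Reading("s1", "air_quality_pm25", 40.0, 0.0))
hub.publish(Reading("s2", "air_quality_pm25", 60.0, 1.0))
```

The point of standardizing such interfaces is that a city could swap the sensor network, the publisher or the insight provider independently, as long as each party honours the agreed contract.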

      The conference gave insight into the industry's perspective on smart cities, along with the actors involved and the issues and challenges envisioned by private companies in the development of smart cities in India. The companies see IoT as an integral part of the project, with data security, privacy, and the need to formulate or adopt standards for IoT implementation in new as well as existing structures, being key to the Smart Cities Mission in India.

      There is No Such Thing as Free Basics

      by Subhashish Panigrahi last modified Feb 14, 2016 11:37 AM
      India will not see the rain of Free Basics advertisements on billboards, with images of farmers and common people explaining how much they could benefit from this Facebook project, because the Telecom Regulatory Authority of India (TRAI) has taken a historic step by banning discriminatory pricing of data services.

      The article was published in Bangalore Mirror on February 9, 2016.


      In its notes, TRAI explained, "In India, given that a majority of the population are yet to be connected to the Internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users' Internet experience." Violating the ban will cost a service provider Rs 50,000 for every day of contravention.

      Facebook's earlier plan was to launch Free Basics in India by making a few websites, mostly Facebook partners, available for free. The company not only advertised heavily on billboards and in commercials across the nation, it also embedded a campaign inside Facebook asking users to vote in support of Free Basics.

      TRAI criticised Facebook for this manipulative public provocation. Facebook was also heavily criticised by many policy and Internet advocates, including non-profit groups like the Free Software Movement of India and the Savetheinternet.in campaign.

      The latter two collectives strongly discouraged Free Basics by mobilising public opinion; through Savetheinternet.in, over 10 lakh emails were sent to TRAI asking it to disallow Free Basics.

      Furthermore, on Republic Day, 500 start-ups, including major ones like Cleartrip, Zomato, Practo, Paytm and Cleartax, wrote to Prime Minister Narendra Modi requesting continued support for net neutrality, the principle that all websites be treated equally.

      Stand-up comedy groups like AIB and East India Comedy created humorous but informative videos, which went viral, explaining the regulatory debate and supporting net neutrality.

      Technology critic and Quartz writer Alice Truong reacted saying: "Zuckerberg almost portrays net neutrality as a first-world problem that doesn't apply to India because having some service is better than no service."

      In the light of the differential pricing debate, news portal Medianama's founder Nikhil Pahwa, in his opinion piece in the Times of India, pointed to the way Aircel in India, Grameenphone in Bangladesh and Orange in Africa were providing free access to the open Internet with the sole motive of expanding access, and criticised Facebook's walled Internet, which confines users inside Facebook alone.

      Had differential pricing been allowed, it would have adversely affected start-ups and smaller content-based companies, which could never have paid the high price a partner service provider would demand to make their services available for free.

      Tech giants like Facebook, on the other hand, could easily have captured the entire market. Since its inception, the Facebook-run non-profit Internet.org has run into controversy over the motives hidden behind its claimed support for a social cause.

      The regulator's decision has been widely welcomed, both within the country and outside.

      In support of the move, Renata Avila, Web We Want programme manager at the World Wide Web Foundation, said:

      "As the country with the second largest number of Internet users worldwide, this decision will resonate around the world.

      "It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can't create a two-tier Internet — one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web."

      A Case for Greater Privacy Paternalism?

      by Amber Sinha — last modified Feb 20, 2016 07:28 AM
      This is the second part of a series of three articles exploring the issues with the privacy self management framework and potential alternatives.
       

      The first part of the series can be accessed here.

       

      Background

      The current data privacy protection framework across most jurisdictions is built around a rights-based approach, which entrusts the individual with the wherewithal to make informed decisions about her interests and well-being.[1] In his book The Phantom Public, published in 1925, Walter Lippmann argues that the rights-based approach rests on the idea of a sovereign and omnicompetent citizen who can direct public affairs, and that this idea is a mere phantom or abstraction.[2] Jonathan Obar, Assistant Professor of Communication and Digital Media Studies in the Faculty of Social Science and Humanities at the University of Ontario Institute of Technology, states that Lippmann's thesis remains equally relevant in the context of current models of self-management, particularly for privacy.[3] In the previous post, Scott Mason and I looked at the limitations of a 'notice and consent' regime for privacy governance. Having established the deficiencies of the existing framework for data protection, I will now look at some of the proposed alternatives that may serve to address these issues.

      In this article, I will look at paternalistic solutions posed as alternatives to the privacy self-management regime. I will examine theories of paternalism and libertarianism in the context of privacy, with reference to the works of some of the leading philosophers of jurisprudence and political science, and attempt to clarify the main concepts and the arguments put forward by both proponents and opponents of privacy paternalism. The first alternative draws on Anita Allen's thesis in her book Unpopular Privacy,[4] which deals with the question of whether individuals have a moral obligation to protect their own privacy. Allen expands the idea of rights to protect one's own interests and duties towards others into the notion that we may have certain duties not only towards others but also towards ourselves, because of their overall impact on society. In the next section, we will look at the idea of 'libertarian paternalism' as put forth by Cass Sunstein and Richard Thaler,[5] and what its impact could be on privacy governance.

      Paternalism

      Gerald Dworkin, Professor Emeritus at the University of California, Davis, defines paternalism as "interference of a state or an individual with another person, against their will, and defended or motivated by a claim that the person interfered with will be better off or protected from harm."[6] Any act of paternalism will involve some limitation on the autonomy of the subject of the regulation, usually without the subject's consent, and premised on the belief that the act will either improve the subject's welfare or prevent it from diminishing.[7] Seana Shiffrin, Professor of Philosophy and Pete Kameron Professor of Law and Social Justice at UCLA, takes a broader view of paternalism and includes within its scope not only matters aimed at improving the subject's welfare, but also the replacement of the subject's judgement in matters which may otherwise have lain legitimately within the subject's control.[8] In that sense, Shiffrin's view is interesting, for it dispenses with both the requirement of active interference and the requirement that the act be premised on the subject's well-being.

      The central premise of John Stuart Mill's On Liberty is that the only justifiable purpose for exerting power over the will of an individual is to prevent harm to others. "His own good, either physical or moral," according to Mill, "is not a sufficient warrant." However, various scholars over the years have found Mill's absolute prohibition problematic and support some degree of paternalism. John Rawls' principle of fairness, for instance, has been argued to be inherently paternalistic. In a nutshell, what makes paternalism controversial is that it involves coercion or interference, which in any theory of normative ethics or political science needs to be justified against certain identified criteria. Staunch opponents of paternalism believe that this justification can never be met. Most scholars, however, do not argue that all forms of paternalism are untenable, and the bulk of scholarship on paternalism is devoted to formulating the conditions under which this justification is satisfied.

      According to Peter de Marneffe, Professor of Philosophy at the School of Historical, Philosophical and Religious Studies, Arizona State University, paternalism interferes with self-autonomy in two ways.[9] The first is the prohibition principle, under which a person's autonomy is violated by prohibiting her from making a choice. The second is the opportunity principle, which undermines a person's autonomy by reducing her opportunities to make a choice. Both should be predicated upon a finding that the paternalistic act will lead to welfare or greater autonomy. According to de Marneffe, acts of paternalism are justified under three conditions: the benefits to welfare should be substantial, they should be evident, and they must outweigh the benefits of self-autonomy.[10]

      There are two main strands of argument against paternalism.[11] The first holds that interference with the choices of informed adults will always be inferior to letting them decide for themselves, as each person is the 'best judge' of his or her interests. The second strand does not engage with the question of whether paternalists can make better decisions for individuals, but states that any benefit derived from the paternalist act is outweighed by the harm of violating self-autonomy. Most proponents of soft paternalism build on this premise by trying to demonstrate that not all paternalistic acts violate self-autonomy. There are various forms of paternalism that we do not question despite their interference with our autonomy: seat-belt laws and restrictions on tobacco advertising are a few of them. If we locate arguments for self-autonomy in the Kantian framework, autonomy refers not just to the ability to do what one chooses, but to rational self-governance.[12] This theory automatically "opens the door for justifiable paternalism."[13] In this paper, I assume that certain forms of paternalism are justified. In the remaining two sections, I will look at two different theories advocating greater paternalism in the context of privacy governance and try to examine the merits of, and issues with, such measures.

      A moral obligation to protect one's privacy

      Modest Paternalism

      In her book Unpopular Privacy,[14] Anita Allen states that people do not place enough emphasis on the value of privacy. The right of individuals to exercise their free will and, under the 'notice and consent' regime, give up their privacy as they deem fit is, according to her, problematic. Data protection law in most jurisdictions is designed to be largely value-neutral, in that it does not sit in judgement on the nature of the information being revealed or how the collector uses it. Its primary emphasis is on providing the data subject with information about these and allowing him to make informed decisions. In my previous post, Scott Mason and I discussed that, as online connectivity becomes increasingly important to participation in modern life, the choice to withdraw completely is becoming less and less of a genuine option.[15] Lamenting that people place little emphasis on privacy and often give away information which, upon retrospection and due consideration, they would feel they ought not to have disclosed, Allen proposes what she calls 'modest paternalism', in which regulations mandate that individuals not waive their privacy in certain limited circumstances.

      Allen acknowledges the tension between her arguments in favor of paternalism and her avowed support for the liberal ideals of autonomy and of limiting government interference to the extent possible. Nevertheless, she tries to make a case for greater paternalism in the context of privacy. She begins by categorizing privacy as a "primary good" essential for "self respect, trusting relationships, positions of responsibility and other forms of flourishing." In another article, Allen states that this "technophilic generation appears to have made disclosure the default rule of everyday life."[16] Relying on various anecdotes and examples of individuals' disregard for privacy, she argues that privacy is so "neglected in contemporary life that democratic states, though liberal and feminist, could be justified in undertaking a rescue mission that includes enacting paternalistic privacy laws for the benefit of un-eager beneficiaries." She does state that in most cases it may be more advantageous to educate and incentivise individuals towards choices that favor greater privacy protection. In exceptional cases, however, paternalism would be justified as a tool to ensure greater privacy.

      A duty towards oneself

      In an article for the Harvard Symposium on Privacy in 2013, Allen states that laws generally provide a framework built around individuals' rights to self-protection and duties towards others. G. A. Cohen describes Robert Nozick's views, which represent this libertarian philosophy, as follows: "The thought is that each person is the morally rightful owner of himself. He possesses over himself, as a matter of moral right, all those rights that a slaveholder has over a chattel slave as a matter of legal right, and he is entitled, morally speaking, to dispose over himself in the way such a slaveholder is entitled, legally speaking, to dispose over his slave."[17] Under the libertarian philosophy espoused by Nozick, everyone is licensed to abuse themselves in the same manner slaveholders abused their slaves.

      Allen asks whether there is a duty towards oneself and, if such a duty exists, whether it should be reflected in policy or law. She accepts that a range of philosophers consider the idea of duties to oneself illogical or untenable.[18] Allen, however, relies on the works of scholars such as Lara Denis, Paul Eisenberg and Daniel Kading, who have located such a duty. She develops a schematic of two kinds of duties: first-order duties that require that we protect ourselves for the sake of others, and second-order, derivative duties that we protect ourselves. Throughout the essay, she relies on the Kantian framework of the categorical imperative to build the moral thrust of her arguments. A Kantian view of paternalism would justify those acts which interfere with an individual's autonomy in order to prevent her from exercising her autonomy irrationally, and to draw her towards rational ends that agree with her conception of the good.[19] Allen goes one step further and locates the genesis of duties both to others (perfect duties) and to oneself (imperfect duties) in the categorical imperative. Her main thesis is that there are certain situations in which we have a moral duty to protect our own privacy, because failure to do so would have an impact either on specific others or on society at large.

      Issues

      Having built this interesting and somewhat controversial premise, Allen does not sufficiently expand upon it to present a nuanced solution. She provides a number of anecdotes but does not formulate any criteria for when privacy duties could be self-regarding. Her test for which paternalistic acts are justified is also extremely broad: she argues for paternalism where it protects privacy rights that "enhance liberty, liberal ways of life, well-being and expanded opportunity." She does not clearly define the threshold at which policy should move from incentives to regulatory mandate, nor does she elaborate upon which forms of paternalism would both protect privacy and ensure that there is no unnecessary interference with the rights of individuals.[20]

      Nudge and libertarian paternalism

      What is nudge?

      In 2008, Richard Thaler and Cass Sunstein published their book Nudge: Improving decisions about health, wealth and happiness.[21] Its central thesis is that in making most of our decisions, we rely on a menu of options made available to us, and the order and structure of those options is what Thaler and Sunstein call "choice architecture." According to them, choice architecture has a significant impact on the choices we make. The book draws examples from a food cafeteria, the position of restrooms, and how opt-in versus opt-out defaults influence which retirement plans are chosen. This choice architecture influences our behavior without coercion or a set of incentives, contrary to what conventional public policy theory would have us expect. The book draws on work by cognitive scientists such as Daniel Kahneman[22] and Amos Tversky,[23] as well as Thaler's own research in behavioral economics.[24] The key takeaway from cognitive science and behavioral economics used in the book is that choice architecture influences our actions in anticipated ways and leads to predictably irrational behavior. Thaler and Sunstein believe this presents great potential for policy makers: they can tweak the choice architecture in their specific domains to influence the decisions of its subjects and nudge them towards behavior that is beneficial to them and/or society.

      The great attraction of Thaler and Sunstein's argument is that it offers a compromise between forbearance and mandatory regulation. If we identify the two ends of the policy spectrum as (a) paternalists, who believe in maximum interference through legal regulations that coerce behavior to meet the stated goals of policy, and (b) libertarians, who believe in free market theory and rely on individuals making decisions in their best interests, then 'nudging' falls somewhere in the middle, leading to the oxymoronic yet strangely apt phrase "libertarian paternalism." The idea is to design choices in such a way that they influence decision-making so as to increase individual and societal welfare. In his book The Laws of Fear, Cass Sunstein argues that the anti-paternalistic position is incoherent, as "there is no way to avoid effects on behavior and choices."

      Proponents of libertarian paternalism refute the commonly posed question of who decides the optimal and desirable results of choice architecture by stating that this form of paternalism promotes not a perfectionist standard of welfare but an individualistic and subjective one. According to them, choices are not prohibited, cordoned off, or made to carry significant barriers. However, it is often difficult to conclude what is better for people's welfare, even from their own point of view. The claim that nudges lead to choices that make people better off by their own standards seems more and more untenable. What nudges do is lead people towards the broad outcomes which the choice architects believe make people's lives better in the longer term.[25]

      How could nudges apply to privacy?

      Our previous post echoes the assertion made by Thaler and Sunstein that traditional rational choice theory, which assumes that individuals will make rationally optimal choices in their self-interest when provided with a set of incentives and disincentives, is largely a fiction. We have argued that this assertion holds true for the privacy protection principles of notice and informed consent. Daniel Solove has argued that insights from cognitive science, particularly the theory of nudge, would be an acceptable compromise between the inefficacy of privacy self-management and the dangers of paternalism.[26] His rationale is that while nudges influence choice, they are not overly paternalistic, in that they still give the individual the option of making choices contrary to those sought by the choice architecture. This is an important distinction, and it demonstrates that 'nudging' is less coercive than how we generally understand paternalistic policies.

      One nudging technique which makes a lot of sense in the context of data protection policies is the use of defaults, which relies on the oft-mentioned status quo bias.[27] Thaler and Sunstein discuss it with respect to encouraging retirement savings plans and organ donation, but it would apply equally to privacy. A number of data collectors have maximum disclosure as their default setting, and users rarely make the effort to understand and change these settings. A rule mandating that data collectors set optimal defaults, ensuring that the most sensitive information is subjected to the least degree of disclosure unless the user chooses otherwise, would ensure greater privacy protection.
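      A minimal sketch of such a default rule follows. The field names, sensitivity classification and audience levels are illustrative assumptions, not drawn from any actual regulation or platform; the point is only the mechanism: defaults shrink as sensitivity rises, and the user may still override them, so choice is preserved.

```python
# Hypothetical sketch: disclosure defaults keyed to data sensitivity.
# The most sensitive fields default to the least disclosure; users must
# actively opt in to share more (leveraging the status quo bias).

SENSITIVITY = {            # illustrative classification of fields
    "location": "high",
    "health_data": "high",
    "email": "medium",
    "display_name": "low",
}

DEFAULT_AUDIENCE = {       # the nudge: narrower audience for higher sensitivity
    "high": "only_me",
    "medium": "friends",
    "low": "public",
}

def default_settings(fields):
    """Return privacy-protective defaults for a new user account."""
    return {f: DEFAULT_AUDIENCE[SENSITIVITY[f]] for f in fields}

settings = default_settings(["location", "email", "display_name"])
# The user can still override any default, so freedom of choice is kept:
settings["location"] = "friends"   # explicit opt-in to wider sharing
```

This is the sense in which a defaults rule is 'libertarian paternalism': nothing is prohibited, but the path of least resistance protects the data subject.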

      Ryan Calo and Dr. Victoria Groom explored an alternative to the traditional notice and consent regime at the Center for Internet and Society, Stanford University.[28] They conducted a two-phase experimental study. In the first phase, a standard privacy notice was compared with a control condition and a simplified notice, to see if improving readability affected users' responses. In the second phase, the notice was compared with five notice strategies, four of which were intended to enhance privacy-protective behavior and one to lower it. Shara Monteleone and her team used a similar approach with a much larger sample size.[29] One of the primary behavioral insights used was that when we perform repetitive activities, including accepting online terms and conditions or privacy notices, we tend to use our automatic or fast thinking instead of our reflective or slow thinking.[30] Changing such behaviors requires leveraging individuals' automatic responses.

      Alessandro Acquisti, Professor of Information Technology and Public Policy at the Heinz College, Carnegie Mellon University, has applied methodologies from behavioral economics to investigate privacy decision-making.[31] He highlights a variety of factors that distort decision-making, such as "inconsistent preferences and frames of judgment; opposing or contradictory needs (such as the need for publicity combined with the need for privacy); incomplete information about risks, consequences, or solutions inherent to provisioning (or protecting) personal information; bounded cognitive abilities that limit our ability to consider or reflect on the consequences of privacy-relevant actions; and various systematic (and therefore predictable) deviations from the abstractly rational decision process." Taking the example of social networking sites collecting sensitive information, Acquisti looks at three kinds of policy solutions: (a) a hard paternalistic approach, which bans making certain kinds of information visible on the site; (b) a usability approach, which entails designing the system in the way that is most intuitive and easy for users deciding whether to provide the information; and (c) a soft paternalistic approach, which seeks to aid decision-making by providing additional information, such as how many people would have access to the information if provided, and by setting defaults such that the information is not visible to others unless the user explicitly chooses otherwise. The last two approaches are typically cited as examples of nudging approaches to privacy.

      Another method is to use tools that lead to decreased disclosure of information. For example, tools like the Social Media Sobriety Test[32] or Mail Goggles[33] block sites during hours set by the user, during which one expects to be at one's most vulnerable; the online services remain blocked unless the user can pass a dexterity examination.[34] Rebecca Balebako and her team are building privacy-enhancing tools for Facebook and Twitter that nudge users towards restricting with whom they share their location on Facebook and towards restricting their tweets to smaller groups of people.[35] Ritu Gulia and Dr. Sapna Gambhir have suggested nudges for social networking websites that randomly display pictures of the people who will have access to the information, to emphasise the public or private setting of a post.[36] These approaches try to address the myopia bias, whereby we choose immediate access to a service over long-term privacy harms.
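      The time-window gating these tools rely on can be sketched as follows. This is a toy illustration under stated assumptions (an 11pm-6am "vulnerable" window and a generic pass/fail challenge); it is not the actual logic of any of the tools named above.

```python
def is_blocked(now_hour, start=23, end=6):
    """True if the current hour falls inside the user's self-declared
    'vulnerable' window (here 11pm-6am, which wraps past midnight)."""
    if start <= end:
        return start <= now_hour < end
    return now_hour >= start or now_hour < end   # window wraps midnight

def may_post(now_hour, passed_challenge):
    """Allow posting outside the blocked window, or inside it only if
    the user passes a sobriety/dexterity challenge first."""
    return (not is_blocked(now_hour)) or passed_challenge
```

The nudge here is friction, not prohibition: at 2am the user can still post, but only after a deliberate act that forces slow, reflective thinking.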

      The use of nudges as envisioned in the examples above is in some ways an extension of existing research advocating design standards that make privacy notices more easily intelligible.[37] However, studies show only insignificant improvement from those methods. Nudging, in that sense, goes one step further: instead of trying to make notices more readable and enable informed consent, the design standard is intended simply to lead to the choices the architects deem optimal.

      Issues with nudging

      One of the primary justifications Thaler and Sunstein put forward for nudging is that choice architecture is ubiquitous. The manner in which options are presented to us affects how we make decisions, whether or not it was intended to do so, and there is no such thing as a neutral architecture. This inevitability, according to them, makes a strong case for nudging people towards choices that will lead to their well-being. However, this assessment does not support their further claim that libertarian paternalism nudges people towards the choices they would make from their own point of view. It is my contention that various examples of libertarian paternalism, as put forth by Thaler and Sunstein, do in fact interfere with our self-autonomy: the choice architecture leads us not to the options we would choose for ourselves in a fictional neutral environment, but to the options the architects believe are good for us. This substitution of judgment would satisfy Seana Shiffrin's definition of paternalism. Second, the fact that there is no such thing as a neutral architecture is not, by itself, justification enough for nudging. If we view the issue purely from the standpoint of normative ethics, assuming that coercion and interference are undesirable, intentional interference is much worse than unintentional interference.

      However, there are certain nudges that rely primarily on providing information, dispensing advice and rational persuasion.[38] The freedom of choice is preserved in these circumstances. Libertarians may argue that even in these circumstances the shaping of choice is problematic. This issue, J. S. Blumenthal-Barby argues, is adequately addressed by the publicity condition, a concept Thaler and Sunstein borrow from John Rawls.[39] The principle states that officials should never use a technique they would be uncomfortable defending to the public; nudging is no exception. However, this seems like a simplistic solution to a complex problem. Nudges are meant to rely on inherent psychological tendencies, leveraging the theories of automatic and subconscious thinking described by Daniel Kahneman in his book Thinking, Fast and Slow.[40] In that sense, while transparency is desirable, it may not be very effective.

      Other commentators also note that while behavioral economics can show why people make certain decisions, it may not be able to reliably predict how people will behave in different circumstances; the burden of extrapolating the observations into meaningful nudges may prove too heavy.[41] The most oft-quoted criticism of nudging, however, is that it relies on officials to formulate the desired goals towards which the choice architecture will lead us.[42] The judgments of these officials could be flawed and subject to influence by large corporations.[43] These concerns echo the best-judge argument made against all forms of paternalism, mentioned earlier in this essay. J. S. Blumenthal-Barby, Assistant Professor at the Center for Medical Ethics and Health Policy, Baylor College of Medicine, also examines the claim that choice architects will be susceptible to the same biases while designing the choice environment.[44] The first argument in response is that experts who extensively study decision-making may be less prone to these errors. The second is that, even with errors and biases, a choice architecture which attempts to right the wrongs of a random and unstructured choice environment is a preferable option.[45]

      Conclusion

      Most libertarians will find problematic the notion that individuals should be prevented from sharing some information about themselves. Anita Allen's idea of self-regarding duties is at odds with how rights and duties are understood in most jurisdictions. Her attempt to locate an ethical duty to protect one's own privacy, while interesting, is not backed by a formulation of how such a duty would work. And while she relies largely on a Kantian framework, her definition of paternalism, as can be drawn from her writing, is broader than that articulated by Kant himself. On the other hand, Thaler and Sunstein's book Nudge, and their related writings, do attempt to build a framework for how nudging would work and to answer some of the questions they anticipated would be raised against the idea of libertarian paternalism.

      By and large, I feel that Thaler and Sunstein's idea of libertarian paternalism could be justified in the context of privacy and data protection governance. It would be fair to say that the first two of de Marneffe's conditions for justified paternalism[46] are largely satisfied by nudges that ensure greater privacy protection: if nudges can ensure greater privacy protection, their benefits are both substantial and evident. The larger question is whether these purported benefits outweigh the costs of the loss of self-autonomy. Given the numerous ways in which the 'notice and consent' framework is ineffective and leads to very little informed consent, it can be argued that there is little exercise of autonomy to begin with, and hence that the loss of self-autonomy is not substantial. Some of the conceptual doubts about the ability of nudges to solve complex problems remain unanswered, and we will have to wait for more analysis by both cognitive scientists and policy-makers. However, given the growing inefficacy of the existing privacy protection framework, it would be a good idea to begin using insights from cognitive science and behavioral economics to ensure greater privacy protection.

      The current value-neutrality of data protection law with respect to the kind of data collected and its use, and its complete reliance on the data subject to make an informed choice, is, in my opinion, an idea that has run its course. Rather than focussing solely on controls at the stage of data collection, I believe we need a more robust theory of how to govern the subsequent uses of data. This is the focus of the next part of this series, in which I will look at the greater use of a risk-based approach to privacy protection.



      [1] With invaluable inputs from Scott Mason.

      [2] Walter Lippmann, The Phantom Public, Transaction Publishers, 1925.

      [3] Jonathan Obar, Big Data and the Phantom Public: Walter Lippmann and the fallacy of data privacy self management, Big Data and Society, 2015, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2239188

      [4] Anita Allen, Unpopular Privacy: What we must hide?, Oxford University Press USA, 2011.

      [5] Richard Thaler and Cass Sunstein, Nudge: Improving decisions about health, wealth and happiness, Yale University Press, 2008.

      [7] Christian Coons and Michael Weber, eds., Paternalism: Theory and Practice, Cambridge University Press, 2013, at 29.

      [8] Seana Shiffrin, Paternalism, Unconscionability Doctrine, and Accommodation, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2682745

      [9] Peter de Marneffe, Self-Sovereignty and Paternalism, in Christian Coons and Michael Weber, eds., Paternalism: Theory and Practice, Cambridge University Press, 2013, at 58.

      [10] Id.

      [11] Christian Coons and Michael Weber, eds., Paternalism: Theory and Practice, Cambridge University Press, 2013, at 74.

      [12] Christian Coons and Michael Weber, eds., Paternalism: Theory and Practice, Cambridge University Press, 2013, at 115.

      [13] Ibid. at 116.

      [14] Anita Allen, Unpopular Privacy: What Must We Hide?, Oxford University Press USA, 2011.

      [15] Janet Vertesi, My Experiment Opting Out of Big Data Made Me Look Like a Criminal, 2014, available at http://time.com/83200/privacy-internet-big-data-opt-out/

      [16] Anita Allen, Privacy Law: Positive Theory and Normative Practice, available at http://harvardlawreview.org/2013/06/privacy-law-positive-theory-and-normative-practice/ .

      [17] G A Cohen, Self ownership, world ownership and equality, available at http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=3093280

      [19] Michael Cholbi, Kantian Paternalism and Suicide Intervention, in Christian Coons and Michael Weber, eds., Paternalism: Theory and Practice, Cambridge University Press, 2013.

      [20] Eric Posner, Liberalism and Concealment, available at https://newrepublic.com/article/94037/unpopular-privacy-anita-allen

      [21] Richard Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness, Yale University Press, 2008.

      [22] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

      [23] Daniel Kahneman, Paul Slovic and Amos Tversky, Judgment under uncertainty: heuristics and biases, Cambridge University Press, 1982; Daniel Kahneman and Amos Tversky, Choices, Values and Frames, Cambridge University Press, 2000.

      [24] Richard Thaler, Advances in behavioral finance, Russell Sage Foundation, 1993.

      [25] Thaler, Sunstein and Balz, Choice Architecture, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583509.

      [26] Daniel Solove, Privacy Self-Management and the Consent Dilemma, 2013, available at http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [27] Frederik Borgesius, Behavioral sciences and the regulation of privacy on the Internet, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2513771.

      [28] Ryan Calo and Dr. Victoria Groom, Reversing the Privacy Paradox: An experimental study, available at http://ssrn.com/abstract=1993125

      [29] Shara Monteleone et al., Nudges to Privacy Behaviour: Exploring an Alternative Approach to Privacy Notices, available at http://publications.jrc.ec.europa.eu/repository/bitstream/JRC96695/jrc96695.pdf

      [30] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

      [31] Alessandro Acquisti, Nudging Privacy, available at http://www.heinz.cmu.edu/~acquisti/papers/acquisti-privacy-nudging.pdf

      [34] Rebecca Balebako et al, Nudging Users towards privacy on mobile devices, available at https://www.andrew.cmu.edu/user/pgl/paper6.pdf.

      [35] Id.

      [36] Ritu Gulia and Dr. Sapna Gambhir, Privacy and Privacy Nudges for OSNs: A Review, available at http://www.ijircce.com/upload/2014/march/14L_Privacy.pdf

      [37] Annie I. Anton et al., Financial Privacy Policies and the Need for Standardization, 2004, available at https://ssl.lu.usi.ch/entityws/Allegati/pdf_pub1430.pdf; Florian Schaub, R. Balebako et al., "A Design Space for Effective Privacy Notices", available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

      [38] Daniel Hausman and Bryan Welch argue that these cases are mistakenly characterized as nudges. They believe that nudges do not try to inform the automatic system, but manipulate the inherent cognitive biases. Daniel Hausman and Bryan Welch, Debate: To Nudge or Not to Nudge, Journal of Political Philosophy 18(1).

      [39] Ryan Calo, Code, Nudge or Notice, available at

      [40] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

      [41] Evan Selinger and Kyle Powys Whyte, Nudging cannot solve complex policy problems.

      [42] Mario J. Rizzo & Douglas Glen Whitman, The Knowledge Problem of New Paternalism, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1310732; Pierre Schlag, Nudge, Choice Architecture, and Libertarian Paternalism, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1585362.

      [43] Edward L. Glaeser, Paternalism and Psychology, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=917383.

      [44] J. S. Blumenthal-Barby, Choice Architecture: A Mechanism for Improving Decisions while Preserving Liberty?, in Christian Coons and Michael Weber, eds., Paternalism: Theory and Practice, Cambridge University Press, 2013.

      [45] Id.

      [46] According to de Marneffe, there are three conditions under which such acts of paternalism are justified: the benefits to welfare must be substantial, they must be evident, and they must outweigh the loss of self-autonomy. Peter de Marneffe, Self-Sovereignty and Paternalism, in Christian Coons and Michael Weber, eds., Paternalism: Theory and Practice, Cambridge University Press, 2013, at 58.

      Internet Freedom

      by Sunil Abraham and Vidushi Marda — last modified Feb 15, 2016 02:51 AM
      The modern medium of the web is an open, democratic world in which equality is an ideal, which is why Internet freedom matters most.

      The article by Sunil Abraham and Vidushi Marda was published by Asian Age on February 14, 2016.


      What would have gone wrong if India’s telecom regulator Trai had decided to support programmes like Facebook’s Free Basics and Airtel’s Zero Rating instead of issuing the regulation that prohibits discriminatory tariffs? Here are some possible scenarios, had discriminatory tariffs been allowed as they are in some countries.

      Possible impact on elections

      Facebook would have continued to amass its product: eyeballs. Indian eyeballs would be more valuable than others for three reasons. One, Facebook would have an additional layer of surveillance thanks to the Free Basics proxy server, which stores the time, the site URL and the data transferred for all the other destinations featured in the walled garden. Two, as part of Digital India, most government entities would set up Facebook pages, and a majority of their interaction with citizens would happen on social media rather than on the websites of the government entities themselves; consequently, Facebook would know what is and what is not working in governance. Three, given the financial disincentive to leave the walled garden, the surveillance would be total.

      What would this mean for democracies? Eight years ago, Facebook began to engineer the News Feed to show more posts of a user’s friends voting in order to influence voting behavior. It introduced the “I’m Voting” button into 61 million users’ feeds during the 2010 US congressional elections to increase voter turnout, and found that this kind of social pressure caused people to vote. Facebook has also admitted to populating feeds with posts from friends with similar political views. During the 2012 presidential elections, Facebook was able to increase voter turnout by altering 1.9 million news feeds.

      Indian eyeballs may not be that lucrative in terms of advertising. But these users are extremely valuable to political parties and others interested in influencing elections. Facebook’s notifications to users when their friends signed on to the “Support Free Basics” campaign were configured so that users were informed more often than with other campaigns. In other words, Facebook is not just another player on its own platform. Given that electoral margins are often slim, would Facebook be tempted to try and install a government of its choice in India during the 2019 general elections?

      In times of disasters

      Most people defending Free Basics, and forbearance as the regulatory response in 2015-16, make the argument that “95 per cent of Internet users in developing countries spend 95 per cent of their time on Facebook”.

      This is not too far from the truth, as LirneAsia demonstrated in 2012: most people using Facebook in Indonesia did not even know they were using the Internet. In other words, they argue that regulators should ignore the fringe user and fringe usage and focus only on the mainstream. The cognitive bias they are appealing to is that smaller numbers are less important.

      Since all the sublime analogies in the Net Neutrality debate have been taken, forgive us for using the scatological. That is the same as arguing that since we spend only 5% of our day in toilets, only 5% of our home’s real estate should be devoted to them.

      Everyone agrees that it is far easier to live in a house without a bedroom than a house without a toilet. Even extremely low probabilities or ‘Black Swan’ events can be terribly important! Imagine you are an Indian at the bottom of the pyramid. You cannot afford to pay for data on your phone and, as a result, you rarely and nervously stray out of the walled garden of Free Basics.

      During a natural disaster you are able to use the Facebook Safety Check feature to mark yourself safe but the volunteers who are organising both offline and online rescue efforts are using a wider variety of platforms, tools and technologies.

      Since you are unfamiliar with the rest of the Internet, you are ill equipped when you try to organise a rescue for you and your loved ones.

      Content and carriage converge

      Some people argue that TRAI should have stayed out of the issue, since the Competition Commission of India (CCI) is sufficient to tackle net neutrality harms. However, it is unclear whether predatory pricing by Reliance, which has only a 9% market share, would cross the competition law threshold for market dominance. Interestingly, just before the TRAI notification, the Ambani brothers signed a spectrum sharing pact, and they have been sharing optic fibre since 2013.

      Will a content sharing pact follow these carriage pacts? As media diversity researcher Alam Srinivas notes, “If their plans succeed, their media empires will span across genres such as print, broadcasting, radio and digital. They will own the distribution chains such as cable, direct-to-home (DTH), optic fibre (terrestrial and undersea), telecom towers and multiplexes.”

      What does this convergence vision of the Ambani brothers mean for media diversity in India? In the absence of net neutrality regulation, could they use their dominance in broadcast media to reduce choice on the Internet? Could they use a non-neutral provisioning of the Internet to increase their dominance in broadcast media? When a single wire or the very same radio spectrum delivers radio, TV, games and Internet to your home, what under competition law will be considered a substitutable product? What would be the relevant market? At the Centre for Internet and Society (CIS), we argue that competition law principles with a lower threshold should be applied to networked infrastructure, through infrastructure-specific non-discrimination regulations like the one that TRAI just notified, to protect digital media diversity.

      Was an absolute prohibition the best response for TRAI? With only two possible exemptions (closed communication networks and emergencies), the regulation is very clear and brief. However, as our colleague Pranesh Prakash has said, TRAI has over-regulated, using a sledgehammer where a scalpel would have sufficed. In CIS’ official submission, we had recommended a series of tests to determine whether a particular type of zero rating should be allowed or forbidden. That test may be legally sophisticated; but, as TRAI argues, it is clear and simple rules that result in regulatory equity. A possible alternative to a complicated multi-part legal test is the leaky walled garden proposal. Remember, an absolute prohibition based on the precautionary principle is merited only for very dangerous technologies, where the harms are large-scale and irreversible.

      However, as far as network neutrality harms go, it may be sufficient to insist that for every MB that is consumed within Free Basics, Reliance be mandated to provide a data top up of 3MB.

      This would have three advantages. One, it would be easy to articulate in a brief regulation, and would therefore reduce the possibility of litigation. Two, it would be easy for a harmed consumer to monitor the mitigation measure. And three, based on empirical data, the regulator could increase or decrease the proportion of the mitigation measure.
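      The leaky walled garden rule described above is simple arithmetic, which is part of its appeal as regulation. As a minimal sketch (not part of the regulation; the function name, the monitoring example and the default ratio of 3 are illustrative assumptions), the obligation could be expressed as:

```python
# Toy sketch of the "leaky walled garden" mitigation: for every MB a user
# consumes inside a zero-rated walled garden, the operator must credit the
# user with `ratio` MB of unrestricted open-Internet data. The ratio is the
# regulator's tunable knob, adjustable up or down based on empirical data.

def mandated_topup_mb(walled_garden_mb: float, ratio: float = 3.0) -> float:
    """Return the open-Internet data credit owed for walled-garden usage."""
    if walled_garden_mb < 0 or ratio < 0:
        raise ValueError("usage and ratio must be non-negative")
    return walled_garden_mb * ratio

# A user who browses 120 MB inside the walled garden in a month would be
# owed 360 MB of unrestricted data at the 1:3 ratio suggested above.
print(mandated_topup_mb(120))
```

Because the rule is a single multiplication, a consumer can verify compliance from their own usage statement, which is what makes the mitigation easy to monitor.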

      This is an example of what Prof Christopher T. Marsden calls positive, forward-looking network neutrality regulation. Positive in the sense that instead of prohibitions and punitive measures, the emphasis is on obligations and forward-looking in the sense that no new technology and business model should be prohibited.

      What is Net neutrality?

      According to this principle, service providers and governments should not discriminate between different data on the Internet, and should treat all data equally. They cannot give preference to one set of apps or websites while restricting others.

      • 2006: TRAI invites opinions regarding the regulation of net neutrality from various telecom industry bodies and stakeholders
      • Feb. 2012: Sunil Bharti Mittal, CEO of Bharti Airtel, suggests services like YouTube should pay an interconnect charge to network operators, saying that if telecom operators are building highways for data then there should be a tax on the highway
      • July 2012: Bharti Airtel’s Jagbir Singh suggests large Internet companies like Facebook and Google should share revenues with telecom companies
      • August 2012: Data from M-Lab shows You Broadband, Airtel and BSNL throttling traffic of P2P services like BitTorrent
      • Feb. 2013: Killi Kruparani, Minister of State for Communications and Information Technology, says the government will look into the legality of VoIP services like Skype
      • June 2013: Airtel starts offering select Google services to cellular broadband users for free, fixing a ceiling of 1GB on the data
      • Feb. 2014: Airtel operations CEO Gopal Vittal says companies offering free messaging apps like Skype and WhatsApp should be regulated
      • August 2014: TRAI rejects a proposal from telecom companies to make messaging application firms share part of their revenue with the carriers/government
      • Nov. 2014: TRAI begins investigating Airtel for implementing preferential access, with special packs for WhatsApp and Facebook at rates lower than standard data rates
      • Dec. 2014: Airtel launches 2G and 3G data packs with VoIP data excluded, and later launches a VoIP pack
      • Feb. 2015: Facebook launches Internet.org with Reliance Communications, aiming to provide free access to 38 websites through a single app
      • March 2015: TRAI publishes a consultation paper on the regulatory framework for over-the-top services, explaining what net neutrality would mean in India and its impact, and invites public feedback
      • April 2015: Airtel launches Airtel Zero, a scheme in which apps sign up with Airtel to get their content displayed free across the network; Flipkart, which was in talks to join the scheme, pulls out after users start giving its app poor ratings on hearing the news
      • April 2015: Ravi Shankar Prasad, Communications and Information Technology Minister, announces the formation of a committee to study net neutrality issues in the country
      • 23 April 2015: Organisations under the Free Software Movement of India protest in various parts of the country; as a countermeasure, the Cellular Operators Association of India launches a campaign saying its aim is to connect the unconnected, and demands that VoIP apps be treated as cellular operators
      • 27 April 2015: TRAI releases the names and email addresses of the millions of users who responded to the consultation paper; the Anonymous India group claims to have taken down TRAI’s website in retaliation, which the government could not confirm
      • Sept. 2015: Facebook rebrands Internet.org as Free Basics and relaunches it in the country with massive ads across major newspapers, facing a huge public backlash
      • Feb. 2016: TRAI rules in favour of net neutrality, barring telecom operators from charging different rates for data services

      The writers work at the Centre for Internet and Society, Bengaluru. CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world

      Free Speech and the Law on Sedition

      by Siddharth Narrain — last modified Feb 17, 2016 09:13 AM
      Siddharth Narrain explains how the law in India has addressed sedition.

      Sedition is an offence that criminalizes speech construed to be disloyal or threatening to the state. The main legal provision in India is section 124A of the Indian Penal Code, which criminalizes speech that “brings or attempts to bring into hatred or contempt, or excites or attempts to excite disaffection” towards the government. The law makes a distinction between “disapprobation” (lawful criticism of the government) and “disaffection” (expressing disloyalty or enmity, which is proscribed).

      The British introduced this law in 1898, as part of their efforts to curb criticism of colonial rule and to stamp out dissent. Many famous nationalists, including Bal Gangadhar Tilak and Mahatma Gandhi, were tried and imprisoned for sedition. After a spirited debate, the Constituent Assembly decided not to include ‘sedition’ as a specific exception to Article 19(1)(a). However, section 124A IPC remained on the statute book. After the First Amendment to the Constitution and the introduction of the words “in the interests of public order” to the exceptions to Article 19(1)(a), it became extremely difficult to challenge the constitutionality of section 124A.

      In 1962, the Supreme Court upheld the constitutionality of the law in the Kedarnath Singh case, but narrowed the scope of the law to acts involving intention or tendency to create disorder, or disturbance of law and order, or incitement to violence. Thus the Supreme Court provided an additional safeguard to the law: not only was constructive criticism or disapprobation allowed, but if the speech concerned did not have an intention or tendency to cause violence or a disturbance of law and order, it was permissible.

      However, even though the law allows for peaceful dissent and constructive criticism, over the years various governments have used section 124A to curb dissent. The trial and conviction of the medical doctor and human rights activist Binayak Sen led to a renewed call for the scrapping of this law. In the Aseem Trivedi case, where a cartoonist was arrested for his work around the theme of corruption, the Bombay High Court laid down guidelines to be followed by the government in arrests under section 124A. The court reaffirmed the law laid down in Kedarnath Singh, and held that for a prosecution under section 124A, a legal opinion in writing must be obtained from the law officer of the district (it did not specify who this was), followed within two weeks by a legal opinion in writing from the state public prosecutor. This adds to the existing procedural safeguard under section 196 of the Code of Criminal Procedure (CrPC), which says that courts cannot take cognizance of offences punishable under section 124A IPC unless the Central or State government has given sanction or permission to proceed.

      The serious nature of section 124A is seen in the light of the punishment associated with it. Section 124A is a cognizable (arrests can be made without a warrant), non-bailable and non-compoundable offence. Punishment for the offence can extend up to life imprisonment. Because of the seriousness of the offence, courts are often reluctant to grant bail. Sedition law is seen as an anachronism in many countries including the United Kingdom, and it has been repealed in most Western democracies.

      IMPORTANT CASE LAW

      Kedarnath Singh v. State of Bihar, AIR 1962 SC 955, Supreme Court, 5 judges

      Medium: Offline

      Brief Facts: Kedarnath Singh, a member of the Forward Communist Party, was prosecuted for sedition related to a speech that he made criticising the government for its capitalist policies. Singh challenged the constitutionality of the sedition law. The Supreme Court bunched Singh’s case with other similar incidents where persons were prosecuted under the sedition law.

      Held: The law is constitutional, and covers written or spoken words that carry the implicit idea of subverting the government by violent means. However, the section does not cover words used as disapprobation of measures of the government, meant to improve or alter the policies of the government through lawful means. Citizens can criticize the government as long as they are not inciting people to violence against the government with an intention to create public disorder. The court drew upon the Federal Court’s decision in Niharendu Dutt Majumdar, where that court held that the offence of sedition is the incitement to violence, or the tendency or effect of bringing a government established by law into hatred or contempt, or creating disaffection in the sense of disloyalty to the state. While the Supreme Court upheld the validity of section 124A, it limited its application to acts involving intention or tendency to create disorder, or a disturbance of law and order, or incitement to violence.

      Balwant Singh and Anr v. State of Punjab, AIR 1995 SC 1785

      Brief Facts: The accused had raised the slogan “Khalistan Zindabad” outside a cinema hall just after the assassination of Prime Minister Indira Gandhi.

      Held: The slogans raised by the accused had no impact on the public. Two individuals casually raising slogans could not be said to be exciting disaffection towards the government. Section 124A would not apply to the facts and circumstances of this case.

      Sanskar Marathe v. State of Maharashtra & Ors, Criminal Public Interest Litigation No. 3 of 2015, Bombay High Court, 2 judges

      Medium: Online and Offline

      Brief Facts: The case arose out of the arrest of Aseem Trivedi, a political cartoonist who was involved with the India Against Corruption movement. Trivedi was arrested in 2012 in Mumbai for sedition and for insulting national emblems. The court considered the question of how it could intervene to prevent the misuse of section 124A.

      Held: The cartoons were in the nature of political satire, and there was no allegation of incitement to violence, or tendency or intention to create public disorder. The Court issued guidelines to all police personnel in the form of preconditions for prosecutions under section 124A: Words, signs, or representations must bring the government into hatred or contempt, or must cause, or attempt to cause, disaffection, enmity or disloyalty to the government. The words, signs or representations must also be an incitement to violence, or must be intended or tend to create public disorder or a reasonable apprehension of public disorder. Words, signs or representations, just by virtue of being against politicians or public officials, cannot be said to be against the government; they must show the public official as a representative of the government. Disapproval or criticism of the government to bring about a change in government through lawful means does not amount to sedition. Obscenity or vulgarity by itself is not a factor to be taken into account while deciding if a word, sign or representation violates section 124A. In order to prosecute under section 124A, the government has to obtain a legal opinion in writing from the law officer of the district (the judgment does not specify who this is) and, within the next two weeks, a legal opinion in writing from the public prosecutor of the state.

      Free Speech and Public Order

      by Gautam Bhatia — last modified Feb 18, 2016 06:23 AM
      In this post, Gautam Bhatia has explained the law on public order as a reasonable restriction to freedom of expression under Article 19(2) of the Constitution of India.

      Article 19(2) of the Constitution authorises the government to impose, by law, reasonable restrictions upon the freedom of speech and expression “in the interests of… public order.” To understand the Supreme Court’s public order jurisprudence, it is important to break down the sub-clause into its component parts, and focus upon their separate meanings. Specifically, three terms are important: “reasonable restrictions”, “in the interests of”, and “public order”.

      The Supreme Court’s public order jurisprudence can be broadly divided into three phases. Phase One (1949 – 1950), which we may call the pre-First Amendment Phase, is characterised by a highly speech-protective approach and a rigorous scrutiny of speech-restricting laws. Phase Two (1950 – 1960), which we may call the post-First Amendment Expansionist Phase, is characterised by a judicial hands-off approach towards legislative and executive action aimed at restricting speech. Phase Three (1960 - present day), which we may call the post-First Amendment Protectionist phase, is characterised by a cautious, incremental move back towards a speech-protective, rigorous-scrutiny approach. This classification is broad-brush and generalist, but serves as a useful explanatory device.

      Before the First Amendment, the relevant part of Article 19(2) allowed the government to restrict speech that “undermines the security of, or tends to overthrow, the State.” The scope of the restriction was examined by the Supreme Court in Romesh Thappar vs State of Madras and Brij Bhushan vs State of Delhi, both decided in 1950. Both cases involved the ban of newspapers or periodicals, under state laws that authorised the government to prohibit the entry or circulation of written material, ‘in the interests of public order’. A majority of the Supreme Court struck down the laws. In doing so, they invoked the concept of “over-breadth”: according to the Court, “public order” was synonymous with public tranquility and peace, while undermining the security of, or tending to overthrow the State, referred to acts which could shake the very foundations of the State. Consequently, while acts that undermined or tended to overthrow the State would also lead to public disorder, not all acts against public order would rise to the level of undermining the security of the State. This meant that the legislation proscribed acts that, under Article 19(2), the government was entitled to prohibit, as well as those that it wasn’t. This made the laws “over-broad”, and unconstitutional. In a dissenting opinion, Fazl Ali J. argued that “public order”, “public tranquility”, “the security of the State” and “sedition” were all interchangeable terms, that meant the same thing.

      In Romesh Thappar and Brij Bhushan, the Supreme Court also held that the impugned legislations imposed a regime of “prior restraint” – i.e., by allowing the government to prohibit the circulation of newspapers in anticipation of public disorder, they choked off speech before it even had the opportunity to be made. Following a long-established tradition in common law as well as American constitutional jurisprudence, the Court held that a legislation imposing prior restraint bore a heavy burden to demonstrate its constitutionality.

      The decisions in Romesh Thappar and Brij Bhushan led to the passage of the First Amendment, which substituted the phrase “undermines the security of, or tends to overthrow, the State” with “public order”, added an additional restriction in the interests of preventing an incitement to an offence, and – importantly – added the word “reasonable” before “restrictions”.

      The newly-minted Article 19(2) came to be interpreted by the Supreme Court in Ramji Lal Modi vs State of UP (1957). At issue was a challenge to S. 295A of the Indian Penal Code, which criminalised insulting religious beliefs with an intent to outrage religious feelings of any class. The challenge made an over-breadth argument: it was contended that while some instances of outraging religious beliefs would lead to public disorder, not all would, and consequently, the Section was unconstitutional. The Court rejected this argument and upheld the Section. It focused on the phrase “in the interests of”, and held that being substantially broader than a term such as “for the maintenance of”, it allowed the government wide leeway in restricting speech. In other words, as long as the State could show that there was some connection between the law, and public order, it would be constitutional. The Court went on to hold that the calculated tendency of any speech or expression aimed at outraging religious feelings was, indeed, to cause public disorder, and consequently, the Section was constitutional. This reasoning was echoed in Virendra vs State of Punjab (1957), where provisions of the colonial era Press Act, which authorised the government to impose prior restraint upon newspapers, were challenged. The Supreme Court upheld the provisions that introduced certain procedural safeguards, like a time limit, and struck down the provisions that didn’t. Notably, however, the Court upheld the imposition of prior restraint itself, on the ground that the phrase “in the interests of” bore a very wide ambit, and held that it would defer to the government’s determination of when public order was jeopardised by speech or expression.

      In Ramji Lal Modi and Virendra, the Court had rejected the argument that the State can impose restrictions on the freedom of speech and expression only if it demonstrates a proximate link between speech and public order. The Supreme Court had focused closely on the breadth of the phrase “in the interests of”, but had not subjected the reasonableness requirement to any analysis. In earlier cases such as State of Madras vs V.G. Row, the Court had stressed that in order to be “reasonable”, a restriction would have to take into account the nature and scope of the right, the extent of the infringement, and proportionality. This analysis failed to figure in Ramji Lal Modi and Virendra. However, in Superintendent, Central Prison vs Ram Manohar Lohia, the Supreme Court changed its position, and held that there must be a “proximate” relationship between speech and public disorder, one that is not remote, fanciful or far-fetched. Thus, for the first time, the breadth of the phrase “in the interests of” was qualified, presumably from the perspective of reasonableness. In Lohia, the Court also stressed again that “public order” was of narrower ambit than mere “law and order”, and would require the State to discharge a high burden of proof, along with evidence.

      Lohia marks the start of the third phase in the Court’s jurisprudence, in which the link of proximity between speech and public disorder has gradually been refined. In Babulal Parate vs State of Maharashtra (1961) and Madhu Limaye vs Sub-Divisional Magistrate (1970), the Court upheld prior restraints under S. 144 of the CrPC, while clarifying that the Section could only be used in cases of emergency. Section 144 of the CrPC empowers executive magistrates (high-ranking officers of the district administration) to pass very wide-ranging preventive orders, and is primarily used to prohibit assemblies at certain times in certain areas, when it is considered that the situation is volatile and could lead to violence. In Babulal Parate and Madhu Limaye, the Supreme Court upheld the constitutionality of Section 144, but also clarified that its use was restricted to situations in which there was a proximate link between the prohibition and the likelihood of public disorder.

      In recent years, the Court has further refined its proximity test. In S. Rangarajan vs P. Jagjivan Ram (1989), the Supreme Court required proximity to be akin to a “spark in a powder keg”. Most recently, in Arup Bhuyan vs State of Assam (2011), the Court read down a provision in the TADA criminalizing membership of a banned association to only apply to cases where an individual was responsible for incitement to imminent violence (a standard borrowed from the American case of Brandenburg).

      Lastly, in 2015, we have seen the first instance of the application of Section 144 of the CrPC to online speech. The wide wording of the section was used in Gujarat to pre-emptively block mobile internet services, in the wake of Hardik Patel’s Patidar agitation for reservations. Despite the fact that website blocking is specifically provided for by Section 69A of the IT Act, and its accompanying rules, the Gujarat High Court upheld the state action.

      The following conclusions emerge:

      (1)  “Public Order” under Article 19(2) is a term of art, and refers to a situation of public tranquility/public peace, that goes beyond simply law-breaking

      (2)  Prior restraint in the interests of public order is justified under Article 19(2), subject to a test of proximity; by virtue of the Gujarat High Court judgment in 2015, prior restraint extends to the online sphere as well

      (3)  The proximity test requires the relationship between speech and public order to be imminent, or like a spark in a powder keg

      World Trends in Freedom of Expression and Media Development

      by Pranesh Prakash last modified Feb 17, 2016 04:41 PM

      The United Nations Educational, Scientific and Cultural Organisation (UNESCO) published a book in 2014 that examines free speech, expression and media development. The book contains a Foreword by Irina Bokova, Director-General, UNESCO. Pranesh Prakash contributed to the chapter “Independence: Introduction - Global Media”. The book was edited by Courtney C. Radsch.

      Foreword

      Tectonic shifts in technology and economic models have vastly expanded the opportunities for press freedom and the safety of journalists, opening new avenues for freedom of expression for women and men across the world. Today, more and more people are able to produce, update and share information widely, within and across national borders. All of this is a blessing for creativity, exchange and dialogue.

      At the same time, new threats are arising. In a context of rapid change, these are combining with older forms of restriction to pose challenges to freedom of expression, in the shape of controls not aligned with international standards for protection of freedom of expression and rising threats against journalists.

      These developments raise issues that go to the heart of UNESCO’s mandate “to promote the flow of ideas by word and image” between all peoples, across the world. For UNESCO, freedom of expression is a fundamental human right that underpins all other civil liberties, that is vital for the rule of law and good governance, and that is a foundation for inclusive and open societies. Freedom of expression stands at the heart of media freedom and the practice of journalism as a form of expression aspiring to be in the public interest.

      At the 36th session of the General Conference (November 2011), Member States mandated UNESCO to explore the impact of change on press freedom and the safety of journalists. For this purpose, the Report has adopted four angles of analysis, drawing on the 1991 Windhoek Declaration, to review emerging trends through the conditions of media freedom, pluralism and independence, as well as the safety of journalists. At each level, the Report has also examined trends through the lens of gender equality.

      The result is the portrait of change – across the world, at all levels, featuring as much opportunity as challenge. The business of media is undergoing a revolution with the rise of digital networks, online platforms, internet intermediaries and social media. New actors are emerging, including citizen journalists, who are redrawing the boundaries of the media. At the same time, the Report shows that the traditional news institutions continue to be agenda-setters for media and public communications in general – even as they are also engaging with the digital revolution. The Report highlights also the mix of old and new challenges to media freedom, including increasing cases of threats against the safety of journalists.

      The pace of change raises questions about how to foster freedom of expression across print, broadcast and internet media and how to ensure the safety of journalists. The Report draws on a rich array of research and is not prescriptive – but it sends a clear message on the importance of freedom of expression and press freedom on all platforms.

      To these ends, UNESCO is working across the board, across the world. This starts with global awareness raising and advocacy, including through World Press Freedom Day. It entails supporting countries in strengthening their legal and regulatory frameworks and in building capacity. It means standing up to call for justice every time a journalist is killed, to eliminate impunity. This is the importance of the United Nations Plan of Action on the Safety of Journalists and the Issue of Impunity, spearheaded by UNESCO and endorsed by the UN Chief Executives Board in April 2012. UNESCO is working with countries to take this plan forward on the ground. We also seek to better understand the challenges that are arising – most recently, through a Global Survey on Violence against Female Journalists, with the International News Safety Institute, the International Women’s Media Foundation, and the Austrian Government.

      Respecting freedom of expression and media freedom is essential today, as we seek to build inclusive, knowledge societies and a more just and peaceful century ahead. I am confident that this Report will find a wide audience, in Member States, international and regional organizations, civil society and academia, as well as with the media and journalists, and I wish to thank Sweden for its support to this initiative. This is an important contribution to understanding a world in change, at a time when the international community is defining a new global sustainable development agenda, which must be underpinned and driven by human rights, with particular attention to freedom of expression.

      Executive Summary

      Freedom of expression in general, and media development in particular, are core to UNESCO’s constitutional mandate to advance ‘the mutual knowledge and understanding of peoples, through all means of mass communication’ and promoting ‘the free flow of ideas by word and image.’ For UNESCO, press freedom is a corollary of the general right to freedom of expression. Since 1991, the year of the seminal Windhoek Declaration, which was endorsed by the UN General Assembly, UNESCO has understood press freedom as designating the conditions of media freedom, pluralism and independence, as well as the safety of journalists.  It is within this framework that this report examines progress as regards press freedom, including in regard to gender equality, and makes sense of the evolution of media actors, news media institutions and journalistic roles over time.

      This report has been prepared on the basis of a summary report on the global state of press freedom and the safety of journalists, presented to the General Conference of UNESCO Member States in November 2013, on the mandate of the decision by Member States taken at the 36th session of the General Conference of the Organization.[*]

      The overarching global trend with respect to media freedom, pluralism, independence and the safety of journalists over the past several years is that of disruption and change brought on by technology, and to a lesser extent, the global financial crisis. These trends have impacted traditional economic and organizational structures in the news media, legal and regulatory frameworks, journalism practices, and media consumption and production habits. Technological convergence has expanded the number of and access to media platforms as well as the potential for expression. It has enabled the emergence of citizen journalism and spaces for independent media, while at the same time fundamentally reconfiguring journalistic practices and the business of news.

      The broad global patterns identified in this report are accompanied by extensive unevenness within the whole.  The trends summarized above, therefore, go hand in hand with substantial variations between and within regions as well as countries.

      Download the PDF


      [*]. 37 C/INF.4 16 September 2013 “Information regarding the implementation of decisions of the governing bodies”. http://unesdoc.unesco.org/images/0022/002230/223097e.pdf; http://unesdoc.unesco.org/images/0022/002230/223097f.pdf

      Net Neutrality Advocates Rejoice As TRAI Bans Differential Pricing

      by Subhashish Panigrahi last modified Feb 23, 2016 02:10 AM
      India would not see any more Free Basics advertisements on billboards with images of farmers and common people explaining how much they benefited from this Facebook project.

      The article by Subhashish Panigrahi was published by Odisha TV on February 9, 2016.


      That is because the Telecom Regulatory Authority of India (TRAI) has taken a historic step by banning discriminatory pricing of data services. In its explanatory notes, TRAI explained: “In India, given that a majority of the population are yet to be connected to the internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users’ internet experience.” Not just that: violating the ban would cost a service provider Rs. 50,000 for every day of contravention.

      Facebook planned to launch Free Basics in India by making a few websites – mostly Facebook partners – available for free. The company not only advertised aggressively on billboards and in commercials across the nation, it also embedded a campaign inside Facebook asking users to vote in support of Free Basics. TRAI criticized Facebook’s attempt to manipulate public opinion. Facebook was also heavily challenged by many policy and internet advocates, including non-profits like the Free Software Movement of India and the Savetheinternet.in campaign. The two collectives strongly discouraged Free Basics by moulding public opinion against it, with Savetheinternet.in alone facilitating over 2.4 million emails to TRAI asking it to disallow Free Basics. Furthermore, 500 Indian start-ups, including major names like Cleartrip, Zomato, Practo, Paytm and Cleartax, wrote to India’s Prime Minister Narendra Modi on Republic Day requesting continued support for net neutrality – a concept that advocates equal treatment of websites. Stand-up comedians like Abish Mathew and groups like All India Bakchod and East India Comedy created humorous but informative videos explaining the regulatory debate and supporting net neutrality. The videos went viral.

      Technology critic and Quartz writer Alice Truong reacted to Free Basics saying: “Zuckerberg almost portrays net neutrality as a first-world problem that doesn’t apply to India because having some service is better than no service.”

      The decision of the Indian government has been largely welcomed in the country and outside. In support of the move, Renata Avila, Web We Want programme manager at the World Wide Web Foundation, said: “As the country with the second largest number of Internet users worldwide, this decision will resonate around the world. It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can’t create a two-tier Internet – one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web.”

      There have been mixed responses on social media, both in support of and in opposition to the TRAI decision. Josh Levy, Advocacy Director at Access Now, welcomed it, saying: “India is now the global leader on #NetNeutrality. New rules are stronger than those in EU and US.”

      Had differential pricing been allowed, it would have affected start-ups and smaller content-based companies adversely, as they could never have managed to pay the high price to a partner service provider to make their services available for free. Tech giants like Facebook, on the other hand, could easily have captured the entire market. Since its inception, the Facebook-led initiative Internet.org has run into controversy over the motives behind its claimed support for a social cause.

      ‘A Good Day for the Internet Everywhere': India Bans Differential Data Pricing

      by Subhashish Panigrahi last modified Feb 25, 2016 01:21 AM
      India distinguished itself as a global leader on network neutrality on February 8, when regulators officially banned “differential pricing”, a practice through which telecommunications service providers could offer or charge discriminatory tariffs for data services based on content.

      The article was published by Global Voices on February 9, 2016


      In short, this means that Internet access in India will remain an open field, where users should be guaranteed equal access to any website they want to visit, regardless of how they connect to the Internet.

      In its ruling, the Telecom Regulatory Authority of India (TRAI) commented:

      In India, given that a majority of the population are yet to be connected to the internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users’ internet experience.

      The decision of the Indian government has been welcomed largely in the country and outside. In support of the move, the World Wide Web Foundation's Renata Avila, also a Global Voices community member, wrote:

      As the country with the second largest number of Internet users worldwide, this decision will resonate around the world. It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can’t create a two-tier Internet – one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web.

      A blow for Facebook's “Free Basics”

      While the new rules should long outlast this moment in India's Internet history, the ruling should immediately force Facebook to cancel the local deployment of “Free Basics”, a smart phone application that offers free access to Facebook, Facebook-owned products like WhatsApp, and a select suite of other websites for users who do not pay for mobile data plans.

      Facebook's efforts to deploy and promote Free Basics as what the company described as a remedy to India's lack of “digital equality” have encountered significant backlash. Last December, technology critic and Quartz writer Alice Truong reacted to Free Basics saying:

      Zuckerberg almost portrays net neutrality as a first-world problem that doesn’t apply to India because having some service is better than no service.

      When TRAI solicited public comments on the matter of differential pricing, Facebook responded with an aggressive advertising campaign on billboards and in television commercials across the nation. It also embedded a campaign inside Facebook, asking users to write to TRAI in support of Free Basics.

      TRAI criticized Facebook for what it seemed to regard as manipulation of the public. Facebook was also heavily challenged by many policy and open Internet advocates, including non-profits like the Free Software Movement of India and the Savetheinternet.in campaign. The latter two collectives strongly discouraged Free Basics by mobilizing public opinion; Savetheinternet.in alone facilitated a campaign in which citizens sent over 2.4 million emails to TRAI urging the agency to put a stop to differential pricing.

      Alongside these efforts, 500 Indian startups, including major ones like Cleartrip, Zomato, Practo, Paytm and Cleartax, wrote to India's Prime Minister Narendra Modi on the Indian Republic Day, January 26, requesting continued support for net neutrality.

      Stand-up comedians like Abish Mathew and groups like All India Bakchod and East India Comedy created humorous and informative videos explaining the regulatory debate and supporting net neutrality, which went viral.

      Had differential pricing been officially legalized, it would have adversely affected startups and smaller content-based companies, which most likely could never have managed to pay the higher prices needed to partner with service providers to make their services available for free. This would have paved the way for tech giants like Facebook to capture the entire market. And this would be no small gain for a company like Facebook: India represents the world's second-largest market of Internet users after China, where Facebook remains blocked.

      The Internet responds

      There have been mixed responses on social media, both supporting and opposing the ruling. Among open Internet advocates both in India and the US, the response was celebratory:

      There are also those like Panuganti Rajkiran who opposed the ruling:

      A terrible decision.. The worst part here is the haves deciding for the have nots what they can have and what they cannot.

      Soumya Manikkath says:

      So all is not lost in the world, for the next two years at least. Do come back with a better plan, dear Facebook, and we'll rethink, of course.

      The ruling leaves an open pathway for companies to offer consumers free access to the Internet, provided that this access is truly open and does not limit one's ability to browse any site of her choosing.

      Bangalore-based Internet policy expert Pranesh Prakash noted that this work must continue until India is truly — and equally — connected:

      Comments by the Centre for Internet and Society on the Report of the Committee on Medium Term Path on Financial Inclusion

      by Vipul Kharbanda last modified Mar 01, 2016 01:53 PM
      Apart from item-specific suggestions, CIS would like to make one broad comment with regard to the suggestions dealing with linking of Aadhaar numbers with bank accounts. Aadhaar is increasingly being used by the government in various departments as a means to prevent fraud; however, there is a serious dearth of evidence to suggest that Aadhaar linkage actually prevents leakages in government schemes. The same argument applies when Aadhaar numbers are sought to be utilized to prevent leakages in the banking sector.

       

      The Centre for Internet and Society (CIS) is a non-governmental organization which undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives.

      In the course of its work, CIS has also extensively researched and written about the Aadhaar scheme of the Government of India, especially from a privacy and technical point of view. CIS was part of the Group of Experts on Privacy constituted by the Planning Commission under the chairmanship of Justice A.P. Shah, and was instrumental in drafting a major part of the Group's report. Against this background, CIS would like to mention that it is neither an expert on banking policy in general nor wishes to comment upon the purely banking-related recommendations of the Committee. We would like to limit our recommendations to the areas in which we have some expertise, and will therefore comment only on certain recommendations of the Committee.

      Before giving our individual comments on the relevant recommendations, CIS would like to make one broad comment with regard to the suggestions dealing with linking of Aadhaar numbers with bank accounts. Aadhaar is increasingly being used by the government in various departments as a means to prevent fraud; however, there is a serious dearth of evidence to suggest that Aadhaar linkage actually prevents leakages in government schemes. The same argument applies when Aadhaar numbers are sought to be utilized to prevent leakages in the banking sector.

      Another problem with linking bank accounts to Aadhaar numbers, even if the linkage is not mandatory, is that when the RBI issues an advisory to (optionally) link Aadhaar numbers with bank accounts, a number of banks may implement the advisory too strictly and refuse service to customers (especially marginal customers) whose bank accounts are not linked to their Aadhaar numbers, perhaps due to technical problems in the registration procedure. This would deny those individuals access to the banking sector, which is contrary to the aims and objectives of the Committee and the stated policy of the RBI to improve access to banking.

      Individual Comments

      Recommendation 1.4 - Given the predominance of individual account holdings, the Committee recommends that a unique biometric identifier such as Aadhaar should be linked to each individual credit account and the information shared with credit information companies. This will not only be useful in identifying multiple accounts, but will also help in mitigating the overall indebtedness of individuals who are often lured into multiple borrowings without being aware of its consequences.

      CIS Comment: The Committee's discussion preceding this recommendation revolves around the total incidence of indebtedness in rural areas and the Debt-to-Asset ratio representing payment capacity. However, the Committee has not discussed any evidence indicating that borrowing from multiple banks leads to greater indebtedness for individual account holders in the rural sector. Without identifying the problem through evidence, the Committee has suggested linking bank accounts with Aadhaar numbers as a solution.

      Recommendation 2.2 - On the basis of cross-country evidence and our own experience, the Committee is of the view that to translate financial access into enhanced convenience and usage, there is a need for better utilization of the mobile banking facility and the maximum possible G2P payments, which would necessitate greater engagement by the government in the financial inclusion drive.

      CIS Comment: The drafting of this recommendation suggests that the RBI is batting for the DBT model rather than the subsidy model. However, an examination of the discussion in the report shows that the Committee has not discussed or examined the subsidy model vis-à-vis the direct benefit transfer (DBT) model here (though it does recommend DBT in the chapter on G2P payments); it is only trying to say that where government-to-person money transfers have to take place, they should take place using mobile banking, payment wallets or other such technologies, which have been successful in various countries across the world.

      Recommendation 3.1 - The Committee recommends that in order to increase formal credit supply to all agrarian segments, the digitization of land records should be taken up by the states on a priority basis.

      Recommendation 3.2 - In order to ensure actual credit supply to the agricultural sector, the Committee recommends the introduction of Aadhaar-linked mechanism for Credit Eligibility Certificates. For example, in Andhra Pradesh, the revenue authorities issue Credit Eligibility Certificates to Tenant Farmers (under ‘Andhra Pradesh Land Licensed Cultivators Act No 18 of 2011'). Such tenancy /lease certificates, while protecting the owner’s rights, would enable landless cultivators to obtain loans. The Reserve Bank may accordingly modify its regulatory guidelines to banks to directly lend to tenants / lessees against such credit eligibility certificates.

      CIS Comment: In its discussion preceding Recommendation 3.2, the Committee has discussed the problems faced by landless farmers; however, there is no discussion or evidence suggesting that an Aadhaar-linked Credit Eligibility Certificate is the best solution, or even a solution, to the problem. The concern being expressed here is not with the system of a Credit Eligibility Certificate, but with the insistence on linking it to an Aadhaar number, and with whether the system can be put in place without such linkage.

      Recommendation 6.11 - Keeping in view the indebtedness and rising delinquency, the Committee is of the view that the credit history of all SHG members would need to be created, linking it to individual Aadhaar numbers. This will ensure credit discipline and will also provide comfort to banks.

      CIS Comment: There is no discussion in the Report of the reasons for the increase in indebtedness of SHGs. While the recommendation of creating credit histories for SHG members is laudable and very welcome, no logical reason has been brought out in the Report as to why these histories need to be linked to individual Aadhaar numbers, or how such linkage will solve any problems.

      Recommendation 6.13 - The Committee recommends that bank credit to MFIs should be encouraged. The MFIs must provide credit information on their borrowers to credit bureaus through Aadhaar-linked unique identification of individual borrowers.

      CIS Comment: Since the discussion preceding this recommendation clearly identifies multiple lending as one of the problems in the microfinance sector, and also suggests better credit information on borrowers as a possible solution, the recommendation per se seems sound. However, we would still like to point out that the RBI may consider alternative means of obtaining borrower credit histories rather than relying on Aadhaar numbers alone.

      Recommendation 7.3 - Considering the widespread availability of mobile phones across the country, the Committee recommends the use of application-based mobiles as PoS for creating necessary infrastructure to support the large number of new accounts and cards issued under the PMJDY. Initially, the FIF can be used to subsidize the associated costs. This will also help to address the issue of low availability of PoS compared to the number of merchant outlets in the country. Banks should encourage merchants across geographies to adopt such application-based mobiles as a PoS through some focused education and PoS deployment drives.

      Recommendation 7.5 - The Committee recommends that the National Payments Corporation of India (NPCI) should ensure faster development of a multi-lingual mobile application for customers who use non-smart phones, especially for users of NUUP; this will address the issue of linguistic diversity and thereby promote its popularization and quick adoption.

      Recommendation 7.8 - The Committee recommends that pre-paid payment instrument (PPI) interoperability may be allowed for non-banks to facilitate ease of access to customers and promote wider spread of PPIs across the country. It should however require non-bank PPI operators to enhance their customer grievance redressal mechanism to deal with any issues thereof.

      Recommendation 7.9 - The Committee is of the view that for non-bank PPIs, a small-value cashout may be permitted to incentivize usage with the necessary safeguards including adequate KYC and velocity checks.

      CIS Comments: While CIS supports the effort to use technology and mobile phones to increase banking penetration and improve access to the formal financial sector for rural and semi-rural areas, sufficient security mechanisms should be put in place while rolling out these services keeping in mind the low levels of education and technical sophistication that are prevalent in rural and semi-rural areas.

      Recommendation 8.1 - The Committee recommends that the deposit accounts of beneficiaries of government social payments, preferably all deposit accounts across banks, including the ‘in-principle’ licensed payments banks and small finance banks, be seeded with Aadhaar in a time-bound manner so as to create the necessary eco-system for cash transfer. This could be complemented with the necessary changes in the business correspondent (BC) system (see Chapter 6 for details) and increased adoption of mobile wallets to bridge the ‘last mile’ of service delivery in a cost-efficient manner at the convenience of the common person. This would also result in significant cost reductions for the government besides promoting financial inclusion.

      CIS Comment: The Committee's report has already given several examples of how cash transfers directly into bank accounts (rather than requiring beneficiaries to be at a particular place at a particular time) could be more efficient as well as economical, and the Committee makes the same point again here, in the chapter dealing specifically with government-to-person payments. However, even before this recommendation, there is no discussion of the need for linking or “seeding” beneficiaries' deposit accounts with Aadhaar numbers, let alone of how such seeding would solve any problems.

      Recommendation 10.6 - Given the focus on technology and the increasing number of customer complaints relating to debit/credit cards, the National Payments Corporation of India (NPCI) may be invited to SLBC meetings. They may particularly take up issues of Aadhaar-linkage in bank and payment accounts.

      CIS Comment: There is no discussion of why this recommendation has been made; more particularly, there is no discussion at all of why issues of Aadhaar linkage in bank and payment accounts need to be taken up.

      NN_Conference Report.pdf

      by Prasad Krishna last modified Feb 27, 2016 08:07 AM


      Adoption of Standards in Smart Cities - Way Forward for India

      by Vanya Rakesh last modified Apr 11, 2016 03:04 AM
      With a paradigm shift towards the concept of “smart cities” globally, as well as in India, such cities have been defined by several international standardization bodies and countries; however, there is no uniform definition adopted globally. Standards are the glue that allows infrastructures to link and operate efficiently, as they make technologies interoperable and efficient.

      Click here to download the full file

      Globally, the pace of urbanization is increasing exponentially. The world’s urban population is projected to rise from 3.6 billion to 6.3 billion between 2011 and 2050. A solution has been the development of sustainable cities that improve efficiency and integrate infrastructure and services [1]. It has been estimated that during the next 20 years, 30 Indians will leave rural India for urban areas every minute, necessitating smart and sustainable cities to accommodate them [2]. The Smart Cities Mission of the Ministry of Urban Development was announced in 2014, followed by the selection of 100 cities in 2015, 20 of which were selected for the first phase of the project in 2016. The Mission [3] lists the “core infrastructural elements” that a smart city would incorporate, such as adequate water supply, assured electricity, sanitation, efficient public transport, affordable housing (especially for the poor), robust IT connectivity and digitisation, e-governance and citizen participation, a sustainable environment, safety and security for citizens, and health and education.

With a paradigm shift towards the concept of "Smart Cities" globally as well as in India, such cities have been defined by several international standardization bodies and countries; however, there is no uniform definition adopted globally. The envisioned modern and smart city promises delivery of high-quality services to citizens and will harness data-capture and communication-management technologies. The performance of such cities would be monitored on the basis of their physical as well as social structure, comprising smart approaches and solutions for utilities and transport.

Standards are the glue that allows infrastructures to link and operate efficiently, as they make technologies interoperable. Interoperability is essential, and to ensure smart integration of the various systems in a smart city, internationally agreed standards that include technical specifications and classifications must be adhered to. The development of international standards ensures seamless interaction between components from different suppliers and technologies [4].

      Standardized indicators within standards benefit smart cities in the following ways:

1. Effective governance and efficient delivery of services.
2. International and local targets, benchmarking and planning.
3. Informed decision-making and policy formulation.
4. Leverage for funding and recognition from international entities.
5. Transparency and open data for investment attractiveness.
6. A reliable foundation for harnessing big data and the information explosion, helping cities build the core knowledge needed for decision-making and enabling comparative insight.

The adoption of standards for smart cities has been advocated across the world, as standards are perceived to be an effective tool to foster the development of cities. The Director of the ITU Telecommunication Standardization Bureau, Chaesub Lee, is of the view that "Smart cities will employ an abundance of technologies in the family of the Internet of Things (IoT) and standards will assist the harmonized implementation of IoT data and applications, contributing to effective horizontal integration of a city's subsystems" [5].

      Smart Cities standards in India

After the announcement of the Mission by the Indian Government, the National Association of Software and Services Companies (NASSCOM) partnered with Accenture [6] to prepare a report titled 'Integrated ICT and Geospatial Technologies Framework for 100 Smart Cities Mission' [7], exploring the role of ICT in developing smart cities [8]. The report, released in May 2015, lists 55 global standards that could be applicable to smart cities in India, covering several city sub-systems such as urban planning, transport, governance, energy, and climate and pollution management.

Though NASSCOM is working closely with the Ministry of Urban Development to create a sustainable model for smart cities [9], in the absence of regulatory standards the Bureau of Indian Standards (BIS) has undertaken the task of formulating standardised guidelines for central and state authorities on the planning, design and construction of smart cities, setting up a technical committee under the Bureau's Civil Engineering Department. However, adoption of these standards by implementing agencies would be voluntary, and they are intended to complement internationally available documents in this area [10].

Developing national standards in line with these international standards would enable interoperability (i.e., devices and systems working together) and provide a roadmap for addressing key issues such as data protection, privacy and other risks inherent in the digital delivery and use of public services in the envisioned smart cities. These issues call for comprehensive data-management standards in India to instil public confidence and trust [11].

      Key International Smart Cities Standards

Following are the key internationally accepted and recognized Smart Cities standards, developed by leading organisations and the national standardization bodies of several countries, which India could adopt or use as the basis for national standards.

      The International Organization for Standardization (ISO) - Smart Cities Standards

ISO is an instrumental body advocating and developing standards for smart cities, aiming to safeguard people's right to a liveable and sustainable environment. The ISO Smart Cities Strategic Advisory Group uses the following working definition: A 'Smart City' is one that dramatically increases the pace at which it improves its social, economic and environmental (sustainability) outcomes, responding to challenges such as climate change, rapid population growth, and political and economic instability by fundamentally improving how it engages society, how it applies collaborative leadership methods, how it works across disciplines and city systems, and how it uses data information and modern technologies in order to transform services and quality of life for those in and involved with the city (residents, businesses, visitors), now and for the foreseeable future, without unfair disadvantage of others or degradation of the natural environment. [For details see ISO/TMB Smart Cities Strategic Advisory Group Final Report, September 2015 (ISO Definition, June 2015)].

The ISO Technical Committee 268 works on standardization in the field of Sustainable Development in Communities [12], encouraging the development and implementation of holistic, cross-sector and area-based approaches to sustainable development in communities. The Committee comprises three Working Groups [13]:

• Working Group 1 (System Management): ISO 37101 sets requirements, guidance and supporting techniques for sustainable development in communities. It is designed to help all kinds of communities manage their sustainability, smartness and resilience, improve their contribution to sustainable development, and assess their performance in this area [14].
• Working Group 2 (City Indicators): The key Smart Cities standards developed by ISO TC 268 WG 2 are:

      ISO 37120 Sustainable Development of Communities — Indicators for City Services and Quality of Life

One of the key standards, and an important step in this regard, was ISO 37120:2014 under ISO Technical Committee 268 (see Working on Standardization in the field of Sustainable Development in Communities), which provides clearly defined city performance indicators (divided into core and supporting indicators) as a benchmark for city services and quality of life, along with a standard approach to measuring each, for city leaders and citizens [15]. The standard is global in scope and can help cities prioritize budgets, improve operational transparency, and support open data and applications [16]. It follows the principles set out [17] and can be used in conjunction with ISO 37101.

ISO 37120, the first ISO standard on global city indicators, was published in 2014. It was developed on the basis of a set of indicators created and extensively tested by the Global City Indicators Facility (GCIF, a project of the University of Toronto) and its 250+ member cities globally. The GCIF is committed to building standardized city indicators for performance management, including a database of comparable statistics that allow cities to track their effectiveness on everything from planning and economic growth to transportation, safety and education [18].

The World Council on City Data (WCCD) [19], a sister organization of the GCIF, was established in 2014 to operationalize ISO 37120 across cities globally. The standard encompasses 100 indicators organized around 17 themes to support city services and quality of life, and the resulting data is accessible through the WCCD Open City Data Portal, which allows for cutting-edge visualizations and comparisons. Indian cities are not yet listed with the WCCD [20].

      The indicators are listed under the following heads [21]:

      1. Economy
      2. Education
      3. Environment
      4. Energy
      5. Finance
      6. Fire and Emergency Responses
      7. Governance
      8. Health
      9. Safety
      10. Shelter
      11. Recreation
      12. Solid Waste
      13. Telecommunication and innovation
      14. Transportation
      15. Urban Planning
      16. Waste water
      17. Water and Sanitation

This International Standard is applicable to any city, municipality or local government that undertakes to measure its performance in a comparable and verifiable manner, irrespective of size, location or level of development. City indicators have the potential to be used as critical tools for city managers, politicians, researchers, business leaders, planners, designers and other professionals [22]. The WCCD highlights the need for cities to have a set of globally standardized indicators to [23]:

1. Manage and make informed decisions through data analysis;
2. Benchmark and set targets;
3. Leverage funding from senior levels of government;
4. Plan and establish new frameworks for sustainable urban development;
5. Evaluate the impact of infrastructure projects on the overall performance of a city.
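The benchmarking benefit above rests on shared definitions: once two cities report an indicator computed the same standardized way, the values become directly comparable. The following is a minimal illustrative sketch, with invented city names, invented values, and indicator keys that are merely a hypothetical subset of the kind of themes ISO 37120 covers (this is not WCCD data or an official schema):

```python
# Hypothetical example: with indicators reported under the same
# standardized definitions, cross-city comparison is a simple lookup.
cities = {
    "City A": {"pm2.5_ug_m3": 48.0, "transit_trips_per_capita": 210},
    "City B": {"pm2.5_ug_m3": 12.0, "transit_trips_per_capita": 385},
}

def benchmark(indicator: str, higher_is_better: bool) -> str:
    """Return the better-performing city on a shared indicator."""
    pick = max if higher_is_better else min
    return pick(cities, key=lambda c: cities[c][indicator])

print(benchmark("pm2.5_ug_m3", higher_is_better=False))              # City B
print(benchmark("transit_trips_per_capita", higher_is_better=True))  # City B
```

Without an agreed standard, the same comparison would first require reconciling each city's differing definitions and measurement methods, which is exactly the overhead ISO 37120-style indicators remove.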

ISO/DTR 37121: Inventory and Review of Existing Indicators on Sustainable Development and Resilience in Cities

      The second standard under ISO TC 268 WG 2 is ISO 37121, which defines additional indicators related to sustainable development and resilience in cities. Some of the indicators include: Smart Cities, Smart Grid, Economic Resilience, Green Buildings, Political Resilience, Protection of biodiversity, etc. The complete list can be viewed on the Resilient Cities website [24].

Working Group 3 (Terminology): There are no publicly available documents so far giving details of the status of this group's activities. ISO Technical Committee 268 also includes Sub-Committee 1 (Smart Community Infrastructure) [25], comprising the following Working Groups: 1) WG 1 Infrastructure Metrics, and 2) WG 2 Smart Community Infrastructure.

      The key Smart Cities Standards developed by ISO under this are:

      • ISO 37151:2015 Smart community infrastructures — Principles and Requirements for Performance Metrics
  In 2015, a new ISO technical specification for smart cities, ISO/TS 37151:2015 on principles and requirements for performance metrics, was released. The purpose of standardization in the field of smart community infrastructures, such as energy, water, transportation, waste, and information and communications technology (ICT), is to promote the international trade of community infrastructure products and services and to improve sustainability in communities by establishing harmonized product standards [26]. The metrics in this standard will support city and community managers in planning and measuring performance, and in comparing and selecting procurement proposals for products and services geared at improving community infrastructures [27].
  This Technical Specification gives principles and specifies requirements for the definition, identification, optimization and harmonization of community infrastructure performance metrics, and gives recommendations for analysis regarding the interoperability, safety and security of community infrastructures [28]. This new Technical Specification supports the use of ISO 37120 [29].

      • ISO/TR 37150:2014 Smart Community Infrastructures - Review of Existing Activities Relevant to Metrics
        This standard addresses community infrastructures such as energy, water, transportation, waste and information and communications technology (ICT). Smart community infrastructures take into consideration environmental impact, economic efficiency and quality of life by using information and communications technology (ICT) and renewable energies to achieve integrated management and optimized control of infrastructures. Integrating smart community infrastructures for a community helps improve the lifestyles of its citizens by, for example: reducing costs, increasing mobility and accessibility, and reducing environmental pollutants.
  ISO/TR 37150 reviews relevant metrics for smart community infrastructures and provides stakeholders with a better understanding of the smart community infrastructures available around the world, helping to promote international trade in community infrastructure products and giving information about leading-edge technologies to improve sustainability in communities [30]. This standard, along with the above-mentioned standards [31], supports the multi-billion-dollar smart cities technology industry.

Several other ISO Working Groups developing standards applicable to smart and sustainable cities are listed on our website [32].

      The International Telecommunications Union (ITU)

      The ITU is another global body working on development of standards regarding smart cities.

A study group was formed in 2015 to tackle standardization requirements for the Internet of Things (IoT), with an initial focus on IoT applications in smart cities, and to enable the coordinated development of IoT technologies, including machine-to-machine communications and ubiquitous sensor networks [33]. Titled "ITU-T Study Group 20: IoT and its applications, including smart cities and communities", the group was established to develop standards that leverage IoT technologies to address urban-development challenges, as well as mechanisms for the interoperability of IoT applications and the datasets employed by various vertically oriented industry sectors [34].

ITU-T also concluded a focus group on smart sustainable cities in May 2015, which acted as an open platform for smart city stakeholders to exchange knowledge, with a view to identifying the standardized frameworks needed to support the integration of ICT services in smart cities. Its parent group, ITU-T Study Group 5, has agreed on the following definition of a Smart Sustainable City:
      "A smart sustainable city is an innovative city that uses information and communication technologies (ICTs) and other means to improve quality of life, efficiency of urban operation and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social, environmental as well as cultural aspects".

      UK - British Standards Institution

Apart from the global standards-setting organisations, many countries have been looking at developing standards to address the growth of smart cities across the globe. In the UK, the British Standards Institution (BSI) has been commissioned by the UK Department for Business, Innovation and Skills (BIS) to conceive a Smart Cities Standards Strategy identifying areas of smart city development where standards are needed. The standards are developed through a consensus-driven process under the BSI to ensure good practice is shared between all the actors. The BSI also launched the Cities Standards Institute to bring together cities and key industry leaders and innovators, to identify the challenges facing cities, provide solutions to common problems and define the future of smart city standards [35].

• PAS 181 Smart city framework: Guide to establishing strategies for smart cities and communities establishes a good-practice framework for city leaders to develop, agree and deliver smart city strategies that can help transform their city's ability to meet future challenges and achieve its goals. The smart city framework (SCF) does not describe a one-size-fits-all model for the future of UK cities; rather, it focuses on the enabling processes by which the innovative use of technology and data, together with organizational change, can help deliver the diverse visions for future UK cities in more efficient, effective and sustainable ways [36].

• PD 8101 Smart cities: Guide to the role of the planning and development process gives guidance on planning new developments under smart city plans and provides an overview of the key issues to be considered and prioritized. The document is for use by local-authority planning and regeneration officers to identify good practice in a UK context and the tools they could use to implement it. The aim is to enable new developments to be built in a way that supports smart city aspirations at minimal cost [37].

• PAS 182 Smart city concept model: Guide to establishing a model for data establishes an interoperability and data-sharing framework between agencies in smart cities, so that:

  1. Information can be shared and understood between organizations and people at each level of the city;
  2. The derivation of data in each layer can be linked back to data in the previous layer;
  3. The impact of a decision can be observed in operational data.

  The smart city concept model (SCCM) provides a framework that can normalize and classify information from many sources, so that data sets can be discovered and combined to gain a better picture of the needs and behaviours of a city's citizens (residents and businesses), helping to identify issues and devise solutions. PAS 182 is aimed at organizations that provide services to communities in cities and manage the resulting data, as well as at decision-makers and policy developers in cities [38].
• PAS 180 Smart cities: Vocabulary helps build a strong foundation for future standardization and good practice by providing an industry-agreed understanding of smart city terms and definitions for use in the UK. It provides a working definition of a smart city: "Smart Cities" is a term denoting the effective integration of physical, digital and human systems in the built environment to deliver a sustainable, prosperous and inclusive future for its citizens [39]. This aims to improve communication and understanding of smart cities by providing a common language for developers, designers, manufacturers and clients. The standard also defines smart city concepts across the different infrastructure and systems elements used across all service delivery channels, and is intended for city authorities and planners, buyers of smart city services and solutions [40], as well as product and service providers.

       

      Endnotes

      [1] See: http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf.

      [2] See: http://www.ibm.com/smarterplanet/in/en/sustainable_cities/ideas/.

      [3] See: http://www.thehindubusinessline.com/economy/smart-cities-mission-welcome-to-tomorrows-world/article8163690.ece.

      [4] See: http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf.

      [5] See: http://www.iso.org/iso/news.htm?refid=Ref2042.

      [6] See: http://www.livemint.com/Companies/5Twmf8dUutLsJceegZ7I9K/Nasscom-partners-Accenture-to-form-ICT-framework-for-smart-c.html.

      [7] See: http://www.nasscom.in/integrated-ict-and-geospatial-technologies-framework-100-smart-cities-mission.

      [8] See: http://www.cxotoday.com/story/nasscom-creates-framework-for-smart-cities-project/.

      [9] See: http://www.livemint.com/Companies/5Twmf8dUutLsJceegZ7I9K/Nasscom-partners-Accenture-to-form-ICT-framework-for-smart-c.html.

      [10] See: http://www.business-standard.com/article/economy-policy/in-a-first-bis-to-come-up-with-standards-for-smart-cities-115060400931_1.html.

      [11] See: http://www.longfinance.net/groups7/viewdiscussion/72-financing-financing-tomorrow-s-cities-how-standards-can-support-the-development-of-smart-cities.html?groupid=3.

      [12] See: http://www.iso.org/iso/iso_technical_committee?commid=656906.

      [13] See: http://cityminded.org/wp-content/uploads/2014/11/Patricia_McCarney_PDF.pdf.

      [14] See: http://www.iso.org/iso/news.htm?refid=Ref1877.

      [15] See: http://smartcitiescouncil.com/article/new-iso-standard-gives-cities-common-performance-yardstick.

      [16] See: http://smartcitiescouncil.com/article/dissecting-iso-37120-why-new-smart-city-standard-good-news-cities.

      [17] See: http://www.iso.org/iso/catalogue_detail?csnumber=62436.

      [18] See: http://www.cityindicators.org/.

      [19] See: http://www.dataforcities.org/.

      [20] See: http://news.dataforcities.org/2015/12/world-council-on-city-data-and-hatch.html.

      [21] See: http://news.dataforcities.org/2015/12/world-council-on-city-data-and-hatch.html.

      [22] See: http://www.iso.org/iso/37120_briefing_note.pdf.

      [23] See: http://www.dataforcities.org/wccd/.

      [24] See: http://resilient-cities.iclei.org/fileadmin/sites/resilient-cities/files/Webinar_Series/HERNANDEZ_-_ICLEI_Resilient_Cities_Webinar__FINAL_.pdf.

      [25] See: http://www.iso.org/iso/iso_technical_committee?commid=656967.

      [26] See: https://www.iso.org/obp/ui/#iso:std:iso:ts:37151:ed-1:v1:en.

      [27] See: http://www.iso.org/iso/home/news_index/news_archive/news.htm?refid=Ref2001&utm_medium=email&utm_campaign=ISO+Newsletter+November&utm_content=ISO+Newsletter+November+CID_4182720c31ca2e71fa93d7c1f1e66e2f&utm_source=Email%20marketing%20software&utm_term=Read%20more.

      [28] See: http://www.iso.org/iso/37120_briefing_note.pdf.

      [29] See: http://standardsforum.com/isots-37151-smart-cities-metrics/.

      [30] See: http://www.iso.org/iso/executive_summary_iso_37150.pdf.

      [31] See: http://standardsforum.com/isots-37151-smart-cities-metrics/.

      [32] See: http://cis-india.org/internet-governance/blog/database-on-big-data-and-smart-cities-international-standards.

      [33] See: http://smartcitiescouncil.com/article/itu-takes-internet-things-standards-smart-cities.

      [34] See: https://www.itu.int/net/pressoffice/press_releases/2015/22.aspx.

      [35] See: http://www.bsigroup.com/en-GB/smart-cities/.

      [36] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-181-smart-cities-framework/.

      [37] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PD-8101-smart-cities-planning-guidelines/.

      [38] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-182-smart-cities-data-concept-model/.

      [39] See: http://www.iso.org/iso/smart_cities_report-jtc1.pdf.

      [40] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-180-smart-cities-terminology/.

      Flaws in the UIDAI Process

      by Hans Varghese Mathews last modified Mar 06, 2016 10:40 AM
The accuracy of biometric identification depends on the chance of a false positive: the probability that the identifiers of two persons will match. Individuals whose identifiers match might be termed duplicands. When very many people are to be identified, success can be measured by the (low) proportion of duplicands. The Government of India is engaged upon biometrically identifying the entire population of India. An experiment performed at an early stage of the programme has allowed us to estimate the chance of a false positive, and from that to estimate the proportion of duplicands. For the current population of 1.2 billion, the expected proportion of duplicands is 1/121, a ratio which is far too high.

      The article was published in Economic & Political Weekly, Journal » Vol. 51, Issue No. 9, 27 Feb, 2016.


      A legal challenge is being mounted in the Supreme Court, currently, to the programme of biometric identification that the Unique Identification Authority of India (UIDAI) is engaged upon: an identification preliminary and a requisite to providing citizens with “Aadhaar numbers” that can serve them as “unique identifiers” in their transactions with the state. What follows will recount an assessment of their chances of success. We shall be using data that was available to the UIDAI and shall employ only elementary ways of calculation. It should be recorded immediately that an earlier technical paper by the author (Mathews 2013) has been of some use to the plaintiffs, and reference will be made to that in due course.

      The Aadhaar numbers themselves may or may not derive, in some way, from the biometrics in question; the question is not material here. For our purposes a biometric is a numerical representation of some organic feature: like the iris or the retina, for instance, or the inside of a finger, or the hand taken whole even. We shall consider them in some more detail later. The UIDAI is using fingerprints and iris images to generate a combination of biometrics for each individual. This paper bears on the accuracy of the composite biometric identifier. How well those composites will distinguish between individuals can be assessed, actually, using the results of an experiment conducted by the UIDAI itself in the very early stages of its operation; and our contention is that, from those results themselves, the UIDAI should have been able to estimate how many individuals would have their biometric identifiers matching those of some other person, under the best of circumstances even, when any good part of population has been identified.
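The arithmetic structure of such an estimate can be sketched as follows. This is only an illustration of the back-of-the-envelope calculation, not the paper's actual method: with a population of n and a per-pair false-match probability p, each person's identifier is compared against the other n - 1, so the chance of at least one false match is 1 - (1 - p)^(n - 1). The value of p used below is backed out from the article's 1/121 figure for n = 1.2 billion and is purely illustrative, not the paper's estimated value.

```python
import math

def duplicand_proportion(n: int, p: float) -> float:
    """Expected proportion of people whose biometric identifier
    falsely matches at least one other person's, given population
    size n and per-pair false-match probability p.
    Equals 1 - (1 - p)**(n - 1); the expm1/log1p form avoids
    precision loss for very small p and very large n."""
    return -math.expm1((n - 1) * math.log1p(-p))

n = 1_200_000_000
# Illustrative per-pair probability, backed out from the 1/121 figure.
p = -math.log1p(-1 / 121) / (n - 1)

print(round(duplicand_proportion(n, p), 6))  # → 0.008264, i.e. about 1/121
```

The point the sketch makes concrete is the scale effect: even a per-pair false-match probability of a few parts per trillion, multiplied across comparisons with over a billion other people, yields roughly one duplicand in every 121 persons.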


      Read the full article here.


      The author thanks Nico Temme of the Centrum Wiskunde & Informatica in The Netherlands for the bounds he derived on the chance of a false positive. He is particularly grateful to the anonymous referee of this journal who, through two rounds of comment, has very much improved the presentation of the results. A technical supplement to this paper is placed on the EPW website along with this paper.


      Aadhaar Bill fails to incorporate suggestions by the Standing Committee

      by Amber Sinha last modified Mar 10, 2016 03:58 PM
In 2011, a standing committee report led by Yashwant Sinha was scathing in its indictment of the Aadhaar Bill introduced by the UPA government. Five years later, the NDA government has introduced a new bill which is a rehash of the same. I look at the concerns raised by the committee report, none of which have been addressed by the new bill.

The article was published by The Wire on March 10, 2016.

In December 2010, the UPA Government introduced the National Identification Authority of India Bill, 2010 in Parliament. It was subsequently referred to the Standing Committee on Finance by the Speaker of the Lok Sabha under Rule 331E of the Rules of Procedure and Conduct of Business in Lok Sabha. The Committee, headed by BJP leader Yashwant Sinha, took evidence from the Minister of Planning and the UIDAI on the government's side, and sought the views of bodies such as the National Human Rights Commission and the Indian Banks' Association, and of researchers like Dr. Reetika Khera and Dr. Usha Ramanathan. In 2011, having heard the various parties and considered the concerns and apprehensions about the UID scheme, the Committee deemed the bill unacceptable and suggested a reconsideration of the UID scheme as well as the draft legislation.

The Aadhaar programme has so far been implemented by the Unique Identification Authority of India, a Central Government agency created through an executive order. The programme has been shrouded in controversy over issues of privacy and security, resulting in a Public Interest Litigation filed by Justice K.S. Puttaswamy (retd.) in the Supreme Court. While the BJP had criticised the project as well as the draft legislation when in opposition, once it came to power, and particularly after it launched schemes like Digital India and the Jan Dhan Yojana, it decided to continue with the project and use Aadhaar as the identification technology for these schemes. In the last year, the Supreme Court has passed orders prohibiting making Aadhaar mandatory for availing services. One of the questions the government has had to answer, both inside and outside the court, is the lack of a legislative mandate for a project of this size. About five years later, the new BJP-led government has come back with a rehash of the same old draft, and no comments made by the standing committee have been taken into account.

The Standing Committee on the old bill had taken great exception to the continued collection of data and issuance of Aadhaar numbers while the Bill was pending in Parliament. The report said that implementing the provisions of the Bill and continuing to incur expenditure from the exchequer circumvented the prerogative powers of Parliament. Nonetheless, the project has continued unabated since its inception in 2009. Listed below are some of the issues that the Committee identified with the UID project and the draft legislation, none of which have been addressed in the current Bill.

One of the primary arguments made by proponents of Aadhaar has been that it would be useful in providing services to marginalized sections of society who currently lack identification documents and consequently are unable to receive state-sponsored services, benefits and subsidies. The report points out that the project would not be able to achieve this, as no statistical data on the marginalized sections of society is being used by the UIDAI to provide coverage to them. The introducer system, which was supposed to provide Aadhaar numbers to those without any form of identification, has been used to enroll only 0.03% of the total number of people registered. Further, the Biometrics Standards Committee of the UIDAI has itself acknowledged the problems caused by the high number of manual laborers in India, whose worn fingerprints yield sub-optimal scans. A report by 4G Identity Solutions estimates that while in any population approximately 5% of people have unreadable fingerprints, in India this could lead to a failure to enroll up to 15% of the population. In this manner, the project could actually end up excluding more people.

The Report also pointed to the lack of a cost-benefit analysis before going ahead with a scheme of this scale. It makes reference to the report by the London School of Economics on the UK Identity Project, which was shelved due to: a) the huge costs involved in the project; b) the complexity of the exercise and the unavailability of reliable, safe and tested technology; c) risks to the security and safety of registrants; d) security measures at a scale that would result in substantially higher implementation and operational costs; and e) extreme dangers to the rights of registrants and the public interest. The Committee Report insisted that such global experiences remained relevant to the UID project and needed to be considered. However, the new Bill has not been drafted with a view to addressing any of these issues.

The Committee also comes down heavily on the irregularities in data collection by the UIDAI. It raises doubts about the ability of the Registrars to effectively verify registrants, and notes the lack of any security-audit mechanism that could identify issues in enrollment. Pointing to news reports about irregularities in the process followed by the Registrars appointed by the UIDAI, the Committee deems the MoUs signed between the UIDAI and the Registrars toothless. The involvement of private parties has already been under question, with concerns raised over the lack of appropriate safeguards in the contracts with private contractors.

Perhaps the most significant observation of the Committee was that any scheme that facilitates the creation of such a massive database of personal information, and its linkage with other databases, should be preceded by a comprehensive data protection law. In stating this, the Committee acknowledged that in the absence of a privacy law governing the collection, use and storage of personal data, the UID project will lead to abuse, surveillance and profiling of individuals. It makes reference to the Privacy Bill, which is still only at the draft stage. The current data protection framework, the rules under Section 43A of the Information Technology Act, 2000, is woefully inadequate and far too limited in scope. While there are some protections built into Chapter VI of the new bill, these are nowhere near as comprehensive as those articulated in the Privacy Bill. Additionally, these protections are subject to broad exceptions which could significantly dilute their impact.

       

A comparison of the 2016 Aadhaar Bill and the 2010 NIDAI Bill

      by Vanya Rakesh — last modified Mar 09, 2016 04:08 AM
This blog post does a clause-by-clause comparison of the provisions of the National Identification Authority of India Bill, 2010 and the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016
      • Title

       

2010 Bill: The Bill was titled the National Identification Authority of India Bill, 2010.

2016 Bill: The Bill is titled the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016.

       

      • Purpose/Object Clause

       

2010 Bill: The purpose of the Bill was stated as providing for the establishment of the National Identification Authority of India to issue identification numbers to residents of India, as well as certain other classes of individuals, in order to facilitate access to benefits and services to which they are entitled.

2016 Bill: The purpose of this Bill is stated as ensuring targeted delivery of subsidies, benefits and services to residents of India in an efficient and transparent manner, by assigning unique identity numbers to such individuals.

       
      • Definitions

       
      1. 2010 Bill: “Authentication” was defined as the process in which the Aadhaar number, along with other attributes (including biometrics) are submitted to the Central Identities Data Repository for verification, done on the basis of information, data or documents available with the Repository.

        2016 Bill : “Authentication” has been defined as the process by which the Aadhaar number, along with demographic or biometric information of an individual is submitted to the Central Identities Data Repository for the purpose of verification, done on the basis of the correctness of (or lack of) information available with it.
       
2. 2010 Bill: “Authentication Record” was not defined in the previous Bill.

        2016 Bill : “Authentication Record”  has been defined under clause 2(d)  as the record of the time of authentication, the identity of the entity requesting such record and the response provided by the Authority for this purpose. 

       

3. 2010 Bill: “Authority” was defined under clause 2(d) as National Identification Authority of India established under provisions of the Bill.

2016 Bill: “Authority” has been defined under clause 2(e) as Unique Identification Authority of India established under provisions of the Bill.

       

4. 2010 Bill: “Benefit” was not defined in the previous Bill.

2016 Bill: “Benefit” has been defined under clause 2(f) as any advantage, gift, reward, relief, or payment (either in cash or kind), or such other benefits, which is provided to an individual or a group of individuals, as notified by the Central Government.

       

5. 2010 Bill: “Biometric Information” was defined under clause 2(e) as a set of biological attributes of an individual as may be specified by regulations.

2016 Bill: “Biometric Information” has been defined under clause 2(g) as biological attributes of an individual, like photograph, fingerprint, iris scan, or other such biological attributes as may be specified by regulations.

       

6. 2010 Bill: “Core Biometric Information” was not defined in the previous Bill.

2016 Bill: “Core Biometric Information” has been defined under clause 2(j) as biological attributes of an individual, like fingerprint, iris scan, or such other biological attributes as may be specified by regulations.

       

7. 2010 Bill: “Demographic Information” was defined under clause 2(h) as information specified in the regulations for the purpose of issuing an Aadhaar number, like information relating to the name, age, gender and address of an individual (other than race, religion, caste, tribe, ethnicity, language, income or health), and such other information.

        2016 Bill : “Demographic Information” has been defined under clause 2(k) as information of an individual as may be specified by regulations for the purpose of issuing an Aadhaar number like information relating to the name, date of birth, address and other relevant information, excluding race, religion, caste, tribe, ethnicity, language, records of entitlement, income or medical history of an individual.

       

8. 2010 Bill: “Enrolling Agency” was defined under clause 2(i) as an agency appointed by the Authority or the Registrars for collecting information under the Act.

        2016 Bill : “Enrolling Agency” has been defined under clause 2(l) as an agency appointed by the Authority or a Registrar for collecting demographic and biometric information of individuals under this Act.

       

9. 2010 Bill: “Member” was defined under clause 2(l) to include the Chairperson and a part-time Member of the Authority appointed under the provisions of the Bill.

        2016 Bill : “Member” has been defined under clause 2(o)  to include the Chairperson and Member of the Authority appointed under the provisions of the Bill.

       

10. 2010 Bill: “Records of Entitlement” was not defined under the previous Bill.

        2016 Bill :  “Records of Entitlement” has been defined under clause 2(r) as the records of benefits, subsidies or services provided to, or availed by, any individual under any programme.

       

11. 2010 Bill: “Requesting Entity” was not defined under the previous Bill.

2016 Bill: “Requesting Entity” has been defined under clause 2(u) as an agency or person that submits information of an individual, comprising the Aadhaar number and demographic or biometric information, to the Central Identities Data Repository for the purpose of authentication.

       

12. 2010 Bill: “Resident” was defined under clause 2(q) as an individual usually residing in a village, rural area, town, ward, demarcated area (demarcated by the Registrar General of Citizen Registration) within a ward in a town or urban area in India.

        2016 Bill : “Resident” has been defined under clause 2(v) as an individual who has resided in India for a period or periods amounting in all to one hundred and eighty-two days or more in the twelve months immediately preceding the date of application for enrolment.
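The 2016 Bill's residency test is purely arithmetic: sum the days spent in India across one or more stays within the twelve months preceding the application, and compare against the 182-day threshold. A minimal sketch of that computation follows; the function name, the inclusive treatment of stay dates, and the approximation of "twelve months" as 365 days are all illustrative assumptions, not anything the Bill itself specifies.

```python
from datetime import date, timedelta

def is_resident(stays, application_date):
    """Illustrative reading of the clause 2(v) test: at least 182 days in
    India, in total, in the 12 months preceding the application date.
    `stays` is a list of inclusive (start, end) date pairs."""
    # Assumption: "twelve months" approximated as a 365-day window.
    window_start = application_date - timedelta(days=365)
    total = 0
    for start, end in stays:
        # Clip each stay to the 12-month window before counting its days.
        s = max(start, window_start)
        e = min(end, application_date)
        if s <= e:
            total += (e - s).days + 1  # inclusive day count
    return total >= 182

# A single year-long stay clearly satisfies the threshold;
# a three-month stay does not.
print(is_resident([(date(2015, 1, 1), date(2015, 12, 31))], date(2015, 12, 31)))
print(is_resident([(date(2015, 10, 1), date(2015, 12, 31))], date(2015, 12, 31)))
```

Note that the clause aggregates "a period or periods", so discontinuous stays count towards the same total, which is why the sketch clips and sums each stay separately.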

       

13. 2010 Bill: “Review Committee” was defined under clause 2(r) as the Identification Review Committee constituted under the provisions of the Bill.

        2016 Bill : “Review Committee” has not been defined under the Bill.

       

14. 2010 Bill: “Service” was not defined in the previous Bill.

        2016 Bill : “Service” has been defined under clause 2 (w) as any provision, facility, utility or any other assistance provided in any form to an individual or a group of individuals as may be notified by the Central Government.

       

15. 2010 Bill: “Subsidy” was not defined in the previous Bill.

        2016 Bill : “Subsidy” has been defined under clause 2(x) as any form of aid, support, grant, subvention, or appropriation (either in cash or kind), as may be notified by the Central Government, given to an individual or a group of individuals.

       

      • Enrolment

       

      1. Aadhaar Numbers

2016 Bill: Under clause 3(2) of the Bill, it is stated that at the time of enrolment, the enrolling agency shall inform the individual undergoing enrolment of the following details:

      (a) the manner in which the information so collected shall be used,

      (b) the nature of recipients with whom the information is intended to be shared during authentication,and

(c) the existence of a right to access information, the procedure for making such requests for access, and details of the person/department in-charge to whom such requests can be made.

       

2. Properties of Aadhaar Number

      2010 Bill : Clause 4 (3) stated that subject to authentication, the Aadhaar number shall be accepted as a proof of identity of the Aadhaar number holder.

2016 Bill: Clause 4(3) states that subject to authentication, the Aadhaar number (either in physical or electronic form) shall be accepted as a proof of identity of the Aadhaar number holder.

      The Explanation under this clause states that for the purpose of this provision, “electronic form” shall have the same meaning as assigned to it in section 2 (1) (r) of the Information Technology Act, 2000.

       

      • Authentication

       

      1. Proof of Aadhaar number necessary for receipt of certain subsidies, benefits and services, etc. 

2016 Bill: Clause 7 of the Bill provides that, for the purpose of establishing an individual's identity as a condition of receipt of a subsidy, benefit or service, the Central or State Government (as the case may be) may require such individual to undergo authentication, or furnish proof of possession of an Aadhaar number. In case an Aadhaar number has not been assigned to an individual, such individual must make an application for enrolment.

The proviso states that if an Aadhaar number is not assigned to an individual, the individual shall be offered alternate and viable means of identification for delivery of the subsidy, benefit or service.

       

2. Authentication of Aadhaar number

      2010 Bill: Clause 5 of the Bill stated that authentication of the Aadhaar number shall be performed by the Authority, in relation to the holders’ biometric and demographic information, subject to such conditions and on payment of the prescribed fees. Also, it was provided that the Authority shall respond to an authentication query with a positive, negative or other appropriate response (excluding any demographic and biometric information).

      2016 Bill : The Bill states that authentication of the Aadhaar number shall be performed by the Authority, in relation to the holders’ biometric and demographic information, subject to such conditions and on payment of the prescribed fees.

      Clause 8 (2) provides that unless otherwise provided in the Act, the requesting entity shall— 

(a) for the purpose of authentication, obtain the consent of an individual before collecting his identity information, and

(b) ensure that the identity information of an individual is only used for submission to the Central Identities Data Repository for authentication.

      Clause 8 (3) provides that the following details shall be informed by the requesting entity to the individual submitting his identity information for the purpose of authentication: 

         a. the nature of information that may be shared upon authentication;

         b. the uses to which the information received during authentication may be put by the requesting entity; and

         c. alternatives to submission of identity information to the requesting entity.

      Clause 8(4) states that the Authority shall respond to an authentication query with a positive, negative or other appropriate response (excluding any core biometric information).
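Taken together, clauses 8(2) and 8(4) describe a simple request-response protocol: the requesting entity collects consent and identity information, submits it to the Central Identities Data Repository, and receives only a positive/negative/other answer, never core biometric information. A toy sketch of that flow is below; the dictionary repository, function names, and matching logic are all hypothetical illustrations, not the actual UIDAI system.

```python
# Toy stand-in for the Central Identities Data Repository (illustrative data).
CIDR = {
    "9999-0001": {"name": "A. Sharma", "fingerprint_hash": "abc123"},
}

def authenticate(aadhaar_number, submitted_info, consent_given):
    """Hypothetical model of the clause 8 flow: consent is checked first
    (clause 8(2)(a)), and the response is only positive/negative/other,
    with no core biometric information returned (clause 8(4))."""
    if not consent_given:
        # Clause 8(2)(a): consent must be obtained before collection.
        return "error: consent not obtained"
    record = CIDR.get(aadhaar_number)
    if record is None:
        return "negative"
    # Compare each submitted field against the stored record; the response
    # reveals only whether they match, not the stored values themselves.
    match = all(record.get(k) == v for k, v in submitted_info.items())
    return "positive" if match else "negative"

print(authenticate("9999-0001", {"name": "A. Sharma"}, consent_given=True))
print(authenticate("9999-0001", {"name": "B. Khan"}, consent_given=True))
```

The design point the clause encodes, and the sketch mirrors, is that the repository acts as a yes/no oracle: the requesting entity never learns anything beyond the verification result.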

       

3. Prohibition on requiring certain information.

2010 Bill: Clause 9 of the Bill prohibited the Authority from requiring an individual to give information pertaining to his race, religion, caste, tribe, ethnicity, language, income or health.

      2016 Bill : This provision has been removed from the 2016 Bill.

       

      • Unique Identification Authority Of India

       

      1. Establishment of Authority 

2010 Bill: Clause 11(1) of the Bill stated that the Central Government shall establish an Authority called the National Identification Authority of India, to exercise the powers conferred on it and to perform the functions assigned to it under this Act. Also, clause 11(3) provided that the head office of the Authority shall be in the National Capital Region, referred to in section 2(f) of the National Capital Region Planning Board Act, 1985.

2016 Bill: Clause 11(1) of the Bill states that the Central Government shall establish an Authority called the Unique Identification Authority of India, responsible for the processes of enrolment and authentication, and for performing such other functions assigned to it under this Act. Also, clause 11(3) provides that the head office of the Authority shall be in New Delhi.

       

2. Composition of Authority

      2010 Bill: Clause 12 provided that the Authority shall consist of a Chairperson and two part-time Members, to be appointed by the Central Government.  

2016 Bill: Clause 12 of the Bill provides that the Authority shall consist of a Chairperson (appointed on a part-time or full-time basis), two part-time Members, and the chief executive officer (who shall be the Member-Secretary of the Authority), to be appointed by the Central Government.

       

3. Qualifications for appointment of Chairperson and Members of Authority

      2010 Bill: Clause 13 provided that the Chairperson and Members of the Authority shall be persons of ability, integrity and outstanding calibre having experience and knowledge in the matters relating to technology, governance, law, development, economics, finance, management, public affairs or administration. 

      2016 Bill : Clause 13 provides that the Chairperson and Members of the Authority shall be persons of ability and integrity having experience and knowledge of at least ten years in matters relating to technology, governance, law, development, economics, finance, management, public affairs or administration.

       

4. Term of office and other conditions of service of Chairperson.

2010 Bill: The proviso to Clause 14(1) stated that the Chairperson of the Unique Identification Authority of India, appointed before the commencement of this Act by notification A-43011/02/2009-Admn.I (Vol.II) dated the 2nd July, 2009, shall continue as Chairperson of the Authority for the term for which he had been appointed. Clause 14(4) prohibited the Chairperson from holding any other office during the period of holding his office in the Authority. The proviso to clause 14(5) stated that the salary, allowances and other terms and conditions of service of the Chairperson shall not be varied to his disadvantage after his appointment.

      2016 Bill : These provisions have not been included in the Bill.

       

5. Removal of Chairperson and Members

      2010 Bill:  Clause 15 (2) stated that unless a reasonable opportunity of being heard has been duly provided, the Chairperson or a Member shall not be removed under clauses (d) or (e) of sub-section (1).

2016 Bill: Clause 15(2) states that unless a reasonable opportunity of being heard has been duly provided, the Chairperson or a Member shall not be removed under clauses (b), (d) or (e) of sub-section (1).

       

6. Restrictions on Chairperson or Members on employment after cessation of office

      2010 Bill: Clause 16 (a) provided that the Chairperson or a member, who ceases to hold office, shall not accept any employment in, or connected with the management or administration of, any person which has been associated with any work under the Act, for a period of three years from the date on which they cease to hold office, without previous approval of the Central Government. 

The proviso to this clause stated that this provision shall not apply to any employment under the Central Government, State Government, local authority, any statutory authority or any corporation established by or under any Central, State or provincial Act, or a Government Company as defined in section 617 of the Companies Act, 1956.

      2016 Bill: Clause 16 (a) provides that the Chairperson or a member, who ceases to hold office, shall not accept any employment in, or connected with the management of any organisation, company or any other entity which has been associated with any work done or contracted out by the Authority (whether directly or indirectly), during his tenure as Chairperson or Member, as the case may be, for a period of three years from the date on which he ceases to hold office, without previous approval of the Central Government. 

The proviso to this clause states that this provision shall not apply to any employment under the Central Government, State Government, local authority, any statutory authority or any corporation established by or under any Central, State or provincial Act, or a Government Company as defined in clause (45) of section 2 of the Companies Act, 2013.

       

7. Functions of Chairperson

2010 Bill: Clause 17 of the Bill provided that the Chairperson shall have powers of general superintendence and direction in the conduct of the affairs of the Authority, preside over the meetings of the Authority, and exercise and discharge such other powers and functions of the Authority as prescribed, without prejudice to any of the provisions of the Act.

      2016 Bill : Clause 17 of the Bill states that the Chairperson shall preside over the meetings of the Authority, and exercise and discharge such other powers and functions of the Authority as prescribed, without prejudice to any of the provisions of the Act.

       

8. Chief Executive Officer

2010 Bill: Clause 20(1) of the Bill stated that a chief executive officer, not below the rank of Additional Secretary to the Government of India, who shall be the Member-Secretary of the Authority, shall be appointed by the Central Government.

2016 Bill: Clause 18(1) states that a chief executive officer, not below the rank of Additional Secretary to the Government of India, shall be appointed by the Central Government. In the list of the officer's responsibilities, clause 18(2)(e) additionally provides for performing such other functions, or exercising such other powers, as may be specified by regulations.

       

9. Meetings

      2010 Bill: Clause 18 (4) provided that all decisions of the Authority shall be authenticated by the signature of the Chairperson or any other Member who is authorised by the Authority for this purpose.

2016 Bill: Clause 19(4) provides that all decisions of the Authority shall be signed by the Chairperson, any other Member or the Member-Secretary authorised by the Authority.

       

10. Vacancies, etc., not to invalidate proceedings of Authority

2010 Bill: Clause 19(b) of the Bill stated that no act or proceeding of the Authority shall be invalid merely by reason of any defect in the appointment of a person as a Member of the Authority.

2016 Bill: Clause 20(b) of the Bill states that no act or proceeding of the Authority shall be invalid merely by reason of any defect in the appointment of a person as Chairperson or Member of the Authority.

       

11. Powers and functions of Authority

       Clause 23 (2) (k)

      2010 Bill: Clause 23 (2) (k) provided that the powers and functions of the Authority may include sharing the information of Aadhaar number holders, with their written consent, with such agencies engaged in delivery of public benefits and public services as the Authority may by order direct, in a manner as specified by regulations. 

      2016 Bill : Clause 23 (2) (k) provides that the powers and functions of the Authority may include sharing the information of Aadhaar number holders, subject to the provisions of this Act.

       

      Clause 23 (2) (r) 

      2010 Bill : Clause 23 (2) (r) stated that the powers and functions of the Authority may include specifying, by regulation, the policies and practices for Registrars, enrolling agencies and other service providers.

      2016 Bill : Clause 23 (2) (r) states that the powers and functions of the Authority may include evolving of, and specifying, by regulation, the policies and practices for Registrars, enrolling agencies and other service providers.

       

      • Grants, Accounts and Audit and Annual Report

       

2010 Bill: Clause 25 provided that the fees or revenue collected by the Authority shall be credited to the Consolidated Fund of India, and the entire amount so credited shall be transferred to the Authority.

2016 Bill: Clause 25 states that the fees or revenue collected by the Authority shall be credited to the Consolidated Fund of India.

       

      • Identity Review Committee

       

      2010 Bill: Clause 28 of the Bill provided for establishment of the Identity Review Committee, consisting of three members (including the chairperson) who are persons of eminence, ability, integrity and having knowledge and experience in the fields of technology, law, administration and governance, social service, journalism, management or social sciences. Clause 29 of the Bill enlisted several functions to be undertaken by the Review Committee so constituted.

      2016 Bill: These provisions have been removed from the Bill.

       

      • Protection of Information

       

      1. Security and confidentiality of information

      2010 Bill: Clause 30 (2) of the Bill stated that the Authority shall take measures (including security safeguards) to ensure security and protection of information in possession/control of the Authority (including information stored in the Central Identities Data Repository), against any loss, unauthorised access, use or unauthorised disclosure of the same.

      2016 Bill : Clause 28 (3) states that  the Authority shall take measures to ensure security and protection of information in possession/control of the Authority (including information stored in the Central Identities Data Repository), against access, use or disclosure not permitted under this Act or regulations made thereunder, and against accidental or intentional destruction, loss or damage.

A new provision, clause 28(4), states that the Authority shall undertake the following additional measures for protection of information:

      (a) adopt and implement appropriate technical and organisational security measures,

      (b) ensure that the agencies, consultants, advisors or other persons appointed or engaged for performing any function of the Authority under this Act, have in place appropriate technical and organisational security measures for the information, and

      (c) ensure that the agreements or arrangements entered into with such agencies, consultants, advisors or other persons, impose obligations equivalent to those imposed on the Authority under this Act, and require such agencies, consultants, advisors and other persons to act only on instructions from the Authority.

       

2. Restriction on sharing information

      2010 Bill: The Bill did not provide for restrictions on sharing of information.

      2016 Bill: This new provision under Clause 29 states that no core biometric information, collected or created under this Act, shall be—

      (a) shared with anyone for any reason whatsoever; or

      (b) used for any purpose other than generation of Aadhaar numbers and authentication under this Act.

Also, the identity information, other than core biometric information, collected or created under this Act may be shared only in accordance with the provisions of this Act, as specified under regulations.

      Clause 29 (3) prohibits usage of identity information available with a requesting entity for any purpose, other than that specified to the individual at the time of submitting any identity information for authentication, or disclosed further, except with the prior consent of the individual to whom such information relates.

Clause 29(4) prohibits publication, display or public posting of the Aadhaar number or core biometric information collected or created under this Act in respect of an Aadhaar number holder, except for such purposes as may be prescribed by law.

       

3. Biometric information deemed to be sensitive personal information.

       2010 Bill: The Bill did not contain provisions stating that the biometric information shall be deemed to be sensitive personal information for the purpose of this Act. 

2016 Bill: Clause 30 states that biometric information collected and stored in electronic form shall be deemed to be an “electronic record” and “sensitive personal data or information”, and the provisions contained in the Information Technology Act, 2000 and the rules made thereunder shall apply to such information, to the extent not in derogation of the provisions of this Act.

The Explanation defines:

(a) “electronic form” as defined under section 2(1)(r) of the Information Technology Act, 2000;

(b) “electronic record” as defined under section 2(1)(t) of the Information Technology Act, 2000; and

(c) “sensitive personal data or information” as defined under clause (iii) of the Explanation to section 43A of the Information Technology Act, 2000.

       


4. Alteration of demographic information or biometric information.

2016 Bill: Clause 31(4) prohibits alteration of identity information in the Central Identities Data Repository, except in the manner provided in this Act or regulations made thereunder.

       

5. Access to own information and records of requests for authentication.

      2016 Bill : Clause 32 (3) provides that the Authority shall not collect, keep or maintain any information about the purpose of authentication, either by itself or through any entity under its control.

       

6. Disclosure of information in certain cases

2010 Bill: Clause 33 created an exception permitting disclosure of information in certain cases: disclosure (including identity information or details of authentication) made pursuant to an order of a competent court; or disclosure (including identity information) made in the interests of national security, in pursuance of directions issued by an officer not below the rank of Joint Secretary or equivalent in the Central Government, specifically authorised in this behalf by an order of the Central Government.

2016 Bill: Clause 33 creates an exception permitting disclosure of information in certain cases: disclosure (including identity information or details of authentication) made pursuant to an order of a court not inferior to that of a District Judge (provided that the court order shall be made only after giving an opportunity of hearing to the Authority); or disclosure (including identity information or authentication records) made in the interests of national security, in pursuance of directions issued by an officer not below the rank of Joint Secretary to the Government of India, authorised in this behalf by an order of the Central Government.

      The proviso to Clause 33 (2) states that every direction so issued shall be reviewed by an Oversight Committee consisting of the Cabinet Secretary and the Secretaries to the Government of India in the Department of Legal Affairs and the Department of Electronics and Information Technology, before it takes effect.

      The second proviso states that any such direction so issued shall be valid for a period of three months from the date of its issue, which may be extended for a further period of three months after the review by the Oversight Committee.

       

      • Offences and Penalties

       

      1. Penalty for impersonation at time of enrolment. 

      2010 Bill: The penalty for impersonation was prescribed under Clause 34  as imprisonment for a term which may extend to three years and fine which may extend to ten thousand rupees.

2016 Bill: The penalty for impersonation is prescribed under Clause 34 as imprisonment for a term which may extend to three years, or a fine which may extend to ten thousand rupees, or both.

       

2. Penalty for unauthorised access to the Central Identities Data Repository

2010 Bill: Clause 38(g) stated that any person who, not being authorised by the Authority, provides any assistance to any person to do any of the acts mentioned under sub-clauses (a)-(f) shall be punishable. Anyone not authorised by the Authority who performs any activity listed under (a)-(i) shall be punishable with imprisonment for a term which may extend to three years, and shall be liable to a fine which shall not be less than one crore rupees.

2016 Bill: Clause 38(g) states that any person who, not being authorised by the Authority, reveals any information in contravention of section 28(5), or shares, uses or displays information in contravention of section 29, or assists any person in any of the acts mentioned under sub-clauses (a)-(f), shall be punishable. Anyone not authorised by the Authority who performs any activity listed under (a)-(i) shall be punishable with imprisonment for a term which may extend to three years, and shall be liable to a fine which shall not be less than ten lakh rupees. Additionally, the Explanation states that the expression “computer source code” shall have the meaning assigned to it in the Explanation to section 65 of the Information Technology Act, 2000.

       

3. Penalty for unauthorised use by requesting entity and non-compliance with intimation requirements

      2010 Bill: Clause 40 of the Bill prescribed penalty for manipulating biometric information and stated that a person who gives/attempts to give any biometric information which does not pertain to him for the purpose of getting an Aadhaar number, authentication or updating his information, shall be punishable with imprisonment for a term which may extend to three years or with a fine which may extend to ten thousand rupees or with both.

2016 Bill: Clause 40 prescribes a penalty for a person who, being a requesting entity, uses the identity information of an individual in contravention of clause 8(3): imprisonment which may extend to three years, or a fine which may extend to ten thousand rupees (in the case of a company, a fine which may extend to one lakh rupees), or both. Clause 41 of the Bill states that whoever, being an enrolling agency or a requesting entity, fails to comply with the requirements of clause 3(2) (the list of details to be informed to the individual undergoing enrolment) or clause 8(3) (the details to be informed to the individual submitting identity information for authentication), shall be punishable with imprisonment which may extend to one year, or a fine which may extend to ten thousand rupees (in the case of a company, a fine which may extend to one lakh rupees), or both.

       

      1. General Penalty

      2010 Bill: For an offence committed under the Act or rules made thereunder, for which no specific penalty was provided, the penalty was prescribed as imprisonment for a term which may extend to three years, or fine as prescribed.

2016 Bill: For an offence committed under the Act or rules made thereunder, for which no specific penalty is provided, the penalty is prescribed as imprisonment for a term which may extend to one year, or a fine as prescribed.

       

      • Miscellaneous

       

      1. Power of Central Government to supersede Authority.

2010 Bill: Clause 47(1)(c) stated that if at any time the Central Government is of the opinion that such circumstances exist which render it necessary in the public interest to supersede the Authority, it may do so in the manner prescribed under this provision.

2016 Bill: Clause 48(1)(c) states that if at any time the Central Government is of the opinion that a public emergency exists, then the Central Government may supersede the Authority, in the manner prescribed under this provision.

       

      1. Power to remove difficulties.

2010 Bill: The proviso to Clause 56(1) stated that no order by the Central Government, which may appear necessary to remove a difficulty in giving effect to the provisions of this Act, shall be made under this section after the expiry of two years from the commencement of this Act.

2016 Bill: The proviso to Clause 58(1) states that no order by the Central Government, which may appear necessary to remove a difficulty in giving effect to the provisions of this Act, shall be made under this section after the expiry of three years from the commencement of this Act.

       

      1. Savings

      2010 Bill: Clause 57 provided that any action taken by the Central Government under the Resolution of the Government of India, Planning Commission bearing notification number A-43011/02/ 2009-Admin.I, dated the 28th January, 2009, shall be deemed to have been done or taken under the corresponding provisions of this Act.

2016 Bill: Clause 59 states that any action taken by the Central Government under the Resolution of the Government of India, Planning Commission bearing notification number A-43011/02/2009-Admin.I, dated the 28th January, 2009, or by the Department of Electronics and Information Technology under the Cabinet Secretariat Notification bearing notification number S.O. 2492(E), dated the 12th September, 2015, as the case may be, shall be deemed to have been validly done or taken under this Act.

       

      • Statement of Objects and Reasons

       

2010 Bill: The Bill stated that the Central Government decided to issue unique identification numbers to all residents in India, which involves collection of demographic as well as biometric information. The Unique Identification Authority of India was constituted as an executive body by the Government, vide its notification dated the 28th January, 2009. The Bill listed several issues with the issuance of unique identification numbers which should be addressed by law and attract penalties, such as security and confidentiality of information, imposition of an obligation of disclosure of the information so collected in certain cases, impersonation at the time of enrolment, unauthorised access to the Central Identities Data Repository, manipulation of biometric information, investigation of certain acts constituting offences, and unauthorised disclosure of the information collected for the purposes of issuance of the numbers. To make the said Authority a statutory one, the National Identification Authority of India Bill, 2010 was proposed, to establish the National Identification Authority of India to issue identification numbers and authenticate Aadhaar numbers, so as to facilitate access by individuals to the benefits and services to which they are entitled, and for matters connected therewith or incidental thereto. Apart from the above purposes, the Bill also sought to provide for the Authority to exercise the powers and discharge the functions so prescribed; to ensure that the Authority does not require any individual to give information pertaining to his race, religion, caste, tribe, ethnicity, language, income or health; to allow it to engage entities to establish and maintain the Central Identities Data Repository and to perform any other functions as may be specified by regulations; to constitute the Identity Review Committee; and to take measures to ensure that the information in the possession or control of the Authority is secured and protected against any loss, unauthorised access or use, or unauthorised disclosure thereof.

2016 Bill: The Bill states that correct identification of targeted beneficiaries for the delivery of subsidies, services, grants, benefits, etc. has become a challenge for the Government and has proved to be a major hindrance to the successful implementation of these programmes. In the absence of a credible system to authenticate the identity of beneficiaries, it is difficult to ensure that subsidies, benefits and services reach the intended beneficiaries. The Unique Identification Authority of India was established by a resolution of the Government of India, Planning Commission vide notification number A-43011/02/2009-Admin.I, dated the 28th January, 2009, to lay down policies and implement the Unique Identification Scheme of the Government, by which residents of India were to be provided a unique identity number. Upon successful authentication, this number would serve as proof of identity for identification of beneficiaries for transfer of benefits, subsidies, services and other purposes. With increased use of the Aadhaar number, steps to ensure the security of such information need to be taken, and offences pertaining to certain unlawful actions created. It has been felt that the processes of enrolment, authentication, security, confidentiality and use of Aadhaar-related information must be made statutory.
For this purpose, the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 seeks to provide for: the issuance of Aadhaar numbers to individuals on their providing demographic and biometric information to the Unique Identification Authority of India; requiring Aadhaar numbers for identifying an individual for the delivery of benefits, subsidies and services; authentication of Aadhaar numbers; establishment of the Unique Identification Authority of India; maintenance and updating of the information of individuals in the Central Identities Data Repository; measures pertaining to the security, privacy and confidentiality of information in the possession or control of the Authority, including information stored in the Central Identities Data Repository; and offences and penalties for contravention of the relevant statutory provisions.

       

       

      Aadhaar Bill 2016 & NIAI Bill 2010 - Comparing the Texts

      by Sumandro Chattapadhyay last modified Mar 09, 2016 11:25 AM
This is a quick comparison of the texts of the Aadhaar Bill 2016 and the National Identification Authority of India Bill 2010. The new sections in the former are highlighted, and the deleted sections (that were part of the latter) are struck out.

      The New Aadhaar Bill in Plain English

      by Amber Sinha, Vanya Rakesh and Vipul Kharbanda — last modified Mar 11, 2016 04:41 AM
We have put together a plain English version of the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016.

      The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016

       

      Chapter I. PRELIMINARY

       

      Section 1

1. This Act is called the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016.

2. It will be applicable in the whole of India (except the state of Jammu and Kashmir).

      3. It will become applicable on a date to be notified by the Central Government.

       

      Section 2

      1. “Aadhaar number” is the identification number issued to an individual under the Act;

      2. “Aadhaar number holder” is the person who has been given an Aadhaar number;

      3. “authentication” is the process of verifying the Aadhaar number, demographic information and biometric information of any person by the Central Identities Data Repository (CIDR);

      4. “authentication record” is the record of the authentication which will contain the identity of the requesting entity and the response of the CIDR;

      5. “Authority”  or “UIDAI” refers to the Unique Identification Authority of India established under this Act;

      6. “benefit” means any relief or payment which may be notified by the Central Government;

7. “biometric information” means photograph, fingerprint, iris scan, or any other biological attributes specified by regulations;

      8. “Central Identities Data Repository” or “CIDR” means a centralised database containing all Aadhaar numbers, demographic information and biometric information and other related information;

      9. “Chairperson” means the Chairperson of the UIDAI;

10. “core biometric information” means fingerprint, iris scan, or any biological attributes specified by regulations;

      11. “demographic information” includes information relating to the name, date of birth, address and other relevant information as specified by regulations. This information will not include race, religion, caste, tribe, ethnicity, language, records of entitlement, income or medical history;

      12. “enrolling agency” means an agency appointed by the UIDAI or a Registrar for collecting demographic and biometric information of individuals for issuing Aadhaar numbers;

      13. “enrolment” means the process of collecting demographic and biometric information from individuals for the purpose of issuing Aadhaar numbers;

      14. “identity information” in respect of an individual, includes his Aadhaar number, his biometric information and his demographic information;

      15. “Member” includes the Chairperson and Member of the Authority appointed under section 12;

      16. “notification” means a notification published in the Official Gazette and the expression “notified” with its cognate meanings and grammatical variations will be construed accordingly;

      17. “prescribed” means prescribed by rules made by the Central Government under this Act;

      18. “records of entitlement” means the records of benefits, subsidies or services provided to any individual under any government programme;

      19. “Registrar” means any person authorized by the UIDAI to enroll individuals under the Act;

      20. “regulations” means the regulations made by the UIDAI under this Act;

      21. “requesting entity” means an agency that submits the Aadhaar number and other information of an individual to the CIDR for authentication;

22. “resident” means a person who has resided in India for at least 182 days in the twelve months before the date of application for enrolment;

      23. “service” means any facility or assistance provided by the Central Government in any form;

      24. “subsidy” means any form of aid, support, grant, etc. in cash or kind as notified by the Central Government.

       

      Chapter II. ENROLMENT

       

      Section 3

      1. Every resident is entitled to get an Aadhaar number.

      2. At the time of enrollment, the enrolling agency will inform the individual of the following details—

        1. how their information will be used;

        2. what type of entities the information will be shared with; and

        3. that they have a right to see their information and also tell them how they can see their information.

      3. After collecting and verifying the information given by the individuals, the UIDAI will issue an Aadhaar number to each individual.

       

      Section 4

      1. Once an Aadhaar number has been issued to a person, it will not be re-assigned to any other person.

      2. An Aadhaar number will be a random number and will not contain any attributes or identity of the Aadhaar number holder.

3. If adopted by a service provider, an Aadhaar number may be accepted as proof of identity of the person.

       

      Section 5

      The UIDAI will take special measures to issue Aadhaar number to women, children, senior citizens, persons with disability, unskilled and unorganised workers, nomadic tribes or to such other persons who do not have any permanent residence and similar categories of individuals.

       

      Section 6

      The UIDAI may require Aadhaar number holders to update their Aadhaar information, so that it remains accurate.

       

      Chapter III. AUTHENTICATION

       

      Section 7

As a condition for receiving a subsidy, benefit or service for which the expenditure is incurred from the Consolidated Fund of India, the Government may require that a person be authenticated, or give proof of holding an Aadhaar number, to establish his/her identity. If a person does not have an Aadhaar number, he/she should make an application for enrolment. If an Aadhaar number is not assigned, the person will be offered viable and alternate means of identification for receiving the subsidy, benefit or service.

       

      Section 8

1. The UIDAI will authenticate the Aadhaar information of people as per the conditions prescribed by the government and may also charge a fee for doing so.

2. Any requesting entity will— (a) take consent from the individual before collecting his/her Aadhaar information; (b) use the information only for authentication with the CIDR;

3. The entity requesting authentication will also inform the individual of the following— (a) what type of information will be shared for authentication; (b) what the information will be used for; and (c) whether there is any alternative to submitting the Aadhaar information to the requesting entity.

      4. The UIDAI will respond to the authentication request with yes, no, or other appropriate response and share identity information about the Aadhaar number holder but not share any biometric information.

       

      Section 9

      The Aadhaar number or its authentication will not be a proof of citizenship or domicile.

       

      Section 10

      The UIDAI may engage any number of entities to establish and maintain the CIDR and to perform any other functions specified by the regulations.


      Chapter IV. UNIQUE IDENTIFICATION AUTHORITY OF INDIA


      Section 11

      1. The UIDAI will be established by the Central Government to be responsible for the processes of enrolment and authentication of Aadhaar numbers.

      2. The UIDAI will be a body corporate with the power to buy and sell property, to enter into contracts and to sue or be sued.

      3. The head office of the UIDAI will be in New Delhi.

      4. The UIDAI may establish its offices at other places in India.

      Section 12

The UIDAI will have a Chairperson, two part-time Members and a chief executive officer, who will be appointed by the Central Government.

      Section 13

The Chairperson and Members will be competent people with at least 10 years' experience and knowledge in technology, governance, law, development, economics, finance, management, public affairs or administration.

      Section 14

      1. The Chairperson and the Members will be appointed for 3 years and can be re-appointed after their term. But no Member or Chairperson will be more than 65 years of age.

      2. The Chairperson and Members will take an oath of office and of secrecy.

      3. The Chairperson or Member may— (a) resign from office, by giving an advance written notice of at least 30 days; or (b) be removed from his office because she/he gets disqualified on any of the grounds mentioned in section 15.

4. The salaries and allowances of the Chairperson and Members will be as prescribed by the government.

      Section 15

      1. The Central Government may remove a Chairperson or Member, who—
        (a) has gone bankrupt;
        (b) is physically or mentally unable to do his/her job;
        (c) has been convicted of an offence involving moral turpitude;
        (d) has a financial conflict of interest in performing his/her functions; or
        (e) has abused his/her position so that the government needs to remove him/her in public interest.

      2. The Chairperson or a Member will be given a chance to present his/her side of the story before being removed, unless he/she is being removed on the grounds of bankruptcy or criminal conviction.

      Section 16

      An Ex-Chairperson or Ex-Member will have to take the approval of the Central Government,—

      1. to accept any job in any entity (other than a government organization) which was associated with any work done for the UIDAI while that person was a Chairperson or Member, for a period of three years after ceasing to hold office;

      2. to act or advise any entity on any particular transaction for which that person had provided advice to the UIDAI while he/she was the Chairperson or a Member;

      3. to give advice to any person using information which was obtained as the Chairperson or a Member which is not available to the public in general; or

      4. to accept any offer of employment or appointment  as a director of any company with which he/she had direct and significant official dealings during his/her term of office, for a period of three years.

      Section 17

      The Chairperson will preside over the meetings of the UIDAI and have the powers and perform the functions of the UIDAI.

      Section 18

      1. The chief executive officer (CEO) of the UIDAI will not be below the rank of Additional Secretary to the Government of India.

      2. The chief executive officer will be responsible for— (a) the day-to-day administration of the UIDAI; (b) implementing the programmes and decisions of the UIDAI; (c) making proposals for the UIDAI; (d) preparation of the accounts and budget of the UIDAI; and (e) performing any other functions prescribed in the regulations.

      3. The CEO will annually submit the following things to the UIDAI for its approval — (a) a general report covering all the activities of the Authority in the previous year; (b) programmes of work; (c) the annual accounts for the previous year; and (d) the budget for the coming year.

      4. The CEO will have administrative control over the officers and other employees of the Authority.


      Section 19

      1. The time and place of the meetings of the UIDAI and the rules and procedures of those meetings will be prescribed by regulations.

2. The meetings will be presided over by the Chairperson and, in his/her absence, by the senior-most Member of the UIDAI.

      3. All decisions at the meetings of the UIDAI will be taken by a majority vote. In case of a tie, the person presiding the meeting will have the casting vote.

      4. All decisions of the UIDAI will be signed by the Chairperson or any other Member or the Member-Secretary authorised by the UIDAI in this behalf.

5. Any Member who is a director of a company and, because of this, has a financial interest in a matter coming up for consideration at a meeting should disclose that financial interest and not take any further part in the discussion and decision on that matter.

      Section 20

No action or proceeding of the UIDAI will become invalid merely because of—

      1. any vacancy in, or any defect in the constitution of, the UIDAI;

      2. any defect in the appointment of a person as Chairperson or Member of the Authority; or

      3. any irregularity in the procedure of the Authority not affecting the merits of the case.

       

      Section 21

      1. The UIDAI, with the approval of the Government, can decide on the number and types of officers and employees that it would require.

2. The salaries and allowances of the employees, officers and chief executive officer will be as prescribed by the government.

      Section 22.

Once the UIDAI is established—

      1. all the assets and liabilities of the existing Unique Identification Authority of India, established by the Government of India through notification dated the 28th January, 2009, will stand transferred to the new UIDAI.

2. all data and information collected during enrolment, and all details of authentication performed, by the existing Unique Identification Authority of India will be deemed to have been collected or done by the UIDAI. All debts and liabilities incurred and all contracts entered into by the existing Unique Identification Authority of India will be deemed to have been incurred or entered into by the UIDAI;

      3. all money due to the existing Unique Identification Authority of India will be deemed to be due to the UIDAI; and

      4. all suits and other legal proceedings instituted by or against such Unique Identification Authority of India may be continued by or against the UIDAI.

      Section 23

      The UIDAI will develop the policy, procedure and systems for issuing Aadhaar numbers to individuals and perform their authentication. The powers and functions of the UIDAI include—

      1. specifying the demographic information and biometric information required for enrolment and the processes for collection and verification of that information;

      2. collecting demographic information and biometric information from people seeking Aadhaar numbers;

      3. appointing of one or more entities to operate the CIDR;

      4. generating and assigning Aadhaar numbers to individuals;

      5. performing authentication of Aadhaar numbers;

      6. maintaining and updating the information of individuals in the CIDR;

      7. omitting and deactivating an Aadhaar number;

      8. specifying the manner of use of Aadhaar numbers for the purposes of providing or availing of various subsidies and other purposes for which Aadhaar numbers may be used;

      9. specifying the terms and conditions for appointment of Registrars, enrolling agencies and service providers and revocation of their appointments;

      10. establishing, operating and maintaining of the CIDR;

      11. sharing the information of Aadhaar number holders;

      12. calling for information and records, conducting inspections, inquiries and audit of the operations of the CIDR, Registrars, enrolling agencies and other agencies appointed under this Act;

      13. specifying processes relating to data management, security protocols and other technology safeguards under this Act;

      14. specifying the conditions/procedures for issuance of new Aadhaar number to existing Aadhaar number holder;

      15. levying and collecting the fees or authorising the Registrars, enrolling agencies or other service providers to collect fees for the services provided by them under this Act;

      16. appointing committees necessary to assist the Authority in discharge of its functions;

      17. promoting research and development for advancement in biometrics and related areas;

      18. making and specifying policies and practices for Registrars, enrolling agencies and other service providers;

      19. setting up facilitation centres and grievance redressal mechanisms;

      20. other powers and functions as prescribed.

      The Authority may,— (a) enter into agreements with various state governments and Union Territories for collecting, storing, securing or processing of information or delivery of Aadhaar numbers to individuals or performing authentication; (b) appoint Registrars, engage and authorize agencies to collect, store, secure, process information or do authentication or perform other functions under this Act. The Authority may engage consultants, advisors and other persons required for efficient discharge of its functions.

      Chapter V. GRANTS, ACCOUNTS AND AUDIT AND ANNUAL REPORT

       

      Section 24

      The Central Government may grant money to the UIDAI as it may decide, upon due appropriation by Parliament.

       

      Section 25

Fees/revenue collected by the UIDAI will be credited to the Consolidated Fund of India.

       

      Section 26

1. The UIDAI will prepare an annual statement of accounts in the format prescribed by the Central Government.

2. The Comptroller and Auditor-General will audit the accounts of the UIDAI annually at intervals decided by him, at the UIDAI’s expense.

      3. The Comptroller and Auditor-General or his appointees will have the same powers of audit they usually have to audit Government accounts.

      4. The UIDAI will forward the statement of accounts certified by the Comptroller and Auditor-General and the audit report, to the Central Government who will lay it before both houses of Parliament.

       

      Section 27

      1. The UIDAI will provide returns, statements and particulars as sought, to the Central Government, as and when required.

      2. The UIDAI will prepare an annual report containing the description of work for previous years, annual accounts of previous year, and the programmes of work for coming year.

      3. The copy of the annual report will be laid before both houses of Parliament by the Central Government.

       

      Chapter VI. PROTECTION OF INFORMATION

       

      Section 28

      1. The UIDAI will ensure the security and confidentiality of identity information and authentication records.

      2. The UIDAI will take measures to ensure that all information with the UIDAI, including CIDR records is secured and protected against access, use or disclosure and against destruction, loss or damage.

      3. The UIDAI will adopt and implement appropriate technical and organisational security measures, and ensure the same are imposed through agreements/arrangements with its agents, consultants, advisors or other persons.

      4. Unless otherwise provided, the UIDAI or its agents will not reveal any information in the CIDR to anyone.

5. An Aadhaar number holder may request the UIDAI to provide access to his information (excluding the core biometric information) as per the regulations specified.

       

      Section 29

1. The core biometric information collected will not be a) shared with anyone for any reason, or b) used for any purpose other than generation of Aadhaar numbers and authentication.

      2. Identity information, other than core biometric information, may be shared only as per this Act and regulations specified under it.

      3. Identity information available with a requesting entity will not be used for any purpose other than what is specified to the individual, nor will it be shared further without the individual’s consent.

      4. Aadhaar numbers or core biometric information will not be made public except as specified by regulations.

       

      Section 30

      All biometric information collected and stored in electronic form will be deemed to be “electronic record” and “sensitive personal data or information” under Information Technology Act, 2000 and its provisions and rules will apply to it in addition to this Act.

       

      Section 31

      1. If the demographic or biometric information about any Aadhaar number holder changes, is lost or is found to be incorrect, they may request the UIDAI to make changes to their record in the CIDR, as necessary.

      2. The identity information in the CIDR will not be altered, except as provided in this Act.

       

      Section 32

      1. The UIDAI will maintain the authentication records in the manner and for as long as specified by regulations.

      2. Every Aadhaar number holder may obtain his authentication record as specified by regulations.

      3. The UIDAI will not collect, keep or maintain any information about the purpose of authentication.

       

      Section 33

      1. The UIDAI may reveal identity information, authentication records or any information in the CIDR following a court order by a District Judge or higher. Any such order may only be made after UIDAI is allowed to appear in a hearing.

      2. The confidentiality provisions in Sections 28 and 29 will not apply with respect to disclosure made in the interest of national security following directions by a Joint Secretary to the Government of India, or an officer of a higher rank, authorised for this purpose.

3. An Oversight Committee comprising the Cabinet Secretary and the Secretaries of two departments — the Department of Legal Affairs and DeitY — will review every direction issued under point 2 above.

4. Any direction under point 2 above is valid for 3 months, after which it may be extended following a review by the Oversight Committee.

       

      Chapter VII. OFFENCES AND PENALTIES

       

      Section 34

Impersonating or attempting to impersonate another person by providing false demographic or biometric information will be punishable by imprisonment of up to three years, and/or a fine of up to ten thousand rupees.

       

      Section 35

      Changing or attempting to change any demographic or biometric information of an Aadhaar number holder by impersonating another person (or attempting to do so), with the intent of i) causing harm or mischief to an Aadhaar number holder, or ii) appropriating the identity of an Aadhaar number holder, is punishable with imprisonment up to three years and fine up to ten thousand rupees.

       

      Section 36

Collection of identity information by a person not authorised under this Act, by pretending to be so authorised, is punishable with imprisonment up to three years or a fine up to ten thousand rupees (in case of an individual), and a fine up to one lakh rupees (in case of a company).

       

      Section 37

      Intentional disclosure or dissemination of identity information, to any person not authorised under this Act, or in violation of any agreement entered into under this Act, will be punishable with imprisonment up to three years or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).

       

      Section 38

      The following intentional acts, when not authorised by the UIDAI, will be punishable with imprisonment up to three years and a fine not less than ten lakh rupees:

      1. accessing or securing access to the CIDR;

      2. downloading, copying or extracting any data from the CIDR;

      3. introducing or causing any virus or other contaminant into the CIDR;

      4. damaging or causing damage to the data in the CIDR;

      5. disrupting or causing disruption to access to CIDR;

6. denying an authorised person access to the CIDR;

7. revealing information in breach of point 4 of Section 28, or of Section 29;

      8. destruction, deletion or alteration of any files in the CIDR;

      9. stealing, destruction, concealment or alteration of any source code used by the UIDAI.

       

      Section 39

Tampering with data in the CIDR or a removable storage medium, with the intention to modify or discover information relating to an Aadhaar number holder, will be punishable with imprisonment up to three years and a fine up to ten thousand rupees.

       

      Section 40

      Use of identity information in violation of Section 8 (3) by a requesting entity will be punishable with imprisonment up to three years and/or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).


      Section 41

      Violation of Section 8 (3) or Section 3 (2) by a requesting entity or enrolling agency will be punishable with imprisonment up to one year and/or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).

       

      Section 42

      Any offence against this Act or regulations made under it, for which no specific penalty is provided, will be punishable with imprisonment of up to one year and/or a fine of up to twenty-five thousand rupees (in the case of an individual) or up to one lakh rupees (in the case of a company).

       

      Section 43

      1. In case of an offence under this Act committed by a company, all persons in charge of and responsible for the conduct of the company will also be held guilty and liable for punishment, unless they can prove lack of knowledge of the offence or that they had exercised all due diligence to prevent it.

      2. In case an offence is committed by a company with the consent or connivance of, or due to the neglect of, a director, manager, secretary or other officer of the company, they will also be held guilty of the offence.

       

      Section 44

      This Act will also apply to offences committed outside of India by any person, irrespective of their nationality, if the offence involves any data in the CIDR.

       

      Section 45

      Offences under this Act will not be investigated by police officers below the rank of Inspector of Police.

       

      Section 46

      Penalties imposed under this Act will not prevent imposition of any other penalties or punishment under any other law in force.

       

      Section 47

      1. Courts will take cognizance of offences under this Act only upon complaint being made by the UIDAI or any officer authorised by it.

      2. No court inferior to that of a Chief Metropolitan Magistrate or a Chief Judicial Magistrate will try any offence under this Act.

       

      Chapter VIII. MISCELLANEOUS

       

      Section 48

      1. The Central Government has the power to supersede the UIDAI, through a notification, for no longer than six months, in the following circumstances: (i) circumstances beyond the control of the UIDAI; (ii) default by the UIDAI in complying with directions of the Central Government, affecting the financial position of the UIDAI; (iii) public emergency.

      2. Upon publication of the notification, the Chairperson and Members of the UIDAI must vacate office.

      3. Powers, functions and duties will be performed by person(s) authorised by the President.

      4. Properties controlled and owned by UIDAI will vest in the Central Government.

      5. Central Government will reconstitute the UIDAI upon expiration of supersession, with fresh appointment of Chairperson and Members.

       

      Section 49

      The Chairperson, Members, employees, etc. of the UIDAI are deemed to be public servants within the meaning of Section 21 of the Indian Penal Code.

       

      Section 50

      1. The Central Government has the power to issue directions to the UIDAI on questions of policy (whether a question is one of policy is to be decided by the Government), except on technical and administrative matters, and the UIDAI will be bound by them.

      2. The UIDAI will be given an opportunity to express views before direction is given.

       

      Section 51

      The UIDAI may delegate its powers and functions to a Member or officer of the UIDAI.

       

      Section 52

      No suit, prosecution or other legal proceedings will lie against the Central Government, UIDAI, Chairperson, any Member, officer, or other employees of the UIDAI for an act done in good faith.

       

      Section 53

      The Central Government has the power to make Rules for matters prescribed under this provision.

       

      Section 54

      The UIDAI has the power to make regulations for matters prescribed under this provision.

       

      Section 55

      Rules and regulations under this Act must be laid before each House of Parliament for a total period of thirty days; if both Houses agree in making any modification, the Rules will come into effect in the modified form.

       

      Section 56

      Provisions of this Act are in addition to, and not in derogation of any other law currently in effect.

       

      Section 57

      This Act will not prevent use of Aadhaar number for other purposes under law by the State or other bodies.

       

      Section 58

      The Central Government may pass an order to remove a difficulty in giving effect to the provisions of this Act, but not beyond three years from the commencement of this Act.

       

      Section 59

      Action taken by the Central Government under the Resolution of the Government of India setting up the UIDAI, or by the Department of Electronics and Information Technology under the notification bringing the UIDAI under the Ministry of Communications and Information Technology, will be deemed to have been validly done or taken.

       

      STATEMENT OF OBJECTS AND REASONS
      1. Correct identification of targeted beneficiaries for delivery of subsidies, services, grants, benefits, etc. has become a challenge for the Government.

      2. This has proved to be a major hindrance for successful implementation of these programmes.

      3. In the absence of a credible system to authenticate the identity of beneficiaries, it is difficult to ensure that subsidies, benefits and services reach the intended beneficiaries.

      4. The UIDAI was established to lay down policies and implement the Unique Identification Scheme of the Government, under which residents of India were to be provided unique identity numbers.

      5. Upon successful authentication, this number would serve as proof of identity for identification of beneficiaries for transfer of benefits, subsidies, services and other purposes.

      6. With increased use of the Aadhaar number, steps need to be taken to ensure the security of such information, and offences pertaining to certain unlawful actions need to be created.

      7. It has been felt that the processes of enrolment, authentication, security, confidentiality and use of Aadhaar related information must be made statutory.

      8. The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 seeks to provide for: issuance of Aadhaar numbers to individuals upon their providing demographic and biometric information to the UIDAI; requiring Aadhaar numbers for identifying an individual for delivery of benefits, subsidies and services; authentication of the Aadhaar number; establishment of the UIDAI; maintenance and updating of individuals' information in the CIDR; state measures pertaining to the security, privacy and confidentiality of information in the possession or control of the UIDAI, including information stored in the Central Identities Data Repository; and offences and penalties for contravention of the relevant statutory provisions.

       

      An Urgent Need for the Right to Privacy

      by Sumandro Chattapadhyay last modified Mar 17, 2016 07:40 AM
      Along with a group of individuals and organisations from academia and civil society, we have drafted and are signatories to an open letter addressed to the Union government, urging it to "urgently take steps to uphold the constitutional basis to the right to privacy and fulfil its constitutional and international obligations." Here we publish the text of the open letter. Please follow the link below to support it by joining the signatories.

       

      Read and sign the open letter.

       

      Text of the Open Letter

      As our everyday lives are conducted increasingly through electronic communications, the necessity for privacy protections has also increased. While several countries across the globe have recognised this by furthering the right to privacy of their citizens, the Union Government has adopted a regressive attitude towards this core civil liberty. We urge the Union Government to take urgent measures to safeguard the right to privacy in India.

      Our concerns are based on a continuing pattern of disregard for the right to privacy by several governments in the past. This trend has intensified, as can be plainly seen from the following developments.

      In 2015, the Attorney General, in the case of *K.S. Puttaswamy v. Union of India*, argued before the Hon'ble Supreme Court that there is no right to privacy under the Constitution of India. The Hon'ble Court was persuaded to re-examine the basis of the right to privacy, upsetting 45 years of judicial precedent. This has thrown into doubt the constitutional right to privacy, as well as the several judgments that have been given under it, including the 1997 PUCL telephone tapping judgment. We urge the Union Government to take whatever steps are necessary and to urge the Supreme Court to hold that a right to privacy exists under the Constitution of India.

      Recently, Mr. Arun Jaitley, Minister for Finance, introduced the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016. This Bill was passed on March 11, 2016, in the middle of budget discussions, at short notice, as a money bill in the Lok Sabha, when only 73 of 545 members were present. Its timing and its introduction as a money bill prevented necessary scrutiny, given the large privacy risks that arise under it. This version of the Bill was never put up for public consultation and is being rushed through without adequate discussion. Even substantively, it fails to provide accountable privacy safeguards while making Aadhaar mandatory for availing any government subsidy, benefit or service.

      We urge the Union Government to urgently take steps to uphold the constitutional basis to the right to privacy and fulfil its constitutional and international obligations. We encourage the Government to hold extensive public discussions on the Aadhaar Bill before notifying it. We further call upon it to constitute a drafting committee with members of civil society to draft a comprehensive statute, as suggested by the Justice A.P. Shah Committee Report of 2012.

      Signatories:

      • Amber Sinha, the Centre for Internet and Society
      • Japreet Grewal, the Centre for Internet and Society
      • Joshita Pai, Centre for Communication Governance, National Law University
      • Raman Jit Singh Chima, Access Now
      • Sarvjeet Singh, Centre for Communication Governance, National Law University
      • Sumandro Chattapadhyay, the Centre for Internet and Society
      • Sunil Abraham, the Centre for Internet and Society
      • Vanya Rakesh, the Centre for Internet and Society

       

      Press Release, March 11, 2016: The Law cannot Fix what Technology has Broken!

      by Japreet Grewal and Sunil Abraham — last modified Mar 16, 2016 10:10 AM
      We published and circulated the following press release on March 11, 2016, as the Lok Sabha passed the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016. The Bill was proposed by the Finance Minister, Mr. Arun Jaitley, to give legislative backing to Aadhaar, being implemented by the Unique Identification Authority of India (UIDAI).

       

      The Lok Sabha passed the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 today. The Bill was proposed by the Finance Minister, Mr. Arun Jaitley, to give legislative backing to Aadhaar, being implemented by the Unique Identification Authority of India (UIDAI).

      The Bill was introduced as a money bill and there was no public consultation to evaluate the provisions therein even though there are very serious ramifications for the Right to Privacy and the Right to Association and Assembly. The Bill has made it compulsory for an individual to enrol under Aadhaar in order to receive any subsidy, benefit or service from the Government. Biometric information that is required for the purpose of enrolment has been deemed "sensitive personal information" and restrictions have been imposed on use, disclosure and sharing of such information for purposes other than authentication, disclosure made pursuant to a court order or in the interest of national security. Here, the Bill has acknowledged the standards of protection of sensitive personal information established under Section 43A of the Information Technology Act, 2000. The Bill has also laid down several penal provisions for acts that include impersonation at the time of enrolment, unauthorised access to the Central Identities Data Repository, unauthorised use by requesting entity, noncompliance with intimation requirements, etc.

      Key Issues

      1. Identification without Consent

      Before the Aadhaar project, it was not possible for the Indian government to identify citizens without their consent. But once the government has created a national centralised biometric database, it will be possible for the government to identify any citizen without their consent. High-resolution photography and videography make it trivial for governments, and indeed any other actor, to harvest biometrics remotely. In other words, the technology makes consent irrelevant. A German minister's fingerprints were captured by hackers from photographs of her hand gestures at a conference. In a similar manner, the government can now identify us both as individuals and as groups without requiring our cooperation. This has direct implications for the right to privacy, as we will be under constant government surveillance as CCTV camera resolutions improve, with chilling effects on the right to free speech and the freedom of association. The only way to fix this is to change the technology configuration and architecture of the project. The law cannot be used as a band-aid on badly designed technology.

      2. Fallible Technology

      The technology used for collection and authentication has been said to be fallible. It has been tested and found feasible only for a population of 200 million. The Biometrics Standards Committee of the UIDAI has acknowledged the lack of data on how biometric authentication technology will scale up to a population of about 1.2 billion. Further, a report by 4G Identity Solutions estimates that while in any population approximately 5% of people have unreadable fingerprints, in India this could lead to a failure to enrol up to 15% of the population.

      We know that Aadhaar numbers have been issued to dogs and trees (with the Aadhaar letter containing the photo of a tree). There have been slip-ups in the Aadhaar card enrolment process: some cards have ended up with pictures of an empty chair, a tree or a dog instead of the actual applicants. An RTI application has revealed that the Unique Identification Authority of India (UIDAI) had identified more than 25,000 duplicate Aadhaar numbers in the country till August 2015.

      At the stage of authentication, the accuracy of biometric identification depends on the chance of a false positive: the probability that the identifiers of two persons will match. For the current population of 1.2 billion, the expected proportion of duplicates is 1/121, a ratio which is far too high. In a recent paper in EPW, Hans Mathews, a mathematician with CIS, shows that, as per the UIDAI's own statistics on failure rates, the programme would badly fail to uniquely identify individuals in India. [1]
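      The scaling concern above can be made concrete with a rough back-of-the-envelope calculation. The per-pair false-match rate used below is a purely hypothetical value, chosen only to illustrate how the expected proportion of falsely matched individuals grows with population; the actual UIDAI failure rates are discussed in the EPW paper cited in the endnote.

```python
# Back-of-the-envelope: a deduplication run effectively compares every pair
# of records, so expected false matches ~ (number of pairs) * per-pair rate.

def expected_duplicate_proportion(population: int, pairwise_rate: float) -> float:
    """Approximate expected false-matching pairs per enrolled person."""
    pairs = population * (population - 1) / 2
    return pairs * pairwise_rate / population

# Hypothetical per-pair false-match probability (illustrative only).
RATE = 1.4e-11

print(expected_duplicate_proportion(200_000_000, RATE))    # roughly 0.0014
print(expected_duplicate_proportion(1_200_000_000, RATE))  # roughly 0.0084, near 1/121
```

      The proportion grows roughly linearly with population, which is why a system found workable at 200 million cannot simply be assumed to work at 1.2 billion.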

      Endnote

      [1] See: http://cis-india.org/internet-governance/blog/epw-27-february-2016-hans-varghese-mathews-flaws-in-uidai-process

       

      Press Release, March 15, 2016: The New Bill Makes Aadhaar Compulsory!

      by Amber Sinha — last modified Mar 16, 2016 10:11 AM
      We published and circulated the following press release on March 15, 2016, to highlight the fact that Section 7 of the Aadhaar Bill, 2016 states that authentication of a person using her/his Aadhaar number can be made mandatory for the purpose of disbursement of government subsidies, benefits and services; and that, in case the person does not have an Aadhaar number, s/he will have to apply for Aadhaar enrolment.

       

      Nandan Nilekani, the former chairperson of the Unique Identification Authority of India, had repeatedly stated that Aadhaar is not mandatory. However, in the last few years, various agencies and departments of the government, both at the central and state level, had made it mandatory for availing beneficiary schemes or for salary and provident fund disbursals, promotions, scholarships, opening bank accounts, and marriage and property registrations. In August 2015, the Supreme Court passed an order mandating that the Aadhaar number shall remain optional for welfare schemes, stating that no person should be denied any benefit for the reason of not having an Aadhaar number, barring a few specified services.

      The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016, however, has not followed this mandate. Section 7 of the Bill states that “a person should be authenticated or give proof of the Aadhaar number to establish his/her identity” “as a condition for receiving subsidy, benefit or service”. Further, it reads, “In the case a person does not have an Aadhaar number, he/she should make an application for enrollment.” The language of the provision is very clear in making enrolment in Aadhaar mandatory in order to be entitled to welfare services. Section 7 also says that “the person will be offered viable and alternate means of identification for receiving the subsidy, benefit or service”. However, these unspecified alternate means will be made available only in the event “an Aadhaar number is not assigned”. This language is vague, and it is not clear whether it mandates alternate means of identification for those who choose not to apply for an Aadhaar number for any reason. The fact that it makes applying for an Aadhaar number mandatory for persons without one may lead to the presumption that the alternate means are to be made available only to those who have applied for an Aadhaar number but have not been assigned one for some reason. It is also noteworthy that the draft legislation is silent on what the “viable and alternate means of identification” could be. There are a number of means of identification recognised by the state, and a schedule with an inclusive list could have gone a long way in reducing the ambiguity in this provision.

      Another aspect of Section 7 that is at odds with the Supreme Court order is that it allows making an Aadhaar number mandatory “for receipt of a subsidy, benefit or service for which the expenditure is incurred” from the Consolidated Fund of India. The Supreme Court had been very specific in articulating that having an Aadhaar number could not be made compulsory for “any purpose other than the PDS Scheme and in particular for the purpose of distribution of foodgrains, etc. and cooking fuel, such as kerosene”, or for the purpose of the LPG scheme. The restriction in the Supreme Court order was with respect to welfare schemes; however, instead of specifying the schemes, Section 7 specifies the source of expenditure from which subsidies, benefits and services can be funded, making the scope much broader. Section 7, in effect, allows the Central Government to circumvent the Supreme Court order if it chooses to tie more subsidies, benefits and services to the Consolidated Fund of India.

      These provisions run counter to the government's repeated claims over the last six years that Aadhaar is not compulsory; nor is the Supreme Court's direction restricting the use of Aadhaar to a few services reflected anywhere in the Bill. The “viable and alternate means” clause is too vague and inadequate to prevent denial of benefits to those without an Aadhaar number. The sum effect of these factors is to give the Central Government the power to make Aadhaar mandatory, for all practical purposes.

       

      List of Recommendations on the Aadhaar Bill, 2016 - Letter Submitted to the Members of Parliament

      by Amber Sinha, Sumandro Chattapadhyay, Sunil Abraham, and Vanya Rakesh — last modified Mar 21, 2016 08:50 AM
      On Friday, March 11, the Lok Sabha passed the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016. The Bill was introduced as a money bill, and there was no public consultation to evaluate the provisions therein even though there are very serious ramifications for the Right to Privacy and the Right to Association and Assembly. Based on these concerns, and numerous others, we submitted an initial list of recommendations to the Members of Parliament to highlight the aspects of the Bill that require immediate attention.

       

      Download the submission letter: PDF.

       

      Text of the Submission

      On Friday, March 11, the Lok Sabha passed the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016. The Bill was introduced as a money bill, and there was no public consultation to evaluate the provisions therein even though there are very serious ramifications for the Right to Privacy and the Right to Association and Assembly. The Bill has made it compulsory for all Indians to enrol for Aadhaar in order to receive any subsidy, benefit or service from the Government whose expenditure is incurred from the Consolidated Fund of India. Apart from the issue of centralisation of the national biometric database leading to a deep national vulnerability, the Bill also leaves unaddressed two serious concerns regarding the technological framework concerned:

      • Identification without Consent: Before the Aadhaar project it was not possible for the Indian government or any private entity to identify citizens (and all residents) without their consent. But biometrics allow for non-consensual and covert identification and authentication. The only way to fix this is to change the technology configuration and architecture of the project. The law cannot be used to correct the problems in the technological design of the project.

      • Fallible Technology: The Biometrics Standards Committee of UIDAI has acknowledged the lack of data on how a biometric authentication technology will scale up where the population is about 1.2 billion. The technology has been tested and found feasible only for a population of 200 million. Further, a report by 4G Identity Solutions estimates that while in any population, approximately 5% of the people have unreadable fingerprints, in India it could lead to a failure to enroll up to 15% of the population. For the current Indian population of 1.2 billion the expected proportion of duplicates is 1/121, a ratio which is far too high. [1]

      Based on these concerns, and numerous others, we sincerely request you to ensure that the Bill is rigorously discussed in Rajya Sabha, in public, and, if needed, also by a Parliamentary Standing Committee, before considering its approval and implementation. Towards this, we humbly submit an initial list of recommendations to highlight the aspects of the Bill that require immediate attention:

      1. Implement the Recommendations of the Shah and Sinha Committees: The report by the Group of Experts on Privacy chaired by the Former Chief Justice A P Shah [2] and the report by the Parliamentary Standing Committee on Finance (2011-2012) chaired by Shri Yashwant Sinha [3] have suggested a rigorous and extensive range of recommendations on the Aadhaar / UIDAI / NIAI project and the National Identification Authority of India Bill, 2010, from which the majority of the sections of the Aadhaar Bill, 2016, are drawn. We request that these recommendations be seriously considered and incorporated into the Aadhaar Bill, 2016.

      2. Authentication using the Aadhaar number for receiving government subsidies, benefits, and services cannot be made mandatory: Section 7 of the Aadhaar Bill, 2016, states that authentication of a person using her/his Aadhaar number can be made mandatory for the purpose of disbursement of government subsidies, benefits, and services; and that, in case the person does not have an Aadhaar number, s/he will have to apply for Aadhaar enrolment. This sharply contradicts the claims made earlier by the UIDAI that the Aadhaar number is “optional, and not mandatory”, and, more importantly, the directive given by the Supreme Court (via its order dated August 11, 2015). The Bill must explicitly state that the Aadhaar number is only optional, and that a person without an Aadhaar number cannot be denied any democratic rights, public subsidies, benefits or services, or any private services.

      3. Vulnerabilities in the Enrolment Process: The Bill does not address already documented issues in the enrolment process. In the absence of an exhaustive list of information to be collected, some Registrars are permitted to collect extra and unnecessary information. Also, storage of data for extended periods with enrolment agencies creates security risks. These vulnerabilities need to be prevented through specific provisions. It should also be mandatory for all entities, including the Enrolment Agencies, Registrars, the CIDR and the requesting entities, to shift to secure systems such as PKI-based cryptography to ensure a secure method of data transfer.

      4. Precisely Define and Provide a Legal Framework for Collection and Sharing of Biometric Data of Citizens: The Bill defines “biometric information” to include within its scope “photograph, fingerprint, iris scan, or other such biological attributes of an individual.” This definition gives broad and sweeping discretionary power to the UIDAI / Central Government to increase the scope of the term. The definition should be exhaustive in its scope, so that a legislative act is required to modify it in any way.

      5. Prohibit Central Storage of Biometric Data: Central storage of the sensitive personal information of all residents in one place creates a grave security risk. Even with the most enhanced security measures in place, the quantum of damage in case of a breach is extremely high. Therefore, storage of biometrics must be allowed only on smart cards issued to the residents.

      6. Chain of Trust Model and Audit Trail: As one of the objects of the legislation is to provide targeted services to beneficiaries and reduce corruption, there should be more accountability measures in place. A chain of trust model must be incorporated into the process of enrolment, where individuals and organisations vouch for individuals, so that when a ghost is introduced someone can be held accountable and blame is not placed simply on the technology. This is especially important in light of the questions already raised about the deduplication technology. Further, there should be a transparent audit trail that allows public access to the use of Aadhaar, for combating corruption in the supply chain.

      7. Rights of Residents: There should be specific provisions dealing with cases where an individual is not issued an Aadhaar number or is denied access to benefits due to any other factor. Additionally, the Bill should make provisions for residents to access and correct information collected from them, and to be notified of data breaches and of legal access to their information by the Government or its agencies, as a matter of right. Further, along with the obligations in Section 8, it should also be mandatory for all requesting entities to notify individuals of any changes in privacy policy and to provide a mechanism to opt out.

      8. Establish Appropriate Oversight Mechanisms: Section 33 currently specifies a procedure for oversight by a committee; however, no substantive provisions are laid down to act as guiding principles for such oversight mechanisms. The provision should include data minimisation and the principles of necessity and proportionality as guiding principles for any exceptions to Section 29.

      9. Establish Grievance Redressal and Review Mechanisms: Currently, there is no grievance redressal mechanism created under the Bill. The power to set up such a mechanism is delegated to the UIDAI under Section 23 (2) (s) of the Bill. However, making the entity administering a project also responsible for the framework that addresses grievances arising from that project severely compromises the independence of the grievance redressal body. An independent national grievance redressal body, with state- and district-level bodies under it, should be set up. Further, the NIAI Bill, 2010, provided for establishing an Identity Review Committee to monitor the usage pattern of Aadhaar numbers. This has been removed in the Aadhaar Bill, 2016, and must be restored.
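      Recommendation 3 above calls for PKI-based cryptography to secure data transfer between enrolment agencies, Registrars and the CIDR. As a minimal sketch of what public-key signing buys, the toy example below uses textbook RSA with deliberately tiny, hand-picked numbers; a real deployment would use a vetted cryptographic library with 2048-bit or larger keys. The record contents and function names here are illustrative assumptions, not anything specified in the Bill.

```python
import hashlib

# Toy textbook-RSA parameters (illustration only: real PKI uses 2048+ bit
# keys generated by a vetted cryptographic library, never numbers like these).
P, Q = 61, 53
N = P * Q        # public modulus (3233)
E = 17           # public exponent
D = 2753         # private exponent: (E * D) % ((P - 1) * (Q - 1)) == 1

def digest(data: bytes) -> int:
    """Hash the record and reduce it into the toy modulus."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign(data: bytes) -> int:
    """Sender (e.g. an enrolment agency) signs with its private key."""
    return pow(digest(data), D, N)

def verify(data: bytes, signature: int) -> bool:
    """Receiver (e.g. the CIDR) checks using only the public key (N, E)."""
    return pow(signature, E, N) == digest(data)

record = b"enrolment packet: demographic + biometric payload"
sig = sign(record)
print(verify(record, sig))  # True: the record arrived unmodified
# A modified record fails verification (with overwhelming probability for a
# real-sized modulus): verify(record + b"tampered", sig)
```

      The point of the design is that only the holder of the private key can produce a valid signature, while any party holding the public key can check it, so tampering in transit becomes detectable without sharing any secret with the receiver.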

       

      Endnotes

      [1] See: http://cis-india.org/internet-governance/blog/Flaws_in_the_UIDAI_Process_0.pdf.

      [2] See: http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf.

      [3] See: http://164.100.47.134/lsscommittee/Finance/15_Finance_42.pdf.

      Are we Losing the Right to Privacy and Freedom of Speech on Indian Internet?

      by Amber Sinha — last modified Mar 16, 2016 02:44 PM
      The article was published in DNA on March 10, 2016.

      Last month, it was reported that the National Security Council Secretariat (NSCS) had proposed the setting up of a National Media Analytics Centre (NMAC). This centre's mandate would be to monitor blogs, media channels, news outlets and social media platforms. Sources were quoted as stating that the centre would rely upon tracking software built by Ponnurangam Kumaraguru, an Assistant Professor at the Indraprastha Institute of Information Technology in Delhi. The NMAC seems to mirror similar efforts in countries such as the US, Canada, Australia and the UK to monitor online content, for reasons as varied as prevention of terrorist activities, disaster relief and criminal investigation.

      The NSCS, the parent body that this centre will fall under, is a part of the National Security Council, India’s highest agency looking to integrate policy-making and intelligence analysis, and advising the Prime Minister’s Office on strategic issues as well as domestic and international threats. The NSCS represents the Joint Intelligence Committee and its duties include the assessment of intelligence from the Intelligence Bureau, Research and Analysis Wing (R&AW) and Directorates of Military, Air and Naval Intelligence, and the coordination of the functioning of intelligence agencies.

      From the limited reports available, it appears that the tracking software used by the NMAC will generate tags to classify posts and comments on social media into negative, positive and neutral categories, paying special attention to “belligerent” comments. The reports say that the software will also try to determine whether the comments are factually correct. The idea of a government agency systematically tracking social media, blogs and news outlets and categorising content as desirable or undesirable is bound to create a chilling effect on free speech online. The most disturbing part of the report suggested that the past pattern of a writer's posts would be analysed to see how often her posts fell under the negative category and whether she was attempting to create trouble or disturbance, and that appropriate feedback would be sent to security agencies based on it. Viewed alongside recent events in which actors critical of the government and holding divergent views have expressed concerns about attempts to suppress dissenting opinions, this initiative sounds even more dangerous, putting at risk individuals categorised as “negative” or “belligerent” for exercising their constitutionally protected right to free speech.


      It has been argued that the Internet is a public space, and should be treated as subject to monitoring by the government like any other public space. Further, this kind of analysis does not concern itself with private communication between two or more parties but only with publicly available information. Why must we raise eyebrows if the government is accessing and analysing it for the purposes of legitimate state interests? There are two problems with this argument. First, any surveillance of communication must always be limited in scope, specific to individuals, necessary and proportionate, and subject to oversight. There are no laws passed by the Parliament of India which allow for mass surveillance measures. Such activities are being conducted through bodies like the NSC, which came into existence through an executive order and has no clear oversight mechanisms built into its functioning. A quick look at the history of intelligence and surveillance agencies in India will show that none of them have been created through legislation. A host of surveillance agencies have come up in the last few years, including the Central Monitoring System, which was set up to monitor telecommunications, and the absence of legislative pedigree translates into a lack of appropriate controls and safeguards, and zero public accountability.

      The second and larger issue is that the scale and level of granularity of personal information available now is unprecedented. Earlier, our communications with friends and acquaintances, our movements, and our associations, political or otherwise, were not observable in the manner they are today. It would be remiss to underestimate the importance of personal information merely because it exists in the public domain. The ability to act without being subject to monitoring and surveillance is key to the right to free speech and expression. While we accept the importance of free speech and the value of an open Internet and newer technologies in enabling it, we do not give sufficient importance to how these technologies are affecting the right to privacy.


      In the last few years, the social media scene in India has been characterised by extreme polemic with epithets such as ‘bhakt’, ‘sanghi’, ‘sickular’ and ‘presstitutes’ thrown around liberally, turning political discussions into a mess of ugliness. It remains to be seen whether the NMAC intends to deal with the professional trolls who rely on a barrage of abuse to disrupt public conversations online. However, the appropriate response would not be greater surveillance, let alone a body like NMAC, with a sweeping mandate and little accountability.

      Link to the original here.

      Privacy Concerns Overshadow Monetary Benefits of Aadhaar Scheme

      by Pranesh Prakash and Amber Sinha — last modified Mar 17, 2016 04:12 PM
      Since its inception in 2009, the Aadhaar system has been shrouded in controversy over issues of privacy, security and viability. It has been implemented without a legislative mandate and has resulted in a PIL in the Supreme Court, which referred it to a Constitution bench. On Friday, it kicked up more dust when the Lok Sabha passed a Bill to give statutory backing to the unique identity number scheme.
      Privacy Concerns Overshadow Monetary Benefits of Aadhaar Scheme

      A villager goes through the process of eye scanning for Unique Identification (UID) database system at an enrolment centre at Merta district in the desert Indian state of Rajasthan. (REUTERS)

      The article was published in the Hindustan Times on March 12, 2016.


      There was an earlier attempt by the UPA government to give legislative backing to this project, but a parliamentary standing committee, led by BJP leader Yashwant Sinha, rejected the bill in 2011 on multiple grounds. In an about-turn, the BJP-led NDA government decided to continue with Aadhaar despite most of those grounds remaining unaddressed.

      Separately, the Supreme Court has passed orders prohibiting the government from making Aadhaar mandatory for availing government services, whereas this Bill seeks to do precisely that, contrary to the government’s argument that Aadhaar is voluntary.

      In some respects, the new Aadhaar Bill is a significant improvement over the previous version. It places stringent restrictions on when and how the UID Authority (UIDAI) can share the data, noting that biometric information — fingerprint and iris scans — will not be shared with anyone. It requires prior consent for sharing data with third parties. These are very welcome provisions.

      But a second reading reveals the loopholes.

      The government will get sweeping power to access the data collected, ostensibly for “efficient, transparent, and targeted delivery of subsidies, benefits and services” as it pleases “in the interests of national security”, thus confirming the suspicions that the UID database is a surveillance programme masquerading as a project to aid service delivery.

      The safeguards related to accessing the identification information can be overridden by a district judge. Even the core biometric information may be disclosed in the interest of national security on directions of a joint secretary-level officer. Such loopholes nullify the privacy-protecting provisions.

      Amongst the privacy concerns raised by the Aadhaar system are the powers it provides private third parties to use one’s UID number. This concern, which wouldn’t exist without a national ID, relates squarely to Aadhaar and needs a more comprehensive data protection law to fix it. The supposed data protection under the Information Technology Act is laughable and inadequate.

      The Bill was introduced as a Money Bill, normally reserved for matters related to taxation, borrowing and the Consolidated Fund of India (CFI), and it would be fair to question whether this was done to circumvent the Rajya Sabha.

      None of the above arguments even get to the question of implementation.

      Aadhaar hasn’t been working. When the reasons why 22% of PDS cardholders in Andhra Pradesh didn’t collect their rations were examined, it was found that fingerprint authentication failed for 290 of the 790 cardholders, and in 93 instances there was an ID mismatch. A recent paper in the Economic and Political Weekly by Hans Mathews, a mathematician with the CIS, shows the programme would fail to uniquely identify individuals in a country of 1.2 billion.

      The debate shouldn’t be only about the Aadhaar Bill being passed off as a Money Bill and about the robustness of its privacy provisions, but about whether the Aadhaar project can actually meet its stated goals.

      Analysis of Aadhaar Act in the Context of A.P. Shah Committee Principles

      by Vipul Kharbanda — last modified Mar 17, 2016 07:43 PM
      Whilst there are a number of controversies relating to the Aadhaar Act, including the fact that it was introduced in a manner designed to circumvent the opposition's majority in the upper house of Parliament and that it was rushed through the Lok Sabha in a mere eight days, in this paper we discuss the substantive aspects of the Act in relation to the privacy concerns raised by a number of experts. In October 2012, the Group of Experts on Privacy constituted by the Planning Commission under the chairmanship of Justice A.P. Shah submitted its report, which listed nine principles of privacy that all legislation, especially legislation dealing with personal information, should adhere to. In this paper, we discuss how the Aadhaar Act fares vis-à-vis these nine principles.

       

      Introduction

      The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (the “Aadhaar Act”) was introduced in the Lok Sabha (the lower house of Parliament) by the Minister of Finance, Mr. Arun Jaitley, on March 3, 2016, and was passed by the Lok Sabha on March 11, 2016. It was sent back by the Rajya Sabha with suggestions, but the Lok Sabha rejected those suggestions, which means that the Act is now deemed to have been passed by both houses as originally introduced, since it was introduced as a Money Bill. Whilst there are a number of controversies relating to the Aadhaar Act, including the fact that it was introduced in a manner designed to circumvent the opposition's majority in the upper house of Parliament and that it was rushed through the Lok Sabha in a mere eight days, in this paper we discuss the substantive aspects of the Act in relation to the privacy concerns raised by a number of experts. In October 2012, the Group of Experts on Privacy constituted by the Planning Commission under the chairmanship of Justice A.P. Shah submitted its report, which listed nine principles of privacy that all legislation, especially legislation dealing with personal information, should adhere to. In this paper, we discuss how the Aadhaar Act fares vis-à-vis these nine principles.

      In order for the reader to better understand the frame of reference on which we shall analyse the Aadhaar Act, the nine principles contained in the report of the Group of Experts on Privacy are explained in brief below:

      • Principle 1: Notice - Does the legislation/regulation require that entities governed by the Act give simple-to-understand notice of their information practices to all individuals, in clear and concise language, before any personal information is collected from them?
      • Principle 2: Choice and Consent - Does the legislation/regulation require that entities governed by the Act provide individuals with the option to opt in to or opt out of providing their personal information?
      • Principle 3: Collection Limitation - Does the legislation/regulation require that entities governed by the Act collect personal information from individuals only as necessary for an identified purpose?
      • Principle 4: Purpose Limitation - Does the legislation/regulation require that personal data collected and processed by entities governed by the Act be adequate and relevant to the purposes for which they are processed?
      • Principle 5: Access and Correction - Does the legislation/regulation allow individuals access to personal information about them held by an entity governed by the Act, and the ability to seek correction, amendment or deletion of such information where it is inaccurate?
      • Principle 6: Disclosure - Does the legislation ensure that information is disclosed to third parties only after notice is given and informed consent is obtained? Is disclosure for law enforcement purposes done in accordance with the laws in force?
      • Principle 7: Security - Does the legislation/regulation ensure that information collected and processed under the Act is handled in a manner that protects against loss, unauthorised access, destruction, etc.?
      • Principle 8: Openness - Does the legislation/regulation require that any entity processing data take all necessary steps to implement practices, procedures, policies and systems in a manner proportional to the scale, scope and sensitivity of the data that is collected and processed, and is this information made available to all individuals in an intelligible form, using clear and plain language?
      • Principle 9: Accountability - Does the legislation/regulation provide for measures that ensure compliance with the privacy principles? This would include mechanisms to implement privacy policies, including tools, training and education, and external and internal audits.

       

      Analysis of the Aadhaar Act

      The Aadhaar Act has been brought about to give legislative backing to the most ambitious individual identity programme in the world, which aims to provide a unique identity number to the entire population of India. The rationale behind this scheme is to correctly identify the beneficiaries of government schemes and subsidies so that leakages in government subsidies may be reduced. In furtherance of this rationale, the Aadhaar Act gives the Unique Identification Authority of India (“UIDAI”) the power to enroll individuals by collecting their demographic and biometric information and issuing an Aadhaar number to them. Below is an analysis of the Act based on the privacy principles enumerated in the A.P. Shah Committee Report.

      Collection Limitation

      Collection of Biometric and Demographic Information: The Aadhaar Act entitles every “resident” [1] to obtain an Aadhaar number by submitting his/her biometric information (photograph, fingerprint, iris scan) and demographic information (name, date of birth, address [2]) [3]. The Act leaves scope for further information to be included in the collection process if so specified by regulations. It must also be noted that although the Act specifically provides what information can be collected, it does not specifically prohibit the collection of further information. This becomes relevant because it makes it possible for enrolling agencies to collect extra information relating to individuals without any legal consequences for doing so.

      Authentication Records: The UIDAI is mandated to maintain authentication records for a period which is yet to be specified (and shall be specified in the regulations) but it cannot collect or keep any information regarding the purpose for which the authentication request was made [4].

      Unauthorized Collection: Any person who is not authorized to collect information under the Act, and pretends that he is authorized to do so, shall be punishable with imprisonment for a term which may extend to three years, or with a fine which may extend to Rs. 10,000/-, or both. In the case of companies, the maximum fine is increased to Rs. 10,00,000/- [5]. It must be noted that the section, as currently worded, seems to criminalise the act of impersonating authorized individuals; actual collection of information is not required to complete this offence. It is not clear whether this section will apply if a person who is in general authorized to collect information under the Act collects some information that he/she is not authorized to collect.

      Notice

      Notice during Collection: The Aadhaar Act requires that the agencies enrolling people for distribution of Aadhaar numbers give people notice regarding: (a) the manner in which the information shall be used; (b) the nature of recipients with whom the information is intended to be shared during authentication; and (c) the existence of a right to access information, the procedure for making requests for such access, and details of the person or department in charge to whom such requests can be made [6]. A failure to comply with this requirement makes the agency liable to imprisonment of up to three years, or a fine of Rs. 10,000/-, or both. In the case of companies, the maximum fine is increased to Rs. 10,00,000/- [7]. It must be noted that the Act leaves the manner of giving such notice to regulations and does not specify how the notice is to be provided, leaving important specifics to the executive.

      Notice during Authentication: The Aadhaar Act requires that authenticating agencies give information to the individuals whose information is to be authenticated regarding: (a) the nature of information that may be shared upon authentication; (b) the uses to which the information received during authentication may be put by the requesting entity; and (c) alternatives to submission of identity information to the requesting entity [8]. A failure to comply with this requirement makes the agency liable to imprisonment of up to three years, or a fine of Rs. 10,000/-, or both. In the case of companies, the maximum fine is increased to Rs. 10,00,000/- [9]. Just as with notice during collection, the manner in which the notice is to be given is left to regulations, leaving it unclear how comprehensive, accessible and frequent this notice must be.

      Access and Correction

      Updating Information: The Aadhaar Act gives the UIDAI the power to require residents to update their demographic and biometric information from time to time so as to maintain its accuracy [10].

      Access to Information: The Aadhaar Act provides that Aadhaar number holders may request the UIDAI to provide access to their identity information except their core biometric information [11]. It is not clear why access to core biometric information [12] is not provided to an individual. Further, since section 6 seems to place the responsibility for updating biometric information and ensuring its accuracy on the individual, it is not clear how a person is supposed to know that the biometric information contained in the database has changed if he/she does not have access to it. It may also be noted that the Aadhaar Act provides only for a request to the UIDAI for access to the information and does not make access a right of the individual; this would mean that it is entirely at the UIDAI's discretion to refuse to grant access once a request has been made.

      Alteration of Information: The Aadhaar Act gives individuals the right to request the UIDAI to alter their demographic information if it is incorrect or has changed, and their biometric information if it is lost or has changed. Upon receipt of such a request, if the UIDAI is satisfied, it may make the necessary alteration and inform the individual accordingly. The Act also provides that no identity information in the Central database shall be altered except as provided in the regulations [13]. This section provides for alteration of identity information, but only in the circumstances given in the section: for example, demographic information cannot be changed if it has been lost; similarly, biometric information cannot be changed if it is inaccurate. Further, the section does not give the individual a right to have the information altered but only entitles him/her to request the UIDAI to make a change, with the final decision left to the “satisfaction” of the UIDAI.

      Access to Authentication Record: Every individual is given the right to obtain his/her authentication record in a manner to be specified by regulations. [14]

      Disclosure

      Sharing during Authentication: The UIDAI is entitled to reply to any authentication query with a positive, negative or any other appropriate response, and may share identity information except core biometric information with the requesting entity [15]. The language of this provision is ambiguous: it is unclear what 'identity information' may be shared, and why sharing it would be necessary, given that Aadhaar is meant to be only a means of authentication to remove duplication.

      Potential Disclosure during Maintenance of CIDR: The UIDAI has been given the power to appoint one or more entities to establish and maintain the Central Identities Data Repository (CIDR) [16]. If a private entity is involved in the establishment and maintenance of the CIDR, it can be presumed that it would, to some degree, have access to the information stored in the CIDR, yet the Act lays down no clear standards regarding this potential access or the process for appointing such entities. The fact that the UIDAI has been given the freedom to appoint an outside entity to maintain an asset as sensitive as the CIDR raises security concerns.

      Restriction on Sharing Information: The Aadhaar Act creates a blanket prohibition on the use of core biometric information for any purpose other than generation of Aadhaar numbers, and also prohibits its sharing for any reason whatsoever [17]. Other identity information may be shared in the manner specified under the Act or as may be specified in the regulations [18]. The Act further provides that requesting entities shall not disclose identity information except with the prior consent of the individual to whom the information relates [19]. There is also a prohibition on publicly displaying Aadhaar numbers or core biometric information except as specified by regulations [20]. Officers of the UIDAI and the employees of the agencies employed to maintain the CIDR are prohibited from revealing the information stored in the CIDR or authentication records to anyone [21]. It is not clear why an exception has been carved out, and what circumstances would require publicly displaying Aadhaar numbers and core biometric information, especially since the reasons for which such important information may be displayed have been left to regulations, which are subject to relatively little oversight. The section also provides requesting entities with an option to further disclose information if they take the consent of the individuals. This may lead to a situation where a requesting entity, perhaps the provider of an essential service, takes the individual's consent to disclose his/her information in a standard form contract, without the option of saying no to such a request. It may lead to situations where the choice is between giving consent to disclosure or being denied the service altogether. For this reason, there should be an opt-in and opt-out provision wherever a requesting entity has the power to ask for disclosure of information, so that people are not coerced into giving consent.

      Disclosure in Specific Cases: The prohibition on disclosure of information (except core biometric information) does not apply to any disclosure made pursuant to an order of a court not below that of a District Judge [22]. There is another exception to the prohibition on disclosure of information (including core biometric information) in the interest of national security, if so directed by an officer not below the rank of a Joint Secretary to the Government of India specially authorised in this behalf by an order of the Central Government. Before any such direction can take effect, it must be reviewed by an oversight committee consisting of the Cabinet Secretary and the Secretaries to the Government of India in the Department of Legal Affairs and the Department of Electronics and Information Technology. Any such direction is valid for a period of three months and may be extended by another three months after review by the Oversight Committee [23]. This provision has been criticised, and rightly so, for its lack of accountability, since the entire process is handled within the executive and there is no independent oversight. It must, however, be mentioned that the level of oversight provided here is similar to that provided for interception requests, which involve an equally grave if not graver invasion of privacy.

      Penalty for Disclosure: Any person who intentionally and in an unauthorised manner discloses, transmits, copies or otherwise disseminates any identity information collected in the course of enrolment or authentication shall be punishable with imprisonment of up to three years, or a fine of Rs. 10,000/-, or both. In the case of companies, the maximum fine is increased to Rs. 10,00,000/- [24]. Further, any person who intentionally and in an unauthorised manner accesses information in the CIDR [25], downloads, copies or extracts any data from the CIDR [26], or reveals, shares or distributes any identity information, shall be punishable with imprisonment of up to three years and a fine of not less than Rs. 10,00,000/-.

      Consent

      Consent for Authentication: A requesting entity has to take the consent of the individual before collecting his/her identity information for the purposes of authentication, and also has to inform the individual of the alternatives to submitting the identity information [27]. Although this provision requires entities to obtain consent from individuals before collecting information for authentication, how useful this requirement will be remains to be seen. There may be instances where a requesting entity takes the individual's consent through a standard form contract, without the individual realising what he/she is consenting to.

      Note: The Aadhaar Act provides no requirement or standard for the form of consent that must be taken during enrolment. This is significant, as enrolment is the point at which individuals provide raw biometric material, and during previous enrolment drives it has been a point of weakness: the consent taken enables function creep, as it allows the UIDAI to share information with agencies engaged in the delivery of welfare services [28].

      Purpose

      Use of Information: Authenticating entities are allowed to use the identity information only for the purpose of submission to the CIDR for authentication [29]. Further, the Act specifies that identity information available with a requesting entity shall not be used for any purpose other than that specified to the individual at the time of submitting the information for authentication [30]. The Act also provides that any authenticating entity which uses the information for a purpose not already specified will be liable to imprisonment of up to three years, or a fine of Rs. 10,000/-, or both. In the case of companies, the maximum fine is increased to Rs. 10,00,000/- [31].

      Security

      Security and Confidentiality of Information: It is the responsibility of the UIDAI to ensure the security and confidentiality of the identity and authentication information and it is required to take all necessary action to ensure that the information in the CIDR is protected against unauthorized access, use or disclosure and against accidental or intentional destruction, loss or damage [32]. The UIDAI is required to adopt and implement appropriate technical and organisational security measures and also ensure that its contractors do the same [33]. It is also required to ensure that the agreements entered into with its contractors impose the same conditions as are imposed on the UIDAI under the Act and that they shall act only upon the instructions of the UIDAI [34].

      Biometric Information to be Electronic Record: The biometric information collected by the UIDAI has been deemed to be an “electronic record” as well as “sensitive personal data or information”, which means that in addition to the provisions of the Aadhaar Act, the provisions of the Information Technology Act, 2000 will also apply to such information [35]. It must be noted that while the Act lays down the principle that the UIDAI is required to ensure the security of the information, it does not lay down any guidelines as to the minimum security standards to be implemented by the Authority. However, through this section the legislature has linked the security standards contained in the IT Act to the information covered by this Act. While this is a clean way of dealing with the issue, some may argue that the extremely sensitive nature of the information contained in the CIDR requires security standards much stricter than those provided under the IT Act. A perusal of Rule 8 of the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 shows, however, that the Rules themselves provide that the standard of security must be commensurate with the information assets being protected. It would thus seem that the Act provides enough room to protect such important information, but perhaps leaves too much room for interpretation on such an important issue.

      Penalty for Unauthorised Access: Apart from the security provisions included in the legislation, the Aadhaar Act also provides for punishment of imprisonment of up to three years and a fine which shall not be less than Rs. 10,00,000/-, in the case of the following offences:

      1. introduction of any virus or other computer contaminant in the CIDR [36];
      2. causing damage to the data in the CIDR [37];
      3. disruption of access to the CIDR [38];
      4. denial of access to any person who is authorised to access the CIDR [39];
      5. destruction, deletion or alteration of any information stored in any removable storage media or in the CIDR or diminishing its value or utility or affecting it injuriously by any means [40];
      6. stealing, concealing, destroying or altering any computer source code used by the Authority with an intention to cause damage [41].

      Further, unauthorised use of or tampering with the data in the CIDR or in any removable storage medium, with the intent of modifying information relating to an Aadhaar number holder or discovering any information thereof, is also punishable with imprisonment for a term which may extend to three years and a fine which may extend to Rs. 10,000/- [42].

      Accountability

      Inspections and Audits: One of the functions listed in the powers and functions of the UIDAI is the power to call for information and records, conduct inspections, inquiries and audit of the operations of the CIDR, Registrars, enrolling agencies and other agencies appointed under the Aadhaar Act [43].

      Grievance Redressal: Another function of the UIDAI is to set up facilitation centres and grievance redressal mechanisms for redressal of grievances of individuals, Registrars, enrolling agencies and other service providers [44]. It must be said that, considering the importance the government has given and intends to give to Aadhaar in the future, an essential task such as grievance redressal should not be left entirely to the discretion of the UIDAI; a grievance redressal mechanism should be incorporated into the Act itself.

      Openness

      There does not seem to be any provision in the Aadhaar Act requiring the UIDAI to make its privacy policies and procedures available to the general public, even though the UIDAI has the responsibility to maintain the security and confidentiality of the information.

       

      Endnotes

      [1] A resident is defined as any person who has resided in India for a period of at least 182 days in the previous 12 months.

      [2] It has been specified that demographic information will not include race, religion, caste, tribe, ethnicity, language, records of entitlement, income or medical history.

      [3] Section 3(1) of the Aadhaar Act.

      [4] Section 32(1) and 32(3) of the Aadhaar Act.

      [5] Section 36 of the Aadhaar Act.

      [6] Section 3(2) of the Aadhaar Act.

      [7] Section 41 of the Aadhaar Act.

      [8] Section 8(3) of the Aadhaar Act.

      [9] Section 41 of the Aadhaar Act.

      [10] Section 6 of the Aadhaar Act.

      [11] Section 28, proviso of the Aadhaar Act.

      [12] Core biometric information is defined as fingerprints, iris scan or other biological attributes which may be specified by regulations.

      [13] Section 31 of the Aadhaar Act.

      [14] Section 32(2) of the Aadhaar Act.

      [15] Section 8(4) of the Aadhaar Act.

      [16] Section 10 of the Aadhaar Act.

      [17] Section 29(1) of the Aadhaar Act.

      [18] Section 29(2) of the Aadhaar Act.

      [19] Section 29(3)(b) of the Aadhaar Act.

      [20] Section 29(4) of the Aadhaar Act.

      [21] Section 28(5) of the Aadhaar Act.

      [22] Section 33(1) of the Aadhaar Act.

      [23] Section 33(2) of the Aadhaar Act.

      [24] Section 37 of the Aadhaar Act.

      [25] Section 38(a) of the Aadhaar Act.

      [26] Section 38(b) of the Aadhaar Act.

      [27] Section 8(2)(a) and (c) of the Aadhaar Act.

      [28] For example, see: http://www.karnataka.gov.in/aadhaar/Downloads/Application%20form%20-%20English.pdf.

      [29] Section 8(2)(b) of the Aadhaar Act.

      [30] Section 29(3)(a) of the Aadhaar Act.

      [31] Section 37 of the Aadhaar Act.

      [32] Section 28(1), (2) and (3) of the Aadhaar Act.

      [33] Section 28(4)(a) and (b) of the Aadhaar Act.

      [34] Section 28(4)(c) of the Aadhaar Act.

      [35] Section 30 of the Aadhaar Act.

      [36] Section 38(c) of the Aadhaar Act.

      [37] Section 38(d) of the Aadhaar Act.

      [38] Section 38(e) of the Aadhaar Act.

      [39] Section 38(f) of the Aadhaar Act.

      [40] Section 38(h) of the Aadhaar Act.

      [41] Section 38(i) of the Aadhaar Act.

      [42] Section 39 of the Aadhaar Act.

      [43] Section 23(2)(l) of the Aadhaar Act.

      [44] Section 23(2)(s) of the Aadhaar Act.

       

      Vulnerabilities in the UIDAI Implementation Not Addressed by the Aadhaar Bill, 2016

      by Pooja Saxena and Amber Sinha — last modified Mar 21, 2016 08:33 AM
      In this infographic, we document the various issues in the Aadhaar enrolment process implemented by the UIDAI, and highlight the vulnerabilities that the Aadhaar Bill, 2016 does not address. The infographic is based on Vidushi Marda’s article 'Data Flow in the Unique Identification Scheme of India,' and is designed by Pooja Saxena, with inputs from Amber Sinha.

       

      Download the infographic: PDF and PNG.

       

      Credits: The illustration uses the following icons from The Noun Project - Thumbprint created by Daouna Jeong, Duplicate created by Pham Thi Dieu Linh, Copy created by Mahdi Ehsaei.

      License: Shared under a Creative Commons Attribution 4.0 International License.

       


       

      The National Privacy Principles

      by Pooja Saxena and Amber Sinha — last modified Mar 21, 2016 09:48 AM
      In this infographic, we try to break down the National Privacy Principles developed by the Group of Experts on Privacy led by former Chief Justice A.P. Shah in 2012.

      License: Shared under a Creative Commons Attribution 4.0 International License.

      CIS' Statement on Sexual Harassment at ICANN55

      by Vidushi Marda — last modified Mar 21, 2016 03:22 PM

      The Centre for Internet and Society

      Statement on Sexual Harassment at ICANN55


      The Centre for Internet and Society (“CIS”) strongly condemns the acts of sexual harassment that took place against one of our representatives, Ms. Padmini Baruah, during ICANN 55 in Marrakech. It is completely unacceptable that an event of the scale of an ICANN meeting does not have in place a formal redressal system, a neutral point of contact, or even a policy for complainants who have been put through the ordeal of sexual harassment. ICANN cannot claim to be inclusive or diverse if it does not formally recognise a specific procedure or recourse in such instances.


      Ms. Baruah is by no means the first young woman to be subjected to such treatment at an ICANN event, but she is the first to raise a formal complaint. Following the incident, she was given no immediate remedy or formal recourse, which has left her with no option but to make the incident publicly known in the interim. The ombudsman’s office has been in touch with her, but this administrative process is simply inadequate for rights violations.


      Ms. Baruah has received support from various community, staff, and board members. While we are thankful for their support, we believe that this situation can be better dealt with through some positive measures. We ask that ICANN carry out the following steps in order to make its meetings a truly safe and inclusive space:


      1. Institute a formal redressal system and policy with regard to sexual harassment within ICANN. The policy must be displayed on the ICANN website and at meeting venues, and made available in delegate kits.

      2. Institute an Anti-Sexual Harassment Committee that is neutral and approachable. Merely having an ombudsman who is a white male, however well-intentioned, is inadequate and completely unhelpful to the complainant. At present, the ombudsman has no effective power and only advises the board.

      3. Conduct periodic gender and anti-sexual harassment training of the ICANN board to help them better understand, recognise and address instances of sexual harassment.

      4. Conduct periodic gender and anti-sexual harassment training for the ombudsman, even if he/she will not be the exclusive point of contact for complainants, as the ombudsman forms an important part of community and participant engagement.

      5. Conduct periodic gender sensitisation for the ICANN community.

       

      Too Clever By Half: Strengthening India’s Smart Cities Plan with Human Rights Protection

      by Vanya Rakesh last modified Mar 22, 2016 01:49 PM
      The data involved in planning for urbanized and networked cities are currently flawed and politically inflected. Therefore, we must ensure that basic human rights are not violated in the race to make cities “smart”.

      Data-driven urban cities have drawn criticism, as the initiative tends to homogenize Indian cities and treat them alike in terms of their political economy, culture, and governance. Photo Credit: Highways Agency, CC BY 2.0/Flickr

      The article was published in the Wire on March 21, 2016


      As Indian cities reposition themselves to play a significant role in development through urban transformation, the government has envisioned building 100 smart cities across the country. In the absence of a precise definition of what exactly constitutes a smart city, the consensus that has evolved is that modern technology will be harnessed to deliver smart outcomes.

      Here, Big Data and analytics will play a predominant role, by way of cloud, mobile and other social technologies that gather data to ascertain and address people's concerns.

      Role of Big Data

      Leveraging city data and using geographical information systems (GIS) to collect valuable information about stakeholders are techniques commonly used in smart cities to run emergency systems, create dynamic parking areas, name streets, and develop monitoring. Such data would also come from fire alarms, disaster management systems and energy-saving mechanisms, which sense, communicate, analyze and combine information across platforms to support decision-making and the management of services.

      According to the Department of Electronics and Information Technology, the government’s plan to develop smart cities in the country could lead to a massive expansion of the IoT (Internet of Things) ecosystem within the country. The revised draft IoT policy aims at developing IoT products in this domain and at using Big Data for government decision-making processes. For example, a key opportunity identified in India is traffic management and congestion: collecting data during peak hours, processing information in real time and using GPS history from mobile phones can give insight into the routes and modes of transport commuters prefer. The Bengaluru Transport Information System (BTIS) was an early adopter of big data technology, aggregating data streams from multiple sources to enable the planning of travel routes that avoid traffic congestion, car-pooling, etc.

      Challenges

      The idea of a data-driven urban city has drawn criticism as the initiative tends to homogenize Indian culture and change the fabric of cities by treating them alike in terms of their political economy, culture, and governance.

      Despite the idea of a smart city resting on the assumption that technology-based solutions and techniques would be viable answers to the problems of Indian cities, it is pertinent to note that the collection of real-time personal data may blur the line between personal data and the larger datasets collected from multiple sources. This leaves open questions around privacy and around the use and reuse of such data, especially by companies and businesses providing services in legally and morally grey areas.

      Privacy concerns cloud the dependence on big data for the functioning of smart cities, as it may erode privacy in different forms, for example through surveillance, identification and disclosure without consent, discriminatory inferences, etc.

      Apart from the right to privacy, a number of individual rights, such as the right to access and security rights, would be at risk, as big data may enable algorithmic social sorting (whether people get a loan, a tenancy, a job, etc.) and anticipatory governance using predictive profiling (wherein data precedes how a person is policed and governed). Dataveillance raises concerns around the access and use of data, given the growth of digital footprints (data people themselves leave behind) and data shadows (information about them generated by others). The challenges of obtaining access to correct, standardized data, and of proper communication, also remain hurdles to be overcome.

      The huge, yet untapped, amount of data available in India requires proper categorization, making a robust and reliable data management system a prerequisite for realizing the country’s smart city vision. Cooperation between agencies in Indian cities, and a holistic technology-based approach using ICT and geospatial technologies (GT) to resolve issues arising from the wide use of technology, are the need of the hour. The skills to manage and analyze data and develop insights for effective policy decisions are still being built, particularly in the public sector. Recognizing this, Nasscom has announced the setting up of a Centre of Excellence (CoE) in India to create a quality workforce.

      Though it is apparent that data will play a considerable role in the smart city mission, the peril lies in the lack of planning in terms of policies to govern big data mechanics and the use of data. This calls for the development of suitable standards and policies to guide technology providers and administrators in managing and interpreting data in a secure environment.

      Legal hurdles

      The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 deal with accountability regarding data security and protection as they apply to ‘body corporates’ and digital data. The IT Act defines a ‘body corporate’ as “any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities”. Government bodies and individuals collecting and using big data for smart cities in India would therefore be excluded from the scope of these Rules. This highlights the lack of a suitable regulatory framework to address potential privacy challenges, which currently seem to be underestimated by our planners and administrators.

      Regarding access to open data, though the National Data Sharing and Accessibility Policy, 2012 recognizes sensitive data, the term is not clearly defined under it. The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, however, clearly define sensitive personal data or information. The open data framework should therefore refer to or adopt a clear definition drawing from these Section 43A Rules to bring clarity in this regard.

      Way forward

      As India moves toward a digital transformation, highlighted by flagship programmes like the Smart Cities Mission, Digital India and the UID project, data regulation and the recognition of data use will change the nature of the relationship between the state and the individual. However, this seems to have been overlooked. Policies that regulate the country's digital environment will intertwine with urban policies through the smart cities mission. The use of ICTs in the form of IoT and Big Data entails access to open data, bringing another policy area into its ambit. The identification and development of open standards for IoT, particularly for interoperability of cross-sector data, must also be examined.

      To address privacy concerns due to the use of big data techniques, nuanced data legislation is required. For a conducive big data and technologically equipped environment, the governments must increase efforts to create awareness about the risks involved and provide assurance about the responsible use of data.

      Additionally, a lack of skilled and educated manpower to deal with such data effectively must also be duly considered.

      The concept note produced by the government reflects how it visualizes smart cities as the product of marrying the physical form of cities and their infrastructure to a wider discourse on the use of technology and big data in city governance. This makes the role of big data quite indispensable, almost synonymous with the very notion of a smart city. The important thing to understand, however, is that data analytics is only part of the idea. What is additionally required is effective governance mechanisms and political will; collaboration and co-operation are the glue that will make this idea work. It is important to merge urban development policies with principles of democracy. The data involved in planning for urbanized and networked cities are currently flawed and politically inflected, so collective effort must go into minimizing their pernicious effects, to ensure that basic human rights are not violated in the race to make cities “smart”.


      Vanya Rakesh is Programme Officer, The Centre for Internet & Society (CIS), Bangalore. Elonnai Hickok, Policy Director of CIS, also provided inputs for this story.

      Surveillance Project

      by Sunil Abraham last modified Apr 05, 2016 03:21 PM
      The Aadhaar project’s technological design and architecture is an unmitigated disaster and no amount of legal fixes in the Act will make it any better.

      A gummy finger to fool a biometric scanner can be produced using glue and a candle. Picture by K. Murali Kumar

      The article will be published in Frontline, April 15, 2016 print edition.


      Zero. The probability of some evil actor breaking into the central store of authentication factors (such as keys and passwords) for the Internet. Why? That is because no such store exists. And, what is the probability of someone evil breaking into the Central Identities Data Repository (CIDR) of the Unique Identification Authority of India (UIDAI)? Greater than zero. How do we know this? One, the central store exists and two, the Aadhaar Bill lists breaking into this central store as an offence. Needless to say, it would be redundant to have a law that criminalises a technological impossibility. What is the consequence of someone breaking into the central store? Remember, biometrics is just a fancy word for non-consensual and covert identification technology. High-resolution cameras can capture fingerprints and iris information from a distance.

      In other words, on March 16, when Parliament passed the Bill, it was as if Indian lawmakers wrote an open letter to criminals and foreign states saying, “We are going to collect data to non-consensually identify all Indians and we are going to store it in a central repository. Come and get it!” Once again, how do I know that the CIDR will be compromised at some date in the future? How can I make that policy prediction with no evidence to back it up? To quote Sherlock Holmes, “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” If a back door to the CIDR exists for the government, then the very same back door can be used by an enemy within or from outside. In other words, the principle of decentralisation in cybersecurity does not require repeated experimental confirmation across markets and technologies.

      Zero. The chances that you can fix with the law what you have broken with poor technological choices and architecture. And, to a large extent vice versa. Aadhaar is a surveillance project masquerading as a development intervention because it uses biometrics. There is a big difference between the government identifying you and you identifying yourself to the government. Before UID, it was much more difficult for the government to identify you without your knowledge and conscious cooperation. Tomorrow, using high-resolution cameras and the power of big data, the government will be able to remotely identify those participating in a public protest. There will be no more anonymity in the crowd. I am not saying that law-enforcement agencies and intelligence agencies should not use these powerful technologies to ensure national security, uphold the rule of law and protect individual rights. I am only saying that this type of surveillance technology is inappropriate for everyday interactions between the citizen and the state.

      Some software engineers believe that there are technical fixes for these concerns; they point to the consent layer in the India stack, developed through a public-private partnership with the UIDAI. But this is exactly what Evgeny Morozov has dubbed “technological solutionism”: fundamental flaws like these cannot be fixed by legal or technical band-aids. If you were to ask the UIDAI how it ensures that data do not get stolen between the enrolment machine and the CIDR, the response would be that it uses state-of-the-art cryptography. If cryptography is good enough for the UIDAI, why is it not good enough for citizens? Because if citizens used cryptography [on smart cards] to identify themselves to the state, the state would need their conscious cooperation each time. That provides the feature required for better governance without the surveillance bonus. If you really must use biometrics, they could be stored on the smart card after being digitally signed by the enrolment officer. If there is ever a doubt about whether a person has stolen the smart card, a special machine can be used to read the biometrics off the card and check them against the person. This way the power of biometrics would be leveraged without any of the accompanying harms.
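
The smart-card alternative sketched above can be illustrated in a few lines of code. This is a hypothetical toy, not the SCOSTA or UIDAI design: all names are invented, the "matcher" is an exact byte comparison rather than real fuzzy biometric matching, and an HMAC stands in for the enrolment officer's digital signature (a real deployment would use public-key signatures, so that any reader could verify the card without holding the signing key).

```python
import hashlib
import hmac
import os

# Stand-in for the enrolment officer's signing key (illustrative only; a
# real scheme would use an asymmetric key pair, not a shared secret).
ENROLMENT_KEY = os.urandom(32)

def enrol(biometric_template: bytes) -> dict:
    """Write the template to the card together with a tag binding it
    to the enrolment officer's key."""
    tag = hmac.new(ENROLMENT_KEY, biometric_template, hashlib.sha256).digest()
    return {"template": biometric_template, "signature": tag}

def verify_card(card: dict, live_scan: bytes) -> bool:
    """A special reader first checks that the stored template is the one
    the officer signed, then compares it against a live scan. Everything
    happens locally, with the holder's cooperation -- no central lookup."""
    expected = hmac.new(ENROLMENT_KEY, card["template"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, card["signature"]):
        return False  # card tampered with: template no longer matches the tag
    return card["template"] == live_scan  # toy matcher; real matching is fuzzy

card = enrol(b"fingerprint-minutiae-bytes")
assert verify_card(card, b"fingerprint-minutiae-bytes")
card["template"] = b"attacker-substituted-template"
assert not verify_card(card, b"attacker-substituted-template")
```

The property the article relies on survives the simplification: verification needs only the card, its holder and a reader, so no central repository of biometrics has to exist, and identification requires the holder's conscious cooperation each time.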

      Zero. This time, for the utility of biometrics as a password or authentication factor. There are two principal reasons why the Act should have prohibited the use of biometrics for authentication. First, biometric authentication factors are irrevocable, unlike passwords, PINs, digital signatures, etc. Once a biometric authentication factor has been compromised, there is no way to change it; the security of a system secured by biometrics is permanently compromised. Second, our biometrics are so easy to steal; we leave our fingerprints everywhere.

      Also, if I upload my biometric data onto the Internet, I can then plausibly deny all transactions against my name in the CIDR. In order to prevent me from doing that, the government will have to invest in CCTV cameras [with large storage] as it does at passport-control borders and as banks do at ATMs. If you have to invest in CCTV cameras anyway, then you might as well stick with digital signatures on smart cards, as the previous National Democratic Alliance (NDA) government proposed with the SCOSTA (Smart Card Operating System Standard for Transport Application) standard for the MNIC (Multipurpose National ID Card). Leveraging smart card standards like EMV will harness greater network effects, thanks to the global financial infrastructure of banks. These network effects will drive down the cost of equipment and afford Indians greater global mobility. Most importantly, when a digital signature is compromised, the user can be issued a new smart card. As Rufo Guerreschi, executive director of Open Media Cluster, puts it, “World leaders and IT experts should realise that citizen freedoms and states’ ability to pursue suspects are not an ‘either or’ but a ‘both or neither’.”

      Near zero. We now move to biometrics as the identification factor. The rate of potential duplicates, or “False Positive Identification Rate”, is, according to the UIDAI, only 0.057 per cent, meaning that only “570 resident enrolments will be falsely identified as duplicate for every one million enrolments.” However, according to an article published in Economic & Political Weekly by my colleague at the Centre for Internet and Society, Hans Verghese Mathews, this will result in one out of every 146 people being rejected during enrolment when total enrolment reaches one billion. In its rebuttal, the UIDAI disputes the conclusion but offers no alternative extrapolation or mathematical assumptions; “without getting too deep into the mathematics”, it offers an account of “a manual adjudication process to rectify the biometric identification errors”.
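
The disagreement over extrapolation can be made concrete with a toy calculation. Assuming (as a simplification; the EPW article's modelling is more careful) a fixed, independent per-comparison false match rate f, the probability that a new enrolee falsely matches at least one of n existing records is 1 - (1 - f)^n, which grows with n. The value of f below is purely illustrative, chosen only so that the rate comes out near 0.057 per cent at an arbitrary gallery size; it is not a UIDAI figure.

```python
import math

def fpir(n: float, f: float) -> float:
    """P(at least one false match) for one new enrolee checked against a
    gallery of n records, with independent per-comparison false match
    rate f. Computed as 1 - (1 - f)**n in a numerically stable form."""
    return -math.expm1(n * math.log1p(-f))

f = 5.7e-12  # illustrative per-comparison rate, not an official figure
print(fpir(1e8, f))  # ~0.00057, i.e. roughly 570 per million
print(fpir(1e9, f))  # ~0.0057: ten times the gallery, roughly ten times the rate
```

The point is qualitative: with any fixed per-comparison error rate, false matches (and hence wrongful rejections at enrolment) scale up with the size of the database, which is why a rate measured at one scale cannot simply be quoted unchanged at a billion enrolments.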

      This manual adjudication determines whether you exist, and it has none of the elements of natural justice, such as notice to the affected party and an opportunity to be heard. Elimination of ghosts is impossible if only machines and unaccountable humans perform this adjudication, because there is zero skin in the game. There are free tools available on the Internet, such as SFinGe (Synthetic Fingerprint Generator), which allow you to create fake biometrics. The USB cables on the UIDAI-approved enrolment setup can be intercepted using generic hardware that can be bought online. With a little clever programming, countless ghosts can be created that will easily clear the manual adjudication process which the UIDAI claims will ensure that “no one is denied an Aadhaar number because of a biometric false positive”.

      Near zero. This time for surveillance, which I believe should be used like salt in cooking: essential in small quantities but counterproductive even if slightly in excess. There is a popular misconception that privacy researchers such as myself are opposed to surveillance. In reality, I am all for surveillance. I am totally convinced that surveillance is good anti-corruption technology.

      But I also want good returns on investment for my surveillance tax rupee. According to Julian Assange, transparency requirements should be directly proportionate to power; in other words, the powerful should be subject to more surveillance. And conversely, I add, privacy protections must be inversely proportionate to power—or, in other words, the poor should be spared intrusions that do not serve the public interest. The UIDAI makes the exact opposite design assumption: it assumes that the poor are responsible for corruption and that technology will eliminate small-ticket or retail corruption. But we all know that politicians and bureaucrats are responsible for most large-ticket corruption.

      Why does the UIDAI not first assign UID numbers to all politicians and bureaucrats? Then, using digital signatures, why do we not ensure that we have a public, non-repudiable audit trail wherein everyone can track the flow of benefits, subsidies and services from New Delhi to the panchayat office or local corporation office? That would eliminate big-ticket or wholesale corruption. In other words, since most of Aadhaar’s surveillance is targeted at the bottom of the pyramid, there will be limited bang for the buck. Surveillance is the need of the hour, but we need more CCTVs with microphones turned on in government offices than biometric devices in slums.

      Instantiation technology

      One. And zero. In the contemporary binary and digital age, we have lost faith in the old gods. Science and its instantiation, technology, have become the new gods. The cult of technology is intolerant of blasphemy. For example, Shekhar Gupta recently tweeted that part of the opposition to Aadhaar was because “left-libs detest science/tech”. Technology as ideology is based on some fundamental articles of faith: one, new technology is better than old technology; two, expensive technology is better than cheap technology; three, complex technology is better than simple technology; and four, all technology is empowering or at the very least neutral. Unfortunately, there is no basis in science for any of these articles of faith.

      Let me use a simple story to illustrate this. I was fortunate to serve as a member of a committee that the Department of Biotechnology established to finalise the Human DNA Profiling Bill, 2015, which was to be introduced in Parliament in the last monsoon session. (Aside: the language of the Act also has room for the database to expand into a national DNA database, circumventing 10 years of debate around the controversial DNA Profiling Bill, 2015.) The first version of this Bill that I read in January 2013 said that DNA profiling was a “powerful technology that makes it possible to determine whether the source of origin of one body substance is identical to that of another … without any doubt”. In other words, to quote K.P.C. Gandhi, a scientist from Truth Labs, “I can vouch for the scientific infallibility of using DNA profiling for carrying out justice.”

      Unfortunately, though, the infallible science is conducted by fallible humans. During one of the meetings, a scientist described the process of generating a biometric profile. The first step after the laboratory technician generated the profile was to compare it with her or his own profile, because during the loading of the machine with the DNA sample, some of the technician’s DNA could have contaminated it. This error would not be a possibility in a much older, cheaper and more rudimentary biometric technology, for example photography: a photographer developing a photograph in a darkroom does not have to ensure that his or her own image has not accidentally ended up on the negative. But the UIDAI is filled with die-hard techno-utopians; if you tell them that fingerprints will not work for those engaged in manual labour, they will say that they will then use iris-based biometrics. But again, complex technologies are more fragile and often come with increased risks. They may provide greater performance and features, but sometimes they are easier to circumvent. A gummy finger to fool a biometric scanner can be produced using glue and a candle, but faking a passport takes a lot of sophisticated technology. Therefore, it is important for us as a nation to give up our unquestioning faith in technology and start to debate the exact technological configurations of surveillance technology for different contexts and purposes.

      One. This time representing a monopoly. Prior to the UID project, nobody got paid when citizens identified themselves to the state. While the Act says that the UIDAI will get paid, it does not specify how much. Sooner or later, this cost of identification will be passed on to citizens and residents, and a consumer-service provider relationship will be established between the citizen and the state when it comes to identification. The UIDAI will become the monopoly provider of identification and authentication services in India that is trusted by the government. That sounds like a centrally planned communist state to me. Should the right wing not oppose the Act because it prevents the free market from working? Should the free market not pick the best technology and business model for identification and authentication? Would that not drive the cost of identification and authentication down and ensure a higher quality of service for citizens and residents?

      Competing providers

      Competing providers could also publish transparency reports regarding their compliance with data requests from law-enforcement and intelligence agencies; if this matters to consumers, providers will be punished by the market accordingly. The government could use mechanisms such as permanent and temporary bans and price regulation as disincentives against the creation of ghosts, so there would be a clear financial incentive to keep the database clean. Just as the government established a regulatory framework for digital certificates in the Information Technology Act, enabling e-commerce and e-governance, the Aadhaar Bill should ideally have done something similar and established an ecosystem for multiple actors to provide services in this two-sided market. For it is impossible for a “small government” to have the expertise and experience to run one of the world’s largest databases of biometric and transaction records securely in perpetuity.

      To conclude, I support the use of biometrics. I support government use of identification and authentication technology. I support the use of ID numbers in government databases. I support targeted surveillance to reduce corruption and protect national security. But I believe all these must be put in place with care and thought so that we do not end up sacrificing our constitutional rights or compromising the security of our nation state. Unfortunately, the Aadhaar project’s technological design and architecture is an unmitigated disaster and no amount of legal fixes in the Act will make it any better. Our children will pay a heavy price for our folly in the years to come. To quote the security guru Bruce Schneier, “Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy.”

      Will Aadhaar Act Address India’s Dire Need For a Privacy Law?

      by Nehaa Chaudhari last modified Apr 05, 2016 04:01 PM

      The article was published by Quint on March 31, 2016.


      Snapshot

      The passage of the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (hereinafter “the Act”) has drawn flak for the government from privacy advocates, academia and civil society, to name a few.

      To my mind, the opposition deserves its fair share of criticism (lacking so far), for its absolute failure to engage with and act as a check on the government in the passage of the Act, and the events leading up to it.

      The government’s introduction of the Act as a ‘money bill’ under Article 110 of the Constitution of India (“this/the Article”) is a mockery of the constitutional process. It renders redundant the role of the Rajya Sabha as a check on the functioning of the Lower House.

      Article 110 limits a ‘money bill’ to six specific instances, covering taxation, the government’s financial obligations, receipts and payments to and from the Consolidated Fund of India, and connected matters.

      The Act lies well outside the confines of the Article; the government’s action may attract the attention of the courts.

      Political One-Upmanship

      Finance Minister Arun Jaitley (left) listens to Reserve Bank of India (RBI) Governor Raghuram Rajan. (Photo: Reuters)

      In the past, the Supreme Court (“the Court”) has stepped into the domain of Parliament or the Executive when there was complete and utter disregard for India’s constitutional scheme. In recent constitutional history, this is perhaps most noticeable in the anti-defection cases (beginning with Kihoto Hollohan in 1992) and in the SR Bommai case in 1994, on the imposition of President’s rule in states.

      In hindsight, although India has benefited from the Court’s action in the Bommai and Hollohan cases, it is unlikely that the passage of the Aadhaar Act as a ‘money bill’, reprehensible as it is, meets the threshold required for the Court’s intervention in Parliamentary procedure.

      Besides the manner of its passage, the Act warrants scrutiny on substantive grounds as well.

      Instead, a part of the Aadhaar debate has involved political one-upmanship between the Congress and the BJP, pitting the former’s NIDAI Bill against the latter’s Aadhaar Act.

      While an academic comparison between the two is welcome, its use as a tool for political supremacy would be laughable, were it not deeply problematic, given the many serious concerns highlighted above.

      Better Than UPA Bill?

      The Act may have more privacy safeguards than the earlier UPA Bill. (Photo: iStockphoto)

      And while the Act may have more privacy safeguards than the earlier UPA Bill, critics have argued that they are not up to international standards and that they are plagued by opacity.

      Additionally, despite claims that the Act is a significant improvement over the UPA Bill, it fails to address concerns, including around the centralised storage of information, that were raised by civil society members and others.

      Perhaps most problematically, however, the Act takes away an individual’s control over her own information. Subsidies, government benefits and services are linked to the mandatory possession of an Aadhaar number (Section 7 of the Act), effectively negating the ‘freedom’ of voluntary enrollment (Section 3 of the Act). This directly contradicts the recommendations of the Justice AP Shah Committee, before whom the Unique Identification Authority of India had earlier stated that enrollment in Aadhaar was voluntary.

      To make matters worse, the individual does not have the authority to correct, modify or alter her information; this lies, instead, with the UIDAI alone (Section 31 of the Act). And the sharing of such personal information does not require a court order in all cases.


It may be authorised by executive authorities under the vague, ill-understood concept of ‘national security’ (Section 33(2) of the Act), which the Act does not define. The recent events at JNU are a reminder of the dangers of leaving ‘national security’ open to interpretation.


These recent events around Aadhaar have only underscored the urgent need for comprehensive privacy legislation in India, and for an overhaul of our data protection laws to meet our constitutional commitments and international standards.

Meanwhile, constitutional challenges to the Aadhaar scheme are pending before the Supreme Court, and the stage is now set for a challenge to the Act itself. The Court’s verdict may well decide the future of the legislation; the BJP’s victory may prove short-lived.

      Sexual Harassment at ICANN

      by Padmini Baruah last modified Apr 06, 2016 02:40 PM
Padmini Baruah represented the Centre for Internet & Society at ICANN in March 2016. In a submission to ICANN, she calls upon the ICANN Board to implement a system for investigating cases of sexual harassment.

On Sunday, 6 March 2016, at about 10 am, during the GNSO working session being conducted in the Diamant room, I was sexually harassed by someone from the private sector constituency named Khaled Fattal. He approached me, pulled at my name tag, and passed inappropriate remarks. I felt that my space and safety as a young woman in the ICANN community were at stake.

I had incidentally been in discussion with the ICANN Ombudsman on developing a clear and coherent sexual harassment policy and procedure for ICANN’s public meetings. Needless to say, this incident pushed me to pursue with increased vigour what had hitherto been a mere academic interest. I was amazed, first, that the office of the Ombudsman was staffed only by two white men. Though initially inhibited by that very fact, I made two points before them:

      1. With respect to action on my individual case.
      2. With respect to the development of policy in general.

      I would like to put on record that the ombudsman office was extremely sympathetic and gave me a thorough hearing. They assured me that my individual complaint would be recorded, and sought to discuss the possibility of me raising a public statement with respect to policy, as they believed that the Board would be likely to take this suggestion up from a member of the community. I was also informed, astoundingly, that this was the first harassment case reported in the history of ICANN.

Then, as a newcomer to the community, I ran this idea of making a public statement (by no means an easy task, given the stigma attached to being branded a victim of a sexual crime) by certain senior people within ICANN who had assured me that they would take my side in this regard. To my dismay, I faced two strong strands of victim-blaming and intimidation. I was told, in some cases by extremely senior, well-respected and prominent women in the ICANN community, that raising this issue would demean my credibility, status and legitimacy in ICANN, that my work would lose importance, and that I would “...forever be branded as THAT woman.” My experience was also trivialised in offhand, casual remarks such as “This happened because you are so pretty” and “Oh, you filed a complaint, not against me I hope, ha ha”, all of which came from people very high up in the ICANN hierarchy. I was also asked if I was looking for money out of this. Click here to read the full statement made to ICANN.


      Aadhaar: Still Too Many Problems

      by Pranesh Prakash last modified Apr 06, 2016 03:31 PM
While one wishes to welcome the government’s attempt to bring Aadhaar within a legislative framework, the fact is that too many problems remain unaddressed for one to be optimistic.

      The article was published by Livemint on March 7, 2016.


      The Aadhaar Bill has been introduced as a money bill, even though it doesn’t qualify as such under Article 110 of the Constitution. If the Speaker agrees to this, it will render the Rajya Sabha toothless in this matter, and will weaken our democracy. The government should reintroduce it as an ordinary legislative bill, which is what it is.

While the government has in the past argued before the Supreme Court that Aadhaar is voluntary, Section 7 of the bill allows the government to mandate an Aadhaar number (or an application for one) as a prerequisite for obtaining certain subsidies, benefits and services. This undermines its arguments before the Supreme Court, which had led the court to pass orders holding that Aadhaar should not be made mandatory. The government will now need to argue that, rather than contravening the apex court’s orders, this move has removed the rationale for them.

      Interestingly, the Bharatiya Janata Party (BJP)-led National Democratic Alliance (NDA) government seems to have done a U-turn on the issue of the unique identification number not being proof of citizenship or domicile. The previous Congress-led United Progressive Alliance (UPA) government never meant the Aadhaar number to be proof of citizenship or domicile. This was attacked by the Yashwant Sinha-chaired standing committee on finance, which feared that illegal immigrants would get Aadhaar numbers. Now, the BJP and the NDA seem to be in agreement with the original UPA vision of Aadhaar.

Importantly, there is very strong language on the privacy and confidentiality of the information held by the Unique Identification Authority of India (UIDAI). Section 29(1), for instance, says that no biometric information will be shared for any reason whatsoever, or used for any purpose other than Aadhaar number generation and authentication. However, that provision is wholly undermined by Section 33, which says that “in the interest of national security”, the biometric information may be accessed if authorized by a joint secretary. This will only fan the fears of those who have argued that the real rationale for Aadhaar was not, in fact, delivery of services, but the creation of a national database of biometric data available to government snoops.


      Further, there are no remedies available for governmental abuse of this provision.

      Lastly, in terms of privacy, the concern of those people who have been opposing Aadhaar is not just that the biometric and other identity information may be leaked to private parties, but also that having a unique Aadhaar number helps private parties to combine and use other databases that are linked with Aadhaar numbers in a manner that is not within the subject’s control. This is not at all addressed in this bill, and we need a robust data protection law in order to do that.

      There are some other crucial details that the law doesn’t address: Is user consent, to be taken by third parties that use the UID database for authentication, needed for each instance of authentication, or would a general consent hold forever? How can consent be revoked?

      There were many other objections that were raised against the Aadhaar scheme that have not been addressed by the government. For instance, in a recent article in the Economic and Political Weekly, Hans Varghese Mathews points out that going by the test data UIDAI made available in 2012, for a population of 1.3 billion people, the incidence of false positives—the probability of the identities of two people matching—is 1/112.

      This is far too high a ratio to be acceptable.
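The scale effect behind this concern can be illustrated with a back-of-the-envelope sketch. This is not Mathews' actual model from the EPW article; the function below and the per-comparison error rate used in it are purely illustrative assumptions, meant only to show why even a minuscule per-pair false match rate produces a large absolute number of false matches when de-duplication must compare identities across a billion-strong population.

```python
# Illustrative sketch (NOT Mathews' EPW analysis): if biometric de-duplication
# can falsely match any two distinct people with probability p, the expected
# number of falsely matching pairs among n people is the number of pairs,
# n*(n-1)/2, multiplied by p.

def expected_false_matches(n: int, p: float) -> float:
    """Expected number of falsely matching pairs among n people,
    assuming independent comparisons with per-pair false match rate p."""
    pairs = n * (n - 1) / 2
    return pairs * p

if __name__ == "__main__":
    n = 1_300_000_000   # India's approximate population
    p = 1e-15           # hypothetical per-comparison false match rate
    # Even at this tiny hypothetical rate, the pair count (~8.45e17)
    # dominates, yielding hundreds of expected false-positive pairs.
    print(f"{expected_false_matches(n, p):.0f} expected false-positive pairs")
```

The point of the sketch is that the number of pairwise comparisons grows quadratically with the enrolled population, so an error rate that looks negligible per comparison can still be unacceptable in aggregate.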

      Actual data from the field in Andhra Pradesh—of people who were unable to claim rations under the public distribution system (PDS)—paints a worse picture. A survey commissioned by the Andhra Pradesh government said 48% of respondents pointed to Aadhaar-related failures as the cause of their inability to claim rations.

So, even if Aadhaar numbers were no longer issued to Lord Hanuman (Rajasthan), to dogs (e.g., Tommy Singh, a mutt in Madhya Pradesh), or against photos of a tree (New Delhi), the system might not prove usable in a country of India’s size, given the capabilities of fingerprint machines. As my colleague Sunil Abraham notes, the law cannot fix technological flaws.

      So, while one wishes one could welcome the government’s attempt to bring Aadhaar within a legislative framework, the fact is there are too many problems that still remain unaddressed for one to be optimistic.

      Pranesh Prakash is policy director at the Centre for Internet and Society, a think tank.

      Adoption of Standards in Smart Cities

      by Prasad Krishna last modified Apr 11, 2016 03:03 AM

      PDF document icon Adoption of Standards in Smart Cities.pdf — PDF document, 285 kB (292641 bytes)

      FAQ on the Aadhaar Project and the Bill

      by Elonnai Hickok, Vanya Rakesh, and Vipul Kharbanda — last modified Apr 13, 2016 02:06 PM
This FAQ attempts to address the key questions regarding the Aadhaar/UIDAI project and the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 (henceforth, Bill). This is neither a comprehensive list of questions, nor does it contain fully developed answers. We will continue to add questions to this list, and edit/expand the answers, based on our ongoing research. We will be grateful to receive your comments, criticisms, evidence, edits, suggestions for new answers, and any other responses. These can be shared either as comments in the document hosted on Google Drive, or via tweets sent to the information policy team at @CIS_InfoPolicy.


      To comment on and/or download the file, click here.



      Aadhaar Act and its Non-compliance with Data Protection Law in India

      by Vanya Rakesh last modified Apr 18, 2016 11:43 AM
      This post compares the provisions of the Aadhaar Act, 2016, with India's data protection regime as articulated in the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.


      Download the file: PDF.


Amidst all the hue and cry, the Aadhaar Act, 2016, introduced with the aim of providing statutory backing to the use of Aadhaar, was passed by the Lok Sabha in its original form on March 16, 2016, after the House rejected the recommendations made by the Rajya Sabha. Though the Act has been vehemently opposed on several grounds, one concern that has been voiced is the privacy and protection of the demographic and biometric information collected for the purpose of issuing the Aadhaar number.

In India, for the purpose of data protection, a body corporate is subject to section 43A of the Information Technology Act, 2000 ("IT Act") and the rules made under it, i.e., the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 ("IT Rules"). Section 43A holds a body corporate that possesses, deals with or handles any sensitive personal data or information, and is negligent in implementing and maintaining reasonable security practices, resulting in wrongful loss or wrongful gain to any person, liable to compensate the affected person and pay damages.

Rule 3 of the IT Rules lists the categories of personal information that amount to sensitive personal data or information, and includes biometric information. The Aadhaar Act, too, states under section 30 that the biometric information collected shall be deemed "sensitive personal data or information", with the same meaning as assigned to it in clause (iii) of the Explanation to section 43A of the IT Act; biometric data collected under the Aadhaar scheme will thus receive the same level of protection as other sensitive personal data under Indian law. This implies that the agencies contracted by the UIDAI (and not the UIDAI itself) to perform functions like collection and authentication, such as the Registrars, Enrolling Agencies and Requesting Entities, which meet the criteria of being a 'body corporate' as defined in section 43A, could be held responsible under that provision and the Rules for ensuring the security of an Aadhaar holder's data and information, and could potentially be held liable for a breach of information that results in loss to an individual, if it can be proven that they failed to implement reasonable security practices and procedures.

      In light of the fact that some actors in the Aadhaar scheme could be held accountable and liable under section 43A and associated Rules, this article compares the regulations regarding data security as found in section 43A and IT Rules 2011 with the provisions of Aadhaar Act 2016, and discusses the implications of the differences, if any.

      1. Compensation and Penalty

      Section 43A: Section 43A of the IT Act, 2000 (Amended in 2008) provides for compensation for failure to protect data. It states that a body corporate, which is possessing, dealing or handling any sensitive personal data or information, and is negligent in implementing and maintaining reasonable security practices resulting in wrongful loss or wrongful gain to any person, is liable to compensate the affected person and pay damages not exceeding five crore rupees.

Aadhaar Act: Chapter VII of the Act provides for offences and penalties, but does not provide for damages to the affected party.

      • Section 37 states that intentional disclosure or dissemination of identity information, to any person not authorised under the Aadhaar Act, or in violation of any agreement entered into under the Act, will be punishable with imprisonment up to three years or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).
      • Section 38 prescribes penalty with imprisonment up to three years and a fine not less than ten lakh rupees in case any of the acts listed under the provision are performed without authorisation from the UIDAI.
      • Section 39 prescribes penalty with imprisonment for a term which may extend to three years and fine which may extend to ten thousand rupees for tampering with data in Central Identities Data Repository.
      • Section 40 holds a requesting entity liable for penalty for use of identity information in violation of Section 8 (3) with imprisonment up to three years and/or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).
      • Section 41 holds a requesting entity or enrolling agency liable for penalty for violation of Section 8 (3) or Section 3 (2) with imprisonment up to one year and/or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).
      • Section 42 provides general penalty for any offence against the Act or regulations made under it, for which no specific penalty is provided, with imprisonment up to one year and/or a fine up to twenty five thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).

Though the Aadhaar Act prescribes penalties for unauthorised access, use or other acts contravening the Act or Regulations, it does not provide for compensation to the affected individual in case of a violation.

      2. Privacy Policy

IT Rules: Rule 4 requires a body corporate to provide a privacy policy on its website, which is easily accessible and which sets out the type and purpose of the personal and sensitive personal information collected and used, and the reasonable security practices and procedures followed.

Aadhaar Act: Though in practice the contracting agencies (the body corporates in the Aadhaar ecosystem) may maintain a privacy policy on their websites, the Aadhaar Act does not require a privacy policy of the UIDAI or other actors.

      Implications: Because contracting agencies will be covered by the IT Rules if they are 'body corporates', the requirement to maintain a privacy policy will be applicable to them.

      3. Consent

IT Rules: Rule 5 requires that, prior to the collection of sensitive personal data, the body corporate must obtain consent, in writing or by fax, regarding the purpose of usage of such information.

      Aadhaar Act: The Act is silent regarding consent being acquired in case of the enrolling agency or registrars. However, section 8 provides that any requesting entity will take consent from the individual before collecting his/her Aadhaar information for authentication purposes, though it does not specify the nature (written/through fax).

      Implications: If the enrolling agency is a body corporate, they will also be required to take consent prior to collecting and processing biometrics. It is possible that since the Aadhaar Act envisages a scheme which is quasi-compulsory in nature, a consent provision was deliberately left out. This circumstance would give the enrolling agencies an argument against taking consent, by saying that the Aadhaar Act is a specific legislation which is also later in point of time than the IT Rules, and a deliberate omission of consent coupled with the compulsory nature of the Aadhaar scheme would mean that they are not required to take consent of the individuals before enrolment.

      4. Collection Limitation

      IT Rules: Rule 5 (2) requires that a body corporate should only collect sensitive personal data if it is connected to a lawful purpose and is considered necessary for that purpose.

Aadhaar Act: Section 3(1) of the Act states that every resident shall be entitled to obtain an Aadhaar number by submitting his demographic and biometric information and undergoing the process of enrolment.

      5. Notice

      IT Rules: Rule 5(3) requires that while collecting information directly from an individual, the body corporate must provide the following information:

      • The fact that information is being collected
      • The purpose for which the information is being collected
      • The intended recipients of the information
      • The name and address of the agency that is collecting the information
      • The name and address of the agency that will retain the information

Aadhaar Act: Section 3 of the Act states that at the time of enrolment and collection of information, the enrolling agency shall inform the individual of how their information will be used, the types of entities the information will be shared with, and their right to access their information, along with the procedure for doing so. However, the Act is silent regarding notice of the name and address of the agency collecting and retaining the information.

      6. Retention Limitation

IT Rules: Rule 5(4) requires that a body corporate retain sensitive personal data only for as long as it takes to fulfil the stated purpose, or as otherwise required under law.

      Aadhaar Act: The Act is silent regarding this and does not mention the duration for which the personal information of an individual shall be retained by the bodies/organisations contracted by UIDAI.

      7. Purpose Limitation

      IT Rules: Rule 5(5) requires that information must be used for the purpose that it was collected for.

Aadhaar Act: Section 57 contravenes this, stating that the Act will not prevent the use of the Aadhaar number for other purposes under law by the State or other bodies. Section 8 of the Act states that, for the purpose of authentication, a requesting entity is required to take consent before collection of Aadhaar information and to use it only for authentication with the CIDR. Section 29 of the Act states that the core biometric information collected will not be shared with anyone for any reason, and must not be used for any purpose other than generation of Aadhaar numbers and authentication. Also, the identity information available with a requesting entity will not be used for any purpose other than what is specified to the individual, nor will it be shared further without the individual's consent.


      8. Right to Access and Correct

IT Rules: Rule 5(6) requires a body corporate to provide individuals with the ability to review the information they have provided, and to access and correct their personal or sensitive personal information.

Aadhaar Act: The Act provides under section 3 that at the time of enrolment, the individual must be informed of the existence of a right to access information, the procedure for making requests for such access, and the details of the person or department in charge to whom such requests can be made. Section 28 of the Act provides that every Aadhaar number holder may access his identity information, except core biometric information. Section 32 provides that every Aadhaar number holder may obtain his authentication record. Also, if the demographic or biometric information of any Aadhaar number holder changes, is lost or is found to be incorrect, they may request the UIDAI to make changes to their record in the CIDR.

      9. Right to 'Opt Out' and Withdraw Consent

      IT Rules: Rule 5(7) requires that the individual must be provided with the option of 'opting out' of providing data or information sought by the body corporate. Also, they must have the right to withdraw consent at any point of time.

Aadhaar Act: The Aadhaar Act does not provide an opt-out provision, nor an option to withdraw consent at any point of time. Section 7 of the Act in fact implies that once the Central or State government makes Aadhaar authentication mandatory for receiving a benefit, the individual has no option but to apply for an Aadhaar number. The only concession is that if an Aadhaar number has not been assigned to an individual, s/he would be offered some alternative viable means of identification for receiving the benefit.

      10. Grievance Officer

      IT Rules: Rule 5(9) requires that body corporate must designate a grievance officer for redressal of grievances, details of which must be posted on the body corporate's website and grievances must be addressed within a month of receipt.

Aadhaar Act: The Aadhaar Act does not provide for any such mechanism for grievance redressal by the registrars, enrolling agencies or requesting entities. However, since the contracting agencies will also be covered by the IT Rules if they are 'body corporates', the requirement to designate a grievance officer would apply to them as well.

      11. Disclosure with Consent, Prohibition on Publishing and Further Disclosure

IT Rules: Rule 6 requires that a body corporate obtain consent before disclosing sensitive personal data to any third party, except to Government agencies, on receipt of a written request, for the purposes of verification of identity, or the prevention, detection and investigation of offences. Further, the body corporate or any person on its behalf shall not publish the sensitive personal information, and a third party receiving such information from the body corporate or any person on its behalf shall not disclose it further.

      Aadhaar Act: Regarding the requesting entities, the Act provides that they shall not disclose the identity information except with the prior consent of the individual to whom the information relates. The Act also states that the Authority shall take necessary measures to ensure confidentiality of information against disclosures. However, as an exception under section 33, the UIDAI may reveal identity information, authentication records or any information in the CIDR following a court order by a District Judge or higher. The Act also allows disclosure made in the interest of national security following directions by a Joint Secretary to the Government of India, or an officer of a higher rank, authorised for this purpose. The Act is silent on the issue of obtaining consent of the individual under these exceptions. Additionally, the Act also states that the Aadhaar number or any core biometric information collected or created regarding an individual under the Act shall not be published, displayed or posted publicly, except for the purposes specified by regulations.

      12. Requirements for Transfer of Sensitive Personal Data

IT Rules: Rule 7 allows a body corporate to transfer sensitive personal data to another jurisdiction only if that country ensures the same level of protection, and only where the transfer is necessary for the performance of a lawful contract between the body corporate (or any person on its behalf) and the provider of the information, or where the provider has consented to the transfer.

Aadhaar Act: The Act is silent regarding the transfer of personal data to another jurisdiction by any of the contracting bodies, such as the Registrars, Enrolling Agencies or Requesting Entities. If these agencies satisfy the requirement of being "body corporates" as defined under section 43A, the IT Rules' requirement regarding transfer of data to another jurisdiction would apply to them. Even so, considering the sensitive nature of the data involved, the absence of any prohibition on transferring data to another jurisdiction under the Aadhaar Act is a serious lacuna.

      13. Security of Information

IT Rules: Rule 8 requires the body corporate to secure information in accordance with the ISO 27001 standard or any other best practices notified by the Central Government. These practices must be audited annually, or whenever the body corporate undertakes a significant upgradation of its processes and computer resources.

Aadhaar Act: Section 28 of the Act states that the UIDAI must ensure the security and confidentiality of identity information and authentication records. It also states that the Authority shall adopt and implement appropriate technical and organisational security measures, and ensure that the same are imposed through agreements or arrangements with its agents, consultants, advisors or other persons. However, it does not specify which standards or measures must be adopted by the actors in the Aadhaar ecosystem for ensuring the security of information, though it can be argued that if the contractors employed by the UIDAI are body corporates, the standards prescribed under the IT Rules would be applicable to them.

      Implications of the Differences for Body Corporates in Aadhaar Ecosystem

      An analysis of the Rules in comparison to the data protection measures under the Aadhaar Act shows that the requirements regarding protection of personal or sensitive personal information differ and are not completely in line with each other.

Though the Aadhaar Act takes into account provisions regarding consent of the individual, notice, restrictions on sharing, etc., it is silent on many core measures, such as transfer of information across jurisdictions, consent before collection of information, and adoption of security measures for the protection of information, which a body corporate in the Aadhaar ecosystem must adopt to comply with section 43A of the IT Act. It is therefore important that the bodies collecting, handling and sharing personal information under the Aadhaar Act also adhere to section 43A and the IT Rules, 2011. However, the simultaneous applicability of the Aadhaar Act, section 43A and the IT Rules would lead to ambiguity in the interpretation and implementation of the law. The differences must be duly taken into account, and more clarity is required to hold all the bodies under this legislation, such as the enrolling agencies, Registrars and Requesting Entities, accountable under the correct provisions of law. The problem of having two separate legislations governing data protection standards in the Aadhaar scheme seems to have been overlooked. A harmonised and overarching privacy legislation is critical to avoid ambiguity in the applicability of data protection standards, and would also address many privacy concerns associated with the scheme.

      Appendix I

The Rajya Sabha had proposed five amendments to the Aadhaar Bill, 2016, which are as follows:

      i. Opt-out clause: A provision to allow a person to "opt out" of the Aadhaar system, even if already enrolled.

      ii. Voluntary: To ensure that if a person chooses not to be part of the Aadhaar system, he/she would be provided "alternate and viable" means of identification for purposes of delivery of government subsidy, benefit or service.

      iii. Amendment restricting the use of Aadhaar numbers only for targeting of government benefits or service and not for any other purpose.

      iv. Amendment seeking change of the term "national security" to "public emergency or in the interest of public safety" in the provision specifying situations in which disclosure of identity information of an individual to certain law enforcement agencies can be allowed.

v. Oversight Committee: The oversight committee, which would oversee possible disclosures of information, should include either the Central Vigilance Commissioner or the Comptroller and Auditor-General.


      Appendix II - Section 43A: Compensation for Failure to Protect Data

      Where a body corporate, possessing, dealing or handling any sensitive personal data or information in a computer resource which it owns, controls or operates, is negligent in implementing and maintaining reasonable security practices and procedures and thereby causes wrongful loss or wrongful gain to any person, such body corporate shall be liable to pay damages by way of compensation to the person so affected.

      For the purposes of this section:

      • "body corporate" means any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities;
      • "reasonable security practices and procedures" means security practices and procedures designed to protect such information from unauthorised access, damage, use, modification, disclosure or impairment, as may be specified in an agreement between the parties or as may be specified in any law for the time being in force and in the absence of such agreement or any law, such reasonable security practices and procedures, as may be prescribed by the Central Government in consultation with such professional bodies or associations as it may deem fit;
      • "sensitive personal data or information" means such personal information as may be prescribed by the Central Government in consultation with such professional bodies or associations as it may deem fit.'.


      Aadhaar Act 43A & IT Rules

      by Vanya Rakesh last modified Apr 17, 2016 02:23 PM

      Aadhaar Act, 43A & IT Rules_final draft.pdf — PDF document, 331 kB (339414 bytes)

      The Last Chance for a Welfare State Doesn’t Rest in the Aadhaar System

      by Sumandro Chattapadhyay last modified Apr 19, 2016 01:18 PM
      Boosting welfare is the message: that is how Aadhaar is being presented in India. The Aadhaar system as a medium, however, is one that enables tracking, surveillance, and data monetisation. This piece by Sumandro Chattapadhyay was published in The Wire on April 19, 2016.

       

      Originally published in and cross-posted from The Wire.


      Once upon a time, a king desired that his parrot be taught all the ancient knowledge of the kingdom. The priests started feeding the pages of the great books to the parrot with much enthusiasm. One day, the king asked the priests if the parrot's education was complete. The priests poked the belly of the parrot, but it made no sound; only the rustle of undigested pages inside its belly could be heard. The priests declared that the parrot was indeed a learned one now.

      The fate of the welfare system in our country is quite similar to this parrot from Tagore’s parable. It has been forcefully fed identification cards and other official documents (often four copies of the same) for years, and always with the same justification of making it more effective and fixing the leaks. These identification regimes are in effect killing off the welfare system. And some may say that that has been the actual plan in any case.

      The Aadhaar number has been recently offered as the ‘last chance’ for the ailing welfare system – a last identification regime that it needs to gulp down to survive. This argument wilfully overlooks the acute problems with the Aadhaar project.

      Firstly, the ‘last chance’ for a welfare state in India is not provided by implementing a new and improved identification regime (Aadhaar numbers or otherwise), but by enabling citizens to effectively track, monitor, and ensure delivery of welfare, services, and benefits. This ‘opening up’ of the welfare bureaucracy has been most effectively initiated by the Right to Information Act. Instead of a centralised biometrics-linked identity verification platform, which gives the privilege of tracking and monitoring welfare flows only to a few expert groups, an effective welfare state requires the devolution of such privilege and responsibility.

      We should harness the tracking capabilities of electronic financial systems to disclose how money from the Consolidated Fund of India travels across state agencies and departmental levels. Instead, the Aadhaar system effectively stacks up a range of entry barriers to accessing welfare – from malfunctioning biometric scanners, to connectivity problems, to the burden of keeping one's fingerprint digitally legible under all labouring and algorithmic circumstances.

      Secondly, authentication of welfare recipients by Aadhaar number neither makes the welfare delivery process free of techno-bureaucratic hurdles, nor does it exorcise corruption. Anumeha Yadav has recently documented the emerging 'unrest at the ration shop' across Rajasthan, as authentication processes face technical and connectivity delays, people get 'locked out' of public services for not having an Aadhaar number or for having one with incorrect demographic details, and no mechanisms exist to provide rapid and definitive recourse.

      RTI activists at the Satark Nagrik Sangathan have highlighted that the Delhi ration shops, using Aadhaar-based authentication, maintain only two columns of data to describe people who have come to the shop – those who received their ration, and those who did not (without any indication of the reason). This leads to erasure-by-design of evidence of the number of welfare-seekers who are excluded from welfare services when the Aadhaar-based authentication process fails (for valid reasons, or otherwise).
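The erasure-by-design described above can be illustrated with a minimal sketch (all names and records here are hypothetical, not drawn from the Satark Nagrik Sangathan data): once a register stores only a received/not-received flag, the reason for each failed authentication is unrecoverable from the data itself.

```python
# Hypothetical sketch of a two-column ration register.

# What actually happened at the shop: each visit has an outcome and a reason.
visits = [
    {"name": "A", "outcome": "received", "reason": None},
    {"name": "B", "outcome": "denied", "reason": "fingerprint mismatch"},
    {"name": "C", "outcome": "denied", "reason": "no connectivity"},
]

# What the two-column register records: only name and received / not received.
register = [(v["name"], v["outcome"] == "received") for v in visits]

# The reason for every failed authentication is gone by design:
denied = [name for name, ok in register if not ok]
print(denied)  # the register shows who was denied, but never why
```

Whether a denial was a genuine exclusion of an impostor or an authentication failure cannot be reconstructed from such a register after the fact.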

      Reetika Khera has made it very clear that using Aadhaar Payments Bridge to directly transfer cash to a beneficiary’s account, in the best case scenario, may only take care of one form of corruption: deception (a different person claiming to be the beneficiary). But it does not address the other two common forms of public corruption: collusion (government officials approving undue benefits and creating false beneficiaries) and extortion (forceful rent seeking after the cash has been transferred to the beneficiary’s account). Evidently, going after only deception does not make much sense in an environment where collusion and extortion are commonplace.

      Thirdly, the 'relevant privacy question' for Aadhaar is not limited to how UIDAI protects the data collected by it, but extends to the usage of Aadhaar numbers across the public and private sectors. The privacy problem created by Aadhaar numbers begins, but surely does not end, with the internal data management procedures and responsibilities of the UIDAI.

      On one hand, the Aadhaar Bill 2016 has reduced the personal data sharing restrictions of the NIAI Bill 2010, and has allowed for sharing of all data except core biometrics (fingerprints and iris scan) with all agencies involved in authentication of a person through her/his Aadhaar number. These agencies have been asked to seek consent from the person who is being authenticated, and to inform her/him of the ways in which the provided data (by the person, and by UIDAI) will be used by the agency. In careful wording, the Bill only asks the agencies to inform the person about “alternatives to submission of identity information to the requesting entity” (Section 8.3) but not to provide any such alternatives. This facilitates and legalises a much wider collection of personal demographic data for offering of services by public agencies “or any body corporate or person” (Section 57), which is way beyond the scope of data management practices of UIDAI.

      On the other hand, the Aadhaar number is being seeded into all government databases – from lists of HIV patients, of rural citizens being offered 100 days of work, of students getting scholarships meant for specific social groups, to lists of people with bank accounts. Now in some sectors, such as banking, inter-agency sharing of data about clients is strictly regulated. But we increasingly have non-financial agencies playing crucial roles in the financial sector – from mobile wallets to peer-to-peer transactions to innovative credit ratings. Seeding of Aadhaar into all government and private databases would allow for easy and direct joining up of these databases by anyone who has access to them, and not only by security agencies.
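A minimal sketch of this joining-up, with entirely hypothetical data: once two otherwise unrelated databases are keyed by the same Aadhaar number, combining them is a single lookup, requiring no cooperation from UIDAI or either database operator.

```python
# Hypothetical databases, both seeded with the Aadhaar number as key.
health_db = {"1234": {"condition": "HIV+"}}
bank_db = {"1234": {"account": "SB-001"},
           "5678": {"account": "SB-002"}}

# Anyone with access to both can join them directly on the shared key:
profile = {aadhaar: {**bank_db.get(aadhaar, {}), **health_db.get(aadhaar, {})}
           for aadhaar in bank_db}

print(profile["1234"])  # combined record: bank account plus health condition
```

Without a shared identifier, such linkage would require fuzzy matching on names and addresses; with one, it is exact and trivial.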

      When it becomes publicly acceptable that the money bill route was a 'remedial' instrument to put the Rajya Sabha 'back on track', one cannot but wonder what was being remedied by avoiding a public debate on the draft bill before it was presented in the Lok Sabha. The answer is simple: welfare is the message, surveillance is the medium.

      Acceptance and adoption of any medium requires a message, a content. The users are interested in the message. The message, however, is not the business. Think of Free Basics. Facebook wants people with no or limited access to the internet to enjoy parts of the internet at zero data cost. Facebook does not provide the content that the users consume on such an internet. The content is created by the users themselves, and also provided by other companies. Facebook owns and controls the medium, and makes money out of all content, including interactions, passing through it.

      The UIDAI has set up a biometric data bank and related infrastructure to offer authentication-as-a-service. As the Bill clarifies, almost all agencies (public or private, national or global) can use this service to verify the identity of Indian residents. Unlike Facebook, the content of these services does not flow through the Aadhaar system. Nonetheless, Aadhaar keeps track of all 'authentication records', that is, records of whose identity was authenticated by whom, when, and where. This database is a gold (data) mine for security agencies in India and elsewhere. Further, as more agencies use authentication based on Aadhaar numbers, it becomes easier for them to combine and compare databases with other agencies doing the same, by linking each line of transaction across databases using Aadhaar numbers.

      Welfare is the message that the Aadhaar system is riding on. The message is only useful for the medium as far as it ensures that the majority of the user population are subscribing to it. Once the users are enrolled, or on-boarded, the medium enables flow of all kinds of messages, and tracking and monetisation (perhaps not so much in the case of UIDAI) of all those flows. It does not matter if the Aadhaar system is being introduced to remedy the broken parliamentary process, or the broken welfare distribution system. What matters is that the UIDAI is establishing the infrastructure for a universal surveillance system in India, and without a formal acknowledgement and legal framework for the same.

       

      RTI regarding Smart Cities Mission in India

      by Vanya Rakesh last modified Apr 21, 2016 02:12 AM

      RTI.pdf — PDF document, 11552 kB (11830139 bytes)

      RTI regarding Smart Cities Mission in India

      by Paul Thottan — last modified Apr 21, 2016 02:25 AM
      Centre for Internet & Society (CIS) had filed an RTI application on 3 February 2016 before the Ministry of Urban Development (MoUD) regarding the Smart Cities Mission in India. The RTI sought information regarding the role of the various foreign governments, private industry and multilateral bodies that will provide technical and financial assistance for this project, and information on Government agreements regarding PPPs for financing the project.

      A response to the RTI is here.


      The RTI application sought the following information:

      1. The various government, private industry and civil society actors involved in the Smart Cities Mission.
      2. The various agreements the Government has undertaken through PPPs for financing the mission.
      3. The role of private companies in this project.
      4. The process for selecting the cities for this mission and the ministry responsible for this task.
      5. The various international organisations, foreign governments and multilateral bodies that will provide technical and financial assistance for this project.

      The MoUD sent its reply to the RTI application and the response is as follows:

      • With reference to the first query, the answer provided was that the mission statement and guidelines are available on the Mission's website – smartcities.gov.in. This mission statement essentially envisages the role of citizens/citizen groups such as Resident Welfare Associations, Taxpayers Associations, Senior Citizens and Slum Dwellers Associations etc., apart from the Government of India, States, Union Territories and urban local bodies.
      • Regarding information about agreements for the purpose of financing the project, the response states that the Ministry would facilitate the execution of MoUs between foreign agencies and States/UTs for assistance under this mission. The agreements executed so far are MoUs with the United States Trade and Development Agency (USTDA) and the French Agency for Development (AFD), covering the States/UTs of Andhra Pradesh, Uttar Pradesh, Rajasthan, Maharashtra, Chandigarh and Puducherry. The Ministry has also provided us with copies of the same, which are summarised below. It further states that various countries like Spain, Canada, Germany, China, Singapore, the UK and South Korea have also shown interest in collaborating with the Ministry for the development of Smart Cities.
      • CIS sought the documents relating to the role of private actors in this field. This information could not be provided by the Department since it was not available with them. The application has been forwarded to the SC-III Division, which is to provide the information directly to us.
      • As regards the fourth query, the information provided states that the role of the Government, States/UTs and urban local bodies is envisaged in para 13 of the Smart Cities Mission Statement, available at smartcities.gov.in.
      • With respect to the query regarding the foreign actors involved, the information provided states that the documents relating to the involvement of the same are scattered in different files. Compilation of such information would divert the limited resources of the Public Authority disproportionately. Another application must be filed if any specific information is required.

      Copies of several MoUs signed between Foreign Development Agencies and States (for the respective cities) that were shared with us are:

      • Memorandum of Understanding between the United States Trade and Development Agency (USTDA) and the Government of Andhra Pradesh of the Republic of India on Cooperation to support the development of Smart Cities in Andhra Pradesh, namely Visakhapatnam.
      • Memorandum of Understanding between the United States Trade and Development Agency (USTDA) and the Government of Rajasthan of the Republic of India on Cooperation to support the development of Smart Cities in Rajasthan, namely Ajmer.
      • Memorandum of Understanding between the United States Trade and Development Agency (USTDA) and the Government of Uttar Pradesh of the Republic of India on Cooperation to support the development of Smart Cities in Uttar Pradesh, namely Allahabad.
      • Memorandum of Understanding between the Agence Francaise De Developpement and the Government of the Union Territory of Chandigarh of the Republic of India on Technical Cooperation in the field of Sustainable Urban Development.
      • Memorandum of Understanding between the Agence Francaise De Developpement and the Government of Maharashtra on Technical Cooperation in the field of Sustainable Urban Development.
      • Memorandum of Understanding between the Agence Francaise De Developpement and the Government of the Union Territory of Puducherry of the Republic of India on Technical Cooperation in the field of Sustainable Urban Development.

      Key clauses under the MoU between the United States Trade and Development Agency (USTDA) and the governments of Andhra Pradesh, Rajasthan and Uttar Pradesh are:

      • The MoU undertaken by the USTDA for the development of Visakhapatnam, Allahabad and Ajmer clearly establishes that the document only cements the intention of the body to assist in the development of these cities and funding must be addressed separately.
      • The USTDA intends to contribute specific funding for feasibility studies, study tours, workshops/training, and any other projects mutually determined, in furtherance of this interest. The USTDA will also fund advisory services for the same.
      • The USTDA will seek to bring in other US government agencies such as the Department of Commerce, the US Export Import Bank and other trade and economic agencies to encourage US-India infrastructure development cooperation and support the development of smart cities in Vishakhapatnam, Allahabad and Ajmer.
      • One of the key points the USTDA stresses is the creation of a Smart Solutions for Smart Cities Reverse Trade Mission, where Indian delegates will get a chance to showcase their methodologies and inventions in the United States.
      • The MoU also talks about involving industry organisations in the development of Smart Cities, to address important aviation and energy related infrastructure connected to developing smart cities.
      • The respective State Governments of the cities will provide resources for the development of these smart cities, including technical information and data related to smart cities planning, as well as staff, logistical and travel support; state budgetary resources will be allocated accordingly.

      Key clauses under the MoU between the Agence Francaise De Developpement (AFD) and the governments of Maharashtra, Chandigarh and Puducherry are:

      • The MoU with AFD is along the same lines, but with more detail provided in the field of research on sustainable urban development. It comprises four articles dealing with implementation, research, resource allocation and cooperation.
      • The AFD clearly states that it will adopt an active role in managing and implementing the project.
      • The AFD will equip the respective state governments with a technical cooperation programme which will include a pool of French experts from the public sector, complemented by experts from the private sector.
      • The MoU goes on to state the various vectors of sustainable urban development that will be the focal point of this project – urban transport, water and waste management, integrated development and urban planning, architecture and heritage, renewable energy, energy efficiency etc.
      • Apart from strategizing, the AFD looks to provide technical support as well. This technical expertise would be used to strengthen strategy and management of urban services in the city.
      • They would also play a key role in management through the creation of a Special Purpose Vehicle (SPV) to build strategic management (human resources, finance, potential market assessment) and capacity building for financial management.
      • As per Article II of the MoU, this support framework will be accompanied by annual reviews; a policy similar to the USTDA Smart Solutions for Smart Cities Reverse Trade Mission, with Indian and French counterparts; collaboration between academic and research institutions for the exchange of information, documentation and results of research in the field of smart cities (a key policy to establish firm research groundwork and increase cooperation and innovation); and capacity-building research and development.
      • Article III of the MoU deals with resource allocation wherein the respective State Governments will assist AFD by providing technical information and data related to smart cities planning, and also meet their logistical requirements.

      Can the Aadhaar Act 2016 be Classified as a Money Bill?

      by Pooja Saxena — last modified Apr 25, 2016 01:48 PM
      In this infographic, we examine whether the Aadhaar Act 2016, recently tabled in and passed by the Lok Sabha as a money bill, can indeed be classified as one. The infographic is designed by Pooja Saxena, based on information compiled by Amber Sinha and Sumandro Chattapadhyay.

       

      Download the infographic: PDF and JPG.

       

      License: The infographic is shared under a Creative Commons Attribution 4.0 International License.

       

      Does the Aadhaar Act satisfy the conditions for a money bill?

       
