Blog


Concept Note: Network Neutrality in South Asia

by Prasad Krishna last modified Dec 01, 2015 02:34 AM

Network Neutrality South Asia Concept Note _ORF CIS.pdf — PDF document, 238 kB (244150 bytes)

The Case of Whatsapp Group Admins

by Japreet Grewal — last modified Dec 08, 2015 10:25 AM
Contributors: Geetha Hariharan

Censorship laws in India have now roped in group administrators of chat groups on instant messaging platforms such as Whatsapp (group admin(s)) for allegedly objectionable content that was posted by other users of these chat groups. Several incidents[1] were reported this year where group admins were arrested in different parts of the country for allowing content that was allegedly objectionable under law. A few reports mentioned that these arrests were made under Section 153A[2] read with Section 34[3] of the Indian Penal Code (IPC) and Section 67[4] of the Information Technology Act (IT Act).

The targeting of a group admin for content posted by other members of a chat group has raised concerns about how this liability is imputed. Should a group admin be considered an intermediary under Section 2(w) of the IT Act? If yes, would a group admin be protected from such liability?

Group admin as an intermediary

Whatsapp is an instant messaging platform which can be used for mass communication by opting to create a chat group. A chat group is a feature on Whatsapp that allows joint participation of Whatsapp users. The number of Whatsapp users on a single chat group can be up to 100. Every chat group has one or more group admins who control participation in the group by adding or removing people. [5] It is imperative that we understand whether, by choosing to create a chat group on Whatsapp, a group admin can become liable for content posted by other members of the chat group.

Section 34 of the IPC provides that when a number of persons engage in a criminal act with a common intention, each person is made liable as if he alone did the act. Common intention implies a pre-arranged plan and acting in concert pursuant to the plan. It is interesting to note that group admins have been arrested under Section 153A on the ground that the group admin and the member posting content actionable under this provision share a common intention to post such content on the group. But would this hold true when, for instance, a group admin creates a chat group for posting lawful content (say, for matchmaking purposes) and a member of the chat group posts content which is actionable under law (say, a video abusing Dalit women)? Common intention can be established by direct evidence or inferred from conduct, surrounding circumstances or any incriminating facts.[6]

We need to understand whether common intention can be established where a user merely acts as a group admin. For this purpose it is necessary to see how a group admin contributes to a chat group and whether he acts as an intermediary.

We know that the parameters for determining an intermediary differ across jurisdictions, and most global organisations have categorised intermediaries based on their role or technical functions.[7] Section 2(w) of the Information Technology Act, 2000 (IT Act) defines an intermediary as any person who, on behalf of another person, receives, stores or transmits messages or provides any service with respect to that message, and includes telecom service providers, network providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online marketplaces and cyber cafés. Does a group admin receive, store or transmit messages on behalf of group participants, provide any service with respect to messages of group participants, or fall into any category mentioned in the definition? Whatsapp does not allow a group admin to receive or store messages on behalf of another participant on a chat group. Every group member independently controls his posts on the group. However, a group admin helps in transmitting messages of another participant to the group by allowing the participant to be a part of the group, thus effectively providing a service in respect of messages. A group admin, therefore, should be considered an intermediary. His contribution to the chat group is, however, limited to allowing participation; this is discussed in further detail in the section below.

According to a 2010 report[8] by the Organisation for Economic Co-operation and Development (OECD), an internet intermediary brings together or facilitates transactions between third parties on the Internet. It gives access to, hosts, transmits and indexes content, products and services originated by third parties on the Internet, or provides Internet-based services to third parties. A Whatsapp chat group allows people who are not on a user's contact list to interact with that user if they are on the group admin's contact list. In facilitating this interaction, according to the OECD definition, a group admin may be considered an intermediary.

Liability as an intermediary

Section 79(1) of the IT Act protects an intermediary from liability under any law in force (for instance, liability under Section 153A pursuant to the rule laid down in Section 34 of the IPC) if the intermediary fulfils certain conditions laid down therein. An intermediary is required to carry out certain due diligence obligations laid down in Rule 3 of the Information Technology (Intermediaries Guidelines) Rules, 2011 (Rules). These obligations include monitoring content that infringes intellectual property, threatens national security or public order, or is obscene or defamatory or violates any law in force (Rule 3(2)).[9] An intermediary is liable for publishing or hosting such user-generated content; however, as mentioned earlier, this liability is conditional. Section 79 of the IT Act states that an intermediary would be liable only if it initiates the transmission, selects the receiver of the transmission, or selects or modifies the information contained in the transmission that falls under any category mentioned in Rule 3(2) of the Rules. While we know that a group admin has the ability to facilitate sharing of information and select receivers of such information, he has no direct editorial control over the information shared. Group admins can only remove members; they cannot remove or modify the content posted by members of the chat group. An intermediary is also liable in the event it fails to comply with the due diligence obligations laid down under Rules 3(2) and 3(3) of the Rules; however, since a group admin lacks the authority to initiate transmission himself and to control content, he cannot comply with these obligations. Therefore, a group admin would be protected from any liability arising out of third-party/user-generated content on his group pursuant to Section 79 of the IT Act.

It is, however, relevant to consider whether the ability of a group admin to remove participants amounts to an indirect form of editorial control.

Other pertinent observations

Several reports[10] have discussed how holding a group admin liable makes the process convenient, as it is difficult to locate all the users of a particular group. This reasoning may not be correct: the Whatsapp policy[11] makes it mandatory for a prospective user to provide his mobile number in order to use the platform, and no additional information is collected from group admins that would justify targeting them specifically. Investigation agencies can access the mobile numbers of Whatsapp users and obtain further information from telecom companies.

It is also interesting to note that the group admins were arrested after a user, or someone familiar to a user, filed a complaint with the police about content being objectionable or hurtful. Earlier this year, the apex court ruled in Shreya Singhal v. Union of India[12] that an intermediary needs a court order or a government notification for taking down information. With actions taken against group admins on mere complaints filed by anyone, it is clear that law enforcement officials have been overriding the mandate of the court.

Conclusion

 

According to a study conducted by the global research consultancy TNS Global, around 38% of internet users in India use instant messaging applications such as Snapchat and Whatsapp on a daily basis, with Whatsapp being the most widely used application. These figures indicate the scale of impact that arrests of group admins may have on our daily communication.

It is noteworthy that categorising a group admin as an intermediary would effectively make the Rules applicable to all Whatsapp users intending to create groups, would make the Rules difficult to enforce, and would perhaps blur the distinction between users and intermediaries.

The critical question, however, is whether a chat group should be considered part of the bundle of services that Whatsapp offers to its users, rather than an independent platform that makes the group admin a separate entity. It is also worth asking whether it would be correct to compare a Whatsapp group chat with a conference call on Skype, or with sharing a Google document with edit rights, to understand the domain into which censorship laws are penetrating today.

 

Valuable contribution by Pranesh Prakash and Geetha Hariharan


[1] http://www.nagpurtoday.in/whatsapp-admin-held-for-hurting-religious-sentiment/06250951 ; http://www.catchnews.com/raipur-news/whatsapp-group-admin-arrested-for-spreading-obscene-video-of-mahatma-gandhi-1440835156.html ; http://www.financialexpress.com/article/india-news/whatsapp-group-admin-along-with-3-members-arrested-for-objectionable-content/147887/

[2] Section 153A. “Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony.— (1) Whoever— (a) by words, either spoken or written, or by signs or by visible representations or otherwise, promotes or attempts to promote, on grounds of religion, race, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, language or regional groups or castes or communities…” (2) Whoever commits an offence specified in sub-section (1) in any place of worship or in any assembly engaged in the performance of religious worship or religious ceremonies, shall be punished with imprisonment which may extend to five years and shall also be liable to fine.

[3] Section 34. Acts done by several persons in furtherance of common intention – When a criminal act is done by several persons in furtherance of common intention of all, each of such persons is liable for that act in the same manner as if it were done by him alone.

[4] Section 67. Publishing of information which is obscene in electronic form.—Whoever publishes or transmits or causes to be published in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment of either description for a term which may extend to five years and with fine which may extend to one lakh rupees and in the event of a second or subsequent conviction with imprisonment of either description for a term which may extend to ten years and also with fine which may extend to two lakh rupees.

[5] https://www.whatsapp.com/faq/en/general/21073373

[6] Pandurang v. State of Hyderabad AIR 1955 SC 216

[7] https://www.eff.org/files/2015/07/08/manila_principles_background_paper.pdf; http://unesdoc.unesco.org/images/0023/002311/231162e.pdf

[8] http://www.oecd.org/internet/ieconomy/44949023.pdf

[9] Rule 3(2) (b) of the Rules

[10] http://www.thehindu.com/news/national/other-states/if-you-are-a-whatsapp-group-admin-better-be-careful/article7531350.ece; http://www.newindianexpress.com/states/tamil_nadu/Social-Media-Administrator-You-Could-Land-in-Trouble/2015/10/10/article3071815.ece; http://www.medianama.com/2015/10/223-whatsapp-group-admin-arrest/ ; http://www.thenewsminute.com/article/whatsapp-group-admin-you-are-intermediary-and-here%E2%80%99s-what-you-need-know-35031

[11] https://www.whatsapp.com/legal/

[12] http://supremecourtofindia.nic.in/FileServer/2015-03-24_1427183283.pdf

DNA Research

by Vanya Rakesh last modified Jul 21, 2016 11:02 AM
In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. In 2012 a revised Bill was released and a group of experts was constituted to finalize the Bill. In 2014, another version was released, the approval of which is pending before the Parliament. This legislation will allow the government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Here is a collection of our research on privacy and security concerns related to the Bill.

 

The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues related to the DNA Bill:

  1. Validity and legality of collection, usage and storage of DNA samples and information derived from the same.
  2. Monitoring projects and policies around Human DNA Profiling.
  3. Raising public awareness around issues concerning biometrics.

In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. In 2012 a revised Bill was released and a group of experts was constituted to finalize the Bill. In 2014, another version was released, the approval of which is pending before the Parliament.

The Bill seeks to establish DNA databases at the state and regional level and a national level database. The databases would store DNA profiles of suspects, offenders, missing persons, and deceased persons. The database could be used by courts, law enforcement agencies (national and international), and other authorized persons for criminal and civil purposes. The Bill will also regulate DNA laboratories collecting DNA samples. Lack of adequate consent, the broad powers of the Board, and the deletion of innocent persons' profiles are just a few of the concerns voiced about the Bill.

DNA Profiling Bill - Infographic
Download the infographic. Credit: Scott Mason and CIS team.

 

1. DNA Bill

The Human DNA Profiling Bill is a piece of legislation that will allow the government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Many concerns have been raised by human rights groups, individuals and NGOs about the infringement of privacy and the power that the government will have with such information. The Bill proposes to profile people through their fingerprints and retinal scans, which allows the government to create different unique profiles for individuals. Some of the concerns raised include the loss of privacy caused by such profiling and the manner in which it is conducted. Unless strictly controlled, monitored and protected, such a database of citizens' fingerprints and retinal scans could lead to huge blowbacks in the form of security risks and privacy invasions. The following articles elaborate upon these matters.

     

    2. Comparative Analysis with Other Jurisdictions

    Human DNA Profiling is not a system proposed only in India. This system of identification has been proposed and implemented in many nations. Each of these systems differs from the others depending on the nation's and society's needs. The risks and criticisms that DNA profiling has faced may be the same, but the manner in which such issues are addressed varies. The following articles look into the different systems in place in different countries and compare them with the proposed system in India to give us a better understanding of the risks and implications of such a system being implemented.

     

    Privacy Policy Research

    by Vanya Rakesh last modified Jan 03, 2016 09:40 AM
    The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives:
    1. Raising public awareness and dialogue around privacy;
    2. Undertaking in-depth research of domestic and international policy pertaining to privacy; and
    3. Driving comprehensive privacy legislation in India through research.

    India does not have a comprehensive legislation covering issues of privacy or establishing the right to privacy. In 2010 an "Approach Paper on Privacy" was published; in 2011 the Department of Personnel and Training released a draft Right to Privacy Bill; in 2012 the Planning Commission constituted a group of experts which published the Report of the Group of Experts on Privacy; in 2013 CIS drafted the citizens' Privacy Protection Bill; and in 2014 the Right to Privacy Bill was leaked. Currently the Government is in the process of drafting and finalizing the Bill.

    Draft Right to Privacy

    Privacy Research -

    1. Approach Paper on Privacy, 2010 -

    The following article contains the reply drafted by CIS in response to the Paper on Privacy in 2010. The Paper on Privacy was a document drafted by a group of officers created to develop a framework for a privacy legislation that would balance the need for privacy protection, security, sectoral interests, and respond to the domain legislation on the subject.

    2. Report on Privacy, 2012 -

    The Report on Privacy, 2012 was drafted and published by a group of experts under the Planning Commission pertaining to the current legislation with respect to privacy. The following articles contain the responses and criticisms to the report and the current legislation.

    3. Privacy Protection Bill, 2013 -

    The Privacy Protection Bill, 2013 was a draft legislation that aimed to formulate the rules and law governing privacy protection. The following articles refer to this legislation, including a citizens' draft of the legislation.

    4. Right to Privacy Act, 2014 (Leaked Bill) -

    The Right to Privacy Act, 2014 is a bill, still under proposal, that was leaked; it is linked below.

    • Leaked Privacy Bill: 2014 vs. 2011 http://bit.ly/QV0Y0w

    Sectoral Privacy Research

    by Vanya Rakesh last modified Jan 03, 2016 09:46 AM
    The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues.
    1. Research on the issue of privacy in different sectors in India.
    2. Monitoring projects, practices, and policies around those sectors.
    3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

    The Right to Privacy has evolved in India over many decades, and the question of whether it is a Fundamental Right has been debated many times in courts of law. With the advent of information technology and the digitisation of services, the issue of privacy holds more relevance in sectors like banking, healthcare, telecommunications, ICT, etc. The Right to Privacy is also addressed in light of sexual minorities, whistle-blowers, government services, etc.

    Sectors -

    1. Consumer Privacy and other sectors -

    Consumer privacy laws and regulations seek to protect any individual from loss of privacy due to failures or limitations of corporate customer privacy measures. The following articles deal with the current consumer privacy laws in place in India and around the world. Also, privacy concerns have been considered along with other sectors like Copyright law, data protection, etc.

    § Consumer Privacy - How to Enforce an Effective Protective Regime? http://bit.ly/1a99P2z

    § Privacy and Information Technology Act: Do we have the Safeguards for Electronic Privacy? http://bit.ly/10VJp1P

    • Limits to Privacy http://bit.ly/19mPG6I

    § Copyright Enforcement and Privacy in India http://bit.ly/18fi9fM

    • Privacy in India: Country Report http://bit.ly/14pnNwl

    § Transparency and Privacy http://bit.ly/1a9dMnC

    § The Report of the Group of Experts on Privacy (Contributed by CIS) http://bit.ly/VqzKtr

    § The (In) Visible Subject: Power, Privacy and Social Networking http://bit.ly/15koqol

    § Privacy and the Indian Copyright Act, 1957 as Amended in 2010 http://bit.ly/1euwX0r

    § Should Ratan Tata be afforded the Right to Privacy? http://bit.ly/LRlXin

    § Comments on Information Technology (Guidelines for Cyber Café) Rules, 2011 http://bit.ly/15kojJn

    § Broadcasting Standards Authority Censures TV9 over Privacy Violations! http://bit.ly/16L4izl

    § Is Data Protection Enough? http://bit.ly/1bvaWx2

    § Privacy, speech at stake in cyberspace http://cis-india.org/news/privacy-speech-at-stake-in-cyberspace-1

    § Q&A to the Report of the Group of Experts on Privacy http://bit.ly/TPhzQQ

    § Privacy worries cloud Facebook's WhatsApp Deal http://cis-india.org/internet-governance/blog/economic-times-march-14-2014-sunil-abraham-privacy-worries-cloud-facebook-whatsapp-deal

    § GNI Assessment Finds ICT Companies Protect User Privacy and Freedom of Expression http://bit.ly/1mjbpmL

    § A Stolen Perspective http://bit.ly/1bWHyzv

    § Is Data Protection enough? http://cis-india.org/internet-governance/blog/privacy/is-data-protection-enough

    § I don't want my fingerprints taken http://bit.ly/aYdMia

    § Keeping it Private http://bit.ly/15wjTVc

    § Personal Data, Public Profile http://bit.ly/15vlFk4

    § Why your Facebook Stalker is Not the Real Problem http://bit.ly/1bI2MSc

    § The Private Eye http://bit.ly/173ypSI

    § How Facebook is Blatantly Abusing our Trust http://bit.ly/OBXGXk

    § Open Secrets http://bit.ly/1b5uvK0

    § Big Brother is Watching You http://bit.ly/1cGpg0K

    2. Banking/Finance -

    Privacy in the banking and finance industry is crucial, as the records and funds of one person must not be accessible to another without due authorisation. The following articles deal with the current system in place that governs privacy in the financial and banking industry.

    § Privacy and Banking: Do Indian Banking Standards Provide Enough Privacy Protection? http://bit.ly/18fhsTM

    § Finance and Privacy http://bit.ly/15aUPh6

    § Making the Powerful Accountable http://bit.ly/1nvzSpC

    3. Telecommunications -

    The telecommunications industry is the backbone of current technology with respect to ICTs, and it has its own rules and regulations. These rules are the focal point of the following articles, which include both criticism and acclaim.

    § Privacy and Telecommunications: Do We Have the Safeguards? http://bit.ly/10VJp1P

    § Privacy and Media Law http://bit.ly/18fgDfF

    § IP Addresses and Expeditious Disclosure of Identity in India http://bit.ly/16dBy4N

    § Telecommunications and Internet Privacy http://bit.ly/16dEcaF

    § Encryption Standards and Practices http://bit.ly/KT9BTy

    § Encryption Standards and Practices http://cis-india.org/internet-governance/blog/privacy/privacy_encryption

    § Security: Privacy, Transparency and Technology http://cis-india.org/internet-governance/blog/security-privacy-transparency-and-technology

    4. Sexual Minorities -

    While the internet is a global forum of self-expression and acceptance for most of us, this does not hold true for sexual minorities. The internet is a place of secrecy for those who do not conform to the typical identities set by society, and their privacy is therefore more important to them than to most. When they reveal themselves or are revealed by others, they often face hatred from the rest of the people, and so they value their privacy highly. The following article looks into their situation.

    · Privacy and Sexual Minorities http://bit.ly/19mQUyZ

    5. Health -

    The privacy between a doctor and a patient is seen as incredibly important, as is the privacy of a person in any situation where they reveal more than they would to others, such as CT scans and other diagnoses. The following articles look into the present scenario of privacy in places like hospitals and diagnostic centres.

    § Health and Privacy http://bit.ly/16L1AJX

    § Privacy Concerns in Whole Body Imaging: A Few Questions http://bit.ly/1jmvH1z

    6. e-Governance -

    The main focus of governments in ICTs is their use for governance. A multiplicity of laws and legislation has been passed by various countries, including India, in an effort to govern the universal space that is the internet. Surveillance is a major part of that governance and control. The articles listed below deal with the issues of ethics and drawbacks in the current legal scenario involving ICTs.

    § E-Governance and Privacy http://bit.ly/18fiReX

    § Privacy and Governmental Databases http://bit.ly/18fmSy8

    § Killing Internet Softly with its Rules http://bit.ly/1b5I7Z2

    § Cyber Crime & Privacy http://bit.ly/17VTluv

    § Understanding the Right to Information http://bit.ly/1hojKr7

    § Privacy Perspectives on the 2012-2013 Goa Beach Shack Policy http://bit.ly/ThAovQ

    § Identifying Aspects of Privacy in Islamic Law http://cis-india.org/internet-governance/blog/identifying-aspects-of-privacy-in-islamic-law

    § What Does Facebook's Transparency Report Tell Us About the Indian Government's Record on Free Expression & Privacy? http://cis-india.org/internet-governance/blog/what-does-facebook-transparency-report-tell-us-about-indian-government-record-on-free-expression-and-privacy

    § Search and Seizure and the Right to Privacy in the Digital Age: A Comparison of US and India http://cis-india.org/internet-governance/blog/search-and-seizure-and-right-to-privacy-in-digital-age

    § Internet Privacy in India http://cis-india.org/telecom/knowledge-repository-on-internet-access/internet-privacy-in-india

    § Internet-driven Developments - Structural Changes and Tipping Points http://bit.ly/10s8HVH

    § Data Retention in India http://bit.ly/XR791u

    § 2012: Privacy Highlights in India http://bit.ly/1kWe3n7

    § Big Dog is Watching You! The Sci-fi Future of Animal and Insect Drones http://bit.ly/1kWee1W

    7. Whistle-blowers -

    Whistle-blowers are always in a difficult situation when they must reveal the misdeeds of corporations and governments, because of the blowback that is possible if their identity is revealed to the public. As in the case of Edward Snowden and many others, a whistle-blower's identity must be kept private to avoid the consequences of revealing the information that they did. This is the main focus of the article below.

    § The Privacy Rights of Whistle-blowers http://bit.ly/18GWmM3

    8. Cloud and Open Source -

    Cloud computing and open source software have grown rapidly over the past few decades. Cloud computing is when an individual or company uses offsite hardware, provided and owned by someone else, on a pay-per-use basis. The advantages are low initial costs and easy access. Open source software, on the other hand, is software that, despite the existence of proprietary elements and innovation, is available to the public at no charge. Such software is based on open standards and has the obvious advantages of being compatible with many different set-ups and being free. The following article highlights these computing solutions.

    § Privacy, Free/Open Source, and the Cloud http://bit.ly/1cTmGoI

    9. e-Commerce -

    One of the fastest growing applications of the internet is e-Commerce. This includes many facets of commerce such as online trading, the stock exchange, etc. In these cases, just as in the financial and banking industries, privacy is very important to protect one's investments and capital. The following article's main focal point is the world of e-Commerce and its current privacy scenario.

    § Consumer Privacy in e-Commerce http://bit.ly/1dCtgTs

    Security Research

    by Vanya Rakesh last modified Jan 03, 2016 09:55 AM
    The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives:
    1. Research on the issue of privacy in different sectors in India.
    2. Monitoring projects, practices, and policies around those sectors.
    3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

    State surveillance in India has been carried out by Government agencies for many years. Recent projects include NATGRID, CMS, NETRA, etc., which aim to overhaul the overall security and intelligence infrastructure in the country. The purpose of such initiatives has been to maintain national security and ensure interconnectivity and interoperability between departments and agencies. Concerns regarding the structure, regulatory frameworks (or lack thereof), and technologies used in these programmes and projects have attracted criticism.

    Surveillance/Security Research -

    1. Central Monitoring System -

    The Central Monitoring System or CMS is a clandestine mass electronic surveillance data mining program installed by the Centre for Development of Telematics (C-DOT), a part of the Indian government. It gives law enforcement agencies centralized access to India's telecommunications network and the ability to listen in on and record mobile, landline, satellite and Voice over Internet Protocol (VoIP) calls, along with private e-mails, SMS and MMS. It also gives them the ability to geo-locate individuals via cell phones in real time.

    • The Central Monitoring System: Some Questions to be Raised in Parliament http://bit.ly/1fln2vu

    2. Surveillance Industry: Global and Domestic -

    The surveillance industry is a multi-billion dollar economic sector that tracks individuals along with their actions such as e-mails and texts. With the cause for its existence being terrorism and the government's attempts to fight it, a network has been created that leaves no one with their privacy. All that an individual does in the digital world is subject to surveillance. This includes surveillance in the form of snooping, where an individual's phone calls, text messages and e-mails are monitored, or a more active kind, where cameras, sensors and other devices are used to actively track the movements and actions of an individual. This information allows governments to bypass the privacy that an individual has in a manner that is considered unethical and incorrect. The information that is collected is also vulnerable to cyber-attacks, which pose serious risks to privacy and to the individuals themselves. The following set of articles looks into the ethics, risks, vulnerabilities and trade-offs of having a mass surveillance industry in place.

    • Surveillance Technologies http://bit.ly/14pxg74
    • New Standard Operating Procedures for Lawful Interception and Monitoring http://bit.ly/1mRRIo4

    3. Judgements By the Indian Courts -

    The surveillance industry in India has been brought before the court in different cases. The following articles look into the cause of action in these cases along with their impact on India and its citizens.

    4. International Privacy Laws -

    Due to the universality of the internet, many questions of accountability arise and jurisdiction becomes a problem. Therefore, certain treaties, agreements and other international legal instruments were created to answer these questions. The articles listed below look into the international legal framework which governs the internet.

    5. Indian Surveillance Framework -

    The Indian government's mass surveillance systems are configured a little differently from the networks of countries such as the USA and the UK. This is because of the vast difference in infrastructure, both existing and required. In many ways, the surveillance network in India is considered far worse than those of other countries, owing to the present form of the legal system. The articles below explore the system and its functioning, including the various methods through which we are spied on. The ethics and vulnerabilities are also explored in these articles.

    • A Comparison of Indian Legislation to Draft International Principles on Surveillance of Communications http://bit.ly/U6T3xy
    • Surveillance and the Indian Constitution - Part 2: Gobind and the Compelling State Interest Test http://bit.ly/1dH3meL
    • Surveillance and the Indian Constitution - Part 3: The Public/Private Distinction and the Supreme Court's Wrong Turn http://bit.ly/1kBosnw
    • Mastering the Art of Keeping Indians Under Surveillance http://cis-india.org/internet-governance/blog/the-wire-may-30-2015-bhairav-acharya-mastering-the-art-of-keeping-indians-under-surveillance

    UID Research

    by Vanya Rakesh last modified Jan 03, 2016 09:59 AM
    The Centre for Internet and Society, India has been researching privacy policy in India since 2010 with the following objectives:
    1. Researching the vision and implementation of the UID Scheme - both from a technical and regulatory perspective.
    2. Understanding the validity and legality of collection, usage and storage of Biometric information for this scheme.
    3. Raising public awareness around issues concerning privacy, data security and the objectives of the UID Scheme.

    The UID scheme seeks to provide all residents of India an identity number based on their biometrics that can be used to authenticate individuals for the purpose of Government benefits and services. A 2015 Supreme Court ruling has clarified that the UID can only be used in the PDS and LPG Schemes.

    Concerns with the scheme include the broad consent taken at the time of enrolment, the lack of clarity as to what happens with transactional metadata, the centralized storage of biometric information in the CIDR, the seeding of the Aadhaar number into service providers’ databases, and the possibility of function creep. There are also concerns about the absence of legislation addressing these privacy and security issues.

    UID Research -

    1. Ramifications of the Aadhaar and UID Schemes -

    The UID and Aadhaar systems have been bombarded with criticisms and plagued with issues ranging from privacy concerns to security risks. The following articles deal with the many problems and drawbacks of these systems.

    § UID and NPR: Towards Common Ground http://cis-india.org/internet-governance/blog/uid-npr-towards-common-ground

    § Public Statement to Final Draft of UID Bill http://bit.ly/1aGf1NN

    § UID Project in India - Some Possible Ramifications http://cis-india.org/internet-governance/blog/uid-in-india

    § Aadhaar Number vs the Social Security Number http://cis-india.org/internet-governance/blog/aadhaar-vs-social-security-number

    § Feedback to the NIA Bill http://cis-india.org/internet-governance/blog/cis-feedback-to-nia-bill

    § Unique ID System: Pros and Cons http://bit.ly/1jmxbZS

    § Submitted seven open letters to the Parliamentary Finance Committee on the UID covering the following aspects: SCOSTA Standards (http://bit.ly/1hq5Rqd), Centralized Database (http://bit.ly/1hsHJDg), Biometrics (http://bit.ly/196drke), UID Budget (http://bit.ly/1e4c2Op), Operational Design (http://bit.ly/JXR61S), UID and Transactions (http://bit.ly/1gY6B8r), and Deduplication (http://bit.ly/1c9TkSg)

    § Comments on Finance Committee Statements to Open Letters on Unique Identity: The Parliamentary Finance Committee responded to the open letters sent by CIS through an email on 12 October 2011. CIS has commented on the points raised by the Committee: http://bit.ly/1kz4H0F

    § Unique Identification Scheme (UID) & National Population Register (NPR), and Governance http://cis-india.org/internet-governance/blog/uid-and-npr-a-background-note

    § Financial Inclusion and the UID http://cis-india.org/internet-governance/privacy_uidfinancialinclusion

    § The Aadhaar Case http://cis-india.org/internet-governance/blog/the-aadhaar-case

    § Do we need the Aadhaar scheme http://bit.ly/1850wAz

    § 4 Popular Myths about UID http://bit.ly/1bWFoQg

    § Does the UID Reflect India? http://cis-india.org/internet-governance/blog/privacy/uid-reflects-india

    § Would it be a unique identity crisis? http://cis-india.org/news/unique-identity-crisis

    § UID: Nothing to Hide, Nothing to Fear? http://cis-india.org/internet-governance/blog/privacy/uid-nothing-to-hide-fear

    2. Right to Privacy and UID -

    The UID system has been hit by many privacy concerns from NGOs, private individuals and others. The sharing of one's information, especially fingerprints and retinal scans, with a system that is controlled by the government and has not been vetted as secure irks most people. These issues are dealt with in the following articles.

    § India Fears of Privacy Loss Pursue Ambitious ID Project http://cis-india.org/news/india-fears-of-privacy-loss

    § Analysing the Right to Privacy and Dignity with Respect to the UID http://bit.ly/1bWFoQg

    § Analysing the Right to Privacy and Dignity with Respect to the UID http://cis-india.org/internet-governance/blog/privacy/privacy-uiddevaprasad

    § Supreme Court order is a good start, but is seeding necessary? http://cis-india.org/internet-governance/blog/supreme-court-order-is-a-good-start-but-is-seeding-necessary

    § Right to Privacy in Peril http://cis-india.org/internet-governance/blog/right-to-privacy-in-peril

    3. Data Flow in the UID -

    The articles below deal with the manner in which data is moved around and handled in the UID system in India.

    § UIDAI Practices and the Information Technology Act, Section 43A and Subsequent Rules http://cis-india.org/internet-governance/blog/uid-practices-and-it-act-sec-43-a-and-subsequent-rules

    § Data flow in the Unique Identification Scheme of India http://cis-india.org/internet-governance/blog/data-flow-in-unique-identification-scheme-of-india

    CIS's Position on Net Neutrality

    by Sunil Abraham last modified Dec 09, 2015 01:06 PM
    Contributors: pranesh
    As researchers committed to the principle of pluralism we rarely produce institutional positions. This is also because we tend to update our positions based on research outputs. But the lack of clarity around our position on network neutrality has led some stakeholders to believe that we are advocating for forbearance. Nothing can be farther from the truth. Please see below for the current articulation of our common institutional position.

     

    1. Net Neutrality violations can potentially have multiple categories of harms — competition harms, free speech harms, privacy harms, innovation and ‘generativity’ harms, harms to consumer choice and user freedoms, and diversity harms thanks to unjust discrimination and gatekeeping by Internet service providers.

    2. Net Neutrality violations (including those forms of zero-rating that violate net neutrality) can also have different kinds of benefits — enabling the right to freedom of expression and the freedom of association, especially when access to communication and publishing technologies is increased; increased competition [by enabling product differentiation, it can potentially allow small ISPs to compete against market incumbents]; increased access [usually to a subset of the Internet] by those without any access because they cannot afford it; increased access [usually to a subset of the Internet] by those who don't see any value in the Internet; and reduced payments by those who already have access to the Internet, especially if their usage is dominated by certain services and destinations.

    3. Given the magnitude and variety of potential harms, complete forbearance from all regulation is not an option for regulators nor is self-regulation sufficient to address all the harms emerging from Net Neutrality violations, since incumbent telecom companies cannot be trusted to effectively self-regulate. Therefore, CIS calls for the immediate formulation of Net Neutrality regulation by the telecom regulator [TRAI] and the notification thereof by the government [Department of Telecom of the Ministry of Information and Communication Technology]. CIS also calls for the eventual enactment of statutory law on Net Neutrality.  All such policy must be developed in a transparent fashion after proper consultation with all relevant stakeholders, and after giving citizens an opportunity to comment on draft regulations.

    4. Even though some of these harms may be large, CIS believes that a government cannot apply the precautionary principle in the case of Net Neutrality violations. Banning technical innovations and business model innovations is not an appropriate policy option. The regulation must toe a careful line to solve the optimization problem: refraining from over-regulation of ISPs and harming innovation at the carrier level (and benefits of net neutrality violations mentioned above) while preventing ISPs from harming innovation and user choice.  ISPs must be regulated to limit harms from unjust discrimination towards consumers as well as to limit harms from unjust discrimination towards the services they carry on their networks.

    5. Based on regulatory theory, we believe that a regulatory framework that is technologically neutral, that factors in differences in technological context, as well as market realities and existing regulation, and which is able to respond to new evidence is what is ideal.

      This means that we need a framework that has some bright-line rules, but which allows for flexibility in determining the scope of exceptions and in the application of the rules.  Candidate principles to be embodied in the regulation include: transparency, non-exclusivity, and limiting unjust discrimination.

    6. The harms emerging from walled gardens can be mitigated in a number of ways. On zero-rating, the form of regulation must depend on the specific model and the potential harms that result from that model. Zero-rating can be: paid for by the end consumer, or subsidized by ISPs, content providers, or government, or a combination of these; deal-based, criteria-based, or government-imposed; ISP-imposed, or offered by the ISP and chosen by consumers; transparent and understood by consumers vs. non-transparent; based on content-type or agnostic to content-type; service-specific, service-class/protocol-specific, or service-agnostic; available on one ISP or on all ISPs.  Zero-rating by a small ISP with 2% penetration will not have the same harms as zero-rating by the largest incumbent ISP.  For service-agnostic / content-type agnostic zero-rating, which Mozilla terms ‘equal rating’, CIS advocates for no regulation.

    7. CIS believes that Net Neutrality regulation for mobile and fixed-line access must be different recognizing the fundamental differences in technologies.

    8. On specialized services, CIS believes that there should be logical separation and that all details of such specialized services and their impact on the Internet must be made transparent to consumers (both individual and institutional), to the general public, and to the regulator.  Further, such services should be available to the user only upon request, and not without their active choice, with the requirement that either the service cannot reasonably be provided with the ‘best efforts’ delivery that is available over the Internet, and hence requires discriminatory treatment, or that the discriminatory treatment does not unduly harm the provision of the rest of the Internet to other customers.

    9. On incentives for telecom operators, CIS believes that the government should consider different models such as waiving contribution to the Universal Service Obligation Fund for prepaid consumers, and freeing up additional spectrum for telecom use without royalty using a shared spectrum paradigm, as well as freeing up more spectrum for use without a licence.

    10. On reasonable network management CIS still does not have a common institutional position.

    Smart Cities in India: An Overview

    by Vanya Rakesh last modified Jan 11, 2016 01:30 AM
    The Government of India is in the process of developing 100 smart cities in India which it sees as the key to the country's economic and social growth. This blog post gives an overview of the Smart Cities project currently underway in India. The smart cities mission in India is at a nascent stage and an evolving area for research. The Centre for Internet and Society will continue work in this area.

    Overview of the 100 Smart Cities Mission

    The Government of India announced its flagship programme, the 100 Smart Cities Mission, in 2014; it was launched in June 2015 to achieve urban transformation, drive economic growth and improve the quality of life of people by enabling local area development and harnessing technology. Initially, the Mission aims to cover 100 cities across the country (which have been shortlisted on the basis of a Smart Cities Proposal prepared by every city) and its duration will be five years (FY 2015-16 to FY 2019-20). The Mission may be continued thereafter in the light of an evaluation to be done by the Ministry of Urban Development (MoUD) and incorporation of the learnings into the Mission. The Mission aims to focus on area-based development in the form of redevelopment of existing spaces, or the development of new areas (greenfield), to accommodate the growing urban population and ensure comprehensive planning to improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged. [1] On 27th August 2015 the Centre unveiled 98 smart cities across India which were selected for this project. Across the selected cities, a population of 13 crore (35% of the urban population) will be included in the development plans. [2] The mission has been developed for the purpose of achieving urban transformation. The vision is to preserve India's traditional architecture, culture and ethnicity while implementing modern technology to make cities livable, use resources in a sustainable manner and create an inclusive environment. [3]

    The promises of the Smart City mission include reduction of carbon footprint, adequate water and electricity supply, proper sanitation, including solid waste management, efficient urban mobility and public transport, affordable housing, robust IT connectivity and digitalization, good governance, citizen participation, security of citizens, health and education.

    Questions unanswered

    • Why and How was the Smart Cities project conceptualized in India? What was the need for such a project in India?
    • What was the role of the public/citizens at the ideation and conceptualization stage of the project?
    • Which actors from the Government, Private industry and the civil society are involved in this mission? Though the smart cities mission has been initiated by the Government of India under the Ministry of Urban Development, there is no clarity about the involvement of the associated offices and departments of the Ministry.

    How are the Smart Cities being selected?

    The 100 cities were to be selected on the basis of a Smart Cities Challenge[4] involving two stages. Stage I of the challenge involved intra-state city selection on objective criteria to identify cities to compete in Stage II. In August 2015, the Ministry of Urban Development, Government of India announced 100 smart cities [5], evaluated on parameters such as service levels, financial and institutional capacity, and past track record, referred to as the 'shortlisted cities' for this purpose. The selected cities are now competing for selection in the second stage of the challenge, which is an all-India competition. For this crucial stage, the potential 100 smart cities are required to prepare a Smart City Proposal (SCP) stating the model chosen (retrofitting, redevelopment, greenfield development or a mix), along with a pan-city dimension with Smart Solutions. The proposal must also include suggestions collected by way of consultations held with city residents and other stakeholders, along with the proposal for financing of the smart city plan, including the revenue model to attract private participation. The country saw wide participation from citizens voicing their aspirations and concerns regarding the smart cities. 15th December 2015 has been declared as the deadline for submission of the SCP, which must be in consonance with evaluation criteria set by the MoUD on the basis of professional advice. [6] On the basis of this, 20 cities will be selected for the first year. According to the latest reports, the Centre is planning to fund only 10 cities for the first phase in case the proposals sent by the states do not match the expected quality standards and the states are unable to submit complete area-development plans by the deadline, i.e. 15th December 2015. [7]

    Questions unanswered

    • Who would be undertaking the task of evaluating and selecting the cities for this project?
    • What are the criteria for selection of a city to qualify in the first 20 (or 10, depending on the Central Government) for the first phase of implementation?

    How are the smart cities going to be Funded?

    The Smart City Mission will be operated as a Centrally Sponsored Scheme (CSS), and the Central Government proposes to give financial support to the Mission to the extent of Rs. 48,000 crore over five years, i.e. on average Rs. 100 crore per city per year. [8] Additional resources will have to be mobilized by the States/ULBs from external and internal sources. According to the scheme, once the list of shortlisted Smart Cities is finalized, Rs. 2 crore is to be disbursed to each city for proposal preparation.[9]
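    As a rough back-of-the-envelope check of the averages quoted above (the totals come from the Mission guidelines cited in [8]; the script itself is only an illustrative sketch, not part of any official document), the per-city, per-year central assistance works out as follows:

```python
# Illustrative arithmetic only: the figures are the Mission-level totals quoted above.
TOTAL_CENTRAL_ASSISTANCE_CRORE = 48_000  # proposed central support over the Mission period
NUM_CITIES = 100
MISSION_YEARS = 5                        # FY 2015-16 to FY 2019-20

per_city_per_year = TOTAL_CENTRAL_ASSISTANCE_CRORE / (NUM_CITIES * MISSION_YEARS)
print(f"Average central assistance: Rs. {per_city_per_year:.0f} crore per city per year")
# Prints roughly Rs. 96 crore, which the guidelines round off to "Rs. 100 crore per city per year".
```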

    According to estimates of the Central Government, around Rs 4 lakh crore of funds will be infused, mainly through private investments and loans from multilateral institutions among other sources, which accounts for 80% of the total spending on the mission. [10] For this purpose, the Government will approach the World Bank and the Asian Development Bank (ADB) for loans of £500 million and £1 billion respectively for 2015-20. If ADB approves the loan, it would be the bank's highest funding to India's urban sector so far.[11] Foreign Direct Investment regulations have been relaxed to invite foreign capital and help into the Smart City Mission. [12]

    Questions unanswered

    • The Government's notes on financing of the project mention PPPs for private funding and the leveraging of resources from internal and external sources. There is a lack of clarity on the external resources the Government has approached or will approach, and on the varied PPP agreements the Government is entering or planning to enter into for the purpose of private investment in the smart cities.

    How is the scheme being implemented?

    Under this scheme, each city is required to establish a Special Purpose Vehicle (SPV) having flexibility regarding planning, implementation, management and operations. The body will be headed by a full-time CEO, with nominees of Central Government, State Government and ULB on its Board. The SPV will be a limited company incorporated under the Companies Act, 2013 at the city-level, in which the State/UT and the Urban Local Body (ULB) will be the promoters having equity shareholding in the ratio 50:50. The private sector or financial institutions could be considered for taking equity stake in the SPV, provided the shareholding pattern of 50:50 of the State/UT and the ULB is maintained and the State/UT and the ULB together have majority shareholding and control of the SPV. Funds provided by the Government of India in the Smart Cities Mission to the SPV will be in the form of tied grant and kept in a separate Grant Fund.[13]
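    The equity conditions described above reduce to a simple rule: the State/UT and the ULB must hold equal stakes, and together they must retain majority shareholding even after private or financial-institution investors take equity. A minimal sketch of that rule is given below; the function name and the percentage-based representation are illustrative assumptions, not drawn from the Mission documents.

```python
def spv_shareholding_allowed(state_pct: float, ulb_pct: float, others_pct: float) -> bool:
    """Check the SPV equity conditions described in the Mission guidelines:
    the State/UT and the ULB hold equity in a 50:50 ratio between themselves,
    and together retain majority shareholding (and hence control) of the SPV."""
    total = state_pct + ulb_pct + others_pct
    if abs(total - 100.0) > 1e-6:          # the three stakes must account for all equity
        return False
    if abs(state_pct - ulb_pct) > 1e-6:    # State/UT and ULB must stay 50:50 inter se
        return False
    return (state_pct + ulb_pct) > 50.0    # together they must keep majority control

# Example: a private investor taking 40% leaves the State/UT and the ULB with 30% each.
print(spv_shareholding_allowed(30, 30, 40))   # True  (50:50 inter se, 60% combined majority)
print(spv_shareholding_allowed(25, 25, 50))   # False (State/UT + ULB lose their majority)
```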

    For the purpose of implementation and monitoring of the projects, the MoUD has also established an Apex Committee and National Mission Directorate for National Level Monitoring[14], a State Level High Powered Steering Committee (HPSC) for State Level Monitoring[15] and a Smart City Advisory Forum at the City Level [16].

    Also, several consulting firms[17] have been assigned to the 100 cities to help them prepare action plans.[18] Some of them include CRISIL, KPMG, McKinsey, etc. [19]

    Questions unanswered

    • What policies and regulations have been put in place to account for the smart cities, apart from policies looking at issues of security, privacy, etc.?
    • What international/national standards will be adopted during the development of the smart cities? Though the Bureau of Indian Standards is in the process of formulating standardized guidelines for smart cities in India[20], there is a lack of clarity on the adoption of these national standards, along with the role of international standards such as those formulated by ISO.

    What is the role of Foreign Governments and bodies in the Smart cities mission?

    Ever since the government's ambitious project was announced and cities were shortlisted, many countries across the globe have shown keen interest in helping specific shortlisted cities become smart cities and are willing to invest financially. Countries like Sweden, Malaysia, UAE, USA, etc. have agreed to partner with India for the mission.[21] For example, the UK has partnered with the Government to develop three Indian cities: Pune, Amravati and Indore.[22] Israel's start-up city Tel Aviv has also entered into an agreement to help with urban transformation in the Indian cities of Pune, Nagpur and Nashik, to foster innovation and share its technical know-how.[23] France has expressed interest in Nagpur and Puducherry, while the United States is interested in Ajmer, Vizag and Allahabad. Also, Spain's Barcelona Regional Agency has expressed interest in exchanging technology with Delhi. Apart from foreign governments, many organizations and multilateral agencies are also keen to partner with the Indian government and have offered financial assistance by way of loans. Some of them include the UK government-owned Department for International Development, the German government's KfW development bank, the Japan International Cooperation Agency, the US Trade and Development Agency, the United Nations Industrial Development Organization and the United Nations Human Settlements Programme. [24]

    Questions unanswered

    • Do these governments or organizations have influence on any other component of the smart cities?
    • How much are the foreign governments and multilateral bodies spending on the respective cities?
    • What kind of technical know-how is being shared with the Indian government and cities?

    What is the way ahead?

    On the basis of the SCPs, the MoUD will evaluate and assess the proposals and select 20 smart cities out of the shortlisted ones for execution of the plan in the first phase. The selected cities will set up SPVs and receive funding from the Government.

    Questions unanswered

    • Will the deadline of submission of the Smart Cities Proposal be pushed back?
    • After the SCP is submitted on the basis of consultation with the citizens and public, will they be further involved in the implementation of the project and what will be their role?
    • How will the MoUD and other associated organizations and actors consider the implementation realities of the project, such as land displacement, rehabilitation of slum dwellers, etc.?
    • How are ICT based systems going to be utilized to make the cities and the infrastructure "smart"?
    • How is the MoUD going to respond to the concerns and criticism emerging from various sections of the society, as being reflected in the news items?
    • How will the smart cities impact and integrate the existing laws, regulations and policies? Does the Government intend to use the existing legislations in entirety, or update and amend the laws for implementation of the Smart Cities Mission?


    [1] Smart Cities, Mission Statement and Guidelines, Ministry of Urban Development, Government of India, June 2015, Available at : http://smartcities.gov.in/writereaddata/SmartCityGuidelines.pdf

    [2] http://articles.economictimes.indiatimes.com/2015-08-27/news/65929187_1_jammu-and-kashmir-12-cities-urban-development-venkaiah-naidu

    [3] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

    [4] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

    [5] Full list : http://www.scribd.com/doc/276467963/Smart-Cities-Full-List

    [6] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

    [7] http://www.ibtimes.co.in/modi-govt-select-only-10-cities-under-smart-city-project-this-year-report-658888

    [8] http://smartcities.gov.in/writereaddata/Financing%20of%20Smart%20Cities.pdf

    [9] Smart Cities presentation by MoUD : http://smartcities.gov.in/writereaddata/Presentation%20on%20Smart%20Cities%20Mission.pdf

    [10] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/

    [11] http://indianexpress.com/article/india/india-others/funding-for-smart-cities-key-to-coffer-lies-outside-india/#sthash.5lnW9Jsq.dpuf

    [12] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

    [13] http://smartcities.gov.in/writereaddata/SPVs.pdf

    [14] http://smartcities.gov.in/writereaddata/National%20Level%20Monitoring.pdf

    [15] http://smartcities.gov.in/writereaddata/State%20Level%20Monitoring.pdf

    [16] http://smartcities.gov.in/writereaddata/City%20Level%20Monitoring.pdf

    [17] http://smartcities.gov.in/writereaddata/List_of_Consulting_Firms.pdf

    [18] http://pib.nic.in/newsite/PrintRelease.aspx?relid=128457

    [20] http://www.business-standard.com/article/economy-policy/in-a-first-bis-to-come-up-with-standards-for-smart-cities-115060400931_1.html

    [21] http://accommodationtimes.com/foreign-countries-have-keen-interest-in-development-of-smart-cities/

    [22] http://articles.economictimes.indiatimes.com/2015-11-20/news/68440402_1_uk-trade-three-smart-cities-british-deputy-high-commissioner

    [23] http://www.jpost.com/Business-and-Innovation/Tech/Tel-Aviv-to-help-India-build-smart-cities-435161?utm_campaign=shareaholic&utm_medium=twitter&utm_source=socialnetwork

    [24] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/#sthash.nCMxEKkc.dpuf

    ISO/IEC JTC 1/SC 27 Working Groups Meeting, Jaipur

    by Vanya Rakesh last modified Dec 21, 2015 02:38 AM
    I attended this event held from October 26 to 30, 2015 in Jaipur.

    The Bureau of Indian Standards (BIS), in collaboration with the Data Security Council of India (DSCI), hosted the global standards meeting, the ISO/IEC JTC 1/SC 27 Working Groups Meeting, at Hotel Marriott in Jaipur, Rajasthan from 26 to 30 October 2015, followed by a half-day conference on Friday, 30 October on the importance of standards in the domain. The event saw experts from across the globe deliberate on forging international standards on privacy, security and risk management in IoT, cloud computing and many other contemporary technologies, along with updating existing standards. Under SC 27, five working groups held parallel meetings on their respective projects and study periods. The five Working Groups are as follows:

    1. WG 1: Information Security Management Systems;
    2. WG 2: Cryptography and Security Mechanisms;
    3. WG 3: Security Evaluation, Testing and Specification;
    4. WG 4: Security Controls and Services; and
    5. WG 5: Identity Management and Privacy Technologies.

    This key set of Working Groups (WGs) met in India for the first time. Professionals discussed and debated the development of international standards under each working group to address issues of security, identity management and privacy.

    CIS had the opportunity to attend meetings under Working Group 5. This group held further parallel meetings on several topics, namely:

    • Privacy-enhancing data de-identification techniques (ISO/IEC NWIP 20889): Data de-identification techniques are important where PII is concerned, as they enable the benefits of data processing to be exploited while maintaining compliance with regulatory requirements and the relevant ISO/IEC 29100 privacy principles. The selection, design, use and assessment of these techniques need to be performed appropriately in order to effectively address the risks of re-identification in a given context. There is thus a need to classify known de-identification techniques using standardized terminology, and to describe their characteristics, including the underlying technologies, the applicability of each technique to reducing the risk of re-identification, and the usability of the de-identified data. This is the main goal of this International Standard (a minimal sketch of two such techniques follows this list). Meetings were conducted to resolve comments sent by organisations across the world, review draft documents and agree on next steps.
    • A study period on a Privacy Engineering Framework: This session deliberated upon contributions and terms of reference and discussed the scope of the emerging field of privacy engineering. The session also reviewed important terms to be included in the standard and identified possible improvements to existing privacy impact assessment and management standards. It was noted that the goal of this standard is to integrate privacy into systems as part of the systems engineering process. Another concern raised was that the framework must be consistent with the privacy framework under ISO/IEC 29100 and the HL7 privacy and security standards.
    • A study period on user-friendly online privacy notices and consent: The basic purpose of this New Work Item Proposal is to assess, within WG 5, the viability of producing a guideline for PII controllers on providing easy-to-understand notices and consent procedures to PII principals. At the meeting, a brief overview of the contributions received was given, along with an assessment of the liaison with ISO/IEC JTC 1/SC 35 and other entities. This International Standard gives guidelines for the content and structure of online privacy notices, as well as of documents asking for consent to collect and process personally identifiable information (PII) from PII principals online, and is applicable to all situations where a PII controller or any other entity processing PII informs PII principals in any online context.
    • Some of the other sessions under Working Group 5 covered Privacy Impact Assessment (ISO/IEC 29134), standardization in the area of biometrics and biometric information protection, and the Code of Practice for the protection of personally identifiable information.
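
    To make the techniques that NWIP 20889 sets out to classify a little more concrete, the following is a minimal sketch in Python of two of the simplest ones, pseudonymisation and generalisation. The column names, salt and records are invented purely for illustration and are not taken from the draft standard.

        import hashlib
        import pandas as pd

        # Hypothetical raw records; all names and values are illustrative only.
        df = pd.DataFrame({
            "name": ["Asha", "Ravi", "Meera"],
            "age": [34, 57, 29],
            "pincode": ["560034", "110016", "560095"],
            "condition": ["asthma", "diabetes", "asthma"],
        })

        def pseudonymise(value, salt="example-salt"):
            """Replace a direct identifier with a salted hash (pseudonymisation)."""
            return hashlib.sha256((salt + value).encode()).hexdigest()[:10]

        deidentified = pd.DataFrame({
            # Suppress the direct identifier and replace it with a pseudonym.
            "pseudonym": df["name"].apply(pseudonymise),
            # Generalisation: exact age becomes a 10-year band.
            "age_band": (df["age"] // 10 * 10).astype(str) + "s",
            # Generalisation: full PIN code becomes a coarser region prefix.
            "region": df["pincode"].str[:3] + "XXX",
            "condition": df["condition"],
        })
        print(deidentified)

    Even after such transformations, re-identification risk still has to be assessed in context, since generalised quasi-identifiers such as the age band and region can be linked to auxiliary data.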

    ISO/IEC JTC 1/SC 27 is a joint technical committee of the international standards bodies ISO and IEC on information technology security techniques, and it conducts regular meetings across the world. JTC 1 has over 2,600 published standards developed under the broad umbrella of the committee and its 20 subcommittees. Draft International Standards adopted by the joint technical committee are circulated to the national bodies for voting; publication as an International Standard requires approval by at least 75% of the national bodies casting a vote. In India, the Bureau of Indian Standards (BIS) is the National Standards Body. Standards are formulated keeping in view national priorities, industrial development, technical needs, export promotion, health, safety, etc., and are harmonized with ISO/IEC standards (wherever they exist) to the extent possible, in order to facilitate the adoption of ISO/IEC standards by all segments of industry and business. BIS has been actively participating in the technical committee work of ISO/IEC and is currently a Participating member in 417 and 74 Technical Committees/Subcommittees, and an Observer member in 248 and 79 Technical Committees/Subcommittees, of ISO and IEC respectively. BIS also holds secretarial responsibility for 2 Technical Committees and 6 Subcommittees of ISO.

    The previous meeting was held in May 2015 in Malaysia, followed by this meeting in Jaipur in October 2015. Fifty-one countries, India among them, play an active role as 'Participating Members', while a few other countries take part as observing members. As part of these sessions, the participating countries also have the right to vote in all official ballots related to standards. Country representatives work on the preparation and development of the International Standards and provide feedback to their national organizations.

    There was an additional study group meeting on IoT to discuss comments on the previous drafts, suggest changes, review responses and identify standards gaps in SC 27.

    On 30 October 2015, BIS and DSCI hosted a half-day international conference on Cyber Security and Privacy Standards, comprising keynotes and panel discussions and bringing together national and international experts to share experiences and exchange views on cyber security techniques, the protection of data and privacy in international standards, and their growing importance in society. The conference looked at themes such as the role of standards in smart cities and responding to the challenges of investigating cyber crimes through standards. It was emphasised that in an increasingly digital world there is universal agreement on the need for cyber security: because the infrastructure is globally connected, cyber threats are also distributed and are not restricted by geographical boundaries. Hence the need for technical and policy solutions, along with standards, was highlighted for the future protection of a digital world that is now deeply embedded in life, business and government. Standards will help set up crucial infrastructure for data security and build associated infrastructure along these lines.

    The importance of standards was also highlighted in the context of smart cities. Experts suggested that the harmonization of regulations with standards must be looked at, primarily by creating standards that regulators can refer to. Broadly, the challenges faced by smart cities are data security, privacy and the digital resilience of the infrastructure, and it was suggested that these areas should be addressed first when developing standards for smart cities. ISO/IEC also has a Working Group and a Strategic Group focussing on smart cities. The risks of digitisation, networks, identity management, etc. must be considered in creating the standards.

    The next meeting has been scheduled for April 2016 in Tampa (USA).

    This meeting was a good opportunity to interact with experts from various parts of the world and to understand the working of ISO meetings, which are held two or three times a year. The Centre for Internet and Society will continue this work and become involved in the standard-setting process at future Working Group meetings.

    RTI PDF

    by Prasad Krishna last modified Dec 22, 2015 02:54 AM

    PDF document icon RTI.pdf — PDF document, 412 kB (422252 bytes)

    RTI response regarding the UIDAI

    by Vanya Rakesh last modified Dec 22, 2015 02:57 AM
    This is a response to the RTI filed regarding UIDAI

    The Supreme Court of India, by an order dated 11 August 2015, directed the Government to publicize widely in electronic and print media, including radio and television networks, that obtaining an Aadhaar card is not mandatory for citizens to avail themselves of the Government's welfare schemes (until the matter is resolved). CIS filed an RTI to obtain information about the steps taken by the Government in this regard, the initiatives undertaken, and the expenditure incurred to publicize and inform the public that Aadhaar is not mandatory for availing of the Government's welfare schemes.

    Response: It was stated that an advisory was issued by UIDAI headquarters to all regional offices to comply with the order, and that several advertisement campaigns were undertaken. The total cost incurred so far by UIDAI for this is Rs. 317.30 lakh.


    Download the Response

    Benefits and Harms of "Big Data"

    by Scott Mason — last modified Dec 30, 2015 02:48 AM
    Today the quantity of data being generated is expanding at an exponential rate. From smartphones and televisions, trains and airplanes, sensor-equipped buildings and even the infrastructures of our cities, data now streams constantly from almost every sector and function of daily life.

    Introduction

    In 2011 it was estimated that the quantity of data produced globally would surpass 1.8 zettabytes[1]. By 2013 that figure had grown to 4 zettabytes[2], and with the nascent development of the so-called 'Internet of Things' gathering pace, these trends are likely to continue. This expansion in the volume, velocity and variety of data available[3], together with the development of innovative forms of statistical analytics, is generally referred to as "Big Data", though there is no single agreed-upon definition of the term. Although still in its initial stages, Big Data promises to provide new insights and solutions across a wide range of sectors, many of which would have been unimaginable even ten years ago.

    Despite the enormous optimism about the scope and variety of Big Data's potential applications, however, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits[4]. Most notably, these concerns include the inevitable threats to privacy associated with the generation, collection and use of large quantities of data[5]. However, concerns have also been raised regarding, for example, the lack of transparency around the design of the algorithms used to process the data, over-reliance on Big Data analytics as opposed to traditional forms of analysis, and the creation of new digital divides, to name just a few.

    The existing literature on Big Data is vast; however, many of the benefits and harms identified by researchers tend to relate to sector-specific applications of Big Data analytics, such as predictive policing or targeted marketing. Whilst these examples can be useful in demonstrating the diversity of Big Data's possible applications, it can nevertheless be difficult to gain an overall perspective of the broader impacts of Big Data as a whole. As such, this article seeks to disaggregate the potential benefits and harms of Big Data, organising them into several broad categories reflective of the existing scholarly literature.

    What are the potential benefits of Big Data?

    From politicians to business leaders, recent years have seen Big Data confidently proclaimed as a potential solution to a diverse range of problems, from world hunger and disease to government budget deficits and corruption. But if we look beyond the hyperbole and headlines, what do we really know about the advantages of Big Data? Given the current buzz surrounding it, the existing literature on Big Data is perhaps unsurprisingly vast, providing innumerable examples of its potential applications from agriculture to policing. However, rather than try (and fail) to list the many possible applications of Big Data analytics across all sectors and industries, for the purposes of this article we have instead attempted to distil the advantages of Big Data discussed within the literature into the following five broad categories: decision-making, efficiency and productivity, research and development, personalisation, and transparency, each of which is discussed separately below.

    Decision-Making

    Whilst data analytics has always been used to improve the quality and efficiency of decision-making processes, the advent of Big Data means that the areas of our lives in which data-driven decision-making plays a role are expanding dramatically, as businesses and governments become better able to exploit new data flows. Furthermore, the real-time and predictive nature of the decision-making made possible by Big Data increasingly allows these decisions to be automated. As a result, Big Data is providing governments and businesses with unprecedented opportunities to create new insights and solutions, becoming more responsive to new opportunities and better able to act quickly, and in some cases preemptively, to deal with emerging threats.

    This ability of Big Data to speed up and improve decision-making processes can be applied across all sectors, from transport to healthcare, and is often cited within the literature as one of the key advantages of Big Data. Joh, for example, highlights the increased use of data-driven predictive analysis by police forces to help them forecast the times and geographical locations in which crimes are most likely to occur. This allows forces to redistribute their officers and resources according to anticipated need, and in certain cities it has been highly effective in reducing crime rates[6]. Raghupathi meanwhile cites the case of healthcare, where predictive modelling driven by Big Data is being used to proactively identify patients who could benefit from preventative care or lifestyle changes[7].

    One area in particular where the decision-making capabilities of Big Data are having a significant impact is the field of risk management[8]. For instance, Big Data can allow companies to map their entire data landscape to help detect sensitive information, such as 16-digit numbers (potentially credit card data) that are not being stored according to regulatory requirements, and to intervene accordingly. Similarly, detailed analysis of data held about suppliers and customers can help companies to identify those in financial trouble, allowing them to act quickly to minimize their exposure to any potential default[9].
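
    As an illustration of the kind of scan described above, the snippet below is a minimal sketch that flags 16-digit sequences in free text and checks them against the Luhn checksum used by payment card numbers. The sample record and function names are hypothetical, not part of any particular product.

        import re

        def luhn_valid(number):
            """Return True if the digit string passes the Luhn checksum."""
            total = 0
            for i, ch in enumerate(reversed(number)):
                d = int(ch)
                if i % 2 == 1:          # double every second digit from the right
                    d *= 2
                    if d > 9:
                        d -= 9
                total += d
            return total % 10 == 0

        CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){16}\b")

        def find_possible_card_numbers(text):
            """Flag 16-digit sequences that also pass the Luhn check."""
            hits = []
            for match in CARD_PATTERN.finditer(text):
                digits = re.sub(r"[ -]", "", match.group())
                if luhn_valid(digits):
                    hits.append(digits)
            return hits

        record = "Order note: customer paid with 4539 1488 0343 6467, ship ASAP."
        print(find_possible_card_numbers(record))  # ['4539148803436467']

    In practice such a scan would run over databases and file shares rather than a single string, and would combine pattern matching with context about where and how the data is stored.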

    Efficiency and Productivity

    In an era when many governments and businesses are facing enormous pressures on their budgets, the desire to reduce waste and inefficiency has never been greater. By providing the information and analysis needed for organisations to better manage and coordinate their operations, Big Data can help to alleviate such problems, leading to the better utilization of scarce resources and a more productive workforce [10].

    Within the literature such efficiency savings are most commonly discussed in relation to reductions in energy consumption[11]. For example, a report published by Cisco notes how the city of Oslo has managed to reduce the energy consumption of its street lighting by 62 percent through the use of smart solutions driven by Big Data[12]. Increasingly, however, statistical models generated by Big Data analytics are also being utilized to identify potential efficiencies in sourcing, scheduling and routing in a wide range of sectors from agriculture to transport. For example, Newell observes how many local governments are generating large databases of scanned license plates through the use of automated license plate recognition (ALPR) systems, which government agencies can then use to help improve local traffic management and ease congestion[13].

    Commonly, these efficiency savings are made possible only by the often counter-intuitive insights generated by Big Data models. For example, whilst a human analyst planning a truck route would tend to avoid 'drive-bys' (bypassing one stop to reach a third before doubling back), Big Data insights can sometimes show such routes to be more efficient. Efficiency savings of this kind would in all likelihood have gone unrecognised by a human analyst not trained to look for such patterns[14].

    Research, Development, and Innovation

    Perhaps one of the most intriguing benefits of Big Data is its potential use in the research and development of new products and services. As is highlighted throughout the literature, Big Data can help businesses to gain an understanding of how others perceive their products or identify customer demand and adapt their marketing or indeed the design of their products accordingly[15]. Analysis of social media data, for instance, can provide valuable insights into customers' sentiments towards existing products as well as discover demands for new products and services, allowing businesses to respond more quickly to changes in customer behaviour[16].

    In addition to market research, Big Data can also be used during the design and development stage of new products, for example by helping to test thousands of different variations of computer-aided designs in an expedient and cost-effective manner. In doing so, businesses and designers are able to better assess how minor changes to a product's design may affect its cost and performance, thereby improving the cost-effectiveness of the production process and increasing profitability.

    Personalisation

    For many consumers, perhaps the most familiar application of Big Data is its ability to help tailor products and services to meet their individual preferences. This phenomenon is most immediately noticeable on online services such as Netflix, where data about users' activities and preferences is collated and analysed to provide a personalised service, for example by suggesting films or television shows the user may enjoy based upon their previous viewing history[17]. By enabling companies to generate in-depth profiles of their customers, Big Data allows businesses to move past the 'one size fits all' approach to product and service design and instead quickly and cost-effectively adapt their services to better meet customer demand.
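
    A heavily simplified sketch of how this kind of personalisation can work is shown below: a user-based collaborative filter that scores a user's unseen titles by the ratings of similar users. The toy ratings matrix and function names are invented for illustration and are not any particular service's actual method.

        import numpy as np

        # Toy user x item ratings matrix (rows: users, columns: titles); 0 means "not watched".
        ratings = np.array([
            [5, 4, 0, 1],
            [4, 5, 1, 0],
            [0, 1, 5, 4],
            [1, 0, 4, 5],
        ], dtype=float)

        def cosine_sim(a, b):
            """Cosine similarity between two rating vectors."""
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return a @ b / denom if denom else 0.0

        def recommend(user_idx, k=1):
            """Score unseen titles by similarity-weighted ratings of other users."""
            target = ratings[user_idx]
            scores = {}
            for item in range(ratings.shape[1]):
                if target[item] != 0:            # skip titles already rated
                    continue
                num = den = 0.0
                for other in range(ratings.shape[0]):
                    if other == user_idx or ratings[other, item] == 0:
                        continue
                    sim = cosine_sim(target, ratings[other])
                    num += sim * ratings[other, item]
                    den += sim
                scores[item] = num / den if den else 0.0
            return sorted(scores, key=scores.get, reverse=True)[:k]

        print(recommend(user_idx=0))  # [2] - the only title user 0 has not yet rated

    Production recommenders work on vastly larger and sparser matrices and typically rely on matrix factorisation or learned embeddings, but the underlying idea of inferring preferences from similar users is the same.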

    In addition to service personalisation, similar profiling techniques are increasingly being utilized in sectors such as healthcare. Here data about a patient's medical history, lifestyle, and even their gene expression patterns is collated, generating a detailed medical profile which can then be used to tailor treatments to meet their specific needs[18]. Targeted care of this sort can not only help to reduce costs, for example by helping to avoid over-prescription, but may also help to improve the effectiveness of treatments and so ultimately their outcomes.

    Transparency

    If 'knowledge is power' then, so say Big Data enthusiasts, advances in data analytics and the quantity of data available can give consumers and citizens the knowledge to hold governments and businesses to account, as well as to make more informed choices about the products and services they use. Nevertheless, data (even lots of it) does not necessarily equal knowledge. In order for citizens and consumers to be able to fully utilize the vast quantities of data available to them, they must first have some way to make sense of it. For some, Big Data analytics provides just such a solution, allowing users to easily search, compare and analyze available data, thereby helping to challenge existing information asymmetries and make business and government more transparent[19].

    In the private sector, Big Data enthusiasts have claimed that Big Data holds the potential to ensure complete transparency of supply chains, enabling concerned consumers to trace the source of their products, for example to ensure that they have been sourced ethically[20]. Furthermore, Big Data is now making accessible information which was previously unavailable to average consumers, and challenging companies whose business models rely on the maintenance of information asymmetries. The real-estate industry, for example, relies heavily upon its ability to acquire and control proprietary information, such as transaction data, as a competitive asset. In recent years, however, many online services have allowed consumers to effectively bypass agents, by providing alternative sources of real-estate data and enabling prospective buyers and sellers to communicate directly with each other[21]. By providing consumers with access to large quantities of actionable data, Big Data can therefore help to eliminate established information asymmetries, allowing them to make better and more informed decisions about the products they buy and the services they enlist.

    This potential to harness the power of Big Data to improve transparency and accountability can also be seen in the public sector, with many scholars suggesting that greater access to government data could help to stem corruption and make politics more accountable. This view was recently endorsed by the UN, which highlighted the potential uses of Big Data to improve policymaking and accountability in a report published by the Independent Expert Advisory Group on the "Data Revolution for Sustainable Development". In the report, the experts emphasize the potential of what they term the 'data revolution' to help achieve sustainable development goals, for example by helping civil society groups and individuals to 'develop data literacy and help communities and individuals to generate and use data, to ensure accountability and make better decisions for themselves'[22].

    What are the potential harms of Big Data?

    Whilst it is often easy to be seduced by the utopian visions of Big Data evangelists, in order to ensure that Big Data can deliver the types of far-reaching benefits its proponents promise, it is vital that we are also sensitive to its potential harms. Within the existing literature, discussions about the potential harms of Big Data are perhaps understandably dominated by concerns about privacy. Yet as Big Data has begun to play an increasingly central role in our daily lives, a broad range of new threats has begun to emerge, including issues related to security and scientific epistemology, as well as problems of marginalisation, discrimination and transparency, each of which is discussed separately below.

    Privacy

    By far the biggest concern raised by researchers in relation to Big Data is its risk to privacy. Given that by its very nature Big Data requires extensive and unprecedented access to large quantities of data, it is hardly surprising that many of the benefits outlined above in one way or another exist in tension with considerations of privacy. Although many scholars have called for a broader debate on the effects of Big Data on ethical best practice[23], a comprehensive exploration of the complex debates surrounding the ethical implications of Big Data goes far beyond the scope of this article. Instead, we will simply attempt to highlight some of the major areas of concern expressed in the literature, including the effects of Big Data on established principles of privacy and its implications for the suitability of existing regulatory frameworks governing privacy and data protection.

    1. Re-identification

    Traditionally many Big Data enthusiasts have used de-identification - the process of anonymising data by removing personally identifiable information (PII) - as a way of justifying mass collection and use of personal data. By claiming that such measures are sufficient to ensure the privacy of users, data brokers, companies and governments have sought to deflect concerns about the privacy implications of Big Data, and suggest that it can be compliant with existing regulatory and legal frameworks on data protection.

    However, many scholars remain concerned about the limits of anonymisation. As Tene and Polonetsky observe, 'Once data - such as a clickstream or a cookie number - are linked to an identified individual, they become difficult to disentangle'[24]. They cite the example of University of Texas researchers Narayanan and Shmatikov, who were able to successfully re-identify anonymised Netflix user data by cross-referencing it with data stored in a publicly accessible online database. As Narayanan and Shmatikov themselves explained, 'once any piece of data has been linked to a person's real identity, any association between this data and a virtual identity breaks anonymity of the latter'[25]. The quantity and variety of datasets which Big Data analytics has made associable with individuals is therefore expanding the scope of the types of data that can be considered PII, as well as undermining claims that de-identification alone is sufficient to ensure privacy for users.
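
    The linkage attack that Narayanan and Shmatikov describe can be illustrated in a few lines of pandas: a 'de-identified' dataset is joined to an auxiliary, identified dataset on shared quasi-identifiers. Both tables below are entirely hypothetical.

        import pandas as pd

        # Hypothetical "anonymised" records: direct identifiers removed,
        # but quasi-identifiers (PIN code, birth year, gender) retained.
        released = pd.DataFrame({
            "pincode": ["560001", "560001", "110002"],
            "birth_year": [1984, 1991, 1984],
            "gender": ["F", "M", "F"],
            "diagnosis": ["diabetes", "asthma", "hypertension"],
        })

        # Hypothetical auxiliary data that carries names, e.g. a public register.
        auxiliary = pd.DataFrame({
            "name": ["A. Rao", "B. Singh", "C. Das"],
            "pincode": ["560001", "560001", "110002"],
            "birth_year": [1984, 1991, 1984],
            "gender": ["F", "M", "F"],
        })

        # The attack is simply a join on the quasi-identifiers.
        linked = released.merge(auxiliary, on=["pincode", "birth_year", "gender"])
        print(linked[["name", "diagnosis"]])  # every released record is re-identified

    The Netflix study used noisier signals (movie ratings and dates) rather than an exact join, but the principle is the same: enough overlapping attributes make an 'anonymous' record unique.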

    2. Privacy Frameworks Obsolete?

    In recent decades, privacy and data protection frameworks based upon a number of so-called 'privacy principles' have formed the basis of most attempts to encourage greater consideration of privacy issues online[26]. For many, however, the emergence of Big Data has raised questions about the extent to which these 'principles of privacy' are workable in an era of ubiquitous data collection.

    Collection Limitation and Data Minimization: Big Data by its very nature requires the collection and processing of very large and very diverse data sets. Unlike other forms of scientific research and analysis, which use various sampling techniques to identify and target the types of data most useful to the research question, Big Data instead seeks to gather as much data as possible in order to achieve full resolution of the phenomenon being studied, a task made much easier in recent years by the proliferation of internet-enabled devices and the growth of the Internet of Things. This goal of attaining comprehensive coverage exists in tension, however, with the key privacy principles of collection limitation and data minimization, which seek to limit both the quantity and variety of data collected about an individual to the absolute minimum[27].

    Purpose Limitation: Since the utility of a given dataset is often not easily identifiable at the time of collection, datasets are increasingly being processed several times for a variety of different purposes. Such practices have significant implications for the principle of purpose limitation, which aims to ensure that organizations are open about their reasons for collecting data, and that they use and process the data for no other purpose than those initially specified [28].

    Notice and Consent: The principles of notice and consent have formed the cornerstones of attempts to protect privacy for decades. Nevertheless, in an era of ubiquitous data collection, the notion that an individual must be required to provide their explicit consent to allow for the collection and processing of their data seems increasingly antiquated, a relic of an age when it was possible to keep track of one's personal data relationships and transactions. Today, as data streams become more complex, some have begun to question the suitability of consent as a mechanism to protect privacy. In particular, commentators have noted that, given the complexity of data flows in the digital ecosystem, most individuals are not well placed to make truly informed decisions about the management of their data[29]. In one study, researchers demonstrated that by creating a perception of control, users were more likely to share their personal information, regardless of whether or not they had actually gained control[30]. As such, for many, the garnering of consent is increasingly becoming a symbolic box-ticking exercise which achieves little more than to irritate and inconvenience customers, whilst providing a burden for companies and a hindrance to growth and innovation[31].

    Access and Correction: The principle of 'access and correction' refers to the right of individuals to obtain personal information being held about them, as well as the right to erase, rectify, complete or otherwise amend that data. Aside from the well documented problems with privacy self-management, for many the real-time nature of data generation and analysis in an era of Big Data poses a number of structural challenges to this principle of privacy. As one commentator notes, 'a good amount of data is not pre-processed in a similar fashion as traditional data warehouses. This creates a number of potential compliance problems such as difficulty erasing, retrieving or correcting data. A typical big data system is not built for interactivity, but for batch processing. This also makes the application of changes on a (presumably) static data set difficult'[32].

    Opt In-Out: The notion that the provision of data should be a matter of personal choice on the part of the individual, and that the individual can, if they choose, decide to 'opt out' of data collection, for example by ceasing to use a particular service, is an important component of privacy and data protection frameworks. The proliferation of internet-enabled devices, their integration into the built environment, and the real-time nature of data collection and analysis, however, are beginning to undermine this concept. For many critics of Big Data, the ubiquity of data collection points, as well as the compulsory provision of data as a prerequisite for access to and use of many key online services, is making opting out of data collection not only impractical but in some cases impossible.[33]

    3. "Chilling Effects"

    For many scholars, the normalization of large-scale data collection is steadily producing a widespread perception of ubiquitous surveillance amongst users. Drawing upon Foucault's analysis of Jeremy Bentham's panopticon and the disciplinary effects of surveillance, they argue that this perception of permanent visibility can cause users to sub-consciously 'discipline' and self-regulate their own behavior, fearful of being targeted or identified as 'abnormal'[34]. As a result, the pervasive nature of Big Data risks generating a 'chilling effect' on user behavior and free speech.

    Although the notion of "chilling effects" is quite prevalent throughout the academic literature on surveillance and security, the difficulty of quantifying the perception and effects of surveillance on online behavior and practices means that there have only been a limited number of empirical studies of this phenomenon, and none directly related to the chilling effects of Big Data. One study, conducted by researchers at MIT, however, sought to assess the impact of Edward Snowden's revelations about NSA surveillance programs on Google search trends. Nearly 6,000 participants were asked to individually rate certain keywords for their perceived degree of privacy sensitivity along multiple dimensions. Using Google's own publicly available search data, the researchers then analyzed search patterns for these terms before and after the Snowden revelations. In doing so they were able to demonstrate a reduction of around 2.2% in searches for those terms deemed to be most sensitive in nature. According to the researchers themselves, the results 'suggest that there is a chilling effect on search behaviour from government surveillance on the Internet'[35]. Although this study focussed on the effects of government surveillance, for many privacy advocates the growing pervasiveness of Big Data risks generating similar results.[36]
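
    The core of such a measurement is simple to sketch: compare average search volume for sensitive terms before and after the disclosure. The weekly figures below are invented, and the calculation is far cruder than the MIT study's design, which rated terms for sensitivity and controlled for underlying trends.

        # Hypothetical weekly search volumes for a privacy-sensitive term.
        before = [1020, 1005, 998, 1012, 1030, 995]
        after = [985, 992, 978, 1001, 970, 988]

        mean_before = sum(before) / len(before)
        mean_after = sum(after) / len(after)
        change_pct = (mean_after - mean_before) / mean_before * 100
        print(f"Change in search volume: {change_pct:.1f}%")  # a small negative shift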

    4. Dignitary Harms of Predictive Decision-Making

    In addition to its potentially chilling effects on free speech, the automated nature of Big Data analytics also possesses the potential to inflict so-called 'dignitary harms' on individuals, by revealing insights about them that they would have preferred to keep private[37].

    In an infamous example, following a shopping trip to the retail chain Target, a young girl began to receive mail at her father's house advertising products for babies, including diapers, clothing and cribs. In response, her father complained to the management of the company, incensed by what he perceived to be the company's attempts to "encourage" pregnancy in teens. A few days later, however, the father was forced to contact the store again to apologise, after his daughter had confessed to him that she was indeed pregnant. It was later revealed that Target regularly analyzed the sale of key products such as supplements or unscented lotions in order to generate "pregnancy prediction" scores, which could be used to assess the likelihood that a customer was pregnant and to therefore target them with relevant offers[38]. Such cases, though anecdotal, illustrate how Big Data, if not adopted sensitively, can lead to potentially embarrassing information about users being made public.
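
    Reporting on the Target case suggests the score was built from purchase signals; the snippet below is a deliberately crude sketch of that idea, a weighted sum over basket items with a targeting threshold. The weights, product categories and threshold are invented for illustration and are not the retailer's actual model.

        # Hypothetical weights: how strongly each product category contributes to the score.
        WEIGHTS = {
            "unscented_lotion": 2.0,
            "calcium_supplement": 1.5,
            "large_tote_bag": 0.5,
            "cotton_balls": 1.0,
        }
        THRESHOLD = 3.0  # hypothetical cut-off above which a customer is targeted with offers

        def prediction_score(basket):
            """Sum the weights of score-relevant items in a customer's purchase history."""
            return sum(WEIGHTS.get(item, 0.0) for item in basket)

        basket = ["unscented_lotion", "calcium_supplement", "cotton_balls", "bread"]
        score = prediction_score(basket)
        print(score, score >= THRESHOLD)  # 4.5 True -> this customer would receive baby-product offers

    The dignitary harm arises not from the arithmetic itself but from acting on it: the inference is revealed to whoever sees the resulting mail.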

    Security

    In relation to cybersecurity, Big Data can be viewed to a certain extent as a double-edged sword. On the one hand, the unique capabilities of Big Data analytics can provide organizations with new and innovative methods of enhancing their cybersecurity systems. On the other hand, however, the sheer quantity and diversity of data emanating from a variety of sources creates its own security risks.

    5. "Honey-Pot"

    The larger the quantities of confidential information stored by companies on their databases the more attractive those databases may appear to potential hackers.

    6. Data Redundancy and Dispersion

    Inherent to Big Data systems is the duplication of data to many locations in order to optimize query processing. Data is dispersed across a wide range of data repositories in different servers, in different parts of the world. As a result it may be difficult for organizations to accurately locate and secure all items of personal information.

    Epistemological and Methodological Implications

    In 2008 Chris Anderson infamously proclaimed the 'end of theory'. Writing for Wired Magazine, Anderson predicted that the coming age of Big Data would create a 'deluge of data' so large that the scientific methods of hypothesis, sampling and testing would be rendered 'obsolete' [39]. 'There is now a better way' Anderson insisted, 'Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot'[40].

    In spite of these bold claims, however, many theorists remain skeptical of Big Data's methodological benefits and have expressed concern about its potential implications for conventional scientific epistemologies. For them, the increased prominence of Big Data analytics in science does not signal a paradigmatic transition to a more enlightened data-driven age, but a hollowing out of the scientific method and an abandonment of causal knowledge in favor of shallow correlative analysis[41].

    7. Obfuscation

    Although Big Data analytics can be utilized to study almost any phenomenon for which enough data exists, many theorists have warned that simply because Big Data analytics can be used does not necessarily mean that they should be used[42]. Bigger is not always better, and indeed the sheer quantity of data made available to users may in fact act to obscure certain insights. Whereas traditional scientific methods use sampling techniques to identify the most important and relevant data, Big Data by contrast encourages the collection and use of as much data as possible, in an attempt to attain full resolution of the phenomenon being studied. However, not all data is equally useful, and simply inputting as much data as possible into an algorithm is unlikely to produce accurate results and may instead obscure key insights.

    Indeed, whilst the promise of automation is central to a large part of Big Data's appeal, researchers observe that most Big Data analysis still requires an element of human judgement to filter out the 'good' data from the 'bad', and to decide what aspects of the data are relevant to the research objectives. As Boyd and Crawford observe, 'in the case of social media data, there is a 'data cleaning' process: making decisions about what attributes and variables will be counted, and which will be ignored. This process is inherently subjective"[43].

    Google's Flu Trends project provides an illustrative example of how Big Data's tendency to maximise data inputs can produce misleading results. Designed to accurately track flu outbreaks based upon data collected from Google searches, the project was initially proclaimed to be a great success. Gradually, however, it became apparent that the results being produced were not reflective of the reality on the ground. It was later discovered that the algorithms used by the project to interpret search terms were insufficiently accurate to filter out anomalies in searches, such as those related to the 2009 H1N1 flu pandemic. As such, despite the great promise of Big Data, scholars insist it remains critical to be mindful of its limitations, to remain selective about the types of data included in the analysis, and to exercise caution and intuition whenever interpreting its results[44].

    8. "Apophenia"

    In complete contrast to the problem of obfuscation, Boyd and Crawford observe how Big Data may also lead to the practice of 'apophenia', a phenomenon whereby analysts interpret patterns where none exist, 'simply because enormous quantities of data can offer connections that radiate in all directions'[45]. David Leinweber, for example, demonstrated that data mining techniques could show strong but ultimately spurious correlations between changes in the S&P 500 stock index and butter production in Bangladesh[46]. Such spurious correlations between disparate and unconnected phenomena are a common feature of Big Data analytics and risk leading to unfounded conclusions being drawn from the data.

    Although Leinweber's primary focus of analysis was the use of Data-Mining technologies, his observations are equally applicable to Big Data. Indeed the tendency amongst Big Data analysts to marginalise the types of domain specific expertise capable of differentiating between relevant and irrelevant correlations in favour of algorithmic automation can in many ways be seen to exacerbate many of the problems Leinweber identified.
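
    The ease with which such spurious correlations arise can be demonstrated in a few lines: any two short series that merely trend in the same direction will show a Pearson correlation close to 1, regardless of whether any causal link exists. The figures below are invented.

        import numpy as np

        # Two short, unrelated yearly series (hypothetical numbers for illustration):
        # a stock index level and butter production in a distant country.
        stock_index = np.array([1100, 1150, 1210, 1280, 1330, 1400])
        butter_tonnes = np.array([52, 54, 57, 60, 62, 66])

        r = np.corrcoef(stock_index, butter_tonnes)[0, 1]
        print(f"Pearson r = {r:.3f}")  # close to 1.0, yet the series are causally unrelated

    With thousands of candidate variables, a purely automated search is guaranteed to surface many such pairs, which is exactly the risk Leinweber's example illustrates.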

    9. From Causation to Correlation

    Closely related to the problem of apophenia is the concern that Big Data's emphasis on correlative analysis risks leading to an abandonment of the pursuit of causal knowledge in favour of shallow descriptive accounts of scientific phenomena[47].

    For many Big Data enthusiasts, 'correlation is enough', producing inherently meaningful results interpretable by anyone without the need for pre-existing theory or hypothesis. Whilst proponents of Big Data claim that such an approach allows them to produce objective knowledge, by cleansing the data of any kind of philosophical or ideological commitment, others argue that by neglecting the knowledge of domain experts Big Data risks generating a shallow type of analysis, since it fails to adequately embed observations within a pre-existing body of knowledge.

    This commitment to an empiricist epistemology and methodological monism is particularly problematic in the context of studies of human behaviour, where actions cannot be calculated and anticipated using quantifiable data alone. In such instances, a certain degree of qualitative analysis of social, historical and cultural variables may be required in order to make the data meaningful by embedding it within a broader body of knowledge. The abstract and intangible nature of these variables requires a great deal of expert knowledge and interpretive skill to comprehend. It is therefore vital that the knowledge of domain specific experts is properly utilized to help 'evaluate the inputs, guide the process, and evaluate the end products within the context of value and validity'[48].

    As such, although Big Data can provide unrivalled accounts of "what" people do, it fundamentally fails to deliver robust explanations of "why" people do it. This problem is especially critical in the case of public policy-making since without any indication of the motivations of individuals, policy-makers can have no basis upon which to intervene to incentivise more positive outcomes.

    Digital Divides and Marginalisation

    Today data is a highly valuable commodity. The market for data in and of itself has been steadily growing in recent years, with the business models of many online services now formulated around the strategy of harvesting data from users[49]. As with the commodification of anything, however, inequalities can easily emerge between the haves and the have-nots. Whilst the quantity of data currently generated on a daily basis is many times greater than at any other point in human history, the vast majority of this data is owned and tightly controlled by a very small number of technology companies and data brokers. Although in some instances limited access to data may be granted to university researchers or to those willing and able to pay a fee, in many cases data remains jealously guarded by data brokers, who view it as an important competitive asset. As a result, these data brokers and companies risk becoming the gatekeepers of the Big Data revolution, adjudicating not only over who can benefit from Big Data, but also in what context and under what terms. For many, such inconsistencies and inequalities in access to data raise serious doubts about just how widely distributed the benefits of Big Data will be. Others go even further, claiming that far from helping to alleviate inequalities, the advent of Big Data risks exacerbating already significant digital divides as well as creating new ones[50].

    10. Anti-Competitive Practices

    As a result of the reluctance of large companies to share their data, there increasingly exists a divide in access between small start-up companies and their larger and more established competitors. Thus, new entrants to the marketplace may be at a competitive disadvantage in relation to large and well established enterprises, being as they are unable to harness the analytical power of the vast quantities of data available to large companies by virtue of their privileged market position. Since the performance of many online services is today often intimately connected with the collation and use of users' data, some researchers have suggested that this inequity in access to data could lead to a reduction in competition in the online marketplace, and ultimately therefore to less innovation and choice for consumers[51].

    As a result, researchers including Nathan Newman of New York University have called for a reassessment and reorientation of anti-trust investigations and regulatory approaches more generally, 'to focus on how control of personal data by corporations can entrench monopoly power and harm consumer welfare in an economy shaped increasingly by the power of "big data"'[52]. Similarly, a report produced by the European Data Protection Supervisor concluded that 'The scope for abuse of market dominance and harm to the consumer through refusal of access to personal information and opaque or misleading privacy policies may justify a new concept of consumer harm for competition enforcement in digital economy'[53].

    11. Research

    From a research perspective barriers to access to data caused by proprietary control of datasets are problematic, since certain types of research could become restricted to those privileged enough to be granted access to data. Meanwhile those denied access are left not only incapable of conducting similar research projects, but also unable to test, verify or reproduce the findings of those who do. The existence of such gatekeepers may also lead to reluctance on the part of researchers to undertake research critical of the companies, upon whom they rely for access, leading to a chilling effect on the types of research conducted[54].

    12. Inequality

    Whilst bold claims are regularly made about the potential of Big Data to deliver economic development and generate new innovations, some critics remain concerned about how equally the benefits of Big Data will be distributed and the effects this could have on already established digital divides[55].

    Firstly, whilst the power of Big Data is already being utilized effectively by most economically developed nations, the same cannot necessarily be said for many developing countries. A combination of lower levels of connectivity, poor information infrastructure, underinvestment in information technologies and a lack of skills and trained personnel make it far more difficult for the developing world to fully reap the rewards of Big Data. As a consequence the Big Data revolution risks deepening global economic inequality as developing countries find themselves unable to compete with data rich nations whose governments can more easily exploit the vast quantities of information generated by their technically literate and connected citizens.

    Likewise, to the extent that Big Data analytics is playing a greater role in public policy-making, the capacity of individuals to generate large quantities of data could potentially affect the extent to which they can provide inputs into the policy-making process. In a country such as India, for example, where there exist high levels of inequality in access to information and communication technologies and the internet, there remain large discrepancies in the quantities of data produced by individuals. As a result, there is a risk that those who lack access to the means of producing data will be disenfranchised, as policy-making processes become configured to accommodate the needs and interests of a privileged minority[56].

    Discrimination

    13. Injudicious or Discriminatory Outcomes

    Big Data presents the opportunity for governments, businesses and individuals to make better, more informed decisions at a much faster pace. Whilst this can evidently provide innumerable opportunities to increase efficiency and mitigate risk, by removing human intervention and oversight from the decision-making process Big Data analysts run the risk of becoming blind to unfair or injudicious results generated by skewed or discriminatory programming of the algorithms.

    There currently exists a large number of automated decision-making algorithms in operation across a broad range of sectors, including most notably perhaps those used to assess an individual's suitability for insurance or credit. In either of these cases, faults in the programming or discriminatory assessment criteria can have potentially damaging implications for the individual, who may as a result be unable to obtain credit or insurance. This concern with the potentially discriminatory aspects of Big Data is prevalent throughout the literature, and real life examples have been identified by researchers in a large number of major sectors in which Big Data is currently being used[57].

    Yu, for instance, cites the case of the insurance company Progressive, which required its customers to install 'Snapshot' - a small monitoring device - in their cars in order to receive its best rates. The device tracked and reported the customers' driving habits, and discounts were offered to those drivers who drove infrequently, braked smoothly, and avoided driving at night - behaviors that correlate with a lower risk of future accidents. Although this form of price differentiation provided incentives for customers to drive more carefully, it also had the unintended consequence of unfairly penalizing late-night shift workers. As Yu observes, 'for late night shift-workers, who are disproportionately poorer and from minority groups, this differential pricing provides no benefit at all. It categorizes them as similar to late-night party-goers, forcing them to carry more of the cost of the intoxicated and other irresponsible driving that happens disproportionately at night'[58].
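
    A toy version of this kind of usage-based pricing makes the distributional effect easy to see: if night driving is penalised as a proxy for risk, a careful night-shift worker still pays more. The records, weights and base rate below are hypothetical and are not Progressive's actual pricing model.

        # Hypothetical telematics summaries per driver (values are illustrative only).
        drivers = [
            {"name": "late_shift_nurse", "night_km": 320, "day_km": 80, "hard_brakes": 2},
            {"name": "daytime_commuter", "night_km": 10, "day_km": 390, "hard_brakes": 3},
        ]

        def premium(record, base_rate=1000.0):
            """Crude usage-based pricing: load the premium for night driving and hard braking."""
            night_share = record["night_km"] / (record["night_km"] + record["day_km"])
            loading = 0.3 * night_share + 0.02 * record["hard_brakes"]
            return base_rate * (1 + loading)

        for d in drivers:
            print(d["name"], round(premium(d), 2))
        # late_shift_nurse 1280.0, daytime_commuter 1067.5: the night-shift worker pays more
        # even though their driving record is otherwise comparable.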

    In another example, it is noted how Big Data is increasingly being used to evaluate applicants for entry-level service jobs. One method of evaluating applicants is by the length of their commute, the rationale being that employees with shorter commutes are statistically more likely to remain in the job longer. However, since most service jobs are typically located in town centers, and since poorer neighborhoods tend to be those on the outskirts of town, such criteria can have the effect of unfairly disadvantaging those living in economically deprived areas. Such metrics of evaluation can therefore also unintentionally reinforce existing social inequalities by making it more difficult for economically disadvantaged communities to work their way out of poverty[59].

    14. Lack of Algorithmic Transparency.

    If data is indeed the 'oil of the 21st century'[60], then algorithms are very much the engines driving innovation and economic development. For many companies, the quality of their algorithms is often a crucial factor in providing them with a market advantage over their competitors. Given their importance, the secrets behind the programming of algorithms are often closely guarded by companies, and are typically classified as trade secrets protected by intellectual property rights. Whilst companies may claim that such secrecy is necessary to encourage market competition and innovation, many scholars are becoming increasingly concerned about the lack of transparency surrounding the design of these most crucial tools.

    In particular, there is a growing sentiment amongst many researchers that there currently exists a chronic lack of accountability and transparency in terms of how Big Data algorithms are programmed and what criteria are used to determine outcomes[61]. As Frank Pasquale observed,

    'hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy. Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy'[62].

    As such, without increased transparency in algorithmic design, instances of Big Data discrimination may go unnoticed, as analysts are unable to access the information necessary to identify them.

    Conclusion

    Today Big Data presents us with as many challenges as it does benefits. Whilst Big Data analytics can offer incredible opportunities to reduce inefficiency, improve decision-making and increase transparency, concerns remain about the effects of these new technologies on issues such as privacy, equality and discrimination. Although the tensions between the competing demands of Big Data advocates and their critics may appear irreconcilable, only by highlighting these points of contestation can we hope to begin to ask the important and difficult questions needed to resolve them, including: how can we reconcile Big Data's need for massive inputs of personal information with core principles of privacy such as data minimization and collection limitation? What processes and procedures need to be put in place during the design and implementation of Big Data models and algorithms to provide sufficient transparency and accountability so as to avoid instances of discrimination? What measures can be used to help close digital divides and ensure that the benefits of Big Data are shared equitably? Questions such as these are only just beginning to be addressed; each, however, will require careful consideration and reasoned debate if Big Data is to deliver on its promises and truly fulfil its 'revolutionary' potential.


    [1] Gantz, J., & Reinsel, D. Extracting Value from Chaos, IDC, (2011), available at: http://www.emc.com/collateral/analyst-reports/idc-extracting-value-from-chaos-ar.pdf

    [2] Meeker, M. & Yu, L. Internet Trends, Kleiner Perkins Caulfield Byers, (2013), http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013 .

    [4] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878; Tene, O., & Polonetsky, J. Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

    [5] Ibid.,

    [6] Joh. E, 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 85: 35, (2014) https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

    [7] Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

    [8] Anderson, R., & Roberts, D., 'Big Data: Strategic Risks and Opportunities', Crowe Horwath Global Risk Consulting Limited, (2012), https://www.crowehorwath.net/uploadedfiles/crowe-horwath-global/tabbed_content/big%20data%20strategic%20risks%20and%20opportunities%20white%20paper_risk13905.pdf

    [9] Ibid.

    [10] Kshetri, N., 'The Emerging role of Big Data in Key development issues: Opportunities, challenges, and concerns', Big Data & Society, (2014), http://bds.sagepub.com/content/1/2/2053951714564227.abstract

    [11] Tene, O., & Polonetsky, J., Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013), http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

    [12] Cisco, 'IoE-Driven Smart Street Lighting Project Allows Oslo to Reduce Costs, Save Energy, Provide Better Service', Cisco, (2014) Available at: http://www.cisco.com/c/dam/m/en_us/ioe/public_sector/pdfs/jurisdictions/Oslo_Jurisdiction_Profile_051214REV.pdf

    [13] Newell, B, C. Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information. University of Washington - the Information School, (2013) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2341182

    [14] Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/

    [15] Tucker, Darren S., & Wellford, Hill B., Big Mistakes Regarding Big Data, Antitrust Source, American Bar Association, (2014). Available at SSRN: http://ssrn.com/abstract=2549044

    [16] Davenport, T., Barth, P., & Bean, R., 'How Big Data Is Different', MIT Sloan Management Review, Fall (2012), available at: http://sloanreview.mit.edu/article/how-big-data-is-different/

    [17] Tucker, Darren S., & Wellford, Hill B., Big Mistakes Regarding Big Data, Antitrust Source, American Bar Association, (2014). Available at SSRN: http://ssrn.com/abstract=2549044

    [18] Raghupathi, W., & Raghupathi, V., Big data analytics in healthcare: promise and potential, Health Information Science and Systems, (2014)

    [19] Brown, B., Chui, M., Manyika, J. 'Are you Ready for the Era of Big Data?', McKinsey Quarterly, (2011), Available at, http://www.t-systems.com/solutions/download-mckinsey-quarterly-/1148544_1/blobBinary/Study-McKinsey-Big-data.pdf ; Benady, D., 'Radical transparency will be unlocked by technology and big data', Guardian (2014) Available at: http://www.theguardian.com/sustainable-business/radical-transparency-unlocked-technology-big-data

    [20] Ibid.

    [21] Ibid.

    [22] United Nations, A World That Counts: Mobilising the Data Revolution for Sustainable Development, report prepared at the request of the United Nations Secretary-General by the Independent Expert Advisory Group on a Data Revolution for Sustainable Development, (2014), pg. 18; see also Hilbert, M., Big Data for Development: From Information- to Knowledge Societies, (2013). Available at SSRN: http://ssrn.com/abstract=2205145

    [23] Greenleaf, G., Abandon All Hope? Foreword for Issue 37(2) of the UNSW Law Journal on 'Communications Surveillance, Big Data, and the Law', (2014), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2490425##; Boyd, D., and Crawford, K., 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012), http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

    [24] Tene, O., & Polonetsky, J., Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013), http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

    [25] Narayanan and Shmatikov, quoted in Ibid.

    [26] OECD, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, The Organization for Economic Co-Operation and Development, (1999); The European Parliament and the Council of the European Union, EU Data Protection Directive, "Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data," (1995)

    [27] Barocas, S., & Selbst, A. D., Big Data's Disparate Impact, California Law Review, Vol. 104, (2015). Available at SSRN: http://ssrn.com/abstract=2477899

    [28] Article 29 Working Group., Opinion 03/2013 on purpose limitation, Article 29 Data Protection Working Party, (2013) available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf

    [29] Solove, D, J. Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013), Available at: http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

    [30] Brandimarte, L., Acquisti, A., & Loewenstein, G., Misplaced Confidences: Privacy and the Control Paradox, Ninth Annual Workshop on the Economics of Information Security (WEIS), June 7-8 2010, Harvard University, Cambridge, MA, (2010), available at: https://fpf.org/wp-content/uploads/2010/07/Misplaced-Confidences-acquisti-FPF.pdf

    [31] Solove, D, J., Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013), Available at: http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

    [32] Yu, W. E., 'Data Privacy and Big Data - Compliance Issues and Considerations', ISACA Journal, Vol. 3, (2014), available at: http://www.isaca.org/Journal/archives/2014/Volume-3/Pages/Data-Privacy-and-Big-Data-Compliance-Issues-and-Considerations.aspx

    [33] Ramirez, E., Brill, J., Ohlhausen, M., Wright, J., & McSweeny, T., Data Brokers: A Call for Transparency and Accountability, Federal Trade Commission (2014) https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

    [34] Michel Foucault, Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan, London: Allen Lane, Penguin, (1977)

    [35] Marthews, A., & Tucker, C., Government Surveillance and Internet Search Behavior (2015), available at SSRN: http://ssrn.com/abstract=2412564

    [36] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012)

    [37] Hirsch, D., That's Unfair! Or is it? Big Data, Discrimination and the FTC's Unfairness Authority, Kentucky Law Journal, Vol. 103, available at: http://www.kentuckylawjournal.org/wp-content/uploads/2015/02/103KyLJ345.pdf

    [38] Hill, K., 'How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did', Forbes, February 16, 2012, http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

    [39] Anderson, C (2008) "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete", WIRED, June 23 2008, www.wired.com/2008/06/pb-theory/

    [40] Ibid.

    [41] Kitchin, R. (2014) 'Big Data, new epistemologies and paradigm shifts', Big Data & Society, April-June 2014: 1-12

    [42] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679

    [43] Ibid.

    [44] Lazer, D., Kennedy, R., King, G., & Vespignani, A., 'The Parable of Google Flu: Traps in Big Data Analysis', Science 343 (2014): 1203-1205. Copy at http://j.mp/1ii4ETo

    [45] Boyd, D., and Crawford, K., 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012), http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

    [46] Leinweber, D. (2007) 'Stupid data miner tricks: overfitting the S&P 500', The Journal of Investing, vol. 16, no. 1, pp. 15-22. http://m.shookrun.com/documents/stupidmining.pdf

    [47] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679

    [48] McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

    [49] De Zwart, M. J., Humphreys, S., & Van Dissel, B., 'Surveillance, big data and democracy: lessons for Australia from the US and UK', UNSW Law Journal, Vol. 37(2), (2014), http://www.unswlawjournal.unsw.edu.au/issue/volume-37-No-2. Retrieved from https://digital.library.adelaide.edu.au/dspace/handle/2440/90048

    [50] Boyd, D., and Crawford, K., 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012), http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878; Newman, N., Search, Antitrust and the Economics of the Control of User Data, 31 YALE J. ON REG. 401 (2014)

    [51] Newman, N., The Cost of Lost Privacy: Search, Antitrust and the Economics of the Control of User Data, (2013). Available at SSRN: http://ssrn.com/abstract=2265026; Newman, N., Search, Antitrust and the Economics of the Control of User Data, 31 YALE J. ON REG. 401 (2014)

    [52] Ibid.

    [53] European Data Protection Supervisor, Privacy and competitiveness in the age of big data: The interplay between data protection, competition law and consumer protection in the Digital Economy, (2014), available at: https://secure.edps.europa.eu/EDPSWEB/webdav/shared/Documents/Consultation/Opinions/2014/14-03-26_competitition_law_big_data_EN.pdf

    [54] Boyd, D., and Crawford, K., 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012), http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

    [55] Schradie, J., Big Data Not Big Enough? How the Digital Divide Leaves People Out, MediaShift, 31 July 2013, (2013), available at: http://mediashift.org/2013/07/big-data-not-big-enough-how-digital-divide-leaves-people-out/

    [56] Crawford, K., The Hidden Biases in Big Data, Harvard Business Review, 1 April 2013 (2013), available at: https://hbr.org/2013/04/the-hidden-biases-in-big-data

    [57] Robinson, D., Yu, H., Civil Rights, Big Data, and Our Algorithmic Future, (2014) http://bigdata.fairness.io/introduction/

    [58] Ibid.

    [59] Ibid.

    [60] Rotella, P., Is Data The New Oil?, Forbes, 2 April 2012, (2012), available at: http://www.forbes.com/sites/perryrotella/2012/04/02/is-data-the-new-oil/

    [61] Barocas, S., & Selbst, A. D., Big Data's Disparate Impact, California Law Review, Vol. 104, (2015). Available at SSRN: http://ssrn.com/abstract=2477899; Kshetri, N., 'The Emerging role of Big Data in Key development issues: Opportunities, challenges, and concerns', Big Data & Society, (2014), http://bds.sagepub.com/content/1/2/2053951714564227.abstract

    [62] Pasquale, F., The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press, (2015)

    Eight Key Privacy Events in India in the Year 2015

    by Amber Sinha — last modified Jan 03, 2016 05:43 AM
    As the year draws to a close, we are enumerating some of the key privacy related events in India that transpired in 2015. Much like the last few years, this year, too, was an eventful one in the context of privacy.

    While we did not witness, as one had hoped, any progress in the passage of a privacy law, the year saw significant developments with respect to the ongoing Aadhaar case. The statement by the Attorney General, India's foremost law officer, that there is a lack of clarity over whether the right to privacy is a fundamental right, and the fact that the matter is yet unresolved, was a huge setback to the jurisprudence on privacy.[1] However, the court has recognised a purpose limitation as applicable to the Aadhaar scheme, limiting the sharing of any information collected during the enrollment of residents in UID. A draft Encryption Policy was released and almost immediately withdrawn in the face of severe public backlash, and an updated Human DNA Profiling Bill was made available for comments. Prime Minister Narendra Modi's much publicised project "Digital India" was in the news throughout the year, and it also attracted its fair share of criticism in light of the lack of privacy safeguards it offered. Internationally, a lawsuit brought by Maximilian Schrems, an Austrian privacy activist, dealt a body blow to the fifteen-year-old Safe Harbour framework in place for data transfers between the EU and the USA. Below, we look at what we consider the eight most important privacy events in India in 2015.

    1. August 11, 2015 order on Aadhaar not being compulsory

    In 2012, a writ petition was filed by Justice K. S. Puttaswamy (Retd.) challenging the government's attempt to enroll all residents of India in the UID project and to link the Aadhaar card with various government services. A number of other petitions against the Aadhaar scheme have also been tagged with this petition and the court has been hearing them together. On August 11, 2015, the Supreme Court reiterated the position taken in its earlier orders of September 23, 2013 and March 24, 2014, stating that the Aadhaar card shall not be made compulsory for any government services.[2] Building on its earlier position, the court passed the following orders:

    a) The government must give wide publicity in the media that it was not mandatory for a resident to obtain an Aadhaar card,

    b) The production of an Aadhaar card would not be a condition for obtaining any benefits otherwise due to a citizen,

    c) Aadhaar card would not be used for any purpose other than the PDS Scheme, for distribution of foodgrains and cooking fuel such as kerosene and for the LPG distribution scheme.

    d) The information about an individual obtained by the UIDAI while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a Court for the purpose of criminal investigation.[3]

    Despite this being the fifth court order from the Supreme Court[4] stating that the Aadhaar card cannot be a mandatory requirement for access to government services or subsidies, repeated violations continue. One widely reported violation is the continued requirement of an Aadhaar number to set up a Digital Locker account, which led the activist Sudhir Yadav to file a petition in the Supreme Court.[5]

    2. No Right to Privacy - Attorney General to SC

    The Attorney General, Mukul Rohatgi, argued before the Supreme Court in the Aadhaar case that the Constitution of India does not provide for a fundamental right to privacy.[6] He referred to the body of case law in the Supreme Court dealing with this issue and to the 1954 case, MP Sharma v. Satish Chandra,[7] stating that there was "clear divergence of opinion" on the right to privacy and terming it "a classic case of unclear position of law." He also referred to the discussion on this matter in the Constituent Assembly Debates and pointed to the fact that the framers of the Constitution did not intend for this to be a fundamental right. He said the matter needed to be referred to a nine-judge Constitution bench.[8] This raises serious questions over the jurisprudence developed by the Supreme Court on the right to privacy over the last five decades. The matter is currently pending resolution by a larger bench, which is yet to be constituted by the Chief Justice of India.

    3. Shreya Singhal judgment and Section 69A, IT Act

    In the much celebrated judgment in Shreya Singhal v. Union of India, in March 2015, the Supreme Court struck down Section 66A of the Information Technology Act, 2000 as unconstitutional and laid down guidelines for online takedowns under the Internet intermediary rules. Significantly, however, the court also upheld Section 69A and the blocking rules framed under it, holding it to be a narrowly drawn provision with adequate safeguards. The rules prescribe a procedure for blocking which involves receipt of a blocking request, examination of the request by a committee, and a review committee which performs oversight functions. However, commentators have pointed to the opacity of the process prescribed by the rules under this provision. While the rules mandate that a hearing be given to the originator of the content, this safeguard is widely disregarded. The judgment did not discuss Section 69 of the Information Technology Act, 2000, which deals with the decryption of electronic communication; however, the Department of Electronics and Information Technology brought up this issue subsequently, through a Draft Encryption Policy, discussed below.

    4. Circulation and recall of Draft Encryption Policy

    In September 2015, the Department of Electronics and Information Technology (DeitY) released for public comment a draft National Encryption Policy. The draft received an immediate and severe backlash from commentators, and was withdrawn by September 22, 2015.[9] The government blamed a junior official for the poor drafting of the document and noted that it had been released without a review by the Telecom Minister, Ravi Shankar Prasad, and other senior officials.[10] The main areas of contention were a requirement that individuals store plain text versions of all encrypted communication for a period of 90 days, to be made available to law enforcement agencies on demand; the government's right to prescribe key strength, algorithms and ciphers; and a requirement that only government-notified encryption products, and vendors registered with the government, be used for encryption.[11] The purport of the above was to limit the ways in which citizens could encrypt electronic communication, and to allow adequate access to law enforcement agencies. The requirement to keep all encrypted information in plain text format for a period of 90 days garnered particular criticism, as it would allow for the creation of a 'honeypot' of unencrypted data, which could attract theft and attacks.[12] The withdrawal of the draft policy is not the final chapter in this story, as the Telecom Minister has promised that the Department will come back with a revised policy.[13] This attempt to put restrictions on the use of encryption technologies is not only in line with a host of surveillance initiatives that have mushroomed in India in the last few years,[14] but also finds resonance with a global trend which has seen various governments and law enforcement organisations argue against encryption.[15]

    5. Privacy concerns raised about Digital India

    The Digital India initiative includes over thirty Mission Mode Projects in various stages of implementation.[16] All of these projects entail the collection of vast quantities of personally identifiable information about citizens. However, most of these initiatives do not have clearly laid down privacy policies.[17] There is also a lack of properly articulated access control mechanisms, and doubts over important issues such as data ownership, since most projects are public-private partnerships in which private organisations collect, process and retain large amounts of data.[18] Ahead of Prime Minister Modi's visit to the US, over 100 prominent US-based academics released a statement raising concerns about the "lack of safeguards about privacy of information, and thus its potential for abuse" in the Digital India project.[19] It has been pointed out that the initiatives could enable a "cradle-to-grave digital identity that is unique, lifelong, and authenticable, and it plans to widely use the already mired in controversy Aadhaar program as the identification system."[20]

    6. Issues with Human DNA Profiling Bill, 2015

    The Human DNA Profiling Bill, 2015 envisions the creation of national and regional DNA databases comprising DNA profiles of the categories of persons specified in the Bill.[21] The categories include offenders, suspects, missing persons, unknown deceased persons, volunteers and such other categories as may be specified by the DNA Profiling Board, which has oversight over these banks. The Bill grants wide discretionary powers to the Board to introduce new DNA indices and make DNA profiles available for new purposes it may deem fit.[22] These powers, and the lack of proper safeguards surrounding issues like consent, retention and collection, pose serious privacy risks if the Bill becomes law. Significantly, there is no element of purpose limitation in the proposed law, which would allow DNA samples to be re-used for unspecified purposes.[23]

    7. Impact of the Schrems ruling on India

    In Schrems v. Data Protection Commissioner, the Court of Justice of the European Union (CJEU) annulled Commission Decision 2000/520, under which US data protection rules were deemed sufficient to satisfy EU privacy rules for the purpose of transfers of personal data from the EU to the US, otherwise known as the 'Safe Harbour' framework. The court ruled that the broad derogations on grounds of national security, public interest and law enforcement in place in the US go beyond the tests of proportionality and necessity under the Data Protection rules.[24] This judgment could also have implications for the data processing industry in India. For a few years now, a framework similar to the Safe Harbour has been under discussion for the transfer of data between India and the EU. The lack of a privacy legislation has been among the significant hurdles in arriving at such a framework.[25] In the absence of a Safe Harbour framework, companies in India rely on alternate mechanisms such as Binding Corporate Rules (BCR) or Model Contractual Clauses. These contracts impose an obligation on the data exporters and importers to ensure that an 'adequate level of data protection' is provided. The Schrems judgment makes it clear that an 'adequate level of data protection' entails a regime that is 'essentially equivalent' to that envisioned under Directive 95/46.[26] What this means is that any new framework for data transfers between the EU and other countries like the US or India will necessarily have to meet this test of essential equivalence. The PRISM programme in the US and the host of surveillance programmes that have been initiated by the government in India in the last few years could pose problems in satisfying this test, as they do not conform to the proportionality and necessity principles.

    8. The definition of "unfair trade practices" in the Consumer Protection Bill, 2015

    The Consumer Protection Bill, 2015, tabled in Parliament towards the end of the monsoon session,[27] has introduced an expansive definition of the term "unfair trade practices." The definition in the Bill includes disclosing "to any other person any personal information given in confidence by the consumer."[28] The clause excludes from the scope of unfair trade practices disclosures made under the provisions of any law in force or in the public interest. This provision could have a significant impact on personal data protection law in India. Currently, the only rules governing data protection are the Reasonable security practices and procedures and sensitive personal data or information Rules, 2011,[29] prescribed under Section 43A of the Information Technology Act, 2000. Under these rules, sensitive personal data or information is protected in that its disclosure requires prior permission from the data subject.[30] For other kinds of personal information not categorised as sensitive personal data or information, the only recourse available to data subjects is to claim a breach of the terms of the privacy policy, which constitutes a lawful contract.[31] The Consumer Protection Bill, 2015, if enacted as law, could significantly expand the scope of protection available to data subjects. First, unlike the Section 43A rules, the provisions of the Bill would be applicable to physical as well as electronic collection of personal information. Second, disclosure to a third party of personal information other than sensitive personal data or information could also attract a similar 'prior permission' criterion under the Bill, if it can be shown that the information was shared by the consumer in confidence.

    The events above are largely built around a few trends that we have been witnessing in the context of privacy, in India in particular and across the world in general. First, the lack of privacy safeguards in initiatives like the Aadhaar project and Digital India is symptomatic of policies that are not comprehensive in their scope, and consequently fail to address key concerns. Dr Usha Ramanathan has called these "powerpoint based policies", implemented on the basis of proposals that are superficial in their scope and do not give due regard to their impact on a host of issues.[32] Second, the privacy concerns posed by the draft Encryption Policy and the Human DNA Profiling Bill point to a motive of surveillance, in line with other projects introduced with the intent to protect and preserve national security.[33] Third, the incidents that championed the cause of privacy, like the Schrems judgment, have largely been initiated by activists and civil society actors, and have typically entailed the involvement of the judiciary, often the sole recourse in the campaign for the protection of civil rights. It must be noted that jurisprudence on the right to privacy in India has not moved beyond the guidelines set forth by the Supreme Court in PUCL v. Union of India.[34] However, new mass surveillance programmes and the massive collection of personal data by both public and private parties through various schemes mandate a re-look at the standards laid down nearly twenty years ago. The privacy issue pending resolution by a larger bench in the Aadhaar case affords an opportunity to revisit those principles in light of how surveillance has changed in the last two decades, and to strengthen privacy and data protection.


    [1] Right to Privacy not a fundamental right, cannot be invoked to scrap Aadhar: Centre tells Supreme Court, available at http://articles.economictimes.indiatimes.com/2015-07-23/news/64773078_1_fundamental-right-attorney-general-mukul-rohatgi-privacy

    [4] Five SC Orders Later, Aadhaar Requirement Continues to Haunt Many, available at http://thewire.in/2015/09/19/five-sc-orders-later-aadhaar-requirement-continues-to-haunt-many-11065/

    [5] Digital Locker scheme challenged in Supreme Court, available at http://www.moneylife.in/article/digital-locker-scheme-challenged-in-supreme-court/42607.html

    [6] Privacy not a fundamental right, argues Mukul Rohatgi for Govt as Govt affidavit says otherwise, available at http://www.legallyindia.com/Constitutional-law/privacy-not-a-fundamental-right-argues-mukul-rohatgi-for-govt-as-govt-affidavit-says-otherwise

    [7] 1954 SCR 1077.

    [8] Supra Note 1.

    [10] Encryption policy poorly worded by officer: Telecom Minister Ravi Shankar Prasad, available at http://economictimes.indiatimes.com/articleshow/49068406.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

    [11] Updated: India's draft encryption policy puts user privacy in danger, available at http://www.medianama.com/2015/09/223-india-draft-encryption-policy/

    [12] Bhairav Acharya, The short-lived adventure of India's encryption policy, available at http://notacoda.net/2015/10/10/the-short-lived-adventure-of-indias-encryption-policy/

    [13] Supra Note 9.

    [14] Maria Xynou, Big democracy, big surveillance: India's surveillance state, available at https://www.opendemocracy.net/opensecurity/maria-xynou/big-democracy-big-surveillance-indias-surveillance-state

    [15] China passes controversial anti-terrorism law to access encrypted user accounts, available at http://www.theverge.com/2015/12/27/10670346/china-passes-law-to-access-encrypted-communications ; Police renew call against encryption technology that can help hide terrorists, available at http://www.washingtontimes.com/news/2015/nov/16/paris-terror-attacks-renew-encryption-technology-s/?page=all .

    [18] Indira Jaising, Digital India Schemes Must Be Preceded by a Data Protection and Privacy Law, available at http://thewire.in/2015/07/04/digital-india-schemes-must-be-preceded-by-a-data-protection-and-privacy-law-5471/

    [19] US academics raise privacy concerns over 'Digital India' campaign, available at http://yourstory.com/2015/08/us-digital-india-campaign/

    [20] Lisa Hayes, Digital India's Impact on Privacy: Aadhaar numbers, biometrics, and more, available at https://cdt.org/blog/digital-indias-impact-on-privacy-aadhaar-numbers-biometrics-and-more/

    [22] Comments on India's Human DNA Profiling Bill (June 2015 version), available at http://www.genewatch.org/uploads/f03c6d66a9b354535738483c1c3d49e4/IndiaDNABill_FGPI_15.pdf

    [23] Elonnai Hickok, Vanya Rakesh and Vipul Kharbanda, CIS Comments and Recommendations to the Human DNA Profiling Bill, June 2015, available at http://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-human-dna-profiling-bill-2015

    [25] Jyoti Pandey, Contestations of Data, ECJ Safe Harbor Ruling and Lessons for India, available at http://cis-india.org/internet-governance/blog/contestations-of-data-ecj-safe-harbor-ruling-and-lessons-for-india

    [26] Simon Cox, Case Watch: Making Sense of the Schrems Ruling on Data Transfer, available at https://www.opensocietyfoundations.org/voices/case-watch-making-sense-schrems-ruling-data-transfer

    [28] Section 2(41) (I) of the Consumer Protection Bill, 2015.

    [30] Rule 6 of Reasonable security practices and procedures and sensitive personal data or information Rules, 2011

    [31] Rule 4 of Reasonable security practices and procedures and sensitive personal data or information Rules, 2011

    [33] Supra Note 11.

    [34] Chaitanya Ramachandran, PUCL v. Union of India Revisited: Why India's Surveillance Law Must Be Redesigned for the Digital Age, available at http://nujslawreview.org/wp-content/uploads/2015/10/Chaitanya-Ramachandran.pdf

    Free Basics: Negating net parity

    by Sunil Abraham last modified Jan 03, 2016 05:58 AM
    Researchers funded by Facebook were apparently told by 92 per cent of the Indians they surveyed from large cities, all with Internet connections and college degrees, that the Internet “is a human right and that Free Basics can help bring Internet to all of India.” What a strange way to frame the question, given that the Internet is not a human right in most jurisdictions.

    The article was published in the Deccan Herald on January 3, 2016.


    Free Basics is a gratis service offered by Facebook in partnership with telcos in 37 countries. It is a mobile app that features fewer than 100 of the billion-odd websites currently available on the WWW, which in turn is only a subset of the Internet. Free Basics violates Net Neutrality because it introduces an unnecessary gatekeeper who gets to decide “who is in” and “who is out”. Services like Free Basics could permanently alienate the poor from the full choice of the Internet because they create price discrimination hurdles that discourage those who want to leave the walled garden.

    Inika Charles and Arhant Madhyala, two interns at the Centre for Internet and Society (CIS), put the very same question to 1/100th of the Facebook sample, that is, 30 persons, at a café near our office in Bengaluru. Seventy per cent agreed with Facebook that the Internet was a human right but only 26 per cent thought Free Basics would achieve universal connectivity. My real point here is that numbers don’t matter. At least not in the typical way they do. Facebook dismissed Amba Kak’s independent, unfunded, qualitative research in Delhi, in their second public rebuttal, saying the sample size was only 20.

    That was truly ironic. The whole point of her research was the importance of small numbers. Kak says, “For some, it was the idea of an ‘emergency’ which made all-access plans valuable.” A respondent stated: “But maybe once or twice a month, I need some information which only Google can give me... like the other day my sister needed to know results to her entrance exams.” If you consider that too mundane, take a moment to picture yourself stranded in the recent Chennai floods. The statistical rarity of a Black Swan does not reduce its importance. A more neutral network is usually a more resilient network. When we do have our next national disaster, do we want to be one of the few countries on the planet that, thanks to flawed regulation, have ended up with a splinternet?

    Telecom Regulatory Authority of India (Trai) chairman R S Sharma rightly expressed some scepticism around numbers when he said “the consultation paper is not an opinion poll.” He elaborated: “The issue here is some sites are being offered to one person free of cost while another is paying for it. Is this a good thing and can operators have such powers?” Had he instead asked “Is this the best option?” my answer would be “no”. Given the way he has formulated the question, our answer is a lawyerly “it depends”. The CIS believes that differential pricing should be prohibited. However, it can be allowed under certain exceptional circumstances, when it is done in a manner that can be justified by the regulator against four axes of sometimes orthogonal policy objectives: increased access, enhanced competition, increased user choice and contribution to openness. For example, a permanent ban on Free Basics makes sense in the Netherlands, but regulation may be sufficient for India.

    Gatekeeping powers

    To the second and more important part of the Trai chairman’s question, on the gatekeeping powers of operators, our answer is a simple “no”. But then, do we have any evidence that gatekeeping powers have been abused to the detriment of consumer and public interest? No. What do we do when we cannot, like Russell’s chicken, use induction to explain our future? Prof Simon Wren-Lewis says, “If Bertrand Russell’s chicken had been an economist ...(it would have)... asked a crucial additional question: Why is the farmer doing this? What is in it for him?” There were five serious problems with Free Basics that Facebook has at least partially fixed, thanks mostly to criticism from consumers in India and Brazil. One, exclusivity with the access provider; two, exclusivity with a set of web services; three, lack of transparency regarding retention of personal information; four, misrepresentation through the name of the service, Internet.org; and five, lack of support for encrypted traffic. But how do we know these problems will stay fixed? Emerging markets guru Jan Chipchase tweeted asking “Do you trust Facebook? Today? Tomorrow? When its share price is under pressure and it wants to wring more $$$ from the platform?”

    Zero. Facebook pays telecom operators zero. The operators pay Facebook zero. The consumers pay zero. Why do we need to regulate philanthropy? Because these freebies are not purely the fruit of private capital. They are only possible thanks to an artificial state-supported oligopoly dependent on public resources like spectrum and wires (over and under public property). Therefore, these oligopolies must serve the public interest and also ensure that users are treated in a non-discriminatory fashion.

    Also, the provision of a free service should not allow powerful corporations to escape regulation. In jurisdictions like Brazil, it is clear that Facebook has to comply with consumer protection law even if users are not paying for the service. Given that big data is the new oil, Facebook could pay the access provider in advertisements, or in manipulation of public discourse, or by tweaking software defaults such as autoplay for videos, which could increase the bills of paying consumers quite dramatically.

    India needs a Net Neutrality regime that allows for business models and technological innovation as long as they don’t discriminate between users and competitors. The Trai should begin regulation based on principles, as it has rightly done with the pre-emptive temporary ban. But there is a need to bring “numbers we can trust” to the regulatory debate. We as citizens need to establish a peer-to-peer Internet monitoring infrastructure across mobile and fixed lines in India that we can use to crowdsource data.

    (The writer is Executive Director, Centre for Internet and Society, Bengaluru. He says CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world)

    Ground Zero Summit

    by Amber Sinha — last modified Jan 03, 2016 06:06 AM
    The Ground Zero Summit, which claims to be the largest collaborative platform for cyber-security in Asia, was held in New Delhi from 5th to 8th November. The conference was organised by the Indian Infosec Consortium (IIC), a not-for-profit organisation backed by the Government of India. Cyber security experts, hackers, senior officials from the government and defence establishments, senior professionals from the industry and policymakers attended the event.

    Keynote Address

    The Union Home Minister, Mr. Rajnath Singh, inaugurated the conference. Mr Singh described the barriers that governments face in ensuring cyber-security. Calling cyberspace the fifth dimension of security, in addition to land, air, water and space, Mr Singh emphasised the need to curb cyber-crimes in India, which grew by 70% in 2014 over 2013. He highlighted the fact that changes in location, jurisdiction and language make cybercrime particularly difficult to address. Continuing in the same vein, Mr. Rajnath Singh also mentioned cyber-terrorism as one of the big dangers in the time to come. With a number of government initiatives like Digital India, Smart Cities and Make in India leveraging technology, the Home Minister said that the success of these projects would depend on having robust cyber-security systems in place.

    The Home Minister outlined some initiatives that the Government of India is planning in order to address concerns around cyber security, such as plans to finalise a new national cyber policy. Significantly, he referred to a committee headed by Dr. Gulshan Rai, the National Cyber Security Coordinator, mandated to suggest a roadmap for effectively tackling cybercrime in India. This committee has recommended the setting up of an Indian Cyber Crime Coordination Centre (I-4C). This centre is meant to engage in capacity building with key stakeholders to enable them to address cyber crimes, and to work with law enforcement agencies. Earlier reports about the recommendation suggest that the I-4C will likely be placed under the National Crime Records Bureau and align with the state police departments through the Crime and Criminal Tracking Network and Systems (CCTNS). The I-4C is supposed to comprise high quality technical and R&D experts who would be engaged in developing cyber investigation tools.

    Other keynote speakers included Alok Joshi, Chairman, NTRO; Dr Gulshan Rai, National Cyber Security Coordinator; Dr. Arvind Gupta, Head of IT Cell, BJP and Air Marshal S B Dep, Chief of the Western Air Command.

    Technical Speakers

    There were a number of technical speakers who presented on an array of subjects. The first session was by Jiten Jain, a cyber security analyst, who spoke on cyber espionage conducted by actors in Pakistan to target defence personnel in India. Jiten Jain talked about how the Indian Infosec Consortium had discovered these attacks in 2014. Most of the websites and mobile apps involved posed as defence news sources and carried malware and viruses. An investigation conducted by IIC revealed the domains to be registered in Pakistan. In another session, Shesh Sarangdhar, the CEO of Seclabs, an application security company, spoke about the Darknet and ways to break anonymity on it. Sarangdhar mentioned that anonymity on the Darknet depends on every determinant in the communication chain maintaining a specific state. He discussed techniques like using audio files, cross-domain techniques on Tor and Sybil attacks as methods of deanonymization. Dr. Triveni Singh, Assistant Superintendent of Police, Special Task Force, UP Police, made a presentation on trends in cyber crime. Dr. Singh emphasised the amount of uncertainty with regard to the purpose of a computer intrusion. He discussed real-life case studies, such as data theft, credit card fraud and share-trading fraud, from the perspective of law enforcement agencies.

    Anirudh Anand, CTO of Infosec Labs, discussed how web applications are heavily reliant on filters or escaping methods. His talk focused on XSS (cross-site scripting) and bypassing regular expression filters. He also announced the release of XSS Labs, an XSS test bed for security professionals and developers that includes filter evasion techniques like b-services, weak cryptographic design and cross-site request forgery. Jan Siedl, an authority on SCADA, presented on TOR tricks which may be used by bots, shells and other tools to better use the TOR network and I2P. His presentation dealt with using obfuscated bridges, Hidden Services based HTTP, multiple C&C addresses and the use of OTP. Aneesha, an intern with the Kerala Police, spoke about elliptic curve cryptography and its features, such as low processing overheads. As this requires messages to be encoded as points on the elliptic curve, efficient encoding and decoding techniques need to be developed. Aneesha spoke about an algorithm called Generator-Inverse for encoding and decoding a message using a Single Sign-on mechanism. Other subjects presented included vulnerabilities that remain despite the use of TLS/SSL, deception technology and the cyber kill-chain, credit card frauds, post-quantum crypto-systems and popular Android malware.
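
    To make the filter-evasion problem concrete for readers, below is a small, purely illustrative sketch (not taken from the talk) of why blacklist-style regular-expression filters of the kind discussed are fragile. The filter and payloads are invented for demonstration; the standard remedy shown at the end is contextual output encoding.

        import re
        import html

        # A naive blacklist filter: it strips literal <script>...</script> blocks and nothing else.
        def naive_filter(user_input: str) -> str:
            return re.sub(r"<script>.*?</script>", "", user_input, flags=re.DOTALL)

        payloads = [
            "<ScRiPt>alert(1)</ScRiPt>",                    # case variation is not matched
            "<scr<script></script>ipt>alert(1)</script>",   # stripped tags reassemble into a script tag
            "<img src=x onerror=alert(1)>",                  # event handler, no <script> tag at all
        ]

        for p in payloads:
            print("filtered:", naive_filter(p))

        # Escaping output for the HTML context neutralises all three payloads.
        print("escaped :", html.escape("<img src=x onerror=alert(1)>"))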

    Panels

    There were also two panels organised at the conference. Samir Saran, Vice President of the Observer Research Foundation, moderated the first panel, on Cyber Arms Control. The panel included participants such as Lt. General A K Sahni from the South Western Air Command; Lt. General A S Lamba, retired Vice Chief of the Indian Army; Alok Vijayant, Director of Cyber Security Operations at NTRO; and Captain Raghuraman from Reliance Industries. The panel debated the virtues of cyber arms control treaties. The panel acknowledged that there was a need to frame rules and create a governance mechanism for wars in cyberspace. However, this would be effective only if governments were the primary actors with the capability for building cyber-warfare know-how and tools. The reality is that most kinds of cyber weapons involve non-state actors from the hacker community. In light of this, cyber arms control treaties would lose most of their effectiveness.

    The second panel was on ‘Make for India’ initiatives. Dinesh Bareja, the CEO of Open Security Alliance and Pyramid Cyber Security, was the moderator for this panel, which also included Nandakumar Saravade, CEO of the Data Security Council of India; Sachin Burman, Director of NCIIPC; Dr. B J Srinath, Director General of ICERT; and Amit Sharma, Joint Director of DRDO. The focus of this session was on ‘Make in India’ opportunities in the domain of cyber security. The panelists discussed the role the government and industry could play in creating an ecosystem that supports entrepreneurs in skill development. Among the approaches discussed were involving actors in knowledge-sharing and mentoring chapters, which could be backed by organisations like NASSCOM, and bringing together industry and government experts at events like the Ground Zero Summit to provide knowledge and training on cyber-security issues.

    Exhibitions

    The conference was accompanied by exhibitions showcasing indigenous cybersecurity products. The exhibitors included Smokescreen Technologies, Sempersol Consultancy, Ninja Hackon, Octogence Technologies, Secfence, Amity, Cisco Academy, Robotics Embedded Education Services Pvt. Ltd., Defence Research and Development Organisation (DRDO), Skin Angel, Aksit, Alqimi, Seclabs and Systems, Forensic Guru, Esecforte Technologies, Gade Autonomous Systems, National Critical Information Infrastructure Protection Centre (NCIIPC), Indian Infosec Consortium (IIC), INNEFU, Event Social, National Internet Exchange of India (NIXI) and Robotic Zone.

    The conference also witnessed events such as Drone Wars, in which selected participants had to navigate a drone, a Hacker Fashion Show and the official launch of Ground Zero’s music album.

    Understanding the Freedom of Expression Online and Offline

    by Prasad Krishna last modified Jan 03, 2016 10:24 AM

    PDF document icon PROVISIONAL PROGRAMME AGENDA_.pdf — PDF document, 542 kB (555783 bytes)

    ICFI Workshop

    by Prasad Krishna last modified Jan 03, 2016 10:33 AM

    PDF document icon ICFI Workshop note 10thDec2015.pdf — PDF document, 664 kB (680175 bytes)

    Facebook Free Basics: Gatekeeping Powers Extend to Manipulating Public Discourse

    by Vidushi Marda last modified Jan 09, 2016 01:43 PM
    15 million people have come online through Free Basics, Facebook's zero-rated walled garden, in the past year. "If we accept that everyone deserves access to the internet, then we must surely support free basic internet services. Who could possibly be against this?" asks Facebook founder Mark Zuckerberg, in a recent op-ed defending Free Basics.

    The article was published in Catchnews on January 6, 2016. For more info click here.


    This rhetorical question, however, has elicited a plethora of answers. The network neutrality debate has accelerated over the past few weeks, with the Telecom Regulatory Authority of India (TRAI) releasing a consultation paper on differential pricing.

    While notifications to "Save Free Basics in India" prompt you on Facebook, an enormous backlash against this zero-rated service has erupted in India.

    Free Basics

    The policy objectives that must guide regulating net neutrality are consumer choice, competition, access and openness. Facebook claims that Free Basics is a transition to the full internet and digital equality. However, by acting as a gatekeeper, Facebook gives itself the distinct advantage of deciding what services people can access for free by virtue of them being "basic", thereby violating net neutrality.

    Amidst this debate, it's important to think of the impact Facebook can have on manipulating public discourse. In the past, Facebook has used its powerful News Feed algorithm to significantly shape our consumption of information online.

    In July 2014, Facebook researchers revealed that for a week in January 2012, it had altered the news feeds of 689,003 randomly selected Facebook users to control how many positive and negative posts they saw. This was done without their consent as part of a study to test how social media could be used to spread emotions online.

    Their research showed that emotions were in fact easily manipulated. Users tended to write posts that were aligned with the mood of their timeline.

    Another worrying indication of Facebook's ability to alter discourse came during the ALS Ice Bucket Challenge in July and August 2014. Users' News Feeds were flooded with videos of individuals pouring a bucket of ice over their heads to raise awareness for a charitable cause, but the campaign did not spread entirely on its own merit.

    The challenge became Facebook's method of boosting its native video feature, which was launched at around the same time. Meanwhile, its News Feed was mostly devoid of any news surrounding the riots in Ferguson, Missouri, which happened to be a trending topic on Twitter at the same time.

    Each day, the news feed algorithm has to choose roughly 300 posts out of a possible 1500 for each user, which involves much more than just a random selection. The posts you view when you log into Facebook are carefully curated keeping thousands of factors in mind. Each like and comment is a signal to the algorithm about your preferences and interests.

    The amount of time you spend on each post is logged and then used to determine which posts you are most likely to stop and read. Facebook even takes into account text that is typed but not posted, and makes algorithmic decisions based on it.

    It also differentiates between likes: if you like a post before reading it, the News Feed assumes that your interest is much fainter than if you like a post after spending ten minutes reading it.
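
    To make the preceding description concrete, here is a minimal, purely illustrative sketch of how engagement signals of this kind might be combined into a ranking score. The signal names, weights and thresholds are invented for the example and are not Facebook's actual algorithm.

        from dataclasses import dataclass

        # Hypothetical engagement signals of the kind described above.
        @dataclass
        class PostSignals:
            liked: bool            # did the user like the post?
            seconds_viewed: float  # dwell time on the post
            commented: bool        # did the user comment?

        def toy_feed_score(s: PostSignals) -> float:
            """Illustrative ranking score; the weights are invented."""
            score = 0.0
            if s.liked:
                # A like after a long read counts for more than a reflexive like.
                score += 1.0 if s.seconds_viewed > 60 else 0.3
            if s.commented:
                score += 2.0
            # Dwell time contributes on its own, capped so it cannot dominate.
            score += min(s.seconds_viewed, 300) / 300
            return score

        # A reflexive like versus a like after ten minutes of reading rank very differently.
        print(toy_feed_score(PostSignals(liked=True, seconds_viewed=5, commented=False)))
        print(toy_feed_score(PostSignals(liked=True, seconds_viewed=600, commented=False)))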

    Facebook believes that this is in the best interest of the user, and that these factors help users see what they will most likely want to engage with. However, this keeps us at the mercy of a gatekeeper who impacts the diversity of information we consume, more often than not without explicit consent. Transparency is key.


    (Vidushi Marda is a programme officer at the Centre for Internet and Society)

    Human Rights in the Age of Digital Technology: A Conference to Discuss the Evolution of Privacy and Surveillance

    by Amber Sinha — last modified Jan 11, 2016 02:12 AM
    The Centre for Internet and Society organised a conference in roundtable format called ‘Human Rights in the Age of Digital Technology: A Conference to discuss the evolution of Privacy and Surveillance’. The conference was held at the India Habitat Centre on October 30, 2015. The conference was designed to be a forum for discussion, knowledge exchange and agenda building, to draw a shared road map for the coming months.

    In India, the Right to Privacy has been interpreted to mean an individual’s right to be left alone. In an age of massive use of Information and Communications Technology, it has become imperative to have this right protected. The Supreme Court has held in a number of its decisions that the right to privacy is implicit in the fundamental right to life and personal liberty under Article 21 of the Indian Constitution, though Part III does not explicitly mention this right. The Supreme Court has identified the right to privacy most often in the context of state surveillance and introduced the standards of compelling state interest, targeted surveillance and oversight mechanisms, which have been incorporated in the form of rules under the Indian Telegraph Act, 1885. Of late, privacy concerns have gained importance in India due to the initiation of national programmes like the UID Scheme, DNA Profiling and the National Encryption Policy, which have attracted criticism for their impact on the right to privacy. To add to the growing concerns, the Attorney General, Mukul Rohatgi, argued in the ongoing Aadhaar case that the judicial position on whether the right to privacy is a fundamental right is unclear, questioning the entire body of jurisprudence on the right to privacy developed over the last few decades.

    Participation

    The roundtable saw participation from various civil society organisations such as the Centre for Communication Governance and the Internet Democracy Project, as well as individual researchers like Dr. Usha Ramanathan and Colonel Mathew.

    Introductions

    Vipul Kharbanda, Consultant, CIS, made the introductions and laid down the agenda for the day. Vipul presented a brief overview of the kind of work CIS is engaged in around privacy and surveillance, in areas including, among others, the Human DNA Profiling Bill, 2014, the Aadhaar Project, the Privacy Bill and surveillance laws in India. It was also highlighted that CIS was engaged in work in the field of Big Data, in light of the growing voices wanting to use Big Data in the Smart Cities projects and similar initiatives, and that one of the questions was to analyse whether the nine Privacy Principles would still be valid in a Big Data and IoT paradigm.

    The Aadhaar Case

    Dr. Usha Ramanathan began by calling the Aadhaar project an identification project as opposed to an identity project. She brought up various aspects of the project, ranging from the myth of voluntariness and the strong and often misleading marketing that has driven the project, to the lack of mandate to collect biometric data and the problems with the technology itself. She highlighted the inconsistencies, irrationalities and lack of process that have characterised the Aadhaar project since its inception. A common theme she identified in how the project has been run was the element of ad-hoc-ness in many important decisions taken on a national scale while migrating from existing systems to the Aadhaar framework. She particularly highlighted the fact that, for civil society actors trying to make sense of the project, an acute problem was the lack of credible information available. In that respect, she termed it a ‘powerpoint-driven project’, with a focus on information collection but little information available about the project itself. Another issue Dr. Ramanathan brought up was the lack of concern exhibited by most people in sharing their biometric information without being aware of what it would be used for, which is in some ways symptomatic of the way we have begun to interact with technology, willingly giving information about ourselves with little thought. Dr Ramanathan’s presentation detailed the response to the project from various quarters in the form of petitions in different high courts in India, how the cases were received by the courts, and the contradictory responses from the government at various stages. Alongside, she also sought to place the Aadhaar case in the context of various debates and issues, like its conflict with the National Population Register, exclusion, issues around ownership of the data collected, national security implications and the impact on privacy and surveillance. Aside from the above issues, Dr. Ramanathan also posited that the kind of flat idea of identity envisaged by projects like Aadhaar is problematic in that it adversely impacts how people can live, act and define themselves. In summation, she termed the behaviour of the government irresponsible for the manner in which it has changed its stand on issues to suit the expediency of the moment, and was particularly severe on the Attorney General for raising questions about the existence of a fundamental right to privacy and casually putting in peril jurisprudence on civil liberties that has evolved over decades.

    Colonel Mathew concurred with Dr. Ramanathan that the Aadhaar Project was not about identity but about identification. Prasanna developed this further, saying that while identity was a right unto the individual, identification was something done to you by others. Colonel Mathew further presented a brief history of the Aadhaar case, and how the significant developments over the last few years have played out in the courts. One of the important questions that Colonel Mathew addressed was the claim of uniqueness made by the UID project. He pointed to research conducted by Hans Varghese Mathews, which analysed the data on biometric collection and processing released by the UIDAI and demonstrated a clear probability of duplication in 1 out of every 97 enrolments. He also questioned the oft-repeated claim that UID would give identification to those without it and allow them to access welfare schemes. In this context, he pointed to the failures of the introducer system and the fact that only 0.03% of those registered have been enrolled through it. Colonel Mathew also questioned the change in stance by the ruling party, the BJP, which had earlier declared that the UID project should be scrapped as it was a threat to national security. According to him, the prime movers of the scheme were corporate interests outside the country interested in the data to be collected. This, he claimed, created very serious risks to national security. Prasanna added that while, on the face of it, some of the claims of threats to national security may sound alarmist, if one were to critically study the manner in which the data had been collected for this project, the concerns appeared justified.

    The Draft Encryption Policy

    Amber Sinha, Policy Officer at CIS, made a presentation on the brief appearance of the Draft Encryption Policy, which was released in September this year and withdrawn by the government within a day. Amber provided an overview of the policy, emphasising the clauses limiting the kinds of encryption algorithms and key sizes that individuals and organisations could use, and the ill-advised procedures that needed to be followed. After the presentation, the topic was opened for discussion. The initial part of the discussion focused on specific clauses that threatened privacy and could serve the end of enabling greater surveillance of the electronic communications of individuals and organisations, most notably the exhaustive list of permitted encryption algorithms and the requirement to keep all encrypted communication in plain text format for a period of 90 days. We also attempted to locate the draft policy in the context of privacy debates in India as well as the global response to encryption. Amber emphasised that while mandating minimum standards of encryption for communication between government agencies may be an honourable motive, concerned as it is with matters of national security, extending this to private parties and imposing upper thresholds on the kinds of encryption they can use stems from a motive of surveillance. Nayantara, of the Internet Democracy Project, pointed out that there had been a global push-back against encryption by governments in various countries like the US, Russia, China, Pakistan, Israel, the UK, Tunisia and Morocco. In India too, the IT Act places limits on encryption. Her point stands further buttressed by the calls against encryption in the aftermath of the terrorist attacks in Paris last month.

    We also intended to have a session on the Human DNA Profiling Bill led by Dr. Menaka Guruswamy. However, due to certain issues in scheduling and paucity of time, we were not able to have the session.

    Questions Raised

    On Aadhaar, some of the questions raised included the applicability of the rules under Section 43A of the IT Act to the private parties involved in the process. The issue of whether Aadhaar can be a tool against corruption was raised by Vipul. However, Colonel Mathew demonstrated through his research that issues like corruption in the TPDS system and MNREGA, which Aadhaar is supposed to solve, are not effectively addressed by it, and that simpler solutions to these problems exist.

    Ranjit raised questions about the different contexts of privacy, and referred to the work of Helen Nissenbaum. He spoke about the history of freely providing biometric information in India, initially for property documents, and how it has gradually come to be used for surveillance. He argued that, due to this tradition, many people in India do not view the sharing of biometric information as infringing on their privacy. Dipesh Jain, a student at Jindal Global Law School, pointed to challenges like how individual privacy is perceived in India, its various contexts, and people resorting to the oft-quoted dictum of ‘why do you want privacy if you have nothing to hide’. In this context, it is pertinent to mention the response of Edward Snowden to this question, who said, “Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say.” Aakash Solanki, researcher

    Vipul and Amber also touched upon the new challenges that are upon us in a world of Big Data, where traditional ways of ensuring data protection, such as the data minimisation principle and methods like anonymisation, may not work. With advances in computer science and mathematics threatening to re-identify anonymised datasets, increasing reliance on secondary uses of data, and the inadequacy of the idea of informed consent, a significant paradigm shift may be required in how we view privacy laws.

    A number of action items going forward were also discussed, with different individuals volunteering to lead research on issues like the UBCC set up by the UIDAI, the GSTN (the first national data utility), and the recourse available to an individual whose data is held by parties outside India's jurisdiction.

    A Critique of Consent in Information Privacy

    by Amber Sinha and Scott Mason — last modified Jan 18, 2016 02:20 AM
    The idea of informed consent in privacy law is supposed to ensure the autonomy of an individual in any exercise which involves sharing of the individual's personal information. Consent is usually taken through a document, a privacy notice, signed or otherwise agreed to by the participant.

    Notice and Consent as cornerstone of privacy law
    The privacy notice, which is the primary subject of this article, conveys all pertinent information, including risks and benefits to the participant, and, armed with this knowledge, the participant can make an informed choice about whether or not to participate.

    Most modern laws and data privacy principles seek to focus on individual control. In this context, the definition by the late Alan Westin, former Professor of Public Law & Government Emeritus, Columbia University, which characterises privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others,"[1] is most apt. The idea of privacy as control is what finds articulation in data protection policies across jurisdictions, beginning with the Fair Information Practice Principles (FIPP) from the United States.[2] Paul Schwartz, the Jefferson E. Peyser Professor at UC Berkeley School of Law and a Director of the Berkeley Center for Law and Technology, called the FIPP the building blocks of modern information privacy law.[3] These principles trace their history to a report called 'Records, Computers and the Rights of Citizens'[4] prepared by an Advisory Committee appointed by the US Department of Health, Education and Welfare in 1973 in response to the increasing automation of data systems containing information about individuals. The Committee's mandate was to "explore the impact of computers on record keeping about individuals and, in addition, to inquire into, and make recommendations regarding, the use of the Social Security number."[5] The most important legacy of this report was the articulation of five principles which would not only play a significant role in privacy laws in the US but also inform data protection law in most privacy regimes internationally,[6] such as the OECD Privacy Guidelines, the EU Data Protection Principles, the FTC Privacy Principles, the APEC Framework, and the nine National Privacy Principles articulated by the Justice A P Shah Committee Report, which are reflected in the Privacy Bill, 2014 in India. Fred Cate, the C. Ben Dutton Professor of Law at the Indiana University Maurer School of Law, effectively summarises the import of all of these privacy regimes as follows:

    "All of these data protection instruments reflect the same approach: tell individuals what data you wish to collect or use, give them a choice, grant them access, secure those data with appropriate technologies and procedures, and be subject to third-party enforcement if you fail to comply with these requirements or individuals' expressed preferences"[7]

    This empowers the individual and allows them to weigh their own interests in exercising their consent. The allure of this paradigm is that in one elegant stroke, it seeks to "ensure that consent is informed and free and thereby also to implement an acceptable tradeoff between privacy and competing concerns."[8] This system was originally intended to be only one of multiple ways in which data processing would be governed, along with other substantive principles such as data quality; however, it soon became the dominant, and often the only, mechanism.[9] In recent years, the emergence of Big Data and the nascent development of the Internet of Things have led many commentators to begin questioning the workability of consent as a principle of privacy.[10] In this article we will look closely at some of the issues with the concept of informed consent, and how these issues have become more acute in recent years. Following an analysis of these issues, we will conclude by arguing that consent, as the cornerstone of privacy law, may in fact be counter-productive, and that a rethinking of the principle-based approach to privacy may be necessary.

    Problems with Consent

    To a certain extent, there have always been cognitive problems with informed consent, such as long and difficult-to-understand privacy notices,[11] although in the recent past these problems have become much more aggravated. Fred Cate points out that the FIPPs at their inception were broad principles which included both substantive and procedural aspects. However, as they were translated into national laws, the emphasis remained on the procedural aspect of notice and consent. From the idea of individual or societal welfare as the goal of privacy, the focus shifted to individual control.[12] With data collection occurring with every use of online services, and complex data sets being created, it is humanly impossible to exercise rational decision-making about the choice to allow someone to use our personal data. The thrust of Big Data technologies is that the value of data resides not in its primary purposes but in its numerous secondary purposes, where data is re-used many times over.[13] In that sense, the very idea of Big Data conflicts with the data minimization principle:[14] the idea is to retain as much data as possible for secondary uses. Since these secondary uses are, by their nature, unanticipated, this runs counter to the very idea of the purpose limitation principle.[15] The notice and consent requirement has simply led to a proliferation of long and complex privacy notices which are seldom read and even more rarely understood. We will articulate some issues with privacy notices which have always existed, and have only been exacerbated in the context of Big Data and the Internet of Things.

    1. Failure to read/access privacy notices

    The notice and consent principle relies on the ability of the individual to make an informed choice after reading the privacy notice. The purpose of a privacy notice is to act as a public announcement of the internal practices on collection, processing, retention and sharing of information and make the user aware of the same.[16] However, in order to do so the individual must first be able to access the privacy notices in an intelligible format and read them. Privacy notices come in various forms, ranging from documents posted as privacy policies on a website, to click through notices in a mobile app, to signs posted in public spaces informing about the presence of CCTV cameras. [17]

    In order for the principle of notice and consent to work, privacy notices need to be made available in a language understood by the user. As per estimates, about 840 million people (11% of the world population) can speak or understand English. However, most privacy notices online are not available in the local language of different regions.[18] Further, with the ubiquity of smartphones and the advent of the Internet of Things, constrained interfaces on mobile screens and wearables make privacy notices extremely difficult to read. It must be remembered that privacy notices often run into several pages, and smaller screens effectively ensure that most users do not read through them. Further, connected wearable devices often have "little or no interfaces that readily permit choices."[19] As more and more devices are connected, this problem will only get more pronounced. Imagine a world where refrigerators act as the intermediary disclosing information to your doctor or supermarket: at what point does the data subject step in and exercise consent?[20]

    Another aspect that needs to be understood is that, unlike earlier, when data collectors were few and far between and the user could theoretically make a rational choice taking into account the purpose of data collection, in the world of Big Data consent often needs to be provided while the user is trying to access services. In that context, click-through privacy notices, such as those required to access an online application, are treated simply as an impediment that must be crossed in order to get access to services. The fact that consent needs to be given in real time almost always results in users disregarding what the privacy notices say.[21]

    Finally, some scholars have argued that while individual control over data may be appealing in theory, it merely gives an illusion of enhanced privacy, not the reality of meaningful choice.[22] Research demonstrates that the presence of the term 'privacy policy' leads people to the false assumption that if a company has a privacy policy in place, it automatically means the presence of substantive and responsible limits on how data is handled.[23] Joseph Turow, the Robert Lewis Shayon Professor of Communication at the Annenberg School for Communication, and his team, for example, have demonstrated how "[w]hen consumers see the term 'privacy policy,' they believe that their personal information will be protected in specific ways; in particular, they assume that a website that advertises a privacy policy will not share their personal information."[24] In reality, however, privacy policies are more likely to serve as liability disclaimers for companies than any kind of guarantee of privacy for consumers. Most people tend to ignore privacy policies.[25] Cass Sunstein states that our cognitive capacity to make choices and take decisions is limited: when faced with an overwhelming number of choices, most of us do not read privacy notices and resort to default options.[26] The requirement to make choices, sometimes several times a day, imposes a significant burden on consumers as well as the businesses seeking such consent.[27]

    2. Failure to understand privacy notices

    FTC Chairperson Edith Ramirez stated: "In my mind, the question is not whether consumers should be given a say over unexpected uses of their data; rather, the question is how to provide simplified notice and choice."[28] Privacy notices often come in the form of long legal documents, much to the detriment of the readers' ability to understand them. These policies are "long, complicated, full of jargon and change frequently."[29] Kent Walker lists five problems that privacy notices typically suffer from: a) overkill - long and repetitive text in small print; b) irrelevance - describing situations of little concern to most consumers; c) opacity - broad terms that reflect the truth that it is impossible to track and control all the information collected and stored; d) non-comparability - the simplification required to achieve comparability compromises accuracy; and e) inflexibility - failure to keep pace with new business models.[30] Erik Sherman reviewed twenty-three corporate privacy notices and mapped them against three indices which give the approximate level of education necessary to understand a text on a first read. His results show that most of the policies can only be understood on a first read by people with a grade level of 15 or above.[31] FTC Chairperson Timothy Muris summed up the problem with long privacy notices when he said, "Acres of trees died to produce a blizzard of barely comprehensible privacy notices."[32]
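
    Readability scores of the kind Sherman relied on are simple formulas over sentence, word and syllable counts. As a rough illustration only, the sketch below computes one widely used index, the Flesch-Kincaid grade level (not necessarily one of the three indices Sherman used), for an invented snippet of notice-style text, using a crude vowel-group heuristic to count syllables.

        # Rough illustration: approximate Flesch-Kincaid grade level of a snippet of
        # notice-style text. The syllable counter is a crude vowel-group heuristic,
        # so the output is indicative only; the sample sentence is invented.
        import re

        def count_syllables(word):
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def flesch_kincaid_grade(text):
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            words = re.findall(r"[A-Za-z']+", text)
            syllables = sum(count_syllables(w) for w in words)
            return (0.39 * len(words) / len(sentences)
                    + 11.8 * syllables / len(words)
                    - 15.59)

        if __name__ == "__main__":
            notice = ("We may share information we collect about you with affiliates, "
                      "service providers and other third parties for the purposes "
                      "described in this policy, as permitted by applicable law.")
            print(round(flesch_kincaid_grade(notice), 1))  # well above a 12th-grade level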

    Margaret Jane Radin, the former Henry King Ransom Professor of Law Emerita at the University of Michigan, provides a good definition of free consent: it "involves a knowing understanding of what one is doing in a context in which it is actually possible for one to do otherwise, and an affirmative action in doing something, rather than a merely passive acquiescence in accepting something."[33] There have been various proposals advocating a more succinct and simpler standard for privacy notices,[34] multi-layered notices,[35] or representing the information in the form of a table.[36] However, studies show only an insignificant improvement in consumers' understanding when privacy policies are represented in graphic formats like tables and labels.[37] It has also been pointed out that it is impossible to convey complex data policies in simple and clear language.[38]

    3. Failure to anticipate/comprehend the consequences of consent

    Today's infinitely complex and labyrinthine data ecosystem is beyond the comprehension of most ordinary users. Despite a growing willingness to share information online, most have no understanding of what happens to their data once they have uploaded it: where it goes, who holds it, under what conditions, for what purpose, or how it might be used, aggregated, hacked, or leaked in the future. For the most part, these operations are "invisible, managed at distant centers, from behind the scenes, by unmanned powers."[39]

    The perceived opportunities and benefits of Big Data have led to an acceptance of the indiscriminate collection of as much data as possible, as well as the retention of that data for unspecified future analysis. For many advocates, such practices are absolutely essential if Big Data is to deliver on its promises. Experts have argued that key privacy principles, particularly those of collection limitation, data minimization and purpose limitation, should not be applied to Big Data processing.[40] As mentioned above, in the case of Big Data, the value of the data collected often comes not from its primary purpose but from its secondary uses. Deriving value from datasets involves amalgamating diverse datasets and executing speculative and exploratory kinds of analysis in order to discover hidden insights and correlations that might previously have gone unnoticed.[41] As such, organizations today routinely reprocess data collected from individuals for purposes not directly related to the services they provide to the customer. These secondary uses of data are becoming increasingly valuable sources of revenue for companies as the value of data in and of itself continues to rise.[42]

    Purpose Limitation

    The principle of purpose limitation has served as a key component of data protection for decades. The purposes for processing users' data should be stated at the time of collection and consent and should be "specified, explicit and legitimate". In practice, however, the reasons given typically include phrases such as 'for marketing purposes' or 'to improve the user experience', which are vague and open to interpretation.[43]

    Some commentators, while conceding that purpose limitation in the era of Big Data may not be possible, have instead attempted to emphasise the notion of 'compatible use' requirements. In the view of the Article 29 Working Party on the protection of individuals with regard to the processing of personal data, for example, use of data for a purpose other than that originally stated at the point of collection should be subject to a case-by-case review of whether or not further processing for a different purpose is justifiable, i.e., compatible with the original purpose. Such a review may take into account, for example, the context in which the data was originally collected, the nature or sensitivity of the data involved, and the existence of relevant safeguards to ensure fair processing of the data and prevent undue harm to the data subject.[44]

    On the other hand, Big Data advocates have argued that an assessment of legitimate interest, rather than compatibility with the initial purpose, is far better suited to Big Data processing.[45] They argue that the notion of purpose limitation has become outdated. Whereas previously data was collected largely as a by-product of the service being provided - if, for example, we opted to use a service, the information we provided was for the most part necessary to enable the provision of that service - today the utility of data is no longer restricted to the primary purpose for which it is collected but can be used to provide all kinds of secondary services and resources, reduce waste, increase efficiency and improve decision-making.[46] These kinds of positive externalities, Big Data advocates insist, are only made possible by the reprocessing of data.

    Unfortunately for the notion of consent, the nature of these secondary purposes is rarely evident at the time of collection. Instead, the true value of the data can often only be revealed when it is amalgamated with other diverse datasets and subjected to various forms of analysis to help reveal hidden and non-obvious correlations and insights.[47] The uncertain and speculative value of data therefore means that it is impossible to provide "specific, explicit, and legitimate" details about how a given data set will be used or how it might be aggregated in future. Without this crucial information, data subjects have no basis upon which they can make an informed decision about whether or not to provide consent. Robert Sloan and Richard Warner argue that it is impossible for a privacy notice to contain enough information to enable free consent. They argue that current data collection practices are highly complex, involving the collection of information at one stage for one purpose, which is then retained, analyzed, and distributed for a variety of other purposes in unpredictable ways.[48] Helen Nissenbaum points to the ever-changing nature of data flows and the cognitive challenges they pose: "Even if, for a given moment, a snapshot of the information flows could be grasped, the realm is in constant flux, with new firms entering the picture, new analytics, and new back end contracts forged: in other words, we are dealing with a recursive capacity that is indefinitely extensible."[49]

    Scale and Aggregation

    Today the quantity of data being generated is expanding at an exponential rate. From smartphones and televisions, trains and airplanes, sensor-equipped buildings and even the infrastructures of our cities, data now streams constantly from almost every sector and function of daily life, 'creating countless new digital puddles, lakes, tributaries and oceans of information'.[50] In 2011 it was estimated that the quantity of data produced globally would surpass 1.8 zettabytes; by 2013 that had grown to 4 zettabytes, and with the nascent development of the Internet of Things gathering pace, these trends are set to continue.[51] Big Data by its very nature requires the collection and processing of very large and very diverse data sets. Unlike other forms of scientific research and analysis, which utilize various sampling techniques to identify and target the types of data most useful to the research questions, Big Data instead seeks to gather as much data as possible in order to achieve full resolution of the phenomenon being studied, a task made much easier in recent years as a result of the proliferation of internet-enabled devices and the growth of the Internet of Things. This goal of attaining comprehensive coverage, however, exists in tension with the key privacy principles of collection limitation and data minimization, which seek to limit both the quantity and variety of data collected about an individual to the absolute minimum.[52]

    The dilution of the purpose limitation principle means that even those who understand privacy notices and are capable of making rational choices about them cannot conceptualize how their data will be aggregated and possibly used or re-used. Seemingly innocuous bits of data revealed at different stages can be combined to reveal sensitive information about an individual. Daniel Solove, the John Marshall Harlan Research Professor of Law at the George Washington University Law School, in his book The Digital Person, calls this the aggregation effect. He argues that the ingenuity of data mining techniques, and the insights and predictions that can be drawn from them, render any cost-benefit analysis that an individual could make ineffectual.[53]
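
    The aggregation effect is easy to demonstrate. The sketch below, with datasets, names and attributes that are entirely invented for illustration, joins an 'anonymised' record set and a harmless-looking public directory on shared quasi-identifiers, re-attaching names to a sensitive attribute.

        # Minimal sketch of the aggregation effect: joining two individually innocuous
        # datasets on shared quasi-identifiers (postcode, birth year, sex) re-attaches
        # names to sensitive attributes. All records here are invented for illustration.
        QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

        # An "anonymised" release: no names, but quasi-identifiers plus a sensitive field.
        health_records = [
            {"postcode": "560001", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
            {"postcode": "110003", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
        ]

        # A seemingly harmless public directory: names plus the same quasi-identifiers.
        directory = [
            {"name": "A. Sharma", "postcode": "560001", "birth_year": 1984, "sex": "F"},
            {"name": "R. Gupta", "postcode": "110003", "birth_year": 1990, "sex": "M"},
        ]

        def reidentify(records, public_directory):
            """Link sensitive records to names when quasi-identifiers match uniquely."""
            matches = []
            for record in records:
                key = tuple(record[q] for q in QUASI_IDENTIFIERS)
                candidates = [entry for entry in public_directory
                              if tuple(entry[q] for q in QUASI_IDENTIFIERS) == key]
                if len(candidates) == 1:  # a unique match defeats the "anonymisation"
                    matches.append((candidates[0]["name"], record["diagnosis"]))
            return matches

        print(reidentify(health_records, directory))
        # [('A. Sharma', 'diabetes'), ('R. Gupta', 'asthma')]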

    4. Failure to opt-out

    The traditional choice against the collection of personal data that users have had, at least in theory, is the option to 'opt out' of certain services. This draws on the free-market theory that individuals exercise their free will when they use services and always have the option of opting out, thus arguing against regulation and relying instead on the collective wisdom of the market to weed out harms. The notion that the provision of data should be a matter of personal choice on the part of the individual, and that the individual can, if they choose, decide to 'opt out' of data collection, for example by ceasing use of a particular service, is an important component of privacy and data protection frameworks. The proliferation of internet-enabled devices, their integration into the built environment, and the real-time nature of data collection and analysis, however, are beginning to undermine this concept. For many critics of Big Data, the ubiquity of data collection points, as well as the compulsory provision of data as a prerequisite for access to and use of many key online services, is making opting out of data collection not only impractical but in some cases impossible.[54]

    Sceptics may object that individuals are still free to stop using services that require data; however, as online connectivity becomes increasingly important to participation in modern life, the choice to withdraw completely is becoming less of a genuine choice.[55] Information flows not only from the individuals it is about but also from what other people say about them. Financial transactions made online or via debit/credit cards can be analysed to derive further information about the individual. If opting out makes you look anti-social, criminal, or unethical, the claim that we are exercising free will seems murky and leads one to wonder whether we are dealing with coercive technologies.

    Another issue with the consent and opt-out paradigm is the binary nature of the choice, which makes a mockery of the notion that consent can function as an effective tool of personal data management. What it effectively means is that one can either agree to the long privacy notices, or choose to abandon the desired service. "This binary choice is not what the privacy architects envisioned four decades ago when they imagined empowered individuals making informed decisions about the processing of their personal data. In practice, it certainly is not the optimal mechanism to ensure that either information privacy or the free flow of information is being protected."[56]

    Conclusion: 'Notice and Consent' is counter-productive

    There continues to be an unwillingness amongst many privacy advocates to concede that the concept of consent is fundamentally broken; as Simon Davies, a privacy advocate based in London, comments, 'to do so could be seen as giving ground to the data vultures', and risks further weakening an already dangerously fragile privacy framework.[57] Nevertheless, as we begin to transition into an era of ubiquitous data collection, the evidence is becoming stronger that consent is not simply ineffective, but may in some instances be counter-productive to the goals of privacy and data protection.

    As already noted, the notion that privacy agreements produce anything like truly informed consent has long since been discredited; given this fact, one may ask for whose benefit such agreements are created. One may justifiably argue that, far from being for the benefit and protection of users, privacy agreements may in fact fundamentally benefit data brokers, who, having gained the consent of users, can act with near impunity in their use of the data collected. Thus, an overly narrow focus on the necessity of consent at the point of collection risks diverting our attention from the arguably more important issue of how our data is stored, analysed and distributed by data brokers following its collection.[58]

    Furthermore, given the often complicated and cumbersome processes involved in gathering consent from users, some have raised concerns that the mechanisms put in place to garner consent could themselves morph into surveillance mechanisms. Davies, for example, cites the case of the EU Cookie Directive, which required websites to gain consent for the collection of cookies. Davies observes how 'a proper audit and compliance element in the system could require the processing of even more data than the original unregulated web traffic. Even if it was possible for consumers to use some kind of gateway intermediary to manage the consent requests, the resulting data collection would be overwhelming'. Thus, in many instances there exists a fundamental tension between the requirement placed on companies to gather consent and the equally important principle of data minimization.[59]

    Given the above issues with notice and informed consent in the context of information privacy, and the fact that it is counterproductive to the larger goals of privacy law, it is important to revisit the principle- or rights-based approach to data protection and consider a paradigm shift towards a risk-based approach that takes into account the actual threats of sharing data, rather than relying on what has proved to be an ineffectual system of individual control. We will deal with some of these issues in a follow-up to this article.


    [1] Alan Westin, Privacy and Freedom, Atheneum, New York, 2015.

    [2] FTC Fair Information Practice Principles (FIPP) available at https://www.it.cornell.edu/policies/infoprivacy/principles.cfm.

    [3] Paul M. Schwartz, "Privacy and Democracy in Cyberspace," 52 Vanderbilt Law Review 1607, 1614 (1999).

    [4] US Secretary's Advisory Committee on Automated Personal Data Systems, Records, Computers and the Rights of Citizens, available at http://www.justice.gov/opcl/docs/rec-com-rights.pdf

    [6] Marc Rotenberg, "Fair Information Practices and the Architecture of Privacy: What Larry Doesn't Get," available at https://journals.law.stanford.edu/sites/default/files/stanford-technology-law-review/online/rotenberg-fair-info-practices.pdf

    [7] Fred Cate, The Failure of Information Practice Principles, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1156972

    [8] Robert Sloan and Richard Warner, Beyond Notice and Choice: Privacy, Norms and Consent, 2014, available at https://www.suffolk.edu/documents/jhtl_publications/SloanWarner.pdf

    [9] Fred Cate, Viktor Schoenberger, Notice and Consent in a world of Big Data, available at http://idpl.oxfordjournals.org/content/3/2/67.abstract

    [10] Daniel Solove, Privacy self-management and consent dilemma, 2013 available at http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

    [11] Ben Campbell, Informed consent in developing countries: Myth or Reality, available at https://www.dartmouth.edu/~ethics/docs/Campbell_informedconsent.pdf ;

    [12] Supra Note 7.

    [13] Viktor Mayer Schoenberger and Kenneth Cukier, Big Data: A Revolution that will transform how we live, work and think, John Murray, London, 2013 at 153.

    [14] The Data Minimization principle requires organizations to limit the collection of personal data to the minimum extent necessary to obtain their legitimate purpose and to delete data no longer required.

    [15] Omer Tene and Jules Polonetsky, "Big Data for All: Privacy and User Control in the Age of Analytics," SSRN Scholarly Paper, available at http://papers.ssrn.com/abstract=2149364

    [16] Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

    [17] Daniel Solove, The Digital Person: Technology and Privacy in the Information Age, NYU Press, 2006.

    [19] Opening Remarks of FTC Chairperson Edith Ramirez Privacy and the IoT: Navigating Policy Issues International Consumer Electronics Show Las Vegas, Nevada January 6, 2015 available at https://www.ftc.gov/system/files/documents/public_statements/617191/150106cesspeech.pdf

    [21] Supra Note 10.

    [22] Supra Note 7.

    [23] Chris Jay Hoofnagle & Jennifer King, Research Report: What Californians Understand About Privacy Online, available at http://ssrn.com/abstract=1262130

    [24] Joseph Turow, Michael Hennessy, Nora Draper, The Tradeoff Fallacy, available at https://www.asc.upenn.edu/sites/default/files/TradeoffFallacy_1.pdf

    [25] Saul Hansell, "Compressed Data: The Big Yahoo Privacy Storm That Wasn't," New York Times, May 13, 2002 available at http://www.nytimes.com/2002/05/13/business/compressed-data-the-big-yahoo-privacy-storm-that-wasn-t.html?_r=0

    [26] Cass Sunstein, Choosing not to choose: Understanding the Value of Choice, Oxford University Press, 2015.

    [28] Opening Remarks of FTC Chairperson Edith Ramirez Privacy and the IoT: Navigating Policy Issues International Consumer Electronics Show Las Vegas, Nevada January 6, 2015 available at https://www.ftc.gov/system/files/documents/public_statements/617191/150106cesspeech.pdf

    [29] L. F. Cranor. Necessary but not sufficient: Standardized mechanisms for privacy notice and choice. Journal on Telecommunications and High Technology Law, 10:273, 2012, available at http://jthtl.org/content/articles/V10I2/JTHTLv10i2_Cranor.PDF

    [30] Kent Walker, The Costs of Privacy, 2001 available at https://www.questia.com/library/journal/1G1-84436409/the-costs-of-privacy

    [31] Erik Sherman, "Privacy Policies are great - for Phds", CBS News, available at http://www.cbsnews.com/news/privacy-policies-are-great-for-phds/

    [32] Timothy J. Muris, Protecting Consumers' Privacy: 2002 and Beyond, available at http://www.ftc.gov/speeches/muris/privisp1002.htm

    [33] Margaret Jane Radin, Humans, Computers, and Binding Commitment, 1999 available at http://www.repository.law.indiana.edu/ilj/vol75/iss4/1/

    [34] Annie I. Anton et al., Financial Privacy Policies and the Need for Standardization, 2004 available at https://ssl.lu.usi.ch/entityws/Allegati/pdf_pub1430.pdf; Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

    [35] The Center for Information Policy Leadership, Hunton & Williams LLP, "Ten Steps To Develop A Multi-Layered Privacy Notice" available at https://www.informationpolicycentre.com/files/Uploads/Documents/Centre/Ten_Steps_whitepaper.pdf

    [36] Allen Levy and Manoj Hastak, Consumer Comprehension of Financial Privacy Notices, Interagency Notice Project, available at https://www.sec.gov/comments/s7-09-07/s70907-21-levy.pdf

    [37] Patrick Gage Kelly et al., Standardizing Privacy Notices: An Online Study of the Nutrition Label Approach available at https://www.ftc.gov/sites/default/files/documents/public_comments/privacy-roundtables-comment-project-no.p095416-544506-00037/544506-00037.pdf

    [39] Jonathan Obar, Big Data and the Phantom Public: Walter Lippmann and the fallacy of data privacy self management, Big Data and Society, 2015, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2239188

    [40] Viktor Mayer Schoenberger and Kenneth Cukier, Big Data: A Revolution that will transform how we live, work and think, John Murray, London, 2013.

    [41] Supra Note 15.

    [42] Supra Note 40.

    [43] Article 29 Working Party, (2013) Opinion 03/2013 on Purpose Limitation, Article 29, available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf

    [44] Ibid.

    [45] It remains unclear, however, whose interests would be taken into account; existing EU legislation would allow commercial/data broker/third party interests to trump those of the user, effectively allowing re-processing of personal data irrespective of whether that processing would be in the interest of the user.

    [46] Supra Note 40.

    [47] Supra Note 10.

    [48] Robert Sloan and Richard Warner, Beyond Notice and Choice: Privacy, Norms and Consent, 2014, available at https://www.suffolk.edu/documents/jhtl_publications/SloanWarner.pdf

    [49] Helen Nissenbaum, A Contextual Approach to Privacy Online, available at http://www.amacad.org/publications/daedalus/11_fall_nissenbaum.pdf

    [50] D Bollier, The Promise and Peril of Big Data. The Aspen Institute, 2010, available at: http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_Promise_and_Peril_of_Big_Data.pdf

    [51] Meeker, M. & Yu, L. Internet Trends, Kleiner Perkins Caulfield Byers, (2013), http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013 .

    [52] Supra Note 40.

    [53] Supra Note 17.

    [54] Janet Vertasi, My Experiment Opting Out of Big Data Made Me Look Like a Criminal, 2014, available at http://time.com/83200/privacy-internet-big-data-opt-out/

    [55] Ibid.

    [57] Simon Davies, Why the idea of consent for data processing is becoming meaningless and dangerous, available at http://www.privacysurgeon.org/blog/incision/why-the-idea-of-consent-for-data-processing-is-becoming-meaningless-and-dangerous/

    [58] Supra Note 10.

    [59] Simon Davies, Why the idea of consent for data processing is becoming meaningless and dangerous, available at http://www.privacysurgeon.org/blog/incision/why-the-idea-of-consent-for-data-processing-is-becoming-meaningless-and-dangerous/

    UID Ad

    by Prasad Krishna last modified Jan 13, 2016 02:28 AM

    PDF document icon Times of India 29_08_2015.pdf — PDF document, 1155 kB (1182894 bytes)

    Reply to RTI Application under RTI Act of 2005 from Vanya Rakesh

    by Vanya Rakesh last modified Jan 13, 2016 02:40 AM
    Unique Identification Authority of India replied to the RTI application filed by Vanya Rakesh.

    Madam,

    1. Please refer to your RTI application dated 3.12.2015 received in the Division on 10.12.2015 on the subject mentioned above requesting to provide the information in electronic form via the email address [email protected], copies of the artwork in print media released by UIDAI to create awareness about use of Aadhaar not being mandatory.
    2. I am directed to furnish herewith in electronic form, copy of the artwork in print media released / published in the epapers edition of the Times of India and Dainik Jagran in their respective editions of dated 29.8.2015 in a soft copy, about obtaining of Aadhaar not being mandatory for a citizen, as desired.
    3. In case, you want to go for an appeal in connection with the information provided, you may appeal to the Appellate Authority indicated below within thirty days from the date of receipt of this letter.
      Shri Harish Lal Verma,
      Deputy Director (Media),
      Unique Identification Authority of India
      3rd Floor, Tower – II, Jeevan Bharati Building,
      New Delhi – 110001.


    Yours faithfully,

    (T Gou Khangin)
    Section Officer & CPIO Media Division

    Copy for information to: Deputy Director (Establishment) & Nodal CPIO


    Below are the scanned copies:

    RTI Reply
    Coverage in Dainik Jagran

    Download the coverage in the Times of India here. Read the earlier blog entry here.

    Background Note Big Data

    by Prasad Krishna last modified Jan 17, 2016 01:55 AM

    PDF document icon Background Note-BigDataandandGovernanceinIndia.pdf — PDF document, 131 kB (134227 bytes)

    Network Neutrality across South Asia

    by Prasad Krishna last modified Jan 17, 2016 02:37 AM

    PDF document icon Network Neutrality Agenda Information_1.14..2016.pdf — PDF document, 411 kB (421545 bytes)

    NASSCOM-DSCI Annual Information Security Summit 2015 - Notes

    by Sumandro Chattapadhyay last modified Jan 19, 2016 07:58 AM
    NASSCOM-DSCI organised the 10th Annual Information Security Summit (AISS) 2015 in Delhi during December 16-17. Sumandro Chattapadhyay participated in this engaging Summit. He shares a collection of his notes and various tweets from the event.
    NASSCOM-DSCI Annual Information Security Summit 2015 - Notes

    Annual Information Security Summit (AISS) 2015

     

    Details about the Summit

    Event page: https://www.dsci.in/events/about/2261.

    Agenda: https://www.dsci.in/sites/default/files/Agenda-AISS-2015.pdf.

     

    Notes from the Summit

    Mr. G. K. Pillai, Chairman of Data Security Council of India (DSCI), set the tone of the Summit at the very first hour by noting that 1) state and private industries in India are working in silos when it comes to preventing cybercrimes, 2) there is a lot of skill among young technologists and entrepreneurs, and the state and the private sectors are often unaware of this, and 3) there is a serious lack of (cyber-)capacity among law enforcement agencies.

    In his Inaugural Address, Dr. Arvind Gupta (Deputy National Security Advisor and Secretary, NSCS), provided a detailed overview of the emerging challenges and framework of cybersecurity in India. He focused on the following points:

    • Security is a key problem in the present era of ICTs as it is not in-built. In the upcoming IoT era, security must be built into ICT systems.
    • Of the next billion additions to the internet population, 50% will be from India. Hence cybersecurity is a big concern for India.
    • ICTs will play a catalytic role in achieving SDGs. Growth of internet is part of the sustainable development agenda.
    • We need a broad range of critical security services - big data analytics, identity management, etc.
    • The e-governance initiatives launched by the Indian government are critically dependent on a safe and secure internet.
    • The dark web is a key facilitator of cybercrime. Globally there is a growing concern regarding the security of cyberspace.
    • On the other hand, there exists a deep divide in access to ICTs, and also in the availability of content in local languages.
    • The Indian government has initiated bilateral cybersecurity dialogues with various countries.
    • The Indian government is contemplating setting up centres of excellence in cryptography. It has already partnered with NASSCOM to develop cybersecurity guidelines for smart cities.
    • While India is a large global market for security technology, it also needs to be self-reliant. The Indian private sector should make use of government policies and the bilateral trust enjoyed by India with various developing countries in Africa and South America to develop security technology solutions, create meaningful jobs in India, and export services and software to other developing countries.
    • A strong research and development and manufacturing base is absolutely necessary for India to be self-reliant in cybersecurity. DSCI should work with the private sector, academia, and government to coordinate and realise this agenda.
    • Along the lines of the Climate Change Fund, we should create a cybersecurity fund, since cybersecurity is a global problem.
    • Silos are our bane in general. Bringing government agencies together is crucial. Trust issues (between government, private sector, and users) remain, and can only be resolved over time.
    • The demand for cybersecurity solutions in India is so large, that there is space for everyone.
    • The national cybersecurity centre is being set up.
    • Thinktanks can play a crucial role in helping the government to develop strategies for global cybersecurity negotiations. Indian negotiators are often capacity constrained.

    Rajendra Pawar, Chair of the NASSCOM Cyber Security Task Force, NASSCOM Cybersecurity Initiative, provided glimpses of the emerging business opportunity around cybersecurity in India:

    • In the next 10 years, the IT economy in India will be USD 350 bn, and 10% of that will be the cybersecurity pie. This means a million jobs in the cybersecurity space alone.
    • Academic institutes are key to creation of new ideas and hence entrepreneurs. Government and private sectors should work closely with academic institutes.
    • Globally, cybersecurity innovation and industries happen in clusters. Cities and states must come forward to create such clusters.
    • Two-thirds of the cybersecurity market is the provision of services. This is where India has a great advantage, and it should build on that to become a global brand in cybersecurity services.
    • Everyday digital security literacy and cultures need to be created.
    • Publication of cybersecurity best practices among private companies is a necessity.
    • Dedicated cybersecurity spending should be made part of the e-governance budget of central and state governments.
    • DSCI should function as a clearing house of cybersecurity case studies. At present, thought leadership in cybersecurity comes from the criminals. By serving as a use-case clearing house, DSCI will inform interested researchers about potential challenges for which solutions need to be created.

    Manish Tiwary of Microsoft informed the audience that India is in the top 3 positions globally in terms of malware proliferation, and this ensures that India is a big focus for Microsoft in its global war against malware. Microsoft India looks forward to working closely with CERT-In and other government agencies.

    The session on Catching Fraudsters had two insightful presentations from Dr. Triveni Singh, Additional SP of Special Task Force of UP Police, and Mr. Manoj Kaushik, IAS, Additional Director of FIU.

    Dr. Singh noted that a key challenge faced by the police today is that nobody comes to them with a case of online fraud. Most fraud businesses are run by young groups operating BPOs that steal details from individuals. There exists a huge black market of financial and personal data, often collected from financial institutions and job search sites; almost any personal data can be bought in such markets. Further, SIM cards under fake names are very easy to buy. The fraudsters operate entirely under fake identities, using operational infrastructure outsourced from legitimate vendors under fake names. Without a central database of all bank customers, it is very difficult for the police to track people across the financial sector. It becomes even more difficult for the Indian police to get access to personal data of potential fraudsters when it is stored on a foreign server, which is often the case with common web services and apps. Many Indian ISPs do not keep IP history data systematically, or do not have the technical expertise to share it in a structured and time-sensitive way.

    Mr. Kaushik explained that no financial fraud is committed uniquely via the internet. Many frauds begin on the internet but eventually involve physical fraudulent money transactions. Credit/debit card frauds all involve card data theft via various internet-based and physical methods. However, cybercrime continues to be mistakenly seen as fraud undertaken completely online. Further, mobile-based frauds are yet another category. Almost all the apps we use are compromised, or store transaction history in an insecure way, which reveals such data to hackers. The FIU is targeting bank accounts into which fraudulent money flows, and closing them down. Catching the people behind these bank accounts is much more difficult, as account loaning has become a common practice, where valid accounts are loaned out for a small amount of money to fraudsters who return the account after taking out the fraudulent money. Better information sharing between the private sector and government will make catching fraudsters easier.

    The session on Smart Cities focused on discussing the actual cities coming up in India, and the security challenges highlighted by them. There was a presentation on Mahindra World City being built near Jaipur. Presenters talked about the need to stabilise, standardise, and secure the unique identities of machines and sensors in a smart city context, so as to enable secured machine-to-machine communication. Since 'smartness' comes from connecting various applications and data silos together, the governance of proprietary technology and ensuring inter-operable data standards are crucial in the smart city.

    As Special Purpose Vehicles are being planned to realise the smart cities, the presenters warned that finding the right CEOs for these entities will be critical to their success. Legacy processes and infrastructures (and labour unions) are a big challenge when realising smart cities. Hence, the first step towards smart cities must be taken through connected enforcement of law, order, and social norms.

    Privacy-by-design and security-by-design are necessary criteria for smart city technologies. Along with that, regular and automatic software/middleware updating of distributed systems and devices should be ensured, as well as the physical security of the actual devices and cables.

    In terms of standards, security service compliance standards and those for protocols need to be established for the internet-of-things sector in India. On the other hand, there is significant interest among international vendors in serving the Indian market. All global data and cloud storage players, including Microsoft Azure cloud, are moving into India, and are working on substantial and complete data localisation efforts.

    Mr. R. Chandrasekhar, President of NASSCOM, foregrounded the recommendations made by the Cybersecurity Special Task Force of NASSCOM, in his Special Address on the second day. He noted:

    • There is a great opportunity to brand India as a global security R&D and services hub. Other countries are also quite interested in India becoming such a hub.
    • The government should set up a cybersecurity startup and innovation fund, in coordination with and working in parallel with the centres of excellence in internet-of-things (being led by DeitY) and the data science/analytics initiative (being led by DST).
    • There is an immediate need to create a capable workforce for the cybersecurity industry.
    • Cybersecurity affects everyone, but there is almost no public disclosure. This leads to low public awareness and low valuation of the costs of cybersecurity failures. The government should instruct the Ministry of Corporate Affairs to get corporates to disclose (publicly or directly to the Ministry) security breaches.
    • With digital India and everyone going online, cyberspace will increasingly be prone to attacks of various kinds, and increasing scale of potential loss. Cybersecurity, hence, must be part of the core national development agenda.
    • The cybersecurity market in India is big enough and under-served enough for everyone to come and contribute to it.

    The Keynote Address by Mr. Rajiv Singh, MD – South Asia of Entrust Datacard, and Mr. Saurabh Airi, Technical Sales Consultant of Entrust Datacard, focused on the trustworthiness and security of online identities for financial transactions. They argued that all kinds of transactions require a common form factor, which can be a card or a mobile phone. The key challenge is to make the form factor unique, verified, and secure. While no programme is completely secure, it is necessary to build security into the form factor - security of both the physical and digital kind, from the substrates of the card to the encryption algorithms. Entrust and Datacard merged in the recent past to align their identity management and security transaction workflows, from physical cards to software systems for transactions. The advantages of this joint expertise have allowed them to successfully develop the National Population Register cards of India. Now, with the mobile phone emerging as a key financial transaction form factor, the challenge across the cybersecurity industry is to offer the same level of physical, digital, and network security for the mobile phone as is provided for ATM cards and cash machines.

    The following Keynote Address by Dr. Jared Ragland, Director - Policy of BSA, focused on the cybersecurity investment landscape in India and the neighbouring region. BSA, he explained, is a global trade body of software companies, and all major global software companies are its members. Recently, BSA produced a study on the cybersecurity industry across 10 markets in the Asia Pacific region, titled the Asia Pacific Cybersecurity Dashboard. The study provides an overview of cybersecurity policy developments in these countries, and sector-specific opportunities in the region. Dr. Ragland mentioned the following as the key building blocks of cybersecurity policy: legal foundation, establishment of operational entities, building trust and partnerships (PPP), addressing sector-specific requirements, and education and awareness. As for India, he argued that while steady steps have been taken in the cybersecurity policy space by the government, a lot remains to be done. Operationalisation of the policy is especially lacking. PPPs are happening, but there is a general lack of persistent formal engagement with the private sector, especially with global software companies. There is almost no sector-specific strategy. Further, the requirement for India-specific testing of technologies, according to domestic rather than global standards, is creating entry barriers for global companies and export barriers for Indian companies. Having said that, Dr. Ragland pointed out that India's cybersecurity experience is quite representative of the Asia Pacific region. He noted the following as major stumbling blocks from an international industry perspective: unnecessary and unreasonable testing requirements, the setting of domestic standards, and data localisation rules.

    One of the final sessions of the Summit was the Public Policy Dialogue between Prof. M.V. Rajeev Gowda, Member of Parliament, Rajya Sabha, and Mr. Arvind Gupta, Head of IT Cell, BJP.

    Prof. Gowda focused on the following concerns:

    • We often freely give up our information and rights over to owners of websites and applications on the web. We need to ask questions regarding the ownership, storage, and usage of such data.
    • While Section 66A of the Information Technology Act started as an anti-spam rule, it has actually been used to harass people, instead of protecting them from online harassment.
    • The bill on DNA profiling has raised crucial privacy concerns related to this most personal data. The complexity around the issue is created by the possibility of data leakage and usage for various commercial interests.
    • We need to ask if western notions of privacy will work in the Indian context.
    • We need to move towards a cashless economy, which will not only formalise the existing informal economy but also speed up transactions nationally. We need to keep in mind that this will put a substantial demand burden on the communication infrastructure, as all transactions will happen through these.

    Mr. Gupta shared his keen insights about the key public policy issues in digital India:

    • The journey to establish the digital as a key political agenda and strategy within the BJP took him more than 6 years. He has been an entrepreneur, and will always remain one; he approached his political journey as an entrepreneur.
    • While we are producing numerous digitally literate citizens, the companies offering services on the internet often unknowingly acquire data about these citizens, store them, and sometimes even expose them. India perhaps produces the greatest volume of digital exhaust globally.
    • BJP inherited the Aadhaar national identity management platform from UPA, and has decided to integrate it deeply into its digital India architecture.
    • Financial and administrative transactions, especially ones undertaken by and with governments, are all becoming digital and mostly Aadhaar-linked. We are not sure where all such data is going, and who all has access to such data.
    • Right now there is an ongoing debate about using biometric system for identification. The debate on privacy is much needed, and a privacy policy is essential to strengthen Aadhaar. We must remember that the benefits of Aadhaar clearly outweigh the risks. Greatest privacy threats today come from many other places, including simple mobile torch apps.
    • India is rethinking its cybersecurity capacities in a serious manner. After the Paris attack it has become obvious that the state should be allowed to look into electronic communication under reasonable guidelines. The challenge is identifying the fine balance between consumers' interests on one hand, and national interest and security concerns on the other. Unfortunately, the concerns of a few are often amplified in popular media.
    • MyGov platform should be used much more effectively for public policy debates. Social media networks, like Twitter, are not the correct platforms for such debates.

     

     

    Transparency in Surveillance

    by Vipul Kharbanda last modified Jan 23, 2016 03:11 PM
    Transparency is an essential need for any democracy to function effectively. It may not be the only requirement for the effective functioning of a democracy, but it is one of the most important principles which need to be adhered to in a democratic state.

    Introduction

    A democracy involves the state machinery being accountable to the citizens it is supposed to serve, and for citizens to be able to hold their state machinery accountable, they need accurate and adequate information regarding the activities of those that seek to govern them. However, in modern democracies it is often seen that those in governance try to circumvent legal requirements of transparency and only pay lip service to this principle, while keeping their own functioning as opaque as possible.

    This tendency to not give adequate information is very evident in the departments of the government which are concerned with surveillance, and merit can be found in the argument that all of the government's clandestine surveillance activities cannot be transparent, as otherwise they would cease to be "clandestine" and hence be rendered ineffective. However, this argument is often misused as a shield by government agencies to block the disclosure of all types of information about their activities, some of which may be essential to determine whether the current surveillance regime is working in an effective, ethical, and legal manner. It is this exploitation of the argument, which is often couched in the language of, or coupled with concerns of, national security, that this paper seeks to address while voicing the need for greater transparency in surveillance activities and structures.

    In the first section the paper examines the need for transparency, and specifically deals with the requirement for transparency in surveillance. In the next part, the paper discusses the regulations governing telecom surveillance in India. The final part of the paper discusses possible steps that may be taken by the government in order to increase transparency in telecom surveillance while keeping in mind that the disclosure of such information should not make future surveillance ineffective.

    Need for Transparency

    In today's age, where technology is all-pervasive, the term "surveillance" has developed slightly sinister overtones, especially in the backdrop of the Edward Snowden revelations. Indeed, there have been several independent scandals involving mass surveillance of people in general as well as illegal surveillance of specific individuals. The fear that the term surveillance now invokes, especially among those social and political activists who seek to challenge the status quo, is in part due to the secrecy surrounding the entire surveillance regime. Leaving aside what surveillance is carried out, upon whom, and when, state actors are seldom willing to talk openly about how surveillance is carried out, how decisions regarding whom and how to target are reached, how agency budgets are allocated and spent, how effective surveillance actions have been, and so on. While there may be justified security-based arguments for not disclosing the full extent of the state's surveillance activities, this cloak of secrecy may be used illegally and in an unauthorized manner to achieve ends more harmful to citizens' rights than the maintenance of security and order in society.

    Surveillance and the interception/collection of communications data can take place under different legal processes in different countries, ranging from court-ordered requests for specified data from telecommunications companies to broad executive requests sent under regimes or regulatory frameworks requiring telecom companies to disclose information on a pro-active basis. However, it is an open secret that data collection often takes place without due process or outside legal frameworks altogether.

    It is widely believed that transparency is a critical step towards creating mechanisms of accountability for how law enforcement and government agencies access communications data. It is the first step towards an informed public debate on how the state undertakes surveillance, monitoring and interception of communications and data. Since 2010, a large number of ICT companies have begun to publish transparency reports on the extent to which governments request their user data as well as require the removal of content. Governments themselves, however, have not been forthcoming with the detailed information on surveillance programs that is necessary for an informed debate on this issue.[1] Some countries do report limited information on their surveillance activities: the U.S. Department of Justice publishes an annual Wiretap Report (U.S. Courts, 2013a), and the United Kingdom publishes the Interception of Communications Commissioner's Annual Report (May, 2013). These do not present a complete picture, but even such limited measures are unheard of in a country such as India.

    Governments can obviously provide a greater level of transparency regarding the limits placed on the freedom of expression and privacy than transparency reports by individual companies can. Company transparency reports can only illuminate the extent to which any one company receives requests and how that company responds to them. By contrast, government transparency reports can provide a much broader perspective on laws that can potentially restrict the freedom of expression or impact privacy, by illustrating the full extent to which requests are made across the ICT industry.[2]

    In India, the courts and the laws have traditionally recognized the need for transparency and derive it from the fundamental right to freedom of speech and expression guaranteed in our Constitution. This need, coupled with a sustained campaign by various organizations, finally fructified in the passage of the Right to Information Act, 2005 (RTI Act), which among other things places an obligation on the state to put its documents and records online so that they are freely available to the public. In light of this law guaranteeing the right to information, the citizens of India have the fundamental right to know what the Government is doing in their name. The free flow of information and ideas informs political debate, and the freedom of speech and expression is the lifeblood of a healthy democracy; it acts as a safety valve. People are more ready to accept decisions that go against them if they can in principle seek to influence them. The Supreme Court of India is of the view that imparting information about the working of the government on the one hand, and about its decisions affecting domestic and international trade and other activities on the other, is necessary, and has imposed an obligation upon the authorities to disclose information.[3]

    The Supreme Court, in Namit Sharma v. Union of India,[4] while discussing the importance of transparency and the right to information has held:

    "The Right to Information was harnessed as a tool for promoting development; strengthening the democratic governance and effective delivery of socio-economic services. Acquisition of information and knowledge and its application have intense and pervasive impact on the process of taking informed decision, resulting in overall productivity gains .

    ……..

    Government procedures and regulations shrouded in the veil of secrecy do not allow the litigants to know how their cases are being handled. They shy away from questioning the officers handling their cases because of the latter's snobbish attitude. Right to information should be guaranteed and needs to be given real substance. In this regard, the Government must assume a major responsibility and mobilize skills to ensure flow of information to citizens. The traditional insistence on secrecy should be discarded."

    Although these statements were made in the context of the RTI Act, the principle they illustrate is equally applicable to the field of state-sponsored surveillance. Though Indian intelligence agencies are exempt from the RTI Act, it can still be used to gain limited insight into the scope of governmental surveillance. This was demonstrated by the Software Freedom Law Centre, which discovered through RTI requests that approximately 7,500 - 9,000 interception orders are issued every month.[5]

    While it is true that transparency alone will not eliminate the barriers to freedom of expression or the harm to privacy resulting from overly broad surveillance, it provides a window into the scope of current practices. Additional measures are also needed, such as oversight and mechanisms for redress in cases of unlawful surveillance. Transparency offers a necessary first step, a foundation on which to examine current practices and contribute to a debate on human security and freedom.[6]

    It is no secret that the current framework of surveillance in India is rife with malpractices of mass surveillance and instances of illegal surveillance. There have been a number of instances of illegal and/or unauthorised surveillance in the past. The most scandalous, and thus most well known, is the incident in which a woman IAS officer was placed under surveillance at the behest of Mr. Amit Shah, currently the president of the ruling party in India, purportedly on the instructions of the current prime minister, Mr. Narendra Modi.[7] There are also a number of instances of private individuals indulging in illegal interception and surveillance: in 2005, it was reported that Anurag Singh, a private detective, along with some associates, intercepted the telephonic conversations of former Samajwadi Party leader Amar Singh. They allegedly contacted political leaders and media houses to sell the tapped telephonic conversation records. The interception was allegedly carried out by stealing genuine government letters and forging and fabricating them to obtain permission to tap Amar Singh's telephonic conversations.[8] The same individual was also implicated in tapping the telephone of the current finance minister, Mr. Arun Jaitley.[9]

    It is therefore obvious that the status quo of the surveillance mechanism in India needs to change, but this change has to be brought about in a manner that makes state surveillance more accountable without compromising its effectiveness or ignoring legitimate security concerns. Such changes cannot be brought about without an informed debate involving all stakeholders and actors associated with surveillance, and the basic minimum requirement for an "informed" debate is accurate and sufficient information about its subject matter. This information is severely lacking in the public domain when it comes to state surveillance activities, with most data points about state surveillance coming from news items or leaked information. Unless the state becomes more transparent and provides information about its surveillance activities and processes, an informed debate to challenge and strengthen the status quo for the betterment of all parties cannot begin.

    Current State of Affairs

    Surveillance laws in India are extremely varied and have existed since colonial times, and remnants of the colonial-era laws are still being used by the various State police forces. In this age of technology, however, the most important tools for surveillance exist in the digital space, and it is for this reason that this paper focuses on surveillance through the interception of telecommunications traffic, whether voice calls or data. The interception of telecommunications takes place under two different statutes: the Telegraph Act, 1885 (which deals with interception of calls) and the Information Technology Act, 2000 (which deals with interception of data).

    Currently, telecom surveillance is carried out as per the procedure prescribed in the Rules framed under the relevant sections of the two statutes mentioned above, viz. Rule 419A of the Telegraph Rules, 1951 for surveillance under the Telegraph Act, 1885, and the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 for surveillance under the Information Technology Act, 2000. These Rules put in place various checks and balances and try to ensure that there is a paper trail for every interception request.[10] The assumption is that the generation of a paper trail would reduce the number of unauthorized interception orders, thus ensuring that the powers of interception are not misused. However, even though these checks and balances exist on paper, there is not enough information in the public domain about the entire mechanism of interception for anyone to judge whether the system is working or not.

    As mentioned earlier, the only sources of information on interception currently available in the public domain are news reports and a handful of RTI requests filed by various activists.[11] The only other institutionalized source of information on surveillance in India is the set of transparency reports brought out by companies such as Google, Yahoo and Facebook.

    Indeed, Google was the first major corporation to publish a transparency report, in 2010, and has been updating it ever since. The latest data available for Google covers the period from January 2015 to June 2015; in that period Google and YouTube together received 3,087 requests for data from the Indian Government, covering 4,829 user accounts. Google supplied information for only 44% of these requests.[12] Although Google claims that they "review each request to make sure that it complies with both the spirit and the letter of the law, and we may refuse to produce information or try to narrow the request in some cases", it is not clear why Google rejected 56% of the requests. It may also be noted that India made the fifth highest number of requests among all the countries covered in the Transparency Report, after the USA, Germany, France and the UK.

    Facebook's transparency report for the period from January 2015 to June 2015 reveals that Facebook received 5,115 requests from the Indian Government covering 6,268 user accounts, and that Facebook produced data in 45.32% of the cases.[13] Facebook's transparency report claims that it responds to requests relating to criminal cases and that "Each and every request we receive is checked for legal sufficiency and we reject or require greater specificity on requests that are overly broad or vague." However, even in Facebook's transparency report it is unclear why the remaining 54.68% of the requests were rejected.

    The Yahoo transparency report also covers the period from January 1, 2015 to June 30, 2015 and reveals that Yahoo received 831 requests for data from the Indian Government, relating to 1,184 user accounts. The Yahoo report is a little more detailed and also reveals that 360 of the 831 requests were rejected by Yahoo, though no details are given as to why. The report also specifies that in 63 cases no data was found by Yahoo, in 249 cases only non-content data[14] was disclosed, and in 159 cases content[15] was disclosed. The Yahoo report also claims that "We carefully scrutinize each request to make sure that it complies with the law, and we push back on those requests that don't satisfy our rigorous standards."
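
    As a quick sanity check on the figures quoted above, the disclosure percentages and the Yahoo breakdown can be recomputed from the raw counts reported by the companies. The short Python sketch below uses only the numbers cited in this section; the derived counts for Google and Facebook are back-calculated from their reported percentages and are therefore approximate.

        # Recompute the disclosure percentages quoted above from the raw figures
        # reported in the Google, Facebook and Yahoo transparency reports
        # (India, January-June 2015). An illustrative sanity check only.

        reports = {
            "Google":   {"requests": 3087, "complied": round(3087 * 0.44)},     # 44% compliance reported
            "Facebook": {"requests": 5115, "complied": round(5115 * 0.4532)},   # 45.32% reported
            "Yahoo":    {"requests": 831,  "complied": 249 + 159},              # non-content + content disclosures
        }

        for company, r in reports.items():
            complied_pct = 100 * r["complied"] / r["requests"]
            print(f"{company}: {r['requests']} requests, "
                  f"~{complied_pct:.2f}% complied, ~{100 - complied_pct:.2f}% not complied")

        # Yahoo's own breakdown should account for every request it received:
        # rejected + no data found + non-content disclosed + content disclosed.
        assert 360 + 63 + 249 + 159 == 831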

    While the Vodafone Transparency Report gives information regarding government requests for data in other jurisdictions,[16] it does not give any information on government requests in India. This is because Vodafone interprets the provisions of Rule 25(4) of the IT (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 (Interception Rules), Rule 11 of the IT (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, and Rule 419A(19) of the Indian Telegraph Rules, 1954, which require service providers to maintain confidentiality/secrecy in matters relating to interception, as legally prohibiting it from revealing such information.

    Apart from the four major companies discussed above, a large number of private corporations have published transparency reports in order to build a sense of trust among their customers. In fact, the Ranking Digital Rights project has ranked some of the biggest companies in the world on their commitment to accountability and has brought out the Ranking Digital Rights 2015 Corporate Accountability Index, which analyses a representative group of 16 companies "that collectively hold the power to shape the digital lives of billions of people across the globe".

    Suggestions on Transparency

    It is clear from the discussion above, as well as from a general overview of various news reports on the subject, that telecom surveillance in India is shrouded in secrecy, and it appears that a large amount of illegal and unauthorized surveillance takes place behind this veil. If the status quo continues, it is unlikely that any meaningful reforms will take place to bring about greater accountability in telecom surveillance. For any change towards greater accountability, it is imperative that we have enough information about what exactly is happening, and for that we need greater transparency, since transparency is the first step towards greater accountability.

    Transparency Reports

    In simple terms, transparency in anything is best achieved by providing as much information about it as possible. It would, however, be naïve to insist that all information about interception activities be made public on the altar of transparency; but that does not mean that there should be no information at all on interception. One internationally accepted method of bringing transparency to interception mechanisms, increasingly adopted by both the private sector and governments, is to publish transparency reports giving various details of interception while keeping security concerns in mind. The two types of transparency reports that India requires, and what each would entail, are briefly discussed below.

    By the Government

    The problem with India's current interception regime is that the mechanism appears more or less adequate on paper, with enough checks and balances to prevent misuse of the allotted powers. However, because the entire process is veiled in secrecy, nobody knows exactly how well or how badly the system works and whether it achieves its intended purposes. It is clear that the current system of interception and surveillance has flaws, as can be gathered from the frequent news articles about incidents of illegal surveillance. Without any other official or more reliable sources of information on surveillance activities, these anecdotal pieces of evidence are all we have to shape the debate regarding surveillance in India. It is only logical that a debate informed by such sketchy and unreliable news reports will be biased against the current mechanism, since newspapers are mostly interested in reporting scandalous and extraordinary incidents. For example, some argue that the government undertakes mass surveillance, while others argue that India only carries out targeted surveillance, but there is not enough publicly available information for a third party to support or refute either claim. It is therefore necessary and highly recommended that the government start releasing a transparency report such as the ones brought out by the United States and the UK, as mentioned above.

    There is no need for a separate department or authority just to produce the transparency report; this task could probably be performed in-house by any department, but considering the sector involved it would perhaps be best if the Department of Telecommunications were given the responsibility of bringing out the report. These transparency reports should contain a certain minimum amount of data to be an effective tool in informing the public discourse and debate regarding surveillance and interception. The report needs to strike a balance between providing enough information for an informed analysis of the effectiveness of the surveillance regime and not providing so much information as to make surveillance activities ineffective. Below is a list of suggestions as to what kind of data and information such reports should contain (a schematic sketch of such a report follows the list):

    • Reports should contain data on the number of interception orders that have been passed. This statistic would be extremely useful in determining how extensive and how frequent the state's interception activities are. This information should be readily available, since all interception orders have to be sent to the Review Committee set up under Rule 419A of the Telegraph Rules, 1954.
    • The report should contain information on the procedural aspects of surveillance, including the delegation of powers to different authorities and individuals, information on new surveillance schemes, etc. This information would also be available with the Ministry of Home Affairs, since it is a Secretary or Joint Secretary level officer in that Ministry who is supposed to authorize every order for interception.
    • The report should contain an aggregated list of the reasons given by the authorities for ordering interception. This information would reveal whether the authorities actually ensure a legal justification before ordering interception or merely pay lip service to the rules to ensure a proper paper trail. Since every order of interception has to be in writing, the main reasons for interception can easily be gleaned from a perusal of the orders.
    • It should also reveal the percentage of cases in which interception actually found evidence of culpability or was successful in preventing criminal activities. This one statistic would by itself give a very good review of the effectiveness of the interception regime. Granted, this information may not be very easily obtainable, but it can be obtained with proper coordination with the police and other law enforcement agencies.
    • The report should also reveal the percentage of orders that have been struck down by the Review Committee for not following the process envisaged under the various Rules. This would give a sense of how often the Rules are flouted while issuing interception orders. This information can easily be obtained from the papers and minutes of the meetings of the Review Committee.
    • The report should also state the number of times the Review Committee has met in the period being reported upon. The Review Committee is an important check on the misuse of powers by the authorities and therefore it is important that the Review Committee carries out its activities in a diligent manner.
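
    To make the suggestions above concrete, the sketch below shows one possible way of structuring the aggregate figures such a government report could publish. The field names and example numbers are entirely hypothetical, chosen only to illustrate the kind of aggregated, non-identifying data the list above calls for; they are not drawn from any actual report.

        from dataclasses import dataclass
        from typing import Dict

        # A hypothetical, illustrative schema for the aggregate figures a government
        # transparency report on interception could publish. All names and numbers
        # below are invented; they do not describe any real report.

        @dataclass
        class InterceptionTransparencyReport:
            reporting_period: str                          # e.g. "January-December 2015"
            interception_orders_issued: int                # total orders passed in the period
            orders_by_issuing_authority: Dict[str, int]    # delegation of powers in practice
            reasons_cited: Dict[str, int]                  # aggregated grounds for interception
            orders_struck_down_by_review_committee: int
            review_committee_meetings_held: int
            orders_leading_to_evidence_or_prevention: int  # requires coordination with police

            def strike_down_rate(self) -> float:
                """Share of orders the Review Committee found procedurally deficient."""
                if self.interception_orders_issued == 0:
                    return 0.0
                return self.orders_struck_down_by_review_committee / self.interception_orders_issued

        # Purely illustrative usage with made-up numbers.
        example = InterceptionTransparencyReport(
            reporting_period="January-December 2015",
            interception_orders_issued=100000,
            orders_by_issuing_authority={"Union Home Secretary": 90000, "State Home Secretaries": 10000},
            reasons_cited={"public safety": 60000, "sovereignty and integrity": 40000},
            orders_struck_down_by_review_committee=1200,
            review_committee_meetings_held=6,
            orders_leading_to_evidence_or_prevention=15000,
        )
        print(f"Strike-down rate: {example.strike_down_rate():.2%}")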

    It may be noted here that some provisions of the Telegraph Rules, 1954, especially sub-Rules 17 and 18 of Rule 419A, as well as Rules 22, 23(1) and 25 of the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, may need to be amended to make them compatible with the reporting mechanism proposed above.

    By the Private Sector

    We have already discussed the transparency reports published by certain private companies. Suffice it to say that reports from private companies should give as much of the information discussed under government reports as is applicable; they may not have much of the information that government reports are expected to publish, such as whether the interception was successful, the reasons for interception, etc. It is important that ISPs also provide such transparency reports, as this would provide two different data points on interception, and the very existence of these private reports may act as a check on the veracity of the government transparency reports.

    As in the case of government reports, for the transparency reports of the private sector to be effective, certain provisions of the Telegraph Rules, 1954 and the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, viz. sub-Rules 14, 15 and 19 of Rule 419A of the Telegraph Rules, 1954 and Rules 20, 21, 23(1) and 25 of the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, may need to be amended.

    Overhaul of the Review Committee

    The Review Committee, which acts as a check on the misuse of powers by the competent authorities, is a very important cog in the entire process. However, it is staffed entirely by the executive and has no members from any other background. While it is probably impractical to have civilian members on a Review Committee that has access to potentially sensitive information, it is essential that the Committee have wider representation from other sectors, especially the judiciary. One or two judicial members on the Review Committee would provide a greater check on its workings by bringing in representation from the judicial arm of the State, so that the Review Committee does not remain a body manned purely by the executive branch. This could go some way towards ensuring that the Committee does not simply "rubber stamp" the interception orders issued by the various competent authorities.

    Conclusion

    It is not in dispute that there is a need for greater transparency in the government's surveillance activities in order to address the problems associated with illegal and unauthorised interception. This paper does not claim that greater transparency by itself will solve the problems associated with the government's current interception and surveillance regime; however, it is not possible to address any problem unless we know its real extent. For an informed debate and discussion, the people participating must be "informed", i.e. they should have accurate and adequate information regarding the issues being discussed. The current debate on interception is rife with individuals using illustrative and anecdotal evidence which, in the absence of any other evidence, they assume to be the norm.

    A more transparent and forthcoming state machinery, which regularly keeps its citizens abreast of the state of its surveillance regime, would be likely to receive better suggestions and perhaps less criticism if it turns out that the checks and balances imposed by the regulations are actually preventing unauthorized interceptions; and if not, then it is the right of the citizens to know this and ask for reforms.


    [1] James Losey, "Surveillance of Communications: A Legitimization Crisis and the Need for Transparency", International Journal of Communication 9(2015), Feature 3450-3459, 2015.

    [2] Id.

    [4] http://www.judis.nic.in/supremecourt/imgs1.aspx?filename=39566 . Although the judgment was overturned on review, the observation quoted above would still hold as it has not been specifically overturned.

    [6] James Losey, "Surveillance of Communications: A Legitimization Crisis and the Need for Transparency", International Journal of Communication 9 (2015), Feature 3450-3459, 2015.

    [10] For a detailed discussion of the Rules of interception please see Policy Paper on Surveillance in India, by Vipul Kharbanda, http://cis-india.org/internet-governance/blog/policy-paper-on-surveillance-in-india .

    [14] Non-content data (NCD) such as basic subscriber information including the information captured at the time of registration such as an alternate e-mail address, name, location, and IP address, login details, billing information, and other transactional information (e.g., "to," "from," and "date" fields from email headers).

    [15] Data that users create, communicate, and store on or through Yahoo. This could include words in a communication (e.g., Mail or Messenger), photos on Flickr, files uploaded, Yahoo Address Book entries, Yahoo Calendar event details, thoughts recorded in Yahoo Notepad or comments or posts on Yahoo Answers or any other Yahoo property.

    Big Data in the Global South - An Analysis

    by Tanvi Mani last modified Jan 24, 2016 02:54 AM

    I. Introduction

    "The period that we have embarked upon is unprecedented in history in terms of our ability to learn about human behavior." [1]

    The world we live in today is undergoing a slow but deliberate metamorphosis of decisive information: from being the monopoly of world leaders and captains of industry, obtained through regulated means, it has transformed into a relatively undervalued currency of knowledge collected from individual digital expressions over a vast network of interconnected electrical impulses.[2] This seemingly random deluge of binary numbers, when interpreted, represents an intricately woven tapestry of the choices that define everyday life, made over virtual platforms. The machines we once employed for menial tasks have become sensorial observers of our desires, wants and needs, so much so that they can now predict the course of our future choices and decisions.[3] The patterns of human behaviour reflected within this data inform policy makers in both public and private contexts. The collective data obtained from our digital shadows thus forms a rapidly expanding storehouse of memory, from which interested parties can draw to resolve problems and enable a more efficient functioning of foundational institutions such as markets, regulators and governments.[4]

    The term Big Data describes a large volume of collected data, in structured as well as unstructured forms. Processing this data requires niche technology, outside of traditional software databases, simply because of its exponential growth over a relatively short period of time. Big Data is usually identified using a "three V" characterization: larger volume, greater variety and distinguishably higher velocity.[5] This is exemplified in the diverse sources from which the data is obtained: mobile phone records, climate sensors, social media content, GPS satellite identifications and patterns of employment, to name a few. Big data analytics refers to the tools and methodologies that aim to transform large quantities of raw data into "interpretable data", in order to study and discern it so that causal relationships between events can be conclusively established.[6] Such analysis could allow the positive effects of such data to be encouraged and its negative outcomes to be mitigated.

    This paper seeks to map out the practices of different governments, civil society, and the private sector with respect to the collection, interpretation and analysis of big data in the global south, illustrated across a background of significant events surrounding the use of big data in relevant contexts. This will be combined with an articulation of potential opportunities to use big data analytics within both the public and private spheres and an identification of the contextual challenges that may obstruct the efficient use of this data. The objective of this study is to deliberate upon how significant obstructions to the achievement of developmental goals within the global south can be overcome through an accurate recognition, interpretation and analysis of big data collected from diverse sources.

    II. Uses of Big Data in Global Development

    Big Data for development is the process through which raw, unstructured and imperfect data is analyzed, interpreted and transformed into information that governments and policy makers can act upon in various capacities. The amount of digital data available in the world grew from 150 exabytes in 2005 to 1,200 exabytes in 2010.[7] This figure is predicted to increase by 40% annually over the next few years,[8] roughly 40 times the growth rate of the world's population.[9] The implication is that the share of the world's available data that is less than a minute old is increasing at an exponential rate. Moreover, an increasing percentage of this data is produced and created in real time.

    The data revolution that is upon us is characterized by a rapidly accumulating and continuously evolving stock of data in both industrialized and developing countries. This data is extracted from technological services that act as sensors and reflect the behaviour of individuals in relation to their socio-economic circumstances.

    For many global south countries, this data is generated through mobile phone technology. This trend is evident in Sub Saharan Africa, where mobile phone technology has been used as an effective substitute for often weak and unstructured State mechanisms such as faulty infrastructure, underdeveloped systems of banking and inferior telecommunication networks.[10]

    For example, a recent study presented at the Data for Development session at the NetMob Conference at MIT used mobile phone data to analyze the impact of opening a new toll highway in Dakar, Senegal on human mobility, particularly how people commute to work in the metropolitan area.[11] A huge investment, the improved infrastructure is expected to result in a significant increase in the movement of people and essential goods in and out of Dakar. This would spur rural development in the areas outside Dakar and boost the value of land within the region.[12] The impact of the newly constructed highway can, however, only be analyzed effectively and accurately by collecting this mobile phone data from actual commuters on a real-time basis.

    Mobile phone technology is no longer used just for personal communication; it has been transformed into an effective tool to secure employment opportunities, transfer money, determine stock options and assess the prices of various commodities.[13] This generates vast amounts of data about individuals and their interactions with the government and private sector companies. Internet traffic is predicted to grow by 25 to 30% over the next few years in North America, Western Europe and Japan, but in Latin America, the Middle East and Africa this figure is expected to touch close to 50%.[14] The bulk of this internet traffic can be traced back to mobile devices.

    At the most general level, the potential of Big Data for development lies in its ability to provide an overview of the well-being of a given population at a particular point in time.[15] This overcomes the relatively long time lag typical of most traditional forms of data collection. The analysis of this data has, to a large extent, helped uncover "digital smoke signals", that is, inherent changes in the usage patterns of technological services by individuals within communities.[16] These may act as indicators of changes in the underlying well-being of the community as a whole. This information about the well-being of a community, derived from its usage of technology, provides significantly relevant feedback to policy makers on the success or failure of particular schemes and can pinpoint changes that need to be made to the status quo.[17] The hope is that such feedback, delivered in real time, would in turn lead to a more flexible and accessible system of international development, thus securing more measurable and sustained outcomes.[18]

    The analysis of big data involves the use of advanced computational technology that can help determine trends, patterns and correlations within unstructured data so as to transform it into actionable information. It is hoped that this, combined with the human perspective and experience brought to the process, will enable decision makers to rely on reliable and up-to-date information in formulating durable and self-sustaining development policies.

    The availability of raw data has to be complemented by the intent and capacity to use it effectively. To this effect, an emerging body of literature characterizes the primary sources of Big Data as sharing certain easily distinguishable features. Firstly, it is digitally generated and can be stored in a binary format, making it amenable to manipulation by the computers that interpret it. It is passively produced as a by-product of digital interaction and can be automatically extracted for continuous analysis. It is also geographically traceable within a predetermined time period. It is, however, important to note that "real time" does not necessarily mean the information occurs instantly, but rather that it is produced and made available within a relatively short time, making it relevant within the requisite timeframe. This allows efficient responsive action to be taken in a short span of time, thus creating a feedback loop.[19]

    In most cases the granularity of the data is preferably expanded over a larger spatial context, such as a village or a community, as opposed to an individual, simply because this affords an adequate recognition of privacy concerns and of the lack of definitive consent of the individuals whose data is extracted. To ease the process of identifying such data, the UN Global Pulse has developed a taxonomy of sorts to assess the types of data sources relevant to using this information for development purposes.[20] These include the following sources:

    Data Exhaust, the digital footprint left behind by individuals' use of technology for service-oriented tasks such as web purchases and mobile phone transactions, as well as real-time information collected by UN agencies to monitor their projects, such as levels of food grains in storage units, attendance in schools, etc.

    Online Information which includes user generated content on the internet such as news, blog entries and social media interactions which may be used to identify trends in human desires, perceptions and needs.

    Physical sensors such as satellite or infrared imagery of infrastructural development, traffic patterns, light emissions and topographical changes, thus enabling the remote sensing of changes in human activity over a period of time.

    Citizen reporting or crowd sourced data, which includes information produced on hotlines, mobile based surveys, customer generated maps etc. Although not a passive source of data collection, this is a key instrument in assessing the efficacy of action oriented plans taken by decision makers.

    The capacity to analyze this big data hinges on technologically advanced processes such as powerful algorithms that can synthesize the abundance of raw data and break the information down, enabling the identification of patterns and correlations. This process would rely on advanced visualization techniques such as "sense-making tools".[21]

    Patterns within this data are identified by instituting a common framework for its analysis. This requires the creation of a specific lexicon to tag and sort the collected data. The lexicon would specify what type of information is collected and by whom it is collected and interpreted, the observer or the reporter. It would also help determine how the data is acquired and its qualitative and quantitative nature. Finally, the spatial context of the data and the time frame within which it was collected, the aspects of where and when, would be taken into consideration. The data would then be analyzed through a process of filtering, summarizing and categorizing, transforming it into an appropriate collection of relevant indicators for a particular population demographic.[22]
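
    As a rough illustration of the tagging and filter/summarize/categorize steps described above, the Python sketch below processes a few invented records. The lexicon fields (what, who, how, where, when) follow the paragraph above, while the record values, district names and the indicator produced are entirely hypothetical.

        from collections import defaultdict

        # Hypothetical records tagged with a simple lexicon: what was observed, who
        # reported it, how it was acquired, and where/when it was collected.
        records = [
            {"what": "food_price", "who": "market_survey", "how": "sms_report",
             "where": "district_A", "when": "2015-06", "value": 42.0},
            {"what": "food_price", "who": "market_survey", "how": "sms_report",
             "where": "district_A", "when": "2015-07", "value": 55.0},
            {"what": "food_price", "who": "market_survey", "how": "sms_report",
             "where": "district_B", "when": "2015-07", "value": 44.0},
            {"what": "school_attendance", "who": "ngo_monitor", "how": "tablet_entry",
             "where": "district_A", "when": "2015-07", "value": 0.81},
        ]

        # Filter: keep only the indicator of interest.
        food_prices = [r for r in records if r["what"] == "food_price"]

        # Summarize: average price per district and month.
        totals = defaultdict(list)
        for r in food_prices:
            totals[(r["where"], r["when"])].append(r["value"])
        summary = {key: sum(vals) / len(vals) for key, vals in totals.items()}

        # Categorize: flag districts where the average rose sharply month on month
        # (a crude stand-in for a "digital smoke signal").
        flagged = [
            place for (place, when), avg in summary.items()
            if when == "2015-07" and avg > 1.2 * summary.get((place, "2015-06"), avg)
        ]
        print("Monthly averages:", summary)
        print("Districts with a sharp price rise:", flagged)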

    The intensive mining of predominantly socioeconomic data is known as "reality mining",[23] and it can shed light on the processes and interactions reflected within the data. This is carried out via a tested three-fold process. First, "continuous analysis over streaming data", which involves monitoring and analyzing high-frequency data streams to extract often uncertain raw data, for example, systematically gathering the prices of products sold online over a period of time. Second, the "online digestion of semi-structured and unstructured data", which includes news articles, reviews of services and products, and opinion polls on social media that aid in determining public perception, trends and contemporary events generating interest across the globe. Third, the "real-time correlation of streaming data with slowly accessible historical data repositories", which refers to the "mechanisms used for correlating and integrating data in real-time with historical records."[24] The purpose of this stage is to derive a contextualized perception of personalized information, adding value to the data by providing a historical context. Big Data for development purposes would make use of a combination of these, depending on context and need.

    (i) Policy Formulation

    The world today has become increasingly volatile, in the sense that the decisions of certain countries are beginning to have an impact on vulnerable communities within entirely different nations. Our global economy has become infinitely more susceptible to fluctuating conditions, primarily because of an interconnectivity hinged upon transnational interdependence. The primary instigators of most of these changes, including changes in the nature of harvests, the prices of essential commodities, employment structures and capital flows, have been financial and environmental disruptions.[25] According to the OECD, "Disruptive shocks to the global economy are likely to become more frequent and cause greater economic and social hardship. The economic spillover effects of events like the financial crisis or a potential pandemic will grow due to the increasing interconnectivity of the global economy and the speed with which people, goods and data travel."[26]

    The local impacts of these fluctuations may not be easily visible or even traceable, but they could very well be severe and long lasting. A vibrant literature on the vulnerability of communities has highlighted the impacts of these shocks, which often cause children to drop out of school, families to sell their productive assets, and communities to place a greater reliance on state rations.[27] These vulnerabilities cannot be definitively discerned through traditional systems of monitoring and information collection. Evidence of the effects of these shocks often takes too long to reach decision makers, who are unable to formulate effective policies without ascertaining the nature and extent of the hardships suffered by these communities in a given context. The existing early warning systems do help raise flags and draw attention to the problem, but their reach is limited and their veracity compromised by the time it takes to extract and collate information through traditional means. These traditional systems of information collection are difficult to implement within rural impoverished areas, and the data collected is not always reliable due to the significant time gap between its collection and subsequent interpretation. Data collected from surveys does provide an insight into the state of affairs of communities across demographics, but it requires time to be collected, processed, verified and eventually published. Further, the expenses incurred in this process often prove difficult to offset.

    The digital revolution therefore provides a significant opportunity to gain a richer and deeper insight into the very nature and evolution of the human experience itself, affording a more legitimate platform upon which policy deliberations can be articulated. Data-driven decision making, once the preserve of private institutions such as the World Economic Forum and the McKinsey Institute,[28] has now emerged at the forefront of the public policy discourse. Civil society has also expressed an eagerness to be more actively involved in the collection of real-time data after having perceived its benefits. This is evidenced by the emergence of 'crowd sourcing'[29] and other 'participatory sensing'[30] efforts founded upon the commonalities shared by like-minded communities of individuals, carried out on easily accessible platforms such as mobile phone interfaces, hand-held radio devices and geospatial technologies.[31]

    The predictive nature of patterns identifiable from big data is extremely relevant for developing socio-economic policies that seek to bridge problem-solution gaps and create an environment conducive to growth and development. Mobile phone technology has made it possible to quantify human behavior on an unprecedented scale.[32] This includes detecting changes in the standard commuting patterns of individuals based on their employment status[33] and estimating a country's GDP in real time by measuring the nature and extent of light emissions through remote sensing.[34]

    A recent research study concluded that "due to the relative frequency of certain queries being highly correlated with the percentage of physician visits in which individuals present influenza symptoms, it has been possible to accurately estimate the levels of influenza activity in each region of the United States, with a reporting lag of just a day." Online data has thus been used as part of syndromic surveillance efforts, also known as infodemiology.[35] The US Centers for Disease Control has concluded that mining vast quantities of data from online health-related queries can help detect disease outbreaks "before they have been confirmed through a diagnosis or a laboratory confirmation."[36] Google Trends works in a similar way.
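
    The core of such infodemiology efforts is a simple statistical relationship: if the weekly frequency of certain search queries tracks the weekly share of physician visits with influenza symptoms, the former can be used as a near-real-time proxy for the latter. The sketch below computes that correlation on invented weekly figures; the numbers are purely illustrative and the query series is hypothetical.

        from statistics import correlation  # available in Python 3.10+

        # Invented weekly figures, purely for illustration: a normalized volume of a
        # hypothetical flu-related search query, and the share of physician visits
        # with influenza-like symptoms reported for the same weeks.
        query_volume = [0.8, 1.1, 1.6, 2.4, 3.1, 2.7, 1.9, 1.2]     # searches per 1,000 users
        ili_visit_share = [1.0, 1.3, 1.9, 2.8, 3.5, 3.0, 2.2, 1.5]  # % of physician visits

        r = correlation(query_volume, ili_visit_share)
        print(f"Pearson correlation between query volume and ILI visit share: {r:.3f}")

        # A consistently high correlation is what allows the query series, which is
        # available almost immediately, to stand in for the clinical series, which
        # takes days or weeks to compile.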

    Another public health monitoring system known as the Healthmap project compiles seemingly fragmented data from news articles, social media, eye-witness reports and expert discussions based on validated studies to "achieve a unified and comprehensive view of the current global state of infectious diseases" that may be visualized on a map. [37]

    Big Data used for development purposes can reduce the reliance on human inputs, thus narrowing the room for error and improving the accuracy of the information upon which policy makers base their decisions.

    (ii) Advocacy and Social Change

    Because Big Data can provide an unprecedented depth of detail on particular issues, it has often been used as a vehicle of advocacy to highlight those issues in great detail. This makes it possible to give citizens a far more participative experience, capturing their attention and hence communicating these problems better. Numerous websites have used this method of crowd sourcing to broadcast socially relevant issues.[38] Moreover, the massive increase in access to the internet has dramatically improved the scope for activism through the use of volunteered data, with which advocates can now collect data from volunteers more effectively and present these issues in various forums. Websites like Ushahidi[39] and the Black Monday Movement[40] are prime examples. These platforms have championed various causes, consistently exposing significant social crises that would otherwise go unnoticed.

    The Ushahidi application used crowd sourcing mechanisms in the aftermath of the Haiti earthquake to set up a centralized messaging system that allowed mobile phone users to provide information on injured and trapped people.[41] An analysis of the data showed that the concentration of text messages was correlated with the areas where there was an increased concentration of damaged buildings. [42] Patrick Meier of Ushahidi noted "These results were evidence of the system's ability to predict, with surprising accuracy and statistical significance, the location and extent of structural damage post the earthquake." [43]

    Another problem that data advocacy hopes to tackle, however, is that of too much exposure: advocates provide information to various parties to help ensure that there is no unwarranted digital surveillance and that sensitive advocacy tools and information are not used inappropriately. An interesting illustration of this is the Tactical Technology Collective,[44] which aims to improve the use of technology by activists and various other political actors. The organization, through various mediums such as films and events, trains human rights activists in data protection and privacy awareness and skills. Additionally, Tactical Technology helps activists present information in an appealing and relevant manner and builds capacity for data advocacy.

    Observed data, such as mobile phone records generated by network operators and data from the use of social media, is beginning to play a prominent role in academic research. This is due to the ability of such data to provide microcosms of information both at finer granularity and over larger public spaces. In the wake of natural disasters this can be extremely useful, as reflected by the work of Flowminder after the 2010 Haiti earthquake.[45] A similar string of interpretive analysis can be carried out in instances of conflict and crises over varying spans of time. Flowminder used the geospatial locations of 1.9 million subscriber identity modules in Haiti, beginning 42 days before the earthquake and continuing for 158 days after it. This information allowed researchers to empirically determine the migration patterns of the population after the earthquake and enabled a subsequent UNFPA household survey.[46] In a similar capacity, the UN Global Pulse is seeking to assist the consultation and deliberation on the specific targets of the millennium development goals through a framework of visual analytics that represents online the big data procured on each of the topics proposed for the post-2015 agenda.[47]

    A recent announcement of a collaboration between RTI International, a non-profit research organization, and IBM's research lab looks promising in its initiative to use big data analytics in schools within Mombasa County, Kenya.[48] The partnership seeks to develop testing systems that would capture data to assist governments, non-profit organizations and private enterprises in making more informed decisions regarding the development of education and human resources within the region. As observed by Dr. Kamal Bhattacharya, Vice President of IBM Research, "A significant lack of data on Africa in the past has led to misunderstandings regarding the history, economic performance and potential of the government." The project seeks to improve transparency and accountability within the schooling system in more than 100 institutions across the county. Teachers would be equipped with tablet devices to collate data about students, classrooms and resources. This would allow an analysis of the correlations between the three, thus enabling better policy formulation and a more focused approach to improving the school system.[49] This is part of the United States Agency for International Development's Education Data for Decision Making (EdData II) project. According to Dr. Kommy Weldemariam, Research Scientist, IBM Research, "… there has been a significant struggle in making informed decisions as to how to invest in and improve the quality and content of education within Sub-Saharan Africa. The Project would create a school census hub which would enable the collection of accurate data regarding performance, attendance and resources at schools. This would provide valuable insight into the building of childhood development programs that would significantly impact the development of an efficient human capital pool in the near future."[50]

    A similar initiative has been undertaken by Apple and IBM in the development of the "Student Achievement App", which seeks to use this data for "content analysis of student learning". The application acts as a teaching tool that analyses the data provided to develop actionable intelligence on a per-student basis.[51] This would give educators a deeper understanding of the outcomes of teaching methodologies and subsequently enable better learning. The impact of this would be a significant restructuring of how education is delivered. At a recent IBM-sponsored workshop on education held in India last year, Katharine Frase, IBM's CTO of Public Sector, predicted that "classrooms will look significantly different within a decade than they have looked over the last 200 years."[52]

    (iii) Access and the exchange of information

    Big data used for development serves as an important information intermediary that allows for the creation of a unified space within which unstructured, heterogeneous data can be efficiently organized to create a collaborative system of information. New interactive platforms enable the exchange of information through internal vetting and curation that ensure access to reliable and accurate information. This encourages active citizen participation in the articulation of demands from the government, thus giving practical effect to the role of the electorate in determining specific policy decisions.

    The Grameen Foundation's AppLab in Kampala aids in the development of tools that can use the information from clients' micro-financing transactions to identify financial plans and instruments better suited to their needs.[53] By working within a community, this technology connects its clients in a web of information sharing that they both contribute to and access after the source of the information has been anonymised. This allows individual members of the community to benefit from a common pool of knowledge. The AppLab was able to identify the emergence of a new crop pest from an increase in online searches for an unusual string of search terms within a particular region. Using this as an early warning signal, the Grameen Foundation sent extension officers to the location to check the crops, and the pest contamination was dealt with effectively before it could spread any further.[54]
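
    The early-warning signal described above amounts to detecting an unusual jump in the frequency of a search term within a region. The Python sketch below shows one simple way such a spike could be flagged, by comparing the latest week's count against the average of previous weeks; the counts and the threshold are invented purely for illustration.

        from statistics import mean

        # Invented weekly counts of a hypothetical crop-pest-related search term in
        # one region. A real system would track many terms across many regions.
        weekly_counts = [12, 15, 11, 14, 13, 16, 14, 47]

        baseline = mean(weekly_counts[:-1])   # average of the earlier weeks
        latest = weekly_counts[-1]

        # Flag a spike when the latest week is well above the historical baseline.
        # The 2x threshold is an arbitrary illustrative choice.
        if latest > 2 * baseline:
            print(f"Possible early-warning signal: {latest} searches this week "
                  f"vs. a baseline of {baseline:.1f}")
        else:
            print("No unusual activity detected")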

    (iv) Accountability and Transparency

    Big data enables participatory contributions from the electorate to existing functions such as budgeting and communication, thus enabling connections between citizens, power brokers and elites. The extraction of information and increasing transparency around data networks are also integral to building a self-sustaining system of data collection and analysis. However, it is important that the information collected be analyzed responsibly. Checking the veracity of the information collected and ensuring individual accountability would encourage more enthusiastic responses from the general populace, creating an environment conducive to eliciting the requisite information. The effectiveness of policies formulated by relying on this information rests on its accuracy.

    An example of this is Chequeado, a non-profit Argentinean media outlet that specializes in fact-checking. It works on a model of crowd-sourced information, on the basis of which it has fact-checked everything from live presidential speeches to congressional debates that have been made open to the public.[55] In 2014 it established a user-friendly public database, DatoCHQ, which allowed its followers to participate in live fact-checks by sending in data, including references, facts, articles and questions, through Twitter.[56] This allowed citizens to corroborate the promises made by their leaders and instilled a sense of trust in the government.

    III. Big Data and Smart Cities in the Global South

    Smart cities have become a buzzword in South Asia, especially after the Indian government led by Prime Minister Narendra Modi made a commitment to build 100 smart cities in India.[57] A smart city is essentially designed as a hub where information and communication technologies (ICT) are used to create feedback loops with minimal time lag. In traditional contexts, surveys carried out through a state-sponsored census were the only source of systematic data collection. However, these surveys are long-drawn-out processes that often drain State resources. Additionally, the information obtained is not always accurate, and policy makers are often hesitant to base their decisions on it. The collection of data can be extremely useful in improving the functionality of a city in terms of both the 'hard' or physical aspects of the infrastructural environment and the 'soft' services it provides to citizens. One model for enabling this data collection is a centrally structured framework of sensors that can determine movements and behaviours in real time, with the resulting data analyzed subsequently. For example, sensors placed under parking spaces and at intersections can relay such information in short spans of time. South Korea has implemented a similar structure in its smart city, Songdo.[58]
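
    To illustrate the kind of feedback loop described above, the sketch below aggregates a stream of hypothetical parking-sensor events into per-zone availability that could be relayed to commuters or traffic managers in near real time; the zone names, capacities and events are all invented.

        from collections import Counter

        # Hypothetical events from parking sensors: each reports its zone and whether
        # a vehicle has arrived or departed. In a deployed system these would stream
        # in continuously; here a short list stands in for the stream.
        events = [
            ("zone_north", "arrive"), ("zone_north", "arrive"), ("zone_south", "arrive"),
            ("zone_north", "depart"), ("zone_south", "arrive"), ("zone_south", "arrive"),
        ]
        capacity = {"zone_north": 50, "zone_south": 30}   # illustrative capacities

        occupied = Counter()
        for zone, event in events:
            occupied[zone] += 1 if event == "arrive" else -1

        # The "feedback" half of the loop: publish free spaces per zone so that
        # drivers (or a routing app) can react almost immediately.
        for zone, cap in capacity.items():
            print(f"{zone}: {cap - occupied[zone]} of {cap} spaces free")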

    Another approach to the smart city model is to use crowdsourced information through apps developed either by volunteers or by private conglomerates. These allow specific problems to be resolved by organizing raw data into sets of information that are attuned to the needs of the public in a cohesive manner. However, this system requires highly structured data sets, without which significantly transformational results would be difficult to achieve.[59]

    There does, however, exist a middle ground, which allows the beneficiaries of this network, the citizens, to take on the role of primary sensors of information. This method is both cost-effective and allows for an experimental process within which the success or failure of the model can be measured in a timely manner. It is especially relevant in fast-growing cities that suffer from congestion and the breakdown of infrastructure due to unprecedented population growth. This population is now afforded the opportunity to become a part of the solution.

    The principal challenge associated with extracting this Big Data is its restricted access. Most organizations that are able to collect big data efficiently are private conglomerates and business enterprises, which use this data to give themselves a competitive edge in the market by efficiently identifying the needs and wants of their clientele. These organizations are reluctant to release information and statistics because they fear doing so would cost them their competitive edge and, consequently, the opportunity to benefit monetarily from the data collected. Data leaks would also significantly damage a company's name and reputation. Even where individual records are anonymized, the transaction costs of ensuring that individual customers' data is protected are often considerable. In addition, there is a definite human capital gap resulting from the significant shortage of scientists and analysts able to interpret raw data transmitted across various channels.

    (i) Big Data in Urban Planning

    Urban planning requires data that reflects the land use patterns of communities, combined with their travel descriptions and housing preferences. The mobility of individuals depends on their economic conditions and can be determined through an analysis of their purchases, either via online transactions or from the data accumulated by prominent stores. The primary source of this data, however, is mobile phones, which seem to have transcended economic barriers. Secondary sources include cards used on public transport, such as the Oyster card in London and the similar Octopus card in Hong Kong. However, in most developing countries these cards are not available for public transport systems, and mobile network data therefore forms the backbone of data analytics. An excessive reliance on data collected through smartphones could nevertheless be detrimental, especially in developing countries, simply because usage is likely to be concentrated amongst more economically stable demographics, and findings drawn from this data could potentially marginalize the poor.[60]

    Mobile network big data (MNBD) is generated by all phones and includes CDRs (call detail records), which are generated by calls or texts sent or received, internet usage and the topping up of prepaid value, and VLR (Visitor Location Register) data, which is generated whenever the phone in question has power and essentially communicates to the Base Transceiver Stations (BTSs) that the phone is in the coverage area. A CDR includes records of calls made, the duration of the call and information about the device, and is therefore stored for a longer period of time. VLR data is larger in volume and can be written over. Both VLR and CDR data can provide invaluable information for urban planning strategies.[61] LIRNEasia, a regional policy and regulation think tank, has carried out an extensive study demonstrating the value of MNBD in Sri Lanka.[62] This has been used to understand, and sometimes even monitor, land use patterns, travel patterns during peak and off seasons and the congregation of communities across regions. The study was undertaken only after the data had been suitably pseudonymised.[63] It revealed that MNBD is valuable in generating information for policy formulators and decision makers because of two primary characteristics. Firstly, it comes close to comprehensive coverage of the population of developing countries, effectively using mobile phones as sensors to generate useful data. Secondly, the movement of mobile phone users across vast geographic areas reveals important information about their patterns of travel.[64]
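
    By way of illustration, the sketch below shows one simple way CDRs might be pseudonymised before analysis: real subscriber identifiers are replaced with irreversible salted hashes so that records can still be linked to one another but not back to a named customer. The file layout, column names and salting scheme are assumptions made for this example, not LIRNEasia's actual procedure.

        # Minimal pseudonymisation sketch for a hypothetical CDR file with
        # columns: subscriber_id, timestamp, cell_tower_id. Illustrative only.
        import csv
        import hashlib

        SALT = "replace-with-a-secret-salt"  # kept separate from the data set

        def pseudonymise(subscriber_id: str) -> str:
            """Replace a real subscriber ID with an irreversible salted hash."""
            return hashlib.sha256((SALT + subscriber_id).encode()).hexdigest()[:16]

        with open("cdr_raw.csv") as src, open("cdr_pseudonymised.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=["user", "timestamp", "cell_tower_id"])
            writer.writeheader()
            for row in reader:
                writer.writerow({
                    "user": pseudonymise(row["subscriber_id"]),
                    "timestamp": row["timestamp"],
                    "cell_tower_id": row["cell_tower_id"],
                })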

    MNBD allows changes in population density to be tracked and mapped on a daily basis, thereby identifying 'home' and 'work' locations and informing policy makers of population congestion so that they may formulate policies to ease it. According to Rohan Samarajiva, founding chair of LIRNEasia, "This allows for real-time insights on the geo-spatial distribution of population, which may be used by urban planners to create more efficient traffic management systems."[65] It can also be used for developmental economic policies. For example, the northern region of Colombo, an area inhabited by low-income families, shows a lower population density on weekdays, reflecting the large numbers travelling to southern Colombo for employment.[66] Similarly, patterns of land use can be ascertained by analyzing the loading patterns of base stations. Building on the success of the mobile data analysis project in Sri Lanka, LIRNEasia plans to collaborate with partners in India and Bangladesh to assimilate real-time information about the behavioral tendencies of citizens, with which policy makers may be able to make informed decisions. When this data is combined with user-friendly virtual platforms such as smartphone apps or web portals, it can also help citizens make informed choices about their day-to-day activities and potentially beneficial long-term decisions.[67]
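
    A rough sketch of the underlying idea follows: for each pseudonymised user, the cell tower seen most often at night is treated as 'home' and the tower seen most often during weekday working hours as 'work', and tower-level counts then give a coarse day-time versus night-time density map. The hour thresholds, timestamp format and field names are assumptions for illustration, not the method used in the study cited above.

        # Illustrative home/work inference from pseudonymised CDRs.
        # Assumes ISO-format timestamps and the columns used in the sketch above.
        import csv
        from collections import Counter, defaultdict
        from datetime import datetime

        night, day = defaultdict(Counter), defaultdict(Counter)

        with open("cdr_pseudonymised.csv") as f:
            for row in csv.DictReader(f):
                ts = datetime.fromisoformat(row["timestamp"])
                tower = row["cell_tower_id"]
                if ts.hour >= 21 or ts.hour < 6:              # night-time activity
                    night[row["user"]][tower] += 1
                elif ts.weekday() < 5 and 9 <= ts.hour < 17:  # weekday working hours
                    day[row["user"]][tower] += 1

        home = {u: c.most_common(1)[0][0] for u, c in night.items()}
        work = {u: c.most_common(1)[0][0] for u, c in day.items()}

        # Aggregated tower counts approximate night-time and day-time densities.
        print(Counter(home.values()).most_common(5))
        print(Counter(work.values()).most_common(5))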

    Challenges of using Mobile Network Data

    Mobile network operators invest significant sums of money in obtaining information about the usage patterns of their services, and may use this data to develop location-based advertising. In this context, there is a greater reluctance to share data for public purposes. Allowing one operator access to another's big data could have significant implications for the latter's competitive advantage. A plausible solution to this conundrum is to accumulate data from multiple sources without separating or organizing it according to the source it originates from, so that there is less chance of one company's sensitive information being used by another. However, operators still have concerns about how the data is handled before this "mashing up" occurs and whether it might be leaked by the research organization itself. LIRNEasia used comprehensive non-disclosure agreements to ensure that the researchers who worked with the data were aware of the substantial financial penalties that could be imposed on them for data breaches. Access to the data was also restricted.[68]
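
    The "mashing up" idea can be pictured with a small sketch: per-area activity counts from several operators are pooled and the operator label is dropped before the combined data set is shared, so that no single operator's figures can be isolated. The file names and the area-level schema are hypothetical, and real arrangements would sit alongside the legal safeguards described above rather than replace them.

        # Sketch of pooling per-area counts from multiple operators while
        # discarding the source attribution. Files and columns are assumed:
        # each input has columns area_id, event_count.
        import csv
        from collections import Counter

        pooled = Counter()
        for path in ["operator_a_counts.csv", "operator_b_counts.csv", "operator_c_counts.csv"]:
            with open(path) as f:
                for row in csv.DictReader(f):
                    pooled[row["area_id"]] += int(row["event_count"])

        with open("pooled_counts.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["area_id", "event_count"])  # no operator column survives
            for area, count in sorted(pooled.items()):
                writer.writerow([area, count])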

    Another line of argument advocates the open sharing of data. A recent article in the Economist articulated this in the context of the Ebola outbreak in West Africa: "Releasing the data, though, is not just a matter for firms since people's privacy is involved. It requires governmental action as well. Regulators in each affected country would have to order operators to make their records accessible to selected researchers, who through legal agreements would only be allowed to use the data in a specific manner. For example, Orange, a major mobile phone network operator has made millions of CDRs from Senegal and The Ivory Coast available for researchers for their use under its Data Development Initiative. However the Political will amongst regulators and Network operators to do this seems to be lacking."[69]

    It would therefore be beneficial for companies to collaborate with the customers who create the data and the researchers who want to use it to extract important insights. This, however, would require the creation of, and subsequent adherence to, self-regulatory codes of conduct.[70] In addition, cooperation between network operators would assist in facilitating the transfer of their customers' data to research organizations. Sri Lanka is an outstanding example of this model of cooperation, which has enabled various operators across spectrums to participate in the mobile-money enterprise.[71]

    (ii) Big Data and Government Delivery of Services and Functions

    The analysis of data procured in real time has proven to be integral to the formulation of policies, plans and executive decisions. Especially in an Asian context, big data can be instrumental in urban development, planning and the allocation of resources in a manner that allows the government to keep up with the rapidly growing demands of an empowered population whose numbers are rising exponentially. Researchers have been able to use data from mobile networks to engage in effective planning and management of infrastructure, services and resources. If, for example, a road or highway has been blocked for some time, an alternative route can be established before traffic builds up into congestion, simply through an analysis of information collected from traffic lights, mobile networks and GPS systems.[72]

    There is also an emerging trend of using big data for state-controlled functions such as the military. The South Korean Defense Minister, Han Min Koo, in a recent briefing to President Park Geun-hye, reflected on the importance of innovative technologies such as big data solutions.[73]

    The Chinese government has expressed concerns regarding data breaches and information leakages, which would be extremely dangerous given governments' increasing reliance on big data. A security report by Qihoo 360, China's largest software security provider, established that 2,424 of 17,875 web security loopholes were on government websites. Considering the blurring line between government websites and external networks, it has become all the more essential for authorities to boost their cyber security protections.[74]

    The Japanese government has considered investing resources in training more data scientists who can analyze the raw data obtained from various sources and apply the requisite techniques to develop accurate analyses. The Internal Affairs and Communications Ministry planned to launch a free online course on big data, targeted at corporate workers as well as government officials.[75]

    Data analytics is emerging as an efficient technique for monitoring public transport management systems in Singapore. A recent collaboration between IBM, StarHub, the Land Transport Authority and SMRT initiated a research study to observe the movement of commuters across regions.[76] This has been instrumental in revamping the data collection systems already in place and has allowed additional systems of monitoring to be procured.[77] The idea is essentially to institute a "black box" of information for every operational unit that relays real-time information from sources as varied as power switches, tunnel sensors and train wheels, by assessing patterns of noise and vibration.[78]

    In addition to this, there are numerous projects in place that seek to utilize big data to improve city life. According to Carlo Ratti, Director of the MIT Senseable City Lab, "We are now able to analyze the pulse of a city from moment to moment. Over the past decade, digital technologies have begun to blanket our cities, forming the backbone of a large, intelligent infrastructure."[79] Gerhard Schmitt, professor of Information Architecture and Founding Director of the Singapore-ETH Centre, has observed that "the local weather has a major impact on the behavior of a population." The centre is accordingly developing a range of visual platforms to inform citizens on factors such as air quality, enabling individuals to make everyday choices such as what route to take when planning a walk, or to anticipate a traffic jam.[80] Schmitt's team has also identified a pattern connecting the demand for taxis with the city's climate. Combining taxi location data with rainfall data has helped locals hail taxis during a storm. This form of data can be used in multiple ways, for instance to visualize temperature hotspots based on the "heat island" effect, where buildings, cars and cooling units cause a rise in temperature.[81]
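
    A rough sketch of relating taxi demand to rainfall, in the spirit of the work described above, might join hourly pick-up counts with hourly rainfall and compute a simple correlation. The input files, column names and hourly granularity are assumptions for this example, and the published work is considerably more sophisticated.

        # Illustrative join of hourly taxi pick-ups with hourly rainfall and a
        # basic Pearson correlation (statistics.correlation needs Python 3.10+).
        import csv
        from statistics import correlation

        def load(path, key, value):
            with open(path) as f:
                return {row[key]: float(row[value]) for row in csv.DictReader(f)}

        pickups = load("taxi_pickups_hourly.csv", "hour", "pickup_count")   # hour -> count
        rainfall = load("rainfall_hourly.csv", "hour", "rainfall_mm")       # hour -> mm

        hours = sorted(set(pickups) & set(rainfall))
        r = correlation([pickups[h] for h in hours], [rainfall[h] for h in hours])
        print(f"Pearson correlation between rainfall and taxi demand: {r:.2f}")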

    Microsoft has recently entered into a partnership with the Federal University of Minas Gerais, one of the largest universities in Brazil, to undertake a research project that could potentially predict traffic jams up to an hour in advance.[82] The project attempts to analyze information from transport departments, road traffic cameras and drivers' social network profiles to identify patterns that could help predict traffic jams approximately 15 to 60 minutes before they actually happen.[83]

    In anticipation of the increasing demand for professionals trained in data science, the Malaysian government plans to increase the number of local data scientists from the present 80 to 1,500 by 2020, with the support of the country's universities.

    IV. Big Data and the Private Sector in the Global South

    Essential considerations in the operation of big data in the private sector in the Asia-Pacific region emerge from a comprehensive survey carried out by the Economist Intelligence Unit.[84] Over 500 executives across the Asia-Pacific region were surveyed, from across industries and representing a diverse range of functions. 69% of these companies had an annual turnover of over US$500m. The respondents were senior managers responsible for key decisions regarding investment strategies and the use of big data for the same.

    The results of the survey indicate that firms in the Asia-Pacific region have had limited success in implementing big data practices. A third of the respondents claimed to have advanced knowledge of the use of big data, while more than half claimed to have made limited progress in this regard. Only 9% of the firms surveyed cited internal barriers to implementing big data practices; these included significant difficulty in enabling the sharing of information across boundaries. Approximately 40% of the respondents claimed they were unaware of their firm's big data strategy, even where one was in fact in place, simply because it had been poorly communicated to them. Almost half of the firms, however, believed that big data plays an important role in the success of the firm and that it can contribute to increasing revenue by 25% or more.

    Respondents cited numerous obstacles to the adoption of big data. These include the lack of suitable software to interpret the data and the lack of in-house skills to analyze it appropriately. In addition, the unwillingness of various departments to share their data, for fear of a breach or leak, was thought to be a major hindrance. This, combined with a lack of communication between departments and exceedingly complicated reports that cannot be analyzed given limited resources and the shortage of suitably qualified staff, has resulted in the indefinite postponement of policies promoting the adoption of big data practices.

    Over 59% of the firms surveyed agreed that collaboration is integral to innovation and that information silos are a huge hindrance within a knowledge-based economy. There is also a direct correlation between the size of a company and its progress in adopting big data, with larger firms adopting comprehensive strategies more frequently than smaller ones. A major reason for this is that large firms, with substantially greater resources, are able to actualize the benefits of big data analytics more efficiently than firms with smaller revenues. Businesses that have advanced policies in place outlining their big data strategies are also more likely to communicate these strategies to their employees to ensure greater clarity in the process.

    The use of big data was recently voted the "best management practice" of the past year in a cumulative ranking published on 13 January 2015 in Beijing by Chief Executive China Magazine, a trade journal published by Global Sources. The major benefit cited was the real-time information sourced from customers, which allows for direct feedback from clients when making decisions about changes in products or services.[85]

    A significant contributor to the inadequate use of data analytics is the belief that a PhD is a prerequisite for entering the field of data science. This misconception was pointed out by Richard Jones, vice president of Cloudera for the Australia, New Zealand and ASEAN region. Cloudera provides businesses with the professional services they may need to use big data effectively, combining the necessary manpower, technology and consultancy services.[86] Deepak Ramanathan, chief technology officer of SAS Asia Pacific, believes that this skills gap can be addressed by forming data science teams within both governments and private enterprises. These teams could comprise members with statistical, coding and business skills, working collaboratively to address the problem at hand.[87] SAS is an enterprise software giant that creates tools tailored to business users to help them interpret big data. Eddie Toh, planning and marketing manager of Intel's data center platform, believes that businesses do not necessarily need data scientists to be able to use big data analytics to their benefit and can in fact outsource the technical aspects of interpreting this data as and when required.[88]

    The analytics team at Dell has forged a partnership with Brazilian public universities to facilitate the development of a local talent pool in the field of data analytics. The Instituto of Data Science (IDS) will provide training methodologies for in-person or web-based classes.[89] The project is being undertaken by StatSoft, a subsidiary that Dell acquired last year.[90]

    V. Conclusion

    Numerous challenges have emerged in the analysis and interpretation of Big Data. While it presents an extremely engaging opportunity, with the potential to transform the lives of millions of individuals, inform the private sector and influence government, the actualization of this potential requires the creation of a sustainable foundational framework; one that is able to mitigate the various challenges that present themselves in this context.

    A colossal increase in the rate of digitization has resulted in an unprecedented increase in the amount of Big Data available, especially through the rapid diffusion of cellular technology. The importance of mobile phones as a source of data, especially among low-income demographics, cannot be overstated. This data can be used to understand the needs and behaviors of large populations, providing in-depth insight into the relevant context within which valuable assessments of the competence, suitability and feasibility of various policy mechanisms and legal instruments can be made. However, this explosion of data has a lasting impact on how individuals and organizations interact with each other, which may not be reflected in the interpretation of raw data without a contextual understanding of the demographic. It is therefore vital to employ appropriate expertise in assessing and interpreting this data. The significant lack of human capital to analyze this information accurately poses a definite challenge to its effective utilization in the Global South.

    The legal and technological implications of using Big Data are best conceptualized within the deliberations on protecting the privacy of the contributors to this data. The primary producers of this information, across platforms, are often unaware that they are in fact consenting to the subsequent use of the data for purposes other than those intended. For example, people routinely accept the terms and conditions of popular applications without understanding where or how the data they inadvertently provide will be used.[91] This is especially true of media generated on social networks, which are increasingly being made available on more accessible platforms such as mobile phones and tablets. Privacy has been, and will always remain, an integral pillar of democracy. It is therefore essential that policy makers and legislators respond effectively to possible compromises of privacy in the collection and interpretation of this data by instituting adequate safeguards.

    Another challenge that has emerged concerns access to and sharing of this data. Private corporations have been reluctant to share data due to concerns about potential competitors being able to access and utilize it. In addition, legal considerations prevent the sharing of data collected from customers or users of their services. The technical challenges of storing and interpreting this data adequately also prove to be significant impediments. It is therefore important that adequate legal agreements be formulated to facilitate reliable access to streams of data, as well as access to data storage facilities that accommodate retrospective analysis and interpretation.

    In order for the use of Big Data to gain traction, it is important that these challenges are addressed in an efficient manner, with durable and self-sustaining mechanisms for resolving significant obstructions. The debates and deliberations shaping the articulation of privacy concerns and access to such data must be supported with adequate tools and mechanisms to ensure a system of "privacy-preserving analysis." The UN Global Pulse has put forth the concept of data philanthropy to attempt to resolve these issues, wherein "corporations [would] take the initiative to anonymize (strip out all personal information) their data sets and provide this data to social innovators to mine the data for insights, patterns and trends in realtime or near realtime."[92]

    The concept of data philanthropy highlights particular challenges and avenues that may be considered in future deliberations, which could result in specific refinements to the process.

    One of the primary uses of Big Data, especially in developing countries, is to address important developmental issues such as the availability of clean water, food security, human health and the conservation of natural resources. Effective disaster management has also emerged as one of the key functions of Big Data. It therefore becomes all the more important for organizations to assess the information supply chains pertaining to specific data sources in order to identify and prioritize the issues of data management.[93] Data emerging from different contexts and sources may appear in varied compositions and will differ significantly across economic demographics. Big Data generated in certain contexts may be incomplete owing to the unavailability of data in some regions, and studies that inform policy decisions should take this discrepancy into account. This data unavailability has resulted in a digital divide that is especially prevalent in the Global South.[94]

    Appropriate analysis of the Big Data generated would provide valuable insight into key areas and inform policy makers with respect to important decisions. However, it is necessary to ensure that the quality of this data meets a specific standard and that appropriate methodological processes have been followed in interpreting and analyzing it. The government is a key actor that can shape the ecosystem surrounding the generation, analysis and interpretation of big data. It is therefore essential that governments across the Global South recognize the need to collaborate with civic organizations as well as technical experts in order to create appropriate legal frameworks for the effective utilization of this data.


    [1] Onnela, Jukka-Pekka. "Social Networks and Collective Human Behavior." UN Global Pulse. 10 Nov. 2011. <http://www.unglobalpulse.org/node/14539>

    [2] http://www.business2community.com/big-data/evaluating-big-data-predictive-analytics-01277835

    [3] Ibid

    [4] http://unglobalpulse.org/sites/default/files/BigDataforDevelopment-UNGlobalPulseJune2012.pdf

    [5] Ibid, p.13, pp.5

    [6] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

    [7] Helbing, Dirk, and Stefano Balietti. "From Social Data Mining to Forecasting Socio-Economic Crises." Arxiv (2011): 1-66. 26 Jul 2011. <http://arxiv.org/pdf/1012.0178v5.pdf>.

    [8] Manyika, James, Michael Chui, Brad Brown, Jacques Bughin, Richard Dobbs, Charles Roxburgh and Angela H. Byers. "Big data: The next frontier for innovation, competition, and productivity." McKinsey Global Institute (2011): 1-137. May 2011.

    [9] "World Population Prospects, the 2010 Revision." United Nations Development Programme. <http://esa.un.org/unpd/wpp/unpp/panel_population.htm>

    [10] Mobile phone penetration, measured by Google, from the number of mobile phones per 100 habitants, was 96% in Botswana, 63% in Ghana, 66% in Mauritania, 49% in Kenya, 47% in Nigeria, 44% in Angola, 40% in Tanzania (Source: Google Fusion Tables)

    [11] http://www.brookings.edu/blogs/africa-in-focus/posts/2015/04/23-big-data-mobile-phone-highway-sy

    [12] Ibid

    [13] <http://www.google.com/fusiontables/Home/>

    [14] "Global Internet Usage by 2015 [Infographic]." Alltop. <http://holykaw.alltop.com/global-internetusage-by-2015-infographic?tu3=1>

    [15] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011 <http://www.unglobalpulse.org/blog/digital-smoke-signals>

    [16] Ibid

    [17] Ibid

    [18] Ibid

    [19] Goetz, Thomas. "Harnessing the Power of Feedback Loops." Wired.com. Conde Nast Digital, 19 June 2011. <http://www.wired.com/magazine/2011/06/ff_feedbackloop/all/1>.

    [20] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

    [21] Bollier, David. The Promise and Peril of Big Data. The Aspen Institute, 2010. <http://www.aspeninstitute.org/publications/promise-peril-big-data>

    [22] Ibid

    [23] Eagle, Nathan and Alex (Sandy) Pentland. "Reality Mining: Sensing Complex Social Systems", Personal and Ubiquitous Computing, 10.4 (2006): 255-268.

    [24] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

    [25] OECD, Future Global Shocks, Improving Risk Governance, 2011

    [26] "Economy: Global Shocks to Become More Frequent, Says OECD." Organisation for Economic Cooperationand Development. 27 June. 2011.

    [27] Friedman, Jed, and Norbert Schady. How Many More Infants Are Likely to Die in Africa as a Result of the Global Financial Crisis? Rep. The World Bank <http://siteresources.worldbank.org/INTAFRICA/Resources/AfricaIMR_FriedmanSchady_060209.pdf>

    [28] Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute, June 2011. <http://www.mckinsey.com/mgi/publications/big_data/pdfs/MGI_big_data_full_report.pdf>

    [29] The word "crowdsourcing" refers to the use of non-official actors ("the crowd") as (free) sources of information, knowledge and services, in reference and opposition to the commercial practice of

    outsourcing. "

    [30] Burke, J., D. Estrin, M. Hansen, A. Parker, N. Ramanathan, S. Reddy and M.B. Srivastava. Participatory Sensing. Rep. Escholarship, University of California, 2006. <http://escholarship.org/uc/item/19h777qd>.

    [31] "Crisis Mappers Net-The international Network of Crisis Mappers." <http://crisismappers.net>, http://haiti.ushahidi.com and Goldman et al., 2009

    [32] Alex Pentland cited in "When There's No Such Thing As Too Much Information". The New York Times. 23 Apr. 2011. <http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r=1&src=tptw>.

    [33] Nathan Eagle also cited in "When There's No Such Thing As Too Much Information". The New York Times. 23 Apr. 2011. <http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r=1&src=tptw>.

    [34] Helbing and Balietti. "From Social Data Mining to Forecasting Socio-Economic Crisis."

    [35] Eysenbach G. Infodemiology: tracking flu-related searches on the Web for syndromic surveillance. AMIA (2006). <http://yi.com/home/EysenbachGunther/publications/2006/eysenbach2006cinfodemiologyamiaproc.pdf>

    [36] "Syndromic Surveillance (SS)." Centers for Disease Control and Prevention. 06 Mar. 2012. <http://www.cdc.gov/ehrmeaningfuluse/Syndromic.html>.

    [37] Health Map <http://healthmap.org/en/>

    [39] www.ushahidi.com

    [41] Ushahidi is a nonprofit tech company that was developed to map reports of violence in Kenya following the 2007 post-election fallout. Ushahidi specializes in developing "free and open source software for information collection, visualization and interactive mapping." <http://ushahidi.com>

    [42] Conducted by the European Commission's Joint Research Center against data on damaged buildings collected by the World Bank and the UN from satellite images through spatial statistical techniques.

    [43] www.ushahidi.com

    [44] See https://tacticaltech.org/

    [45] See www.flowminder.org

    [46] Ibid

    [48] http://allafrica.com/stories/201507151726.html

    [49] Ibid

    [50] Ibid

    [51] http://www.computerworld.com/article/2948226/big-data/opinion-apple-and-ibm-have-big-data-plans-for-education.html

    [52] Ibid

    [53] http://www.grameenfoundation.org/where-we-work/sub-saharan-africa/uganda

    [54] Ibid

    [55] http://chequeado.com/

    [56] http://datochq.chequeado.com/

    [57] Times of India (2015): "Chandigarh May Become India's First Smart City," 12 January, http://timesofindia.indiatimes.com/india/Chandigarh-may-become-Indias-first-smart-city/articleshow/45857738.cms

    [58] http://www.cisco.com/web/strategy/docs/scc/ioe_citizen_svcs_white_paper_idc_2013.pdf

    [59] Townsend, Anthony M (2013): Smart Cities: Big Data, Civic Hackers and the Quest for a New Utopia, New York: WW Norton.

    [60] See "Street Bump: Help Improve Your Streets" on Boston's mobile app to collect data on roadconditions, http://www.cityofboston.gov/DoIT/ apps/streetbump.asp

    [61] Mayer-Schonberger, V and K Cukier (2013): Big Data: A Revolution That Will Transform How We Live, Work, and Think, London: John Murray.

    [62] http://www.epw.in/review-urban-affairs/big-data-improve-urban-planning.html

    [63] Ibid

    [64] Newman, M E J and M Girvan (2004): "Finding and Evaluating Community Structure in Networks," Physical Review E, American Physical Society, Vol 69, No 2.

    [65] http://www.sundaytimes.lk/150412/sunday-times-2/big-data-can-make-south-asian-cities-smarter-144237.html

    [66] Ibid

    [67] Ibid

    [68] http://www.epw.in/review-urban-affairs/big-data-improve-urban-planning.html

    [69] GSMA (2014): "GSMA Guidelines on Use of Mobile Data for Responding to Ebola," October, http://www.gsma.com/mobilefordevelopment/wpcontent/uploads/2014/11/GSMA-Guidelineson-protecting-privacy-in-the-use-of-mobilephone-data-for-responding-to-the-Ebola-outbreak-_October-2014.pdf

    [70] An example of the early-stage development of a self-regulatory code may be found at http://lirneasia.net/2014/08/what-does-big-data-sayabout-sri-lanka/

    [71] See "Sri Lanka's Mobile Money Collaboration Recognized at MWC 2015," http://lirneasia. net/2015/03/sri-lankas-mobile-money-colloboration- recognized-at-mwc-2015/

    [72] http://www.thedailystar.net/big-data-for-urban-planning-57593

    [74] http://www.news.cn/, 25/11/2014

    [76] http://www.todayonline.com/singapore/can-big-data-help-tackle-mrt-woes

    [77] Ibid

    [78] Ibid

    [79] http://edition.cnn.com/2015/06/24/tech/big-data-urban-life-singapore/

    [80] Ibid

    [81] Ibid

    [82] http://venturebeat.com/2015/04/03/how-microsofts-using-big-data-to-predict-traffic-jams-up-to-an-hour-in-advance/

    [83] Ibid

    [84] https://www.hds.com/assets/pdf/the-hype-and-the-hope-summary.pdf

    [85] http://www.news.cn , 14/01/2015

    [86] http://www.techgoondu.com/2015/06/29/plugging-the-big-data-skills-gap/

    [87] Ibid

    [88] Ibid

    [89] http://www.zdnet.com/article/dell-to-create-big-data-skills-in-brazil/

    [90] Ibid

    [91] Efrati, Amir. "'Like' Button Follows Web Users." The Wall Street Journal. 18 May 2011. <http://online.wsj.com/article/SB10001424052748704281504576329441432995616.html>

    [92] Kirkpatrick, Robert. "Data Philanthropy: Public and Private Sector Data Sharing for Global Resilience." UN Global Pulse. 16 Sept. 2011. <http://www.unglobalpulse.org/blog/data-philanthropy-public-privatesector-data-sharing-global-resilience>

    [93] Laney D (2001): 3D data management: Controlling data volume, velocity and variety. Available at: http://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-DataManagement-Controlling-Data-Volume-Velocity-andVariety.pdf

    [94] Boyd D and Crawford K (2012) Critical questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication, & Society 15(5): 662-679.

    The Creation of a Network for the Global South - A Literature Review

    by Tanvi Mani last modified Feb 04, 2016 01:13 PM

    I. Introduction

    The organization of societies and states is increasingly predicated on the development of information technology, which has begun to enable the construction of specialized networks. These networks aid in the mobilization of resources on a global platform.[1] There is a need for governance structures that embody this globalized thinking and adopt superior information technology devices to bridge gaps in the operation of, and participation in, not only political functions but also economic processes and operations.[2] Currently, public institutions fall short of an optimum level of functioning simply because they lack the information, know-how and resources to respond effectively to this newly globalized and economically liberalized world order. Civil society is beginning to seek a greater participatory voice in both policy making and ideating, which requires public institutions to find a method of allowing this participation while retaining the crux of their functions and processes. The network society thus requires, as argued by Castells, a new methodology of social structuring, one amalgamating the analysis of social structure and social action within the same overarching framework.[3] This network propounds itself as a 'dynamic, self-evolving structure, which, powered by information technology and communicating with the same digital language, can grow, and include all social expressions, compatible with each network's goals. Networks increase their value exponentially through their contribution to human resources, markets, raw materials and other such components of production and distribution.'[4]

    As noted by Kevin Kelly, 'The Atom is the past. The symbol of science for the next century is the dynamical Net. … Whereas the Atom represents clean simplicity, the Net channels the messy power of complexity. The only organization capable of nonprejudiced growth or unguided learning is a network. All other topologies limit what can happen. A network swarm is all edges and therefore open ended any way you come at it. Indeed the network is the least structured organization that can be said to have any structure at all. … In fact a plurality of truly divergent components can only remain coherent in a network. No other arrangement - chain, pyramid, tree, circle, hub - can contain true diversity working as a whole.'[5]

    A network is therefore integral to the facilitation, coordination and advocacy of different agendas within a singular framework that seeks to formulate suitable responses to a wide range of problems across regions. An ideal model of a network would therefore be one that reflects the interconnectivity between relationships, strengthened by effective communication and based on a strong foundation of trust.

    The most powerful element of a network, however, is the idea of a common purpose. The pursuit is towards similar ends, and the interconnected web of support a network offers is therefore in realization of a singular goal.

    II. Evolution of the Network

    There are certain norms that must be incorporated for a network to work at its best. Robert Chambers, in his book Whose Reality Counts?, identifies these norms and postulates their extension to every form of network, in order to capture its creative spirit and aid in the realization of its goals.[6] A network should therefore ideally foster four fundamental elements in order to inculcate an environment of trust, encouragement and the overall actualization of its purpose. These elements are: diversity, or the encouragement of a multitude of narratives from diverse sources; dynamism, or the ability of participants to retain their individual identities while maintaining a facilitative structure; democracy, or an equitable system of decision making to enable the efficient working of the net; and finally, decentralization, or the feasibility of enjoying local specifics on a global platform.[7]

    In order to attain these ideal elements, it is integral to strengthen certain aspects of practice by performing specific and focused functions. These include ensuring a clear, broad consensus, which secures the joining together of a common purpose. Additionally, centralization, in the form of an overarching set of rules, must be kept to a minimum in order to allow greater flexibility while still providing the necessary support structure. The building of trust and solid relationships between participants is prioritized to enhance creative ideation in a supportive environment. Joint activities, more than being output-oriented, are seen as the knots that tie together the entire web of support. Input and participation are the foremost objectives of the network, in keeping with the understanding that "contribution brings gain".[8]

    Significant management issues that plague networks include the practical aspects of bringing the network into operation through efficient leadership and the consolidation of a common vision. A balanced approach would entail a common consultation on the goals of the network, the sources of funding and an agreed-upon structure within which the network would operate. It is also important to create alliances outside the sector of familiarity and to ensure an inclusive environment for members across regions, allowing them to retain their localized individuality while affording them a global platform.[9]

    III. Structure

    The structural informality of a network is essential to its sustenance. Networks must therefore ensure that they embody a non-hierarchized structure, devoid of bureaucratic interference and insulated from a centralized system of control and supervision. This requires an internal system of checks and balances, consisting of periodic reviews and assessments. Networks must therefore limit the supervisory powers of the secretariat. The secretariat should coordinate the network's activities and allocate appropriate areas of engagement according to the relative strengths of the participating members.

    One form of network structure, postulated within a particular research study, is the threads, knots and nets model.[10] It consists of members within a network bound together by threads of relationship, communication and trust. These threads represent the commonality that binds together the participants of the particular network. The threads are established through common ideas and voluntary participation in the processes of communication and conflict resolution.[11]

    The knots represent the combined activities in which the participants engage, with the common goal of realizing a singular purpose. These knots signify an optimum level of activity, wherein members of the network are able to support, inspire and confer tangible benefits on each other. The net represents the entire structure of the network, which is constructed through a confluence of relationships and common activities.[12] The structure is autonomous in nature and allows participants to contribute without losing their individual identities. It is also dynamic and flexible, incorporating new elements with relative ease. It is therefore a collaboration which affords its members the opportunity to expand without losing its purpose. The maintenance of such a structure requires constant review and repair, with adequate awareness of weak links or "threads" and the capability and willingness to knot them together with new participants, thereby extending the net.

    For example, the Global Alliance for Vaccines and Immunization used a system of organizational "milestones" to monitor the progress of the network and keep it focused. It requires a sustained institutional effort to fulfill its mandate of "the right of every child to be protected against vaccine-preventable diseases" and brings together international organizations, civil society and private industry.[13] As postulated within the Critical Choices research study of the United Nations, clearly defined milestones are integral to sustaining an effective support mechanism for donors and ensuring that all relevant participants are on board.[14] This also allows donors to be made aware of the tangible outcomes that the network has achieved. Interim goals that are achievable within a short span of time also afford a sense of legitimacy to the network, allowing it to deliver on its mandate early on. Setting milestones requires an in-depth focus and a nuanced understanding of specific aspects of larger problems, and delivering early results on these problems builds a foundational base of trust on which a possibly long-drawn-out consultative process can rest.[15]

    A network might often find alliances outside its sector of operation. For example, Greenpeace was able to make its voice heard in international climate change negotiations by engaging with private insurance companies and enlisting their support.[16] The organization looked towards the private sector to mobilize resources and enlist the requisite expertise for its various projects.[17]

    A. Funding

    The financial support a network receives is essential to its sustenance. The initial seed money may be obtained from a single source; however, cross-sectoral financing is necessary to build consensus on the issues that form part of the network's mandate. The World Commission on Dams (WCD), for example, obtains funding from multiple sources in order to retain its credibility. The sources of funding of the WCD include government agencies, multilateral organizations, business associations and NGOs, without a single donor contributing more than 10% of the total funding it receives.[18] However, the difficulty with this model of funding is the relative complexity of assimilating a number of smaller contributions, which may take away from the network's capacity to expand its reach and enhance the scope of its work. Cross-sectoral funding is less of a fundamental requirement for networks whose primary mandate is implementation, such as the Global Environment Facility (GEF), whose legitimacy is derived from intergovernmental treaties and which is therefore only funded by governments.[19] The GEF has only recently broadened its sources of funding to include external contributions from the private sector.

    A network can also be funded through the objective it seeks to achieve in the course of its activities. For example, Rugmark, an international initiative which seeks to mitigate the use of child labor in South Asia, uses an external on-site monitoring system to verify and provide labels certifying that carpets have been produced without the use of child labor.[20] The monitors of this system are trained by Rugmark, and carpet producers have to sign a binding agreement undertaking not to employ children below the age of 14 in order to receive the certification. The funds generated from these carpets, for which American and European importers pay 1% of the import value, are used to provide rehabilitation and education facilities for children in affected areas. The use of these funds is reported regularly.[21]

    The funding must be sustained over a number of years, which is a difficult task for networks that require an overall consensus of participants. The greatest outcomes of a network are not tangible solutions to the problem but the facilitation of an environment which allows stakeholders to derive a tangible solution. Thus, the elements of trust, communication and collaboration are integral to the efficient functioning of the network. However, the lack of tangible outcomes exposes funders to financial risks. The best way to reduce such risks is to institute an uncompromising time limit for the initiative, within which it must achieve tangible results or solutions that can be implemented. A less stringent approach would be to incorporate a system of periodic review and assessment of the network's accomplishments, subsequent to which recommendations may be made for a further course of action.[22]

    B. Relationships

    A three-year study conducted by Newell & Swan drew definitive conclusions with respect to inter-organizational collaboration between participants within a network. The study identified three types of trust: companion trust, or the trust that exists within the goodwill and friendship between participants; competence trust, wherein the competence of other participants to carry out the tasks assigned to them is agreed upon; and commitment trust, which is predicated on the contractual or inter-institutional arrangements that are agreed upon.[23] While companion and competence trust are easily identifiable, commitment trust is more subjective, as it is determined by the agreement surrounding core values and overall identifiable aims. Sheppard & Tuchinsky refer to an identification-based trust which is based on a collective understanding of shared values. Such trust requires significant investment but, they argue, "The rewards are commensurably greater and the benefits go beyond quantity, efficiency and flexibility."[24] Powell postulates, "Trust and other forms of social capital are moral resources that operate in fundamentally different manner than physical capital. The supply of trust increases, rather than decreases, with use: indeed, trust can be depleted if not used."[25]

    Karl Weick endorses the "maintenance of tight control values and beliefs which allow for local adaptation within centralized systems."[26] The autonomy that participants within a network enjoy is therefore considered close to sacred, so as to allow them to engage with each other on an equitable footing while still maintaining their individual identities. Freedman and Reynders believe that networks place a so-called 'premium' on "the autonomy of those linked through the network… networks provide a structure through which different groups - each with their own organizational styles, substantive priorities, and political strategies - can join together for common purposes that fill needs felt by each."[27] Consequently, the lower the level of centralized control within a network, the greater the requirement of trust. Allen Nan resonates with this idea, as is evident from her review of coordination among conflict resolution NGOs. She believes that these NGOs are most effective when "beginning with a loose voluntary association which grows through relationship building, gradually building more structure and authority as it develops. No NGO wants to give away its authority until it trusts a networking body of people that it knows."[28]

    C. Communication and Collaboration

    The binding force that ties together any network is the importance of relationships between participants and their interactions with organizations outside the network. Research has shown that face-to-face interaction works best; although email may be practical, face-to-face meetings at regular intervals build a level of trust amongst participants.[29] It is, however, important to prevent networks from turning into 'self-selecting oligarchies'; to prevent this, a balance needs to be drawn between goodwill and trust in others' competence, along with a common understanding of differently hierarchized values.[30]

    There is also a pressing need to develop a relationship vocabulary, as suggested by Taylor, which would be of particular use within transnational networks and afford a deeper understanding of cross-cultural relationships.[31]

    D. Participation

    A significant issue that networks today have to address is how to inculcate and then maintain participation in the activities of the network. This includes providing incentives to participants, encouraging diversity and enabling greater creative inflow across sectors to generate innovative output. Participation involves three fundamental elements: action, which includes active contribution in the form of talking, listening, commenting, responding and sharing information; process, which aids in an equitable system of decision making and the construction of relationships; and the underpinning values associated with these two elements, which include spreading equality, inculcating openness and including previously excluded communities or individuals.[32] Participation itself envisages a three-level definition: participation as a contribution, where people offer a tangible input; participation as an organizational process, where people organize themselves to influence certain pre-existing processes; and participation as a form of empowerment, where people seek to gain power and authority from participating.

    In order to create an autonomous system of evaluating and monitoring the nature and context of participation, a network would have to attempt to systematically incorporate a few fundamental processes. These include: understanding the dynamism of the network through established criteria for monitoring the levels of participation of its members; creating an explicit checklist qualifying this participation, covering the contributions of the participants, the limits of their commitment and the available resources that must be shared and distributed; acknowledging the importance of relationships as fundamental to the success of any network; building capacity for facilitative and shared leadership; and tracing the changes that occur when the advocacy and lobbying activities of individuals are linked, using these individuals as participants who have the power to influence policy and development at various levels.[33] Finally, the recognition that utilizing the combined faculties of the network would aid in the effectuation of further change is vital to sustaining active participation in the network.[34] It is common for networks to stagnate simply because of a lack of clarity about what a network really is or entails. Significant misconceptions about the activities of a network, such as the idea that it "works solely as a resource center, to provide information, material and papers, rather than as forums for two way exchanges of information and experiences," contribute to the misunderstanding regarding participation requirements within a network.[35] To facilitate an active, participatory function of learning, a network needs to be more than a resource centre that seeks to meet the needs of beneficiaries. While meeting these needs is essential, development projects tend to obfuscate the benefit/input relationship within a network, thus significantly depleting its dynamism.[36]

    One method of moving away from the needs-based model is to create a tripartite framework, as was done within a particular research study.[37] This involves a Contributions Assessment, a Weaver's Triangle for Networks and an identification of channels of participation.

    A Contributions Assessment is an analysis of what the participants within a network are willing to contribute. It enables the network to assess what resources it has access to and how those resources may be distributed amongst the participants, multiplied or exchanged.[38] This system is predicated on assessing what participants have to offer, as opposed to what they need. It challenges the long-held notion that an evaluation is required to identify problems to which recommendations respond, and instead seeks to focus on moments of excellence and to enable a discussion of the factors that contributed to these moments.[39] It thus places a value on the best of "what is", as opposed to trying to find a plausible "what ought to be". This approach allows participants to recognize that they are in fact the real "resource centre" of the network and encourages them to act accordingly.

    A Contributions Assessment may be incorporated in practice through a few steps. It must be focused on the contributions, after a discussion of who the contributors may be. The aims of the network must be clarified, along with a specification of the contributions required, such as newsletters, a conference or policy analysis. The members of the network must be clear on what they would like to contribute to the network and how such contributions might be delivered. Finally, the secretariat must be able to ideate or innovate on how it can enable more contributions from the network in a more effective manner.[40]

    The Weaver's Triangle has been adapted to be applied within networks and enables participants to understand what the aims and activities of the network are. It identifies the overall aim of the network and the change the network seeks to bring about to the status quo. It then lays out the objectives of the network in the form of specific statements about the differences that the network seeks to bring about. Finally, the network has to explain why a particular activity has been chosen.[41] The base of the triangle reflects the specific activities that the network seeks to engage in to achieve these objectives. The triangle is further divided into two, to ensure that action aims and process aims have equal weightage; this allows for the facilitation of exchange and connection between the members of the network.[42]

    The Circles of Participation is an idea that has been put forth by the Latin American and Caribbean Women's Health Network (LACWHN).[43] This network has three differentiated categories of membership, which it uses to determine the degree of an organization's commitment to the network. 'R' refers to members who receive the women's health journal; 'P' refers to members who actively participate in events and campaigns and who act as advisors on specific topics; and 'PP' refers to permanent participants in the network at national and international levels, who also receive the journal. This categorization allows the network to assess its dynamism and growth, with members moving through the categories depending on their levels of participation.[44]

    An important space for contributions to the network is the newsletter. This can be facilitated by allowing contributions from various sources, provided they meet the established quality checks, ensuring a balance between the regions of origin of the members of the network, ensuring a balance between the policy and program activities of the members, and keeping the centralized editorial process to a minimum. This is in keeping with the ideal of a decentralized system of expression that allows each member to retain its individuality while still contributing to the aims of the network. The Women's Global Network on Reproductive Rights (WGNRR) sought to create a similar system of publication to measure the success of its linkages, the levels of empowerment amongst members in terms of strategizing and enabling localized action, and the allocation of space in a fair and equitable manner. [45] Another network, Creative Exchange, customizes its information flow so that each member only receives the information it expresses interest in.[46] This prevents the overburdening of members with unnecessary information.

    The activities of the network which do not directly pass through the secretariat or the coordinator can be monitored efficiently by keeping in close contact with new entrants to the network and capturing the essence of the activities that occur on its fringes. This would allow an assessment of the diversity of the network. For example, Creative Exchange sends out short follow-up emails to determine the number and nature of contacts that have been made subsequent to a particular item in the newsletter. The UK Conflict Development and Peace Network (CODEP) records the newest subscribers to the network after every issue of its newsletter, and AB Colombia sends out weekly news summaries electronically which are available for free to recipients who provide details of their professional engagements and why or how they wish to use these summaries. [47] This enables the mapping of the type of recipients the information reaches.

    E. Leadership and Coordination

    Sarason and Lorentz postulate four distinguishing characteristics that capture the creativity and expertise required by individuals leading and coordinating networks.[48] Knowledge of the territory or a broad understanding of the type of members, the resources available and the needs of the members is extremely important to facilitate an ideal environment of mutual trust and open dialogue between the members. Scanning the network for fluidity and assessing openings, making connections and innovating solutions would enable an efficient leadership that would contribute to the overall dynamism of the network. In addition to this, perceiving strengths and building on assets of existing resources would allow the network to capitalize on its strengths. Finally, the coordinators of a network must be a resource to all members of the network and thus enable them to create better and more efficient systems. They must therefore exercise their personal influence over members wherever required for the overall benefit of the network. Practically, a beneficial leadership would also require an inventive approach by providing fresh and interesting solutions to immediate problems. A sense of clarity, transparency and accountability would also encourage members of the network to participate more and engage with each other. It is important for the leadership within a network to deliver on expectations, while building consensus amongst its members.

    A shared objective, a collaborative setting and a constant review of strategies are important for maintaining linkages within a network. Responsible relationships, underpinned by values and supported by flows of relevant information, allow those engaged within a network to carry out an effective and fruitful analysis of the relevant work. In addition to this, a respect for the autonomy of the network is essential.

    F. Inclusion

    Public policy networks are more often than not saturated with the economic and social elite of the developed world. A network across the Global South would have to change this norm and extend its ambit of membership to grassroots organizations, which might not otherwise have had the resources or the opportunity to be a part of a network.[49] Networks can achieve their long-term goals only if they are driven by the willingness to include organizations from across economic demographics. This would ensure that their output is the result of a collaborative process that takes into account cross-cultural norms and differentials across economic demographics.

    The participation of diverse actors reflects a policy-making process that has given due regard to on-the-ground realities and is sensitive towards the concerns of differently placed interest groups. Networks have been accused of catering only to the needs of industrial countries and subscribing to the values of the Global North, thus stunting local development and enforcing double standards. This tarnishes the legitimacy of the processes inculcated within the network itself. It is therefore all the more essential that a network focused on the Global South have a diverse collection of members from across backgrounds and economic contexts. Additionally, the accountability of the network to civil society is dependent on the nature of the links it maintains with the public. Inclusion thus fosters a sense of legitimacy and accountability. The inclusion of local institutions from the beginning would also increase the chances of the solutions provided by the network being effectively implemented. Local inclusion affords a sense of responsibility and ensures that the network remains sustainable in the long run. Allowing local stakeholders to take ownership of the network, participate in the formulation of policies, engage in planning and facilitate participation would enable significant public policy issues to be addressed efficiently. [50] Networks would thus need to create avenues for the participation of local institutions and civil society so as to engage in a democratic form of decision making.

    III. Evaluation

    The evaluation of a network is most efficiently carried out through a checklist, such as the one formulated within a research study for the purpose of evaluating its own network. [51]

    This checklist enumerates the various elements that have to be taken into consideration while evaluating the success of a network, as follows:

    FIG 1.[52]

    1. What is a network?

    'Networks are energising and depend crucially on the motivation of members'

    (Networks for Development, 2000:35)

    This definition is one that is broadly shared across the literature, although it is more detailed than some.

     

    A network has:

    • A common purpose derived from shared perceived need for action
    • Clear objectives and focus
    • A non-hierarchical structure

    A network encourages

    • Voluntary participation and commitment
    • The input of resources by members for benefit of all

    A network provides

    • Benefit derived from participation and linking

     

    2. What does a network do?

    • Facilitate shared space for exchange, learning, development - the capacity-building aspect
    • Act for change in areas where none of the members is working in a systematic way - the advocacy, lobbying and campaigning aspect
    • Include a range of stakeholders - the diversity/ broad-reach aspect

     

    3. What are the guiding principles and values?

    • Collaborative action
    • Respect for diversity
    • Enabling marginalised voices to be heard
    • Acknowledgement of power differences, and commitment to equality

    4. How do we do what we do, in accordance with our principles and values?

    Building Participation

    • Knowing the membership, what each can put in, and what each seeks to gain
    • Valuing what people can put in
    • Making it possible for them to do so
    • Seeking commitment to a minimum contribution
    • Ensuring membership is appropriate to the purpose and tasks
    • Encouraging members to be realistic about what they can give
    • Ensuring access to decision-making and opportunities to reflect on achievements
    • Keeping internal structural and governance requirements to a necessary minimum.

     

    Building Relationships and Trust

    • Spending time on members getting to know each other, especially face-to-face
    • Coordination point/secretariat has relationship-building as vital part of work
    • Members/secretariat build relations with others outside network - strategic individuals and institutions

     

    Facilitative Leadership (may be one person, or rotating, or a team)

    • Emphasis on quality of input rather than control
    • Knowledgeable about issues, context and opportunities,
    • Enabling members to contribute and participate
    • Defining a vision and articulating aims
    • Balancing the creation of forward momentum and action, with generating consensus
    • Understanding the dynamics of conflict and how to transform relations
    • Promoting regular monitoring and participatory evaluation
    • Have the minimum structure and rules necessary to do the work. Ensure governance is light, not strangling. Give members space to be dynamic.
    • Encourage all those who can make a contribution to the overall goal to do so, even if it is small.

    Working toward decentralised and democratic governance

    • At the centre, make only the decisions that are vital to continued functioning. Push decision-making outwards.
    • Ensure that those with least resources and power have the opportunity to participate in a meaningful way.

     

    Building Capacity

    • Encourage all to share the expertise they have to offer. Seek out additional expertise that is missing.

     

    5. What are the evaluation questions that we can ask about these generic qualities? How does each contribute to the achievement of your aims and objectives?

    Participation

    • What are the differing levels or layers of participation across the network?
    • Are people participating as much as they are able to and would like?
    • Is the membership still appropriate to the work of the network? Purpose and membership may have evolved over time
    • Are opportunities provided for participation in decision-making and reflection?
    • What are the obstacles to participation that the network can do something about?

    Trust

    • What is the level of trust between members? Between members and secretariat?
    • What is the level of trust between non-governing and governing members?
    • How do members perceive levels of trust to have changed over time?
    • How does this differ in relation to different issues?
    • What mechanisms are in place to enable trust to flourish? How might these be strengthened?

     

    Leadership

    • Where is leadership located?
    • Is there a good balance between consensus-building and action?
    • Is there sufficient knowledge and analytical skill for the task?
    • What kind of mechanism is in place to facilitate the resolution of conflicts?

     

    Structure and control

    • How is the structure felt and experienced? Too loose, too tight, facilitating, strangling?
    • Is the structure appropriate for the work of the network?
    • How much decision-making goes on?
    • Where are most decisions taken? Locally, centrally, not taken?
    • How easy is it for change in the structure to take place?

     

    Diversity and dynamism

    • How easy is it for members to contribute their ideas and follow-through on them?
    • If you map the scope of the network through the membership, how far does it reach? Is this as broad as intended? Is it too broad for the work you are trying to do?

    Democracy

    • What are the power relationships within the network? How do the powerful and less powerful interrelate? Who sets the objectives, has access to the resources, participates in the governance?

    Factors to bear in mind when assessing sustainability

    • Change in key actors, internally or externally; succession planning is vital for those in central roles
    • Achievement of lobbying targets or significant change in context leading to natural decline in energy;
    • Burn out and declining sense of added value of network over and above every-day work.
    • Membership in networks tends to be fluid. A small core group can be a worry if it does not change and renew itself over time, but snapshots of moments in a network's life can be misleading. In a flexible, responsive environment members will fade in and out depending on the 'fit' with their own priorities. Such changes may indicate dynamism rather than lack of focus.
    • Decision-making and participation will be affected by the priorities and decision-making processes of members' own organisations.
    • Over-reaching, or generating unrealistic expectations may drive people away
    • Asking same core people to do more may diminish reach, reduce diversity and encourage burn-out

    V. Learning and Recommendations

    In order to facilitate the optimum working of a network, several factors need to be taken into consideration and certain specific processes have to be incorporated into its regular functioning. These include, for example:

    • Ensuring that the evaluation of the network occurs at periodic intervals with the requisite level of attention to detail and efficiency to enable an in depth recalibration of the functions and processes of the network. To this effect, evaluation specialists must be engaged not just at times of crises or instability but as accompaniments to the various processes undertaken by the network. This would enable a holistic development of the network.
    • It is also important to understand the underlying values that define the unique nature of the network. The coordination of the network, its functions and its activities are intrinsically linked to these values and recognition of this element of the network would enable a greater functionality in the overall operation of the network.
    • A strong relationship between the members of the network, predicated on trust and open dialogue is essential for its efficient functioning. This would allow the accumulation of innovative ideas and dynamic thought to direct the future activities of the network.
    • The secretariat or coordinator of the network must be able to engage the members in monitoring and evaluating the progress of the network. One method of enabling this coordination is the institution of 'participant observer' methods at international conferences or meetings, which allow the members of the network to report back on the work they have done that is linked to the work of other members.
    • The autonomy of a network and its decentralized mechanism of functioning are integral to retain the individuality of its members, who seek to pursue institutional objectives. The members seek to facilitate creative thinking and share ideas and this must be supported by financial resources. A strong bond of trust between the members of a network is therefore essential to enable long term commitments and the flourishing of interpersonal communication between members.
    • It is important that the subject area of operation of the network be comprehensively defined before the network comes into existence.
    • As seen with the experience of Canadian knowledge networks, it is beneficial to be selective in inviting participants to the network; following a rigorous process of review and selection ensures that only the best candidates are selected, facilitating effective partnerships with other networks on the strength of demonstrable expertise within a particular field.
    • The management of a network must be disciplined, with clearly demarcated project deadlines and an optimum level of transparency and accountability. At the helm of leadership of every successful network, there has been intelligent, decisive and facilitative exchange, which is essential in securing a durable and potentially expandable space for the network to operate in.

    A. Canadian Perspectives

    A study of Canadian experiences was conducted by examining the Centers of Excellence and the Networks of Centers of Excellence (NCEs), which were funded through three federal granting councils.[53] An initial observation made through the course of this study was that each network is intrinsically different and no uniform description fits all of them. The objectives of the Networks of Centers of Excellence Program are, broadly, as follows: to encourage fundamental and applied research in fields which are critical to the economic development of Canada; to encourage the development and retention of world-class scientists and engineers specializing in essential technologies; to manage multidisciplinary, cross-sectoral national research programs which integrate stakeholder priorities through established partnerships; and finally, to accelerate the exchange of research results within networks through technology transfers made to users for social and economic development. [54] Extensive interviews carried out in the course of the research conducted by the ARA Consulting Group Inc. drew particularly relevant conclusions with respect to the NCEs.

    Firstly, they have been able to produce significant "cultural shifts" among the researchers associated with the network. This is attributed to the network facilitating a collaborative effort amongst researchers, as opposed to their previous way of working, which was largely in isolation. The benefits of this collaboration include providing innovative ideas and leading the research itself in unprecedented directions. This has the effect of equipping Canada with the capability to compete globally in its research endeavors. The culture shift has also made researchers more aware of the problems that plague industry and has instigated more in-depth research into the development of the industrial sector. Government initiatives that have attempted to cohesively apply academic research to industry have had limited success; the NCEs, however, have managed to break down the barriers between these two seemingly disparate fields. This has resulted in a faster and more effective system of knowledge dissemination, leading to durable and self-sustaining economic development. The NCEs have also been able to contribute to healthcare, wellness and overall sustainable development through their cross-sectoral research approach, a model that can be used worldwide.

    Another tangible effect has been that the relationship between industry and academic research is evolving into a positive and collaborative exchange, as opposed to the previous state, which was largely isolationist, bordering on confrontational.[55] A possible cause of this is the increased representation of companies in the establishment of networks, resulting in them influencing the course of research. This has not been met with any resistance from academic researchers, who are driven by the imperative of open publication. [56] Besides influencing the style of management, industrial representation has also brought about an increase in the level of private sector financial contributions made to NCEs. It is believed that these NCEs may even be able to support themselves in the next 7-8 years through the funding they receive from the commercialization of their research.

    A third benefit that has emerged is the faster rate of production of new knowledge and innovative thinking. This is the result of collaborative techniques made more efficient through the use of modern technology. The increasing number of multi-authored, cross-institutional scholarly publications produced by the NCEs is evidence of this trend. The rate and quantity of technology transfers has also increased exponentially as a result. Knowledge networks also facilitate the mobilization of human resources and address cross-disciplinary problems, resulting in efficient and synergistic solutions. Their low-cost, fast-paced approach has been instrumental in constructing an understanding of, and capacity to engage in, sustainable development.

    The significant contributions to sustainable development include the Canadian Genetic Diseases Network, which has discovered two specific genes that cause early onset Alzheimer's disease. The Sustainable Forest Management Network has claimed that its research has a considerable level of influence on the industrial approach to sustainability. The Canadian Bacterial Disease Network conducts research on bacterially caused diseases which are mostly prevalent in developing countries, with a view to producing antibiotics and vaccines that may be able to successfully combat these diseases. TeleLearning, another such network, is working on the creation of software environments which will form the basis of technology-based education in the future. [57] The greatest advantage of these knowledge networks is that they have been able to surpass traditional disciplinary barriers and have emerged at the forefront of interdisciplinary articulation, which is emerging as the path to future breakthroughs in the fields of applied science and technology. The NCEs have also been able to provide diverse working environments for graduate students, who have been able to work under scientists with different specializations and across different departments. They have also been able to interact with government and industry representatives, giving them far greater exposure to the field and equipping them to avail of a wide range of employment opportunities.

    The corporate style of management incorporated within the NCEs encourages a sense of discipline and an enthusiasm for innovation. The board of directors at an NCE functions much like a typical corporate board. Researchers are therefore required to provide regular reports and meet deadlines to achieve predetermined goals that have been agreed upon. The new paradigm of sustainable development and the fluid transfer of knowledge requires this structure of management, even within a previously strictly academically oriented environment. NCEs have been incorporated as non-profit corporations largely for legal reasons, such as the ownership of intellectual property.

    Participation in these networks is restricted and is by invitation only, in the form of a submission of project proposals under a particular theme, with the final selection being made subject to a rigorous process of evaluation. This encourages the participants of the network to maintain a degree of discipline and carry out their activities in a constructive, time-bound manner.

    B. Perceived Challenges

    These knowledge networks, although extremely beneficial in the long run, do have certain specific issues that need to be addressed. Firstly, most formal knowledge networks do not have a formalized communication strategy. While they do make use of various forms of telecommunication, this communication is in no way formally directed or specific. Although some networks have managed to set up a directed communications strategy, supplemented by the involvement of specifically communications-based networks (such as CANARIE), there is still a long way to go in this area.

    As is evident with most academic endeavors in recent years, efficient and sustained development, both economic and in terms of self-sustenance, requires a smooth transition to close collaboration with industry. Although the NCEs have made progress in this area, a lesson that can be learnt is that knowledge networks do require a collaborative arrangement between researchers, industry and the financial sector. [58] The nature of this collaboration cannot be predicted before tangible research outputs are developed that reflect the relevance of academia to the industrial and financial sectors. One network, PENCE, has mandated that its board of directors include a representative of the financial sector. This is a step forward in opening the doors to greater collaboration and mutually assured growth and sustainable development in academia as well as the industrial and financial sectors.

    As with all knowledge networks, there is a continuous need to expand the focus areas to cover more fields and instigate research in neglected areas. The largest number of networks has been in the fields of healthcare and health-associated work. However, there is a pressing need for networks to be established in other fields as well, such as those related to environmental issues, social dynamics and the general quality of life. [59]

    The Canadian experience has resulted in a nuanced understanding of the specific actions that need to be taken to strengthen knowledge networks across the spectrum. Firstly, building new knowledge networks will require strengthening the institutions upon which the networks are based. These include universities and research institutions, which have been weakened both financially and academically over the past few years. The NCE Program, on the face of it, seems to strengthen universities by attracting funding for research endeavors that would otherwise not be available to them. While this may be true, it tends to obfuscate the true nature of a university as an intellectual community by portraying it as a funding source for research and equipment.[60] The competition posed by the NCEs also tends to threaten the university's stature as a multi-disciplinary and graduate institution, further eroding its role in fostering research and laying the foundation of an intellectual community. Another aspect that needs to be considered is the role of knowledge networks in fostering sustainable development not only on a national or regional scale but on a global level. This can be effectuated by allowing the amalgamation of academia and industry through ample representation, a model that has proven to be effective within the NCEs. This is all the more relevant today, when multinational corporations hold considerable sway over the global economy, so much so that the role of governments in regulating this economy is gradually decreasing. Multilateral investment treaties and agreements are reflective of this.

    The final issue is the long-standing debate between public good and proprietary knowledge. Canadian knowledge networks are of the opinion that knowledge must be freely disseminated. However, certain networks, including the NCEs, grant the exclusive right to develop and apply this knowledge to specific industry affiliates. On the one hand, this facilitates further investment into the research, which creates better products, new jobs and further social development. But it rests on a fine balance of allowing this development without widening the already disparate socio-economic gaps that exist between developed and developing countries. Thus the balance between public good and proprietary knowledge must be effectively managed, through the regulatory role discharged by governments and the decision-making faculties of these knowledge networks. [61]

    Establishing international linkages across networks based in different regions of the world would also be a means of ensuring effective partnerships and the creation of a new, self-sustaining structure. This would bring new prospects of funding into sustainable development activities and engage industrial affiliates with international development activities.

    C. Donor Perspectives

    The International Development Research Centre (IDRC), based in Canada, has also been instrumental in setting up support structures for networks. The IDRC has remained consistent in its emphasis on networks as mechanisms for linking scientists engaged with similar problems across the globe, rather than as mechanisms to fund research in individual countries. This has afforded the IDRC a greater level of flexibility in responding to the needs of developing countries, as well as to the financial pressures within Canada to deliver superior technical support with a reduction in overheads. The IDRC sees networking as an indispensable aspect of pursuing science and adapting technology in the most effective manner. It currently supports four specific types of networks: horizontal networks, which link together institutions with similar areas of specialization; vertical networks, which work on disparate aspects of the same problem or on different but interrelated problems; information networks, which provide a centralized information service to members, enabling them to exchange information as necessary; and training networks, which provide supervisory services to independent participants within the network.[62]

    (I) Internal Evaluations

    There is an outstanding need to monitor visits that are undertaken by the coordinator or the specific representatives of the member or donor as applicable. This would expedite the process of identifying problems and aid in deriving tangible solutions in an efficient manner. The criteria for the assessment would vary depending on the goals of the organization. Donors may pose questions with respect to the cost effectiveness of a particular pattern of research and may seek a formal report regarding this aspect. A more extensive model of donor evaluations may even include assessments with respect to the monitoring and coordination of specific functions.

    (II) External Evaluations

    A system of external evaluation would be useful in assessing data with respect to the operations of programs and their objectives. It would also engage newer participants by injecting fresh ideas and insights into the management and scope of the network. The most extensive method of network evaluation is one that was postulated by Valverde [63] and reviewed by Faris [64]. It aims to analyse the particular constraints and specific elements that influence the execution of network programs. This method identifies a list of threats, opportunities, strengths and weaknesses which inform future recommendations. The Valverde method makes use of both formal and informal data, varied depending on the type of network and the management structure it employs.[65]

    (III) Financial Viability

    A network almost always requires external resources to aid in the setting up and coordination of its activities. Donor agencies must recognize the long-term commitment that is required in this respect. It is therefore essential that the period for which this funding will be made available be clarified at the outset, to leave agencies with ample time to plan for the possibility of cessation of external financial support. [66] As concluded from the findings of the research study, although most networks are offered external support, it is primarily technology transfer and information networks that have been able to generate the bulk of funding in this respect. They have been able to obtain this financial assistance from a variety of sources, including participating organizations as well as governments. [67] Funding for purely research-oriented networks, however, is inconsistent, and such networks have to plan in advance for a possible cessation of financial support.[68]

    (IV) Adaptability

    From the perspective of donors, the degree of adaptability and level of responsiveness of a particular network is especially relevant in assessing the coordination, control and leadership of a particular network. A network that is plagued by ineffective leadership and the lack of coordination is unable to adapt to changing circumstances and meet the needs of its participants. A combination of collaborative effort, a localized approach and far-sighted leadership instills in the participants of the network a sense of comfort in its processes and in the donors a faith in its ability to address topical issues and remain relevant.

    (V) The Exchange of Information

    As noted by Akhtar, a network is created to respond to the growing need to improve channels of information exchange and communication. [69] Information needs to be tailored to suit its users and must be disseminated accordingly. The study concluded that information networks engaged in the transfer of technology are inefficient at disseminating internally derived information and at recognizing the needs of their users.[70] Given that these networks are especially user-oriented, this systemic failure is extremely problematic. There is also a need to review the mechanism of transferring strategic research techniques and the approaches employed in dealing with developing countries. Special attention must be paid to the beneficiaries of a particular network so that the research conducted is directed towards that particular demographic. This is especially relevant for information networks, which, from the evaluation, appear to be generating data without considering who would be using these services.[71]

    (VI) Capacity Building

    Facilitating the training of individuals, both formally and informally, has led to an enhanced level of research and reporting, as well as of project design. There is, however, a need to tailor this training to suit the needs of the participants of a particular network. Networks which have been able to provide inputs that are not ordinarily available locally have instigated the establishment of national and regional institutions. [72]

    (VII) Cost Effectiveness

    It is important to note, however, that networks need to employ the most cost-effective mechanism for delivering support services to national programs. A network must work in a manner that allows for enough individual enterprise but at the same time follows a collaborative model, in order to generate more effective and relevant research within a short span of time and with minimum resources. The Caribbean Technology Consultancy Services (CTCS), for example, was found to be far more cost effective, and in fact 50% cheaper, than the services of the United Nations Industrial Development Organization. [73] Similarly, the evaluators of the LAAN found that funding a network was significantly cheaper than funding individual research projects.[74]


    [1] Castells, Manuel (2000) "Toward a Sociology of the Network Society" Contemporary Sociology, Vol 29 (5) p693-699

    [2] Reinicke, Wolfgang H & Francis Deng, et al (2000) Critical Choices: The United Nations, Networks and the Future of Global Governance, IDRC, Ottawa

    [3] Supra n. 1, p. 697

    [4] Ibid

    [5] Supra n.1, p.61

    [6] Chambers, Robert (1997) Whose Reality Counts? Putting the First Last, Intermediate Technology Publications, London

    [7] Ibid

    [8] Chisholm, Rupert F (1998) Developing Network Organizations: Learning from Practice and Theory, Addison Wesley

    [9] Brown, L. David (1993) "Development Bridging Organizations and Strategic Management for Social Change" Advances in Strategic Management 9

    [10] Madeline Church et al, Participation, Relationships and Dynamic change: New Thinking On Evaluating The Work Of International Networks Development Planning Unit, University College London (2002), p. 16

    [11] Ibid

    [12] Ibid

    [13] Reinicke, Wolfgang H & Francis Deng, et al (2000) Critical Choices: The United Nations, Networks and the Future of Global Governance, IDRC, Ottawa, p.61

    [14] Ibid

    [15] Ibid

    [16] Supra n.13, p. 65

    [17] Ibid

    [18] Supra n. 13, p. 62

    [19] Ibid

    [20] Supra n. 13, p. 63

    [21] Ibid

    [22] Supra n. 13, p. 64

    [23] Newell, Sue & Jacky Swan (2000) "Trust and Inter-organizational Networking" in Human Relations, Vol 53 (10)

    [24] Sheppard, Blair H & Marla Tuchinsky (1996) "Micro-OB and the Network Organisation" in Kramer, R. and Tyler, T. (eds) Trust in Organisations, Sage

    [25] Powell, Walter W (1996) "Trust-based forms of governance" in Kramer, R. and Tyler, T. (eds) Trust in Organisations, Sage

    [26] Stern, Elliot (2001) "Evaluating Partnerships: Developing a Theory Based Framework", Paper for European Evaluation Society Conference 2001, Tavistock Institute

    [27] Freedman, Lynn & Jan Reynders (1999) "Developing New Criteria for Evaluating Networks" in Karl, M. (ed) Measuring the Immeasurable: Planning Monitoring and Evaluation of Networks, WFS

    [28] Allen Nan, Susan (1999) "Effective Networking for Conflict Transformation", Draft Paper for International Alert/UNHCR Working Group on Conflict Management and Prevention

    [29] Supra n. 10, p. 20

    [30] Ibid

    [31] Taylor, James (2000) "So Now They Are Going To Measure Empowerment!", paper for INTRAC 4th International Workshop on the Evaluation of Social Development, Oxford, April

    [32] Karl, Marilee (2000) Monitoring And Evaluating Stakeholder Participation In Agriculture And Rural Development Projects: A Literature Review, FAO

    [33] Supra n. 10, p.25

    [34] Ibid

    [35] Supra n. 10, p. 26

    [36] Ibid

    [37] Supra n. 10, p.27

    [38] Ludema, James D, David L Cooperrider & Frank J Barrett (2001) "Appreciative Inquiry: the Power of the Unconditional Positive Question" in Reason, P. & Bradbury, H. (eds) Handbook of Action Research, Sage

    [39] Ibid

    [40] Supra n. 10, p. 29

    [41] Ibid

    [42] Ibid

    [43] Sida (2000) Webs Women Weave, Sweden, 131-135

    [44] Ibid

    [45] Dutting, Gisela & Martha de la Fuente (1999) "Contextualising our Experiences: Monitoring and Evaluation in the Women's Global Network for Reproductive Rights" in Karl, M. (ed) Measuring the Immeasurable: Planning Monitoring and Evaluation of Networks, WFS

    [46] Supra n. 10, p. 30

    [47] Supra n. 10, p. 32

    [48] Allen Nan, Susan (1999) "Effective Networking for Conflict Transformation", Draft Paper for International Alert/UNHCR Working Group on Conflict Management and Prevention

    [49] Supra n. 13, p. 67

    [50] Supra n. 13, p. 68

    [51] Supra n. 10, p. 36

    [52] See Madeline Church et al, Participation, Relationships and Dynamic change: New Thinking On Evaluating The Work Of International Networks Development Planning Unit, University College London (2002), p. 36-37

    [53] The three granting councils are: the Natural Sciences and Engineering Research Council (NSERC), the Social Sciences and Humanities Research Council (SSHRC), and the Medical Research Council (MRC).

    [54] Howard C. Clark, Formal Knowledge Networks: A Study of Canadian Experiences, International Institute for Sustainable Development 1998, p. 16

    [55] Ibid, p. 18

    [56] Ibid, p. 18

    [57] Ibid, p. 19

    [58] Ibid , p 21

    [59] Ibid , p. 22

    [60] Ibid, p. 31

    [61] Ibid

    [62] Terry Smutylo and Saidou Koala, Research Networks: Evolution and Evaluation from a Donor's Perspective, p. 232

    [63] Valverde, C. 1988, Agricultural research networking : Development and evaluation, International Services for National Agricultural Research, The Hague, Netherlands. Staff Notes (18-26 November 1988)

    [64] Faris, D.G 1991, Agricultural research networks as development tools: Views of a network coordinator, IDRC, Ottawa, Canada, and International Crops Research Institute for the Semi-Arid Tropic, Patancheru, Andhra Pradesh, India

    [65] Supra n. 62

    [66] Terry Smutylo and Saidou Koala, Research Networks: Evolution and Evaluation from a Donor's Perspective, p. 233

    [67] ibid

    [68] Ibid

    [69] Akhtar, S. 1990. Regional Information Networks : Some Lessons from Latin America. Information Development 6 (1) : 35-42

    [70] Ibid, p. 242

    [71] Ibid, p. 242

    [72] Ibid., p. 243

    [73] Stanley, J.L and Elwela, S.S.B 1988, Evaluation report for the Caribbean Technology Consultancy Services (CTCS), CTCS Network Project (1985-1988) IDRC Ottawa, Canada

    [74] Moreau, L. 1991, Evaluation of Latin American Aquaculture Network, IDRC, Ottawa, Canada

    Summary of the Public Consultation by Vigyan Foundation, Oxfam India and G.B. Pant Institute, Allahabad

    by Vipul Kharbanda last modified Jan 28, 2016 03:22 PM
    On December 22nd and 23rd a public consultation was organized by the Vigyan Foundation, Oxfam India and G.B. Pant Institute, Allahabad at the GB Pant Social Science Institute, Allahabad to discuss the issues related to making Allahabad into a Smart City under the Smart City scheme of the Central Government. An agenda for the same is attached herewith.

    The Centre for Internet and Society, Bangalore (CIS) is researching  the 100 Smart City Scheme from the perspective of Big Data and is seeking to understand the role of Big Data in smart cities in India as well as the impact of the generation and use of the same. CIS is also examining whether the current legal framework is adequate to deal with these new technologies. It was in this background that CIS attended a part of the workshop.

    At the outset the organizers noted that there would be no discussion on technology and its adoption in this particular workshop. The format involved a speaker providing his/her viewpoint on the topic concerned, and the discussion revolved mainly around problems relating to traffic, parking, roads, drainage, etc.; there was no discussion of technology or how to utilise it to solve these problems. From the discussions CIS has had with people closely involved with these public consultations, our impression is that the solutions to these problems are not very complicated and require only some intent and execution, which, if achieved, would go a long way in improving the infrastructure of the city. This perspective raises the question of whether India needs 'Smart Cities' to improve the lives of residents, or whether basic urban solutions are adequate and are in fact needed to lay the foundation for any potential smart city that might be established in the future.

    It is quite interesting to see the difference in the levels at which the debate on smart cities is happening: when the central government talks about smart cities it highlights technology and aspects such as smart meters, smart grids, etc., while the discussion on the ground in the actual cities is currently at a much more basic stage. For example, the government website for the smart city project, while describing a smart city, mentions a number of “smart solutions” such as “electronic service delivery”, “smart meters” for water, “smart meters” for electricity, “smart parking”, “Intelligent Traffic Management”, “Tele-medicine”, etc. Even in the major public service announcements on the smart city project, the government's effort seems to be to focus on these “smart solutions”, projecting technology as the answer to urban problems. However, those in the cities themselves appear to be more concerned with adequate parking, adequate water supply, proper roads, waste disposal, etc. This difference in approach is representative of the yawning gap between the mindspace of those who conceive and market these schemes on the one hand, and, on the other, those who are tasked with implementing them and the realities of what Indian cities need in order to address problems of infrastructure and functioning. The silver lining in this scenario, at least on a personal level, is that the people on the ground are not blindly turning to technology to solve their problems but are actually trying to look for the best solutions, regardless of whether they are technology-based or not.

    Agenda



    CIS's Comments on the CCWG-Accountability Draft Proposal

    by Pranesh Prakash last modified Jan 29, 2016 03:17 PM
    The Centre for Internet & Society (CIS) gave its comments on the failures of the CCWG-Accountability draft proposal as well as the processes that it has followed.

    We from the Centre for Internet and Society wish to express our dismay at the consistent way in which CCWG-Accountability has completely failed to take critical inputs from organizations like ours (and others, some instances of which have been highlighted in Richard Hill’s submission) into account, and has failed even to capture our concerns and misgivings about the process — as expressed in our submission to the CCWG-Accountability’s 2nd Draft Proposal on Work Stream 1 Recommendations — in any document prepared by the CCWG. We cannot support the proposal in its current form.

    Time for Comments

    We believe, firstly, that the 21-day comment period itself was too short and will effectively result in many groups or categories of people not being able to meaningfully participate in the process, which flies in the face of the values that ICANN claims to uphold. This extremely short period amounts to procedural unsoundness and restrains educated discussion on the way forward, especially given that the draft has altered quite drastically in the aftermath of ICANN55.

    Capture of ICANN and CCWG Process

    The participation in the accountability-cross-community mailing list clearly shows that the process is dominated by developed countries (of the top 30 non-staff posters to the list, 26 were from the ‘WEOG’ UN grouping, with 14 being from the USA, with only 1 from Asia Pacific, 2 from Africa, and 1 from Latin America), by males (27 of the 30 non-staff posters), and by industry/commercial interests (17 of the top 30 non-staff posters).  If this isn’t “capture”, what is?  There is no stress test that overcomes this reality of capture of ICANN by Western industry interests.  The global community is only nominally multistakeholder, while actually being grossly under-representative of the developing nations, women and minority genders, and communities that are not business communities or technical communities.  For instance, of the 1010 ICANN-accredited registrars, 624 are from the United States, and 7 from the 54 countries of Africa.

    Culling statistics from the accountability-cross-community mailing list, we find that of the top 30 posters (excluding ICANN staff):

    • 57% were, as far as one could ascertain from public records, from a single country: the United States of America.
    • 87% were, as far as one could ascertain from public records, participants from countries which are part of the WEOG UN grouping (which includes Western Europe, US, Canada, Israel, Australia, and New Zealand), which only has developed countries. None of those who participated substantively were from the EEC (Eastern European) group and only 1 was from Asia-Pacific and only 1 was from GRULAC (Latin American and Caribbean Group).
    • 90% were male and 3 were female, as far as one could ascertain from public records.
    • 57% were identifiable as primarily being from industry or the technical community, as far as one could ascertain from public records, with only 2 (7%) being readily identifiable as representing governments.

    This lack of global multistakeholder representation greatly damages the credibility of the entire process, since it gains its legitimacy by claiming to represent the global multistakeholder Internet community.
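    For transparency about how figures of this kind can be derived, the short sketch below (in Python, using entirely hypothetical placeholder records rather than the actual mailing-list data) illustrates one way percentage breakdowns of top posters by UN regional group, gender and affiliation could be tabulated.

```python
# Illustrative sketch only: computing percentage breakdowns of mailing-list
# posters by region, gender and affiliation. The records below are
# hypothetical placeholders; in practice each of the top 30 posters would be
# classified from public records as described above.
from collections import Counter

# (un_group, gender, affiliation) -- hypothetical entries, not real data
posters = [
    ("WEOG", "male", "industry"),
    ("WEOG", "male", "government"),
    ("Asia-Pacific", "female", "civil society"),
    # ... the remaining top-30 posters would be listed here ...
]

def breakdown(records, index, label):
    """Print each value's share of the total for one attribute column."""
    total = len(records)
    for value, count in Counter(r[index] for r in records).most_common():
        print(f"{label} - {value}: {count}/{total} = {100 * count / total:.0f}%")

breakdown(posters, 0, "UN group")
breakdown(posters, 1, "gender")
breakdown(posters, 2, "affiliation")
```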

    Bogey of Governmental Capture

    With respect to Stress Test 18, dealing with the GAC, the report proposes that the ICANN Bylaws, specifically Article XI, Section 2, be amended to create a provision where if two-thirds of the Board so votes, they can reject a full GAC consensus advice. This amendment is not connected to the fear of government capture or the fear that ICANN will become a government-led body; given that the advice given by the GAC is non-binding that is not a possibility. Given the state of affairs described in the submission made above, it is clear that for much of the world, their governments are the only way in which they can effectively engage within the ICANN ecosystem. Therefore, nullifying the effectiveness of GAC advice is harmful to the interests of fostering a multistakeholder ecosystem, and contributes to the strengthening of the kind of industry capture described above.

    Jurisdiction

    All discussions on the Sole Designator Model seem predicated on the unflinching certainty of ICANN’s jurisdiction continuing to remain in California, as the legal basis of that model is drawn from Californian corporate law.  To quote the draft report itself, in Annexe 12, it is stated that:

    "Jurisdiction directly influences the way ICANN’s accountability processes are structured and operationalized. The fact that ICANN today operates under the legislation of the U.S. state of California grants the corporation certain rights and implies the existence of certain accountability mechanisms. It also imposes some limits with respect to the accountability mechanisms it can adopt. The topic of jurisdiction is, as a consequence, very relevant for the CCWG-Accountability. ICANN is a public benefit corporation incorporated in California and subject to California state laws, applicable U.S. federal laws and both state and federal court jurisdiction."

    Jurisdiction has been placed within the mandate of WS2, to be dealt with post the transition.  However, there is no analysis in the 3rd Draft on how the Sole Designator Model would continue to be upheld if future Work Stream 2 discussions led to a consensus that there needed to be a shift in the jurisdiction of ICANN. In the event that ICANN shifts to, say, Delaware or Geneva, would there be a basis to the Sole Designator Model in the law?  Therefore this is an issue that needs to be addressed before this model is adopted, else there is a risk of either this model being rendered infructuous in the future, or this model foreclosing open debate and discussion in Work Stream 2.

    Right of Inspection

    We strongly support the incorporation of the right of inspection under this model, as per Section 6333 of the California Corporations Code, as a fundamental bylaw. As there is a severe gap between the claims that ICANN makes about its own transparency and the actual amount of transparency that it upholds, we are of the opinion that the right of inspection needs to be provided to each member of the ICANN community.

    Timeline for WS2 Reforms

    We support the CCWG’s commitment to the review of the DIDP process, which it has committed to enhancing in WS2. Our research on this matter indicates that ICANN has in practice been able to deflect most requests for information. It has regularly invoked its clauses on internal processes and discussions with stakeholders, as well as clauses on protecting the financial interests of third parties (over 50% of the total non-disclosure clauses ever invoked - see chart below), to avoid providing information on pertinent matters such as its compliance audits and reports of abuse to registrars. We believe that even if ICANN is legally a private entity, and not at the same level as a state, it nonetheless plays the role of regulating an enormous public good, namely the Internet. Therefore, there is a great onus on ICANN to be far more open about the information that it provides. Finally, it is extremely disturbing that ICANN has extended full disclosure to only 12% of the requests that it receives. An astonishing 88% of the requests have been denied, partly or otherwise. See "Peering behind the veil of ICANN's DIDP (II)".

    In the present format, there has been little analysis on the timeline of WS2; the report itself merely states that:

    "The CCWG-Accountability expects to begin refining the scope of Work Stream 2 during the upcoming ICANN 55 Meeting in March 2016. It is intended that Work Stream 2 will be completed by the end of 2016."

    Without further clarity and specification of the WS2 timeline, meaningful reform cannot be initiated. Therefore we urge the CCWG to come up with a clear timeline for transparency processes.

    The Internet Has a New Standard for Censorship

    by Jyoti Panday last modified Jan 30, 2016 09:17 AM
    The introduction of the new 451 HTTP Error Status Code for blocked websites is a big step forward in cataloguing online censorship, especially in a country like India where access to information is routinely restricted.

    Featured image credit: span112/Flickr, CC BY 2.0.

    The article was published in the Wire on January 29, 2016. The original can be read here.


    Ray Bradbury’s dystopian novel Fahrenheit 451 opens with the declaration, “It was a pleasure to burn.” The six unassuming words offer a glimpse into the mindset of the novel’s protagonist, ‘the fireman’ Guy Montag, who burns books. Montag occupies a world of totalitarian state control over the media where learning is suppressed and censorship prevails. The title alludes to the ‘temperature at which book paper catches fire and burns,’ an apt reference to the act of violence committed against citizens through the systematic destruction of literature. It is tempting to think about the novel solely as a story of censorship. It certainly is. But it is also a story about the value of intellectual freedom and the importance of information.

    Published in 1953, Bradbury’s story predates home computers, the Internet, Twitter and Facebook, and yet it anticipates the evolution of these technologies as tools for censorship. When the state seeks to censor speech, it uses the most effective and easiest mechanisms available. In Bradbury’s dystopian world, burning books did the trick; in today’s world, governments achieve this by blocking access to information online. The majority of the world’s Internet users encounter censorship, even if the contours of control vary depending on the country’s policies and infrastructure.

    Online censorship in India

    In India, information access blockades have become commonplace and are increasingly enforced across the country to maintain political stability, for economic reasons, in defence of national security or to preserve social values. Last week, the Maharashtra Anti-terror Squad blocked 94 websites that were allegedly radicalising the youth to join the militant group ISIS. Memorably, in 2015 the NDA government’s ham-fisted attempts at enforcing a ban on online pornography resulted in widespread public outrage. Instead of revoking the ban, the government issued yet another vaguely worded and in many senses astonishing order. As reported by Medianama, the revised order delegates to private intermediaries the responsibility of determining whether banned websites should remain unavailable.

    The state’s shifting reasons for blocking access to information are reflective of its tendentious attitude towards speech and expression. Free speech in India is messily contested, and normally the judiciary acts as a check on the executive’s proclivity for banning. For instance, in 2010 the Supreme Court upheld the Bombay High Court’s decision to revoke the ban on the book on Shivaji by American author James Laine, which, according to the state government, contained material promoting social enmity. However, in the context of communications technology, the traditional role of courts is increasingly being passed on to private intermediaries.

    The delegation of authority is evident in the government notifying intermediaries to proactively filter content for ‘child pornography’ in the revised order issued to deal with websites blocked as a result of its crackdown on pornography. Such screening and filtering requires intermediaries to make a determination on the legality of content in order to avoid direct liability. As international best practices such as the Manila Principles on Intermediary Liability point out, such screening is slow and costly, and intermediaries are incentivised to simply limit access to information.

    Blocking procedures and secrecy

    The constitutional validity of Section 69A of the Information Technology Act, 2000 (as amended in 2008), which grants the executive the power to block access to information unchecked and in secrecy, was challenged in Shreya Singhal v. Union of India. Curiously, the Supreme Court upheld Section 69A, reasoning that the provision was narrowly drawn with adequate safeguards, and noted that any procedural inconsistencies may be challenged through writ petitions under Article 226 of the Constitution. Unfortunately, as past instances of blocking under Section 69A reveal, the provisions are littered with procedural deficiencies, amplified manifold by the authorities responsible for interpreting and implementing the orders.

    Problematically, an opaque confidentiality requirement built into the blocking rules mandates secrecy in requests and recommendations for blocking and places written orders outside the purview of public scrutiny. As there is no comprehensive list of blocked websites or of the legal orders, the public has to rely on orders leaked by ISPs or on media reports to understand the censorship regime in India. RTI applications requesting further information on the implementation of these safeguards have at best provided incomplete information.

    Historically, the courts in India have held that Article 19(1)(a) of the Constitution of India is as much about the right to receive information as it is about the right to disseminate it, and that a chilling effect on speech also violates the right to receive information. Therefore, if a website is blocked, citizens have a constitutional right to know the legal grounds on which access is being restricted. Just as the government announces and clarifies the grounds when banning a book, users have a right to know the grounds for restrictions on their speech online.

    Unfortunately, under the present blocking regime in India there is no easy way for a service provider to comply with a blocking order while also notifying users that censorship has taken place. The Blocking Rules require notice to be given to the 'person or intermediary', thus implying that notice may be sent to either the originator or the intermediary. Further, the confidentiality clause raises the presumption that nobody beyond the intermediaries ought to know about a block.

    Naturally, intermediaries interested in self-preservation and in avoiding conflict with the government become complicit in maintaining the secrecy of blocking orders. As a result, it is often difficult to determine why content is inaccessible, and users often mistake censorship for a technical problem in accessing content. Consequently, pursuing legal recourse or trying to hold the government accountable for its censorious activity becomes a challenge. In failing to consider the constitutional merits of the confidentiality clause, the Supreme Court has shied away from addressing the over-broad reach of the executive.

    Secrecy in removing or blocking access is a global problem that places limits on the transparency expected from ISPs. Across many jurisdictions intermediaries are legally prohibited from publicising filtering orders as well as information relating to content or service restrictions. For example, in the United Kingdom, ISPs are prohibited from revealing blocking orders related to terrorism and surveillance. In South Korea, the Korea Communications Standards Commission holds meetings that are open to the public; however, the sheer volume of censorship (close to 10,000 URLs a month) makes meaningful public oversight unwieldy.

    As the Manila Principles note, providing users with an explanation and reasons for placing restrictions on their speech and expression increases civic engagement. Transparency standards will empower citizens to demand that the companies and governments they interact with are more accountable when it comes to content regulation. It is worth noting that for conduits, as opposed to content hosts, it may not always be technically feasible to provide a notice when content is unavailable due to filtering. A new standard helps improve transparency for network-level intermediaries and for websites bound by confidentiality requirements. The recently introduced HTTP status code for legally restricted content is a critical step forward in cataloguing censorship on the Internet.

    A standardised code for censorship

    On December 21, 2015, the Internet Engineering Steering Group (IESG), the body responsible for reviewing and approving the Internet's technical standards, approved the publication of 451, 'An HTTP Status Code to Report Legal Obstacles'. The code provides intermediaries with a standardised way to let users know when a website is unavailable following a legal order. Publishing the code allows intermediaries to be transparent about their compliance with court and executive orders across jurisdictions and is a huge step forward for capturing online censorship. HTTP code 451 was proposed by software engineer Tim Bray, and the code's name is an homage to Bradbury's novel Fahrenheit 451.
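    For an intermediary that chooses to adopt the standard, emitting the code requires only a small change in how a blocked request is answered. The following is a minimal sketch in Python, not a real deployment: the legal-order URL and notice text are placeholders, and the Link header naming the blocking authority follows the 'blocked-by' relation described in the specification.

        # Minimal sketch: an intermediary answering requests for a blocked
        # resource with HTTP 451. The authority URL and notice are placeholders.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        NOTICE = (b"<html><body><h1>451 Unavailable For Legal Reasons</h1>"
                  b"<p>Access to this resource is restricted following a legal order.</p>"
                  b"</body></html>")

        class LegalBlockHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # 451 signals a legal restriction, unlike the generic 403 or 404.
                self.send_response(451, "Unavailable For Legal Reasons")
                # The specification suggests identifying the blocking authority.
                self.send_header("Link", '<https://authority.example/order>; rel="blocked-by"')
                self.send_header("Content-Type", "text/html")
                self.send_header("Content-Length", str(len(NOTICE)))
                self.end_headers()
                self.wfile.write(NOTICE)

        if __name__ == "__main__":
            HTTPServer(("", 8080), LegalBlockHandler).serve_forever()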

    Bray began developing the code after being inspired by a blog post by Terence Eden calling for a censorship error code. The code's official status comes after two years of discussions within the technical community and is a result of campaigning by transparency and civil society advocates who have been pushing for clearer labelling of internet censorship. Initially, the code received pushback from within the technical community, for reasons enumerated by Mark Nottingham, Chair of the IETF HTTP Working Group, in his blog. However, sites soon began using the code on an experimental and unsanctioned basis, and faced with increasing demand and feedback, the code was accepted.

    The HTTP code 451 works as a machine-readable flag and has immense potential as a tool for organisations and users who want to quantify and understand censorship on the internet. Cataloguing online censorship is a challenging, time-consuming and expensive task. The HTTP code 451 circumvents confidentiality obligations built into blocking or licensing regimes and reduces the cost of accessing blocking orders.

    The code creates a distinction between websites blocked following a court or an executive order, and information that is inaccessible due to technical errors. If implemented widely, Bray's new code will help prevent confusion around blocked sites. The code addresses ISPs' misleading and inaccurate usage of Error 403 'Forbidden' (which indicates that the server received and understood the request but refuses to act on it) and Error 404 'Not Found' (which indicates that the requested resource could not be found but may be available again in the future).
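    Because the status code is machine-readable, the distinction can be checked automatically. The sketch below, using the Python requests library against a hypothetical list of URLs, separates legal blocks from ordinary errors; it is illustrative only.

        # Rough sketch: classify responses so legal blocks (451) are not
        # mistaken for ordinary errors (403/404). The URLs are placeholders.
        import requests

        URLS = ["https://example.org/page1", "https://example.org/page2"]

        LABELS = {
            451: "blocked for legal reasons",
            403: "forbidden (server refuses the request)",
            404: "not found",
        }

        for url in URLS:
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException as exc:
                print(f"{url}: network error ({exc})")
                continue
            print(f"{url}: {LABELS.get(resp.status_code, 'status ' + str(resp.status_code))}")
            if resp.status_code == 451 and "Link" in resp.headers:
                # If present, the Link header may name the blocking authority.
                print("  blocked-by:", resp.headers["Link"])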

    Adoption of the new standard is optional, though at present there are no laws in India that prevent intermediaries from doing so. Implementing a standardised machine-readable flag for censorship will go a long way in bolstering the accountability of ISPs that have in the past targeted an entire domain instead of the specified URL. Adoption of the standard by ISPs will also improve the understanding of the burden imposed on intermediaries for censoring and filtering content, as presently there is no clarity on what constitutes compliance. Of course, censorious governments may prohibit the use of the code, for example by issuing an order that specifies not only that a page be blocked, but also precisely which HTTP return code should be used. Such prohibitions should themselves be viewed as evidence of systematic rights violations by totalitarian regimes.

    In India, where access to software code repositories such as Github and Sourceforge is routinely restricted, the need for such a code is obvious. The use of the code will improve confidence in blocking practices, allowing users to understand the grounds on which their right to information is being restricted. Improving transparency around censorship is the only way to build trust between the government and its citizens about the laws and policies applicable to internet content.

    Nature of Knowledge

    by Scott Mason — last modified Jan 30, 2016 11:42 AM

    Introduction

    In 2008 Chris Anderson infamously proclaimed the 'end of theory'. Writing for Wired Magazine, Anderson predicted that the coming age of Big Data would create a 'deluge of data' so large that the scientific methods of hypothesis, sampling and testing would be rendered 'obsolete' [1]. For him and others, the hidden patterns and correlations revealed through Big Data analytics enable us to produce objective and actionable knowledge about complex phenomena not previously possible using traditional methodologies. As Anderson himself put it, 'there is now a better way. Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot' [2] .

    In spite of harsh criticism of Anderson's article from across the academy, his uniquely (dis)utopian vision of the scientific utility of Big Data has since become increasingly mainstream, with regular interventions from politicians and business leaders evangelising about Big Data's potentially revolutionary applications. Nowhere is this bout of data-philia more apparent than in India, where the government recently announced the launch of 'Digital India', a multi-million dollar project which aims to harness the power of public data to increase the efficiency and accessibility of public services [3]. In spite of the ambitious promises associated with Big Data, however, many theorists remain sceptical about its practical benefits and express concern about its potential implications for conventional scientific epistemologies. For them the increased prominence of Big Data analytics in science does not signal a paradigmatic transition to a more enlightened data-driven age, but a hollowing out of the scientific method and an abandonment of causal knowledge in favour of shallow correlative analysis. In response, they emphasise the continued importance of theory and specialist knowledge to science, and warn against what they see as the uncritical adoption of Big Data in public policy-making [4]. In this article I will examine the challenges posed by Big Data technologies to established scientific epistemologies as well as the possible implications of these changes for public policy-making. Beginning with an exploration of some of the ways in which Big Data is changing our understanding of scientific research and knowledge, I will argue that claims that Big Data represents a new paradigm of scientific inquiry are predicated upon a number of implicit assumptions about the nature of knowledge. Through a critique of these assumptions I will highlight some of the potential risks that an over-reliance on Big Data analytics poses for public policy-making, before finally making the case for a more nuanced approach to Big Data, one which emphasises the continued importance of theory to scientific research.

    Big Data: The Fourth Paradigm?

    "Revolutions in science have often been preceded by revolutions in measurement".

    In his book The Structure of Scientific Revolutions, Kuhn describes scientific paradigms as 'universally recognized scientific achievements that, for a time, provide model problems and solutions for a community of researchers'[5]. Paradigms as such designate a field of intelligibility within a given discipline, defining what kinds of empirical phenomena are to be observed and scrutinized, the types of questions which can be asked of those phenomena, how those questions are to be structured, as well as the theoretical frameworks within which the results can be analysed and interpreted. In short, they 'constitute an accepted way of interrogating the world and synthesizing knowledge common to a substantial proportion of researchers in a discipline at any one moment in time'[6]. Periodically, however, Kuhn argues, these paradigms can become destabilised by the development of new theories or the discovery of anomalies that cannot be explained through reference to the dominant paradigm. In such instances, Kuhn claims, the scientific discipline is thrown into a period of 'crisis', during which new ideas and theories are proposed and tested, until a new paradigm is established and gains acceptance from the community.

    More recently, computer scientist Jim Gray adopted and developed Kuhn's concept of the paradigm shift, charting the history of science through the evolution of four broad paradigms: experimental science, theoretical science, computational science and exploratory science [7]. Unlike Kuhn, however, who proposed that paradigm shifts occur as the result of anomalous empirical observations which scientists are unable to account for within the existing paradigm, Gray suggested that transitions in scientific practice are in fact primarily driven by advances and innovations in methods of data collection and analysis. The emergence of the experimental paradigm, according to Gray, can therefore be traced back to ancient Greece and China, when philosophers began to describe their empirical observations using natural rather than spiritual explanations. Likewise, the transition to the theoretical paradigm of science can be located in the 17th century, during which time scientists began to build theories and models which made generalizations based upon their empirical observations. Thirdly, Gray identifies the emergence of a computational paradigm in the latter part of the 20th century, in which advanced techniques of simulation and computational modelling were developed to help solve equations and explore fields of inquiry, such as climate modelling, which would have been impossible using experimental or theoretical methods. Finally, Gray proposed that we are today witnessing a transition to a 'fourth paradigm of science', which he termed the exploratory paradigm. Although it also utilises advanced computational methods, unlike the previous computational paradigm, which developed programs based upon established rules and theories, within this new paradigm scientists begin with the data itself, designing programs to mine enormous databases in search of correlations and patterns, in effect using the data to discover the rules [8].

    The implications of this shift are potentially significant for the nature of knowledge production, and are already beginning to be seen across a wide range of sectors. In the retail sector, for example, data mining and algorithmic analysis are already being used to help predict items that a customer may wish to purchase based upon previous shopping habits[9]. Here, unlike with traditional research methodologies, the analysis does not presuppose or hypothesise a relationship between items which it then attempts to prove through a process of experimentation; instead, the relationships are identified inductively through the processing and reprocessing of vast quantities of data alone. By starting with the data itself, Big Data analysts circumvent the need for predictions or hypotheses about what one is likely to find; as Dyche observes, 'mining Big Data reveals relationships and patterns that we didn't even know to look for'[10]. Similarly, by focussing primarily on the search for correlations and patterns as opposed to causation, Big Data analysts also reject the need for interpretive theory to frame the results; instead, researchers claim the outcomes are inherently meaningful and interpretable by anyone without the need for domain-specific or contextual knowledge. For example, Joh observes how Big Data is being used in policing and law enforcement to help make better decisions about the allocation of police resources. By looking for patterns in the crime data, forces are able to make accurate predictions about the localities and times in which crimes are most likely to occur and dispatch their officers accordingly[11]. Such analysis, according to Big Data proponents, requires no knowledge of the cause of the crime, nor the social or cultural context within which it is being perpetrated; instead predictions and assessments are made purely on the basis of patterns and correlations identified within the historical data by statistical modelling.
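    To make the contrast concrete, the sketch below illustrates the inductive style described above: every pairwise correlation in a dataset is computed without any prior hypothesis, and the strongest relationships are simply read off the result. It assumes a hypothetical transactions.csv of numeric purchase counts and is illustrative only; real retail systems use far more sophisticated association-mining techniques.

        # Illustrative sketch of hypothesis-free pattern mining: compute every
        # pairwise correlation and surface the strongest ones.
        # Assumes a hypothetical transactions.csv with numeric columns.
        import pandas as pd

        df = pd.read_csv("transactions.csv")

        # Correlate every numeric column with every other, with no prior model.
        corr = df.corr(numeric_only=True)

        # Flatten the matrix, keep each pair once, and rank by strength.
        pairs = corr.stack().rename("correlation").reset_index()
        pairs = pairs[pairs["level_0"] < pairs["level_1"]]
        ranked = pairs.reindex(pairs["correlation"].abs()
                                    .sort_values(ascending=False).index)

        # The top rows are the 'patterns we didn't even know to look for'.
        print(ranked.head(10))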

    In summary then, Gray's exploratory paradigm represents a radical inversion of the deductive scientific method, allowing researchers to derive insights directly from the data itself without the use of hypothesis or theory. Thus it is claimed that, by enabling the collection and analysis of datasets of unprecedented scale and variety, Big Data allows analysts to 'let the data speak for itself'[12], providing exhaustive coverage of social phenomena and revealing correlations that are inherently meaningful and interpretable by anyone without the need for specialised subject knowledge or theoretical frameworks.

    For Gray and others this new paradigm is made possible only by the recent exponential increase in the generation and collection of data, as well as the emergence of new forms of data science, known collectively as "Big Data". For them the 'deluge of data' produced by the increase in the number of internet-enabled devices, as well as the nascent development of the internet of things, presents scientists and researchers with unprecedented opportunities to utilise data in new and innovative ways to develop insights across a wide range of sectors, many of which would have been unimaginable even 10 years ago. Furthermore, advances in computational and statistical methods, as well as innovations in data visualization and methods of linking datasets, mean that scientists can now utilise the data available to its full potential, or as Professor Gary King quipped, 'Big Data is nothing compared to a big algorithm'[13].

    These developments in statistical and computational analysis, combined with the velocity, variety and quantity of data available to analysts, have therefore allowed scientists to pursue new types of research, generating new forms of knowledge and facilitating a radical shift in how we think about "science" itself. As Boyd and Crawford note, 'Big Data [creates] a profound change at the levels of epistemology and ethics. Big Data reframes key questions about the constitution of knowledge, the processes of research, how we should engage with information, and the nature and the categorization of reality . . . [and] stakes out new terrains of objects, methods of knowing, and definitions of social life'[14]. For many, these changes in the nature of knowledge production provide opportunities to improve decision-making, increase efficiency and encourage innovation across a broad range of sectors, from healthcare and policing to transport and international development[15]. For others, however, many of the claims of Big Data are premised upon questionable methodological and epistemological assumptions, some of which threaten to impoverish the scientific method and undermine scientific rigour [16].

    Assumptions of Big Data

    Given its bold claims, the allure of Big Data in both the public and private sectors is perhaps understandable. However, despite the radical and rapid changes to research practice and methodology, there has nevertheless been a seeming lack of reflexive and critical reflection concerning the epistemological implications of the research practices used in Big Data analytics. And yet implicit within this vision of the future of scientific inquiry lie a number of important and arguably problematic epistemological and ontological assumptions, most notably:

    - Big Data can provide comprehensive coverage of a phenomenon, capturing all relevant information.

    - Big Data does not require hypotheses, a priori theory, or models to direct the data collection or research questions.

    - Big Data analytics do not require theoretical framing in order to be interpretable. The data is inherently meaningful, transcending domain-specific knowledge, and can be understood by anyone.

    - Correlative knowledge is sufficient to make accurate predictions and guide policy decisions.

    For many, these assumptions are highly problematic and call into question the claims that Big Data makes about itself. I will now look at each one in turn, before proposing their possible implications for Big Data in policy-making.

    Firstly, whilst Big Data may appear to be exhaustive in its scope, it can only be considered to be so in the context of the particular ontological and methodological framework chosen by the researcher. No data set, however large, can capture all information relevant to a given phenomenon. Indeed, even if it were somehow possible to capture all relevant quantifiable data within a specific domain, Big Data analytics would still be unable to fully account for the multifarious variables which are unquantifiable or undatafiable. As such, Big Data does not provide an omniscient 'god's-eye view'; instead, much like any other scientific sample, it must be seen to provide the researcher with a singular and limited perspective from which he or she can observe a phenomenon and draw conclusions. It is important to recognise that this vantage point provides only one of many possible perspectives, and is shaped by the technologies and tools used to collect the data, as well as the ontological assumptions of the researchers. Furthermore, as with any other scientific sample, it is also subject to sampling bias and is dependent upon the researcher to make subjective judgements about which variables are relevant to the phenomena being studied and which can be safely ignored.

    Secondly, the claim by Big Data analysts to be able to generate insights directly from the data signals a worrying divergence from the deductive scientific methods which have been hegemonic within the natural sciences for centuries. For Big Data enthusiasts such as Prensky, 'scientists no longer have to make educated guesses, construct hypotheses and models, and test them with data-based experiments and examples. Instead, they can mine the complete set of data for patterns that reveal effects, producing scientific conclusions without further experimentation'[17]. Whereas deductive reasoning begins with general statements or hypotheses and then proceeds to observe relevant data equipped with certain assumptions about what should be observed if the theory is to be proven valid, inductive reasoning conversely begins with empirical observations of specific examples from which it attempts to draw general conclusions. The more data collected, the greater the probability that the general conclusions generated will be accurate; however, regardless of the quantity of observations, no amount of data can ever conclusively prove causality between two variables, since it is always possible that the conclusions may in future be falsified by an anomalous observation. For example, a researcher who had only ever observed the existence of white swans may reasonably draw the conclusion that 'all swans are white'; whilst they would be justified in making such a claim, it would nevertheless be comprehensively disproven the day a black swan was discovered. This is what David Hume called the 'problem of induction'[18], and it strikes at the foundation of Big Data's claims to be able to provide explanatory and predictive analysis of complex phenomena, since any projections made are reliant upon the 'principle of uniformity of nature', that is, the assumption that a sequence of events will always occur as it has in the past. As a result, although Big Data may be well suited to providing detailed descriptive accounts of social phenomena, without theoretical grounding it nevertheless remains unable to prove causal links between variables and is therefore limited in its ability to provide robust explanatory conclusions or give accurate predictions about future events.

    Finally, just as Big Data enthusiasts claim that theory or hypotheses are not needed to guide data collection, so too they insist that human interpretation or framing is no longer required for the processing and analysis of the data. Within this new paradigm, therefore, 'the data speaks for itself'[19], and specialised knowledge is not needed to interpret the results, which are now supposedly rendered comprehensible to anyone with even a rudimentary grasp of statistics. Furthermore, the results, we are told, are inherently meaningful, transcending culture, history or social context and providing pure objective facts uninhibited by philosophical or ideological commitments.

    Initially inherited from the natural sciences, this radical form of empiricism thus presupposes the existence of an objective social reality occupied by static and immutable entities whose properties are directly determinable through empirical investigation. In this way, Big Data reduces the role of social science to the perfunctory calculation and analysis of the mechanical processes of pre-formed subjects, in much the same way as one might calculate the movement of the planets or the interaction of balls on a billiard table. Whilst proponents of Big Data claim that such an approach allows them to produce objective knowledge, by cleansing the data of any kind of philosophical or ideological commitment, it nevertheless has the effect of restricting both the scope and character of social scientific inquiry; projecting onto the field of social research meta-theoretical commitments that have long been implicit in the positivist method, whilst marginalising those projects which do not meet the required levels of scientificity or erudition.

    This commitment to an empiricist epistemology and methodological monism is particularly problematic in the context of studies of human behaviour, where actions cannot be calculated and anticipated using quantifiable data alone. In such instances, a certain degree of qualitative analysis of social, historical and cultural variables may be required in order to make the data meaningful by embedding it within a broader body of knowledge. The abstract and intangible nature of these variables requires a great deal of expert knowledge and interpretive skill to comprehend. It is therefore vital that the knowledge of domain specific experts is properly utilized to help 'evaluate the inputs, guide the process, and evaluate the end products within the context of value and validity'[20].

    Despite these criticisms, however, Big Data is, perhaps unsurprisingly, becoming increasingly popular within the business community, lured by the promise of cheap and actionable scientific knowledge capable of making operations more efficient, reducing overheads and producing better, more competitive services. Perhaps most alarming from the perspective of Big Data's epistemological and methodological implications, however, is the increasingly prominent role Big Data is playing in public policy-making. As I will now demonstrate, whilst Big Data can offer useful inputs into public policy-making processes, the methodological assumptions implicit within Big Data pose a number of risks to the effectiveness as well as the democratic legitimacy of public policy-making. Following an examination of these risks I will argue for a more reflexive and critical approach to Big Data in the public sector.

    Big Data and Policy-Making: Opportunities and Risks

    In recent years Big Data has begun to play an increasingly important role in public policy-making. Across the globe, government-funded projects designed to harvest and utilise vast quantities of public data are being developed to help improve the efficiency and performance of public services as well as better inform policy-making processes. At first glance, Big Data would appear to be the holy grail for policy-makers: enabling truly evidence-based policy-making, based upon pure and objective facts, undistorted by political ideology or expedience. Furthermore, in an era of government debt and diminishing budgets, Big Data promises not only to produce more effective policy, but also to deliver on the seemingly impossible task of doing more with less, improving public services whilst simultaneously reducing expenditure.

    In the Indian context, the government's recently announced 'Digital India' project promises to harness the power of public data to help modernise India's digital infrastructure and increase access to public services. The use of Big Data is seen as central to the project's success. However, despite the commendable aspirations of Digital India, many commentators remain sceptical about the extent to which Big Data can truly deliver on its promises of better, more efficient public services, whilst others have warned of the risk to public policy of an uncritical and hasty adoption of Big Data analytics [21]. Here I argue that the epistemological and methodological assumptions implicit within the discourse around Big Data threaten to undermine the goal of evidence-based policy-making and, in the process, widen already substantial digital divides.

    It has long been recognised that science and politics are deeply entwined. For many social scientists the results of social research can never be entirely neutral, but are conditioned by the particular perspective of the researcher. As Sheila Jasanoff observed, 'Most thoughtful advisers have rejected the facile notion that giving scientific advice is simply a matter of speaking truth to power. It is well recognized that in thorny areas of public policy, where certain knowledge is difficult to come by, science advisers can offer at best educated guesses and reasoned judgments, not unvarnished truth' [22]. Nevertheless, 'unvarnished truth' is precisely what Big Data enthusiasts claim to be able to provide. For them the capacity of Big Data to derive results and insights directly from the data, without any need for human framing, allows policy-makers to incorporate scientific knowledge directly into their decision-making processes without worrying about the 'philosophical baggage' usually associated with social scientific research.

    However, in order to be meaningful, all data requires a certain level of interpretative framing. As such, far from cleansing science of politics, Big Data simply acts to shift responsibility for the interpretation and contextualisation of results away from domain experts, who possess the requisite knowledge to make informed judgements regarding the significance of correlations, to bureaucrats and policy-makers, who are more likely to emphasise those results and correlations which support their own political agenda. Thus, whilst the discourse around Big Data may promote the notion of evidence-based policy-making, in reality the vast quantities of correlations generated by Big Data analytics act simply to broaden the range of 'evidence' from which politicians can choose to support their arguments, giving new meaning to Mark Twain's witticism that there are 'lies, damn lies, and statistics'.

    Similarly, for many, an over-reliance on Big Data analytics for policy-making risks leading to public policy which is blind to the unquantifiable and intangible. As already discussed above, Big Data's neglect of theory and contextual knowledge in favour of strict empiricism marginalises qualitative studies which emphasise the importance of traditional social scientific categories such as race, gender and religion, in favour of a purely quantitative analysis of relational data. For many, however, consideration of issues such as gender, race and religious sensitivity can be just as important to good public policy-making as quantitative data, helping to contextualise the insights revealed in the data and provide more explanatory accounts of social relations. They warn that neglect of such considerations as part of policy-making processes can have significant implications for the quality of the policies produced[23]. Firstly, although Big Data can provide unrivalled accounts of "what" people do, without a broader understanding of the social context in which they act, it fundamentally fails to deliver robust explanations of "why" people do it. This problem is especially acute in the case of public policy-making since, without any indication of the motivations of individuals, policy-makers can have no basis upon which to intervene to incentivise more positive outcomes. Secondly, whilst Big Data analytics can help decision-makers design more cost-effective policy, by for example ensuring better use of scarce resources, efficiency and cost-effectiveness are not the only metrics by which good policy can be judged. Public policy, regardless of the sector, must consider and balance a broad range of issues during the policy process, including matters such as race, gender and community relations. Normative and qualitative considerations of this kind are not subject to a simplistic 1-0 quantification but instead require a great deal of contextual knowledge and insight to navigate successfully.

    Finally, to the extent that policy-makers are today attempting to harvest and utilise individual citizens' personal data as direct inputs for the policy-making process, Big Data driven policy can in a very narrow sense be considered to offer a rudimentary form of direct democracy. At first glance this would appear to help democratise political participation, allowing public services to become automatically optimised to better meet the needs and preferences of citizens without the need for direct political participation. In societies such as India, however, where there exist high levels of inequality in access to information and communication technologies, there remain large discrepancies in the quantities of data produced by individuals. In a Big Data world in which every byte of data is collected, analysed and interpreted in order to make important decisions about public services, those who produce the greatest amounts of data are better placed to have their voices heard the loudest, whilst those who lack access to the means to produce data risk becoming disenfranchised, as policy-making processes become configured to accommodate the needs and interests of a privileged minority. Similarly, using user-generated data as the basis for policy decisions also leaves systems vulnerable to coercive manipulation. That is, once it has become apparent that a system has been automated on the basis of user inputs, groups or individuals may change their behaviour in order to achieve a certain outcome. Given these problems it is essential that, in seeking to utilise new data resources for policy-making, we avoid an uncritical adoption of Big Data techniques and instead, as I argue below, encourage a more balanced and nuanced approach to Big Data.

    Data-Driven Science: A More Nuanced Approach?

    Although an uncritical embrace of Big Data analytics is clearly problematic, it is not immediately obvious that a stubborn commitment to traditional knowledge-driven deductive methodologies would necessarily be preferable. Whilst deductive methods have formed the basis of scientific inquiry for centuries, the particular utility of this approach is largely derived from its ability to produce accurate and reliable results in situations where the quantities of data available are limited. In an era of ubiquitous data collection however, an unwillingness to embrace new methodologies and forms of analysis which maximise the potential value of the volumes of data available would seem unwise.

    For Kitchin and others, however, it is possible to reap the benefits of Big Data without compromising scientific rigour or the pursuit of causal explanations. Challenging the 'either or' propositions which favour either scientific modelling and hypothesis or data correlations, Kitchin instead proposes a hybrid approach which utilises the combined advantages of inductive, deductive and so-called 'abductive' reasoning to develop theories and hypotheses directly from the data[24]. As Patrick W. Gross commented, 'In practice, the theory and the data reinforce each other. It's not a question of data correlations versus theory. The use of data for correlations allows one to test theories and refine them' [25].

    Like the radical empiricism of Big Data, 'data-driven science', as Kitchin terms it, introduces an aspect of inductivism into the research design, seeking to develop hypotheses and insights 'born from the data' rather than 'born from theory'. Unlike the empiricist approach, however, the identification of patterns and correlations is not considered the ultimate goal of the research process. Instead these correlations simply form the basis for a new type of hypothesis generation, before more traditional deductive testing is used to assess the validity of the results. Put simply, therefore, rather than interpreting the data deluge as the 'end of theory', data-driven science instead attempts to harness its insights to develop new theories using alternative data-intensive methods of theory generation.
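    A toy sketch of this hybrid workflow is given below, assuming a hypothetical survey.csv of numeric variables with no missing values: correlations are mined inductively on an exploration split to generate a candidate hypothesis, which is then tested deductively on held-out data rather than accepted at face value.

        # Toy sketch of data-driven science: mine correlations on one split of
        # the data to generate a hypothesis, then test it on held-out data.
        # Assumes a hypothetical survey.csv with numeric, complete columns.
        import pandas as pd
        from scipy.stats import pearsonr

        df = pd.read_csv("survey.csv")

        # Split the data: one half for exploration, the other for confirmation.
        explore = df.sample(frac=0.5, random_state=0)
        confirm = df.drop(explore.index)

        # Inductive step: find the strongest pairwise correlation in the exploration set.
        corr = explore.corr(numeric_only=True).stack()
        corr = corr[corr.index.get_level_values(0) < corr.index.get_level_values(1)]
        var_a, var_b = corr.abs().idxmax()
        print(f"Candidate hypothesis: {var_a} is associated with {var_b}")

        # Deductive step: test the candidate relationship on unseen data.
        r, p_value = pearsonr(confirm[var_a], confirm[var_b])
        print(f"Held-out correlation r={r:.2f}, p={p_value:.3f}")
        # A low p-value supports, but does not prove, the hypothesised link;
        # interpretation still requires theory and contextual knowledge.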

    Furthermore, unlike the new empiricism, data is not collected indiscriminately from every available source in the hope that the sheer size of the dataset will unveil some hidden pattern or insight. Instead, in keeping with more conventional scientific methods, various sampling techniques are utilised, 'underpinned by theoretical and practical knowledge and experience as to whether technologies and their configurations will capture or produce appropriate and useful research material'[26]. Similarly, analysis of the data, once collected, does not take place within a theoretical vacuum, nor are all relationships deemed to be inherently meaningful; instead existing theoretical frameworks and domain-specific knowledge are used to help contextualise and refine the results, identifying those patterns that can be dismissed as well as those that require closer attention.

    Thus, for many, data-driven science provides a more nuanced approach to Big Data, allowing researchers to harness the power of new sources of data whilst also maintaining the pursuit of explanatory knowledge. In doing so, it can help to avoid the risks of an uncritical adoption of Big Data analytics for policy-making, providing new insights while retaining the 'regulating force of philosophy'.

    Conclusion

    Since the publication of The Structure of Scientific Revolutions, Kuhn's notion of the paradigm has been widely criticised for producing a homogeneous and overly smooth account of scientific progress, one which ignores the clunky and often accidental nature of scientific discovery and innovation. Indeed, the notion of the 'paradigm shift' is in many ways typical of a self-indulgent and somewhat egotistical tendency amongst many historians and theorists to interpret events contemporaneous to themselves as being of great historical significance. Historians throughout the ages have always perceived themselves as living through periods of great upheaval and transition. In actual fact, as many have noted, history, and the history of science in particular, rarely advances in a linear or predictable way, nor can progress, when it does occur, be so easily attributed to specific technological innovations or theoretical developments. As such, we should remain very sceptical of claims that Big Data represents a historic and paradigmatic shift in scientific practice. Such claims exhibit more than a hint of technological determinism and often ignore the substantial limitations of Big Data analytics. In contrast to these claims, it is important to note that technological advances alone do not drive scientific revolutions; the impact of Big Data will ultimately depend on how we decide to use it as well as the types of questions we ask of it.

    Big Data holds the potential to augment and support existing scientific practices, creating new insights and helping to better inform public policy-making processes. However, contrary to the hyperbole surrounding its development, Big Data does not represent a silver bullet for intractable social problems, and if adopted uncritically and without consideration of its consequences, Big Data risks not only diminishing scientific knowledge but also jeopardising our privacy and creating new digital divides. It is critical therefore that we see through the hyperbole and headlines to reflect critically on the epistemological consequences of Big Data as well as its implications for policy-making, a task which, unfortunately, in spite of the pace of technological change, is only just beginning.

    Bibliography

    Anderson C (2008) The end of theory: The data deluge makes the scientific method obsolete. Wired, 23 June 2008. Available at: http://www.wired.com/science/discoveries/magazine/16-07/pb_theory (accessed 31 October 2015).

    Bollier D (2010) The Promise and Peril of Big Data. The Aspen Institute. Available at: http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_Promise_and_Peril_of_Big_Data.pdf (accessed 19 October 2015).

    Bowker, G., (2013) The Theory-Data Thing, International Journal of Communication 8 (2043), 1795-1799

    Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679.

    Cukier K (2010) Data, data everywhere. The Economist, 25 February (accessed 5 November 2015).

    Department of Electronics and Information Technology (2015) Digital India, [ONLINE] Available at: http://www.digitalindia.gov.in/. [Accessed 13 December 15].

    Dyche J (2012) Big data 'Eurekas!' don't just happen, Harvard Business Review Blog. 20 November. Available at: http://blogs.hbr.org/cs/2012/11/eureka_doesnt_just_ happen.html

    Hey, T., Tansley, S., and Tolle, K (eds)., (2009) The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond: Microsoft Research, pp. xvii-xxxi.

    Hilbert, M. Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

    Hume, D., (1748), Philosophical Essays Concerning Human Understanding (1 ed.). London: A. Millar.

    Jasanoff, S., (2013) Watching the Watchers: Lessons from the Science of Science Advice, Guardian 8 April 2013, available at: http://www.theguardian.com/science/political-science/2013/apr/08/lessons-science-advice

    Joh, E. 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 85: 35, (2014) https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

    Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

    Kuhn T (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

    Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

    McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

    Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/

    Prensky M (2009) H. sapiens digital: From digital immigrants and digital natives to digital wisdom. Innovate 5(3), Available at: http://www.innovateonline.info/index.php?view=article&id=705

    Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

    Shaw, J., (2014) Why Big Data is a Big Deal, Harvard Magazine March-April 2014, available at: http://harvardmagazine.com/2014/03/why-big-data-is-a-big-deal



    [1] Anderson, C (2008) "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete", WIRED, June 23 2008, www.wired.com/2008/06/pb-theory/

    [2] Ibid.,

    [3] Department of Electronics and Information Technology (2015) Digital India, [ONLINE] Available at: http://www.digitalindia.gov.in/. [Accessed 13 December 15].

    [4] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679; Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

    [5] Kuhn T (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

    [6] Ibid.,

    [7] Hey, T., Tansley, S., and Tolle, K (eds)., (2009) The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond: Microsoft Research, pp. xvii-xxxi.

    [8] Ibid.,

    [9] Dyche J (2012) Big data 'Eurekas!' don't just happen, Harvard Business Review Blog. 20 November. Available at: http://blogs.hbr.org/cs/2012/11/eureka_doesnt_just_ happen.html

    [10] Ibid.,

    [11] Joh. E, (2014) 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 85: 35, https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

    [12] Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

    [13] King quoted in Shaw, J., (2014) Why Big Data is a Big Deal, Harvard Magazine March-April 2014, available at: http://harvardmagazine.com/2014/03/why-big-data-is-a-big-deal

    [14] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679.

    [15] Joh, E. 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 85: 35, (2014) https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1; Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014); Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/; Hilbert, M. Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

    [16] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679; Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

    [17] Prensky M (2009) H. sapiens digital: From digital immigrants and digital natives to digital wisdom. Innovate 5(3), Available at: http://www.innovateonline.info/index.php?view=article&id=705

    [18] Hume, D., (1748), Philosophical Essays Concerning Human Understanding (1 ed.). London: A. Millar.

    [19] Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

    [20] McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

    [21] Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

    [22] Jasanoff, S., (2013) Watching the Watchers: Lessons from the Science of Science Advice, Guardian 8 April 2013, available at: http://www.theguardian.com/science/political-science/2013/apr/08/lessons-science-advice

    [23] Bowker, G., (2013) The Theory-Data Thing, International Journal of Communication 8 (2043), 1795-1799

    [24] Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

    [25] Gross quoted in Ibid.,

    [26] Ibid.,

    Facebook's Fall from Grace: Arab Spring to Indian Winter

    by Sunil Abraham last modified Feb 11, 2016 03:51 PM
    Facebook's Free Basics has been permanently banned in India! The Indian telecom regulator, TRAI, has issued the world's most stringent net neutrality regulation! To be more accurate, there is more to come from TRAI in terms of net neutrality regulations, especially for throttling and blocking, but if the discriminatory tariff regulation is anything to go by, we can expect quite a tough regulatory stance against other net neutrality violations as well.

    The article was published in First Post on February 9, 2016. It can be read here.


    Even the regulations it cites in the Explanatory Memorandum don’t go as far as it does. The Dutch regulation will have to be reformulated in light of the new EU regulations and the Chilean regulator has opened the discussion on an additional non-profit exception by allowing Wikipedia to zero-rate its content in partnership with telecom operators.

    Bravo to Nikhil Pahwa, Apar Gupta, Raman Chima, Kiran Jonnalagadda and the thousands of volunteers at Save The Internet and associated NGOs, movements, entrepreneurs and activists who mobilized millions of Indians to stand up and petition TRAI to preserve some of the foundational underpinnings of the Internet. And finally, bravo to Facebook for having completely undermined any claim to responsible stewardship of our information society through its relentless, shrill and manipulative campaign filled with staggeringly preposterous lies. Having completely lost the trust of the Indian public and policy-makers, Facebook only has itself to blame for polarizing what was quite a nuanced debate in India through its hyperbole, and for setting the stage for this firm action by TRAI.

    And most importantly, bravo to RS Sharma and his team at TRAI for the notification of the "Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016", aka the differential pricing regulations. The regulation exemplifies six regulatory best practices that I briefly explore below.

    Transparency and Agility: Two months from start to finish, what an amazing turnaround! TRAI was faced with unprecedented public outcry as well as comments and counter-comments. Despite visible and invisible pressures, from the initial temporary ban on Free Basics onwards, RS Sharma's calm, collected and clear interactions with different stakeholders resulted in him regaining the credibility that was lost with the publication of the earlier consultation paper on the Regulatory Framework for Over-the-top (OTT) services. Despite being completely snowed under electronically by what Rohin Dharmakumar dubbed Facebook's DDOS attack, he gave Facebook one last opportunity to do the right thing, which it of course spectacularly blew.

    Brevity and Clarity: The regulation fits onto three A4-sized pages and is a joy to read. Clarity is often a result of brevity, though not always. At the core of this regulation is a single sentence which prohibits discriminatory tariffs on the basis of content unless it is a "data service over closed electronic communications network". And unlike many other laws and regulations, this regulation has only one exemption for offering or charging discriminatory tariffs, and that is for "emergency services" or during a "grave public emergency". Even the best lawyers will find it difficult to drive trucks through that one. Even if imaginative engineers architect a technical circumvention, TRAI says "if such a closed network is used for the purpose of evading these regulations, the prohibition will nonetheless apply". Again, a clear signal that the spirit is more important than the letter of the regulation when it comes to enforcement.

    Certainty and Equity: Referencing the noted scholar Barbara van Schewick, TRAI explains that a case-by-case approach based on principles [standards] or rules would "fail to provide much needed certainty to industry participants…..service providers may refrain from deploying network technology" and perversely "lead to further uncertainty as service providers undergoing [the] investigation would logically try to differentiate their case from earlier precedents". Our submission from the Centre for Internet and Society had called for more exemptions, but TRAI went with a much cleaner solution as it did not want to provide "a relative advantage to well-financed actors and will tilt the playing field against those who do not have the resources to pursue regulatory or legal actions".

    What next? Hopefully the telecom operators and Facebook will have the grace to abide by the regulation without launching a legal challenge. And hopefully TRAI will issue equally clear regulations on throttling and blocking to conclude the "Regulatory Framework for Over-the-top Services" consultation process. Critically, TRAI must forbear from introducing any additional regulatory burdens on OTTs, a.k.a. Internet companies, based on unfounded allegations of regulatory arbitrage. There are some legitimate concerns around issues like taxation and liability, but those have to be addressed by other arms of the government. To address the digital divide, there are other issues outside net neutrality, such as shared spectrum, unlicensed spectrum and shared backhaul infrastructure, that TRAI must also prioritize for regulation and deregulation.

    Without doubt other regulators from the global south will be inspired by India’s example and will hopefully take firm steps to prevent the rise of additional and unnecessary gatekeepers and gatekeeping practices on the Internet. The democratic potential of the Internet must be preserved through enlightened and appropriate regulation informed by principles and evidence.


    The writer is Executive Director, Centre for Internet and Society, Bengaluru. He says CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world.

    Database on Big Data and Smart Cities International Standards

    by Vanya Rakesh last modified Feb 11, 2016 03:49 PM
    The Centre for Internet and Society is in the process of mapping international standards specifically around Big Data, IoT and Smart Cities. Here is a living document containing a database of some of these key globally accepted standards.

    1. International Organisation for Standardization: ISO/IEC JTC 1 Working Group on Big Data (WG 9)

    ● Background

    - The International Organization for Standardization /International Electrotechnical Commission (ISO/IEC) Joint Technical Committee (JTC) 1, Information Technology announced the creation of a Working Group (WG) focused on standardization in connection with big data.

    - JTC 1 is the standards development environment where experts come together to develop worldwide standards on Information and Communication Technology (ICT) for integrating diverse and complex ICT technologies.[1]

    - The American National Standards Institute (ANSI) holds the secretariat to JTC 1, and the ANSI-accredited U.S. Technical Advisory Group (TAG) Administrator to JTC 1 is the InterNational Committee for Information Technology Standards (INCITS) [2], an ANSI member and accredited standards developer (ASD). INCITS has established a technical committee on Big Data to serve as the US Technical Advisory Group (TAG) to JTC 1/WG 9 on Big Data, pending approval of a New Work Item Proposal (NWIP). The INCITS Big Data committee will address standardization in the areas assigned to JTC 1/WG 9. [3]

    - Under U.S. leadership, WG 9 on Big Data will serve as the focus of JTC 1's big data standardization program.

    ● Objective

    - To identify standardization gaps.

    - Develop foundational standards for Big Data.

    - Develop and maintain liaisons with all relevant JTC 1 entities

    - Grow the awareness of and encourage engagement in JTC 1 Big Data standardization efforts within JTC 1. [4]

    ● Status

    - JTC 1 appointed Mr. Wo Chang to serve as Convenor of the JTC 1 Working Group on Big Data.

    - The WG has set up a Study Group on Big Data.

    2. International Organisation for Standardization: ISO/IEC JTC 1 Study group on Big Data

    ● Background

    - The ISO/IEC JTC1 Study Group on Big Data (JTC1 SGBD) was created by Resolution 27 at the November, 2013 JTC1 Plenary at the request of the USA and other national bodies for consideration of Big Data activities across all of JTC 1.

    - A Study Group (SG) is an ISO mechanism by which the convener of a Working Group (WG) under a sub-committee appoints a smaller group of experts to do focused work in a specific area, concentrating attention on a major topic and expanding the manpower of the committee.

    - The goal of an SG is to create a proposal suitable for consideration by the whole WG, and it is the WG that will then decide whether and how to progress the work.[5]

    ● Objective

    JTC 1 established the Study Group on Big Data to consider Big Data activities across all of JTC 1, with the following objectives:

    - Mapping the existing landscape: Map the existing ICT landscape for key technologies and relevant standards/models/studies/use cases and scenarios for Big Data from JTC 1, ISO, IEC and other standards setting organizations,

    - Identify key terms: Identify key terms and definitions commonly used in the area of Big Data,

    - Assess status of big data standardization: Assess the current status of Big Data standardization market requirements, identify standards gaps, and propose standardization priorities to serve as a basis for future JTC 1 work, and

    - Provide a report with recommendations and other potential deliverables to the 2014 JTC 1 Plenary. [6]

    ● Current Status

    - The study group released a preliminary report in 2014, which can be accessed here: http://www.iso.org/iso/big_data_report-jtc1.pdf.

    3. The National Institute of Standards and Technology Big Data Interoperability Framework:

    ● Background

    - NIST is leading the development of a Big Data Technology Roadmap which aims to define and prioritize requirements for interoperability, portability, reusability, and extensibility for big data analytic techniques and technology infrastructure to support secure and effective adoption of Big Data.

    - To help develop the ideas in the Big Data Technology Roadmap, NIST created the Big Data Public Working Group, which released the seven volumes of the Big Data Interoperability Framework on September 16, 2015.[7]

    ● Objective

    - To advance progress in Big Data, the NIST Big Data Public Working Group (NBD-PWG) is working to develop consensus on important, fundamental concepts related to Big Data.

    ● Status

    - The results are reported in the NIST Big Data Interoperability Framework series of volumes. Under the framework, seven volumes have been released by NIST, available here:

    http://bigdatawg.nist.gov/V1_output_docs.php

    4. IEEE Standards Association

    ● Background:

    - The IEEE Standards Association has introduced a number of standards related to big-data applications.

    ● Status:

    The following standard is under development:

    - IEEE P2413

    "IEEE Standard for an Architectural Framework for the Internet of Things (IoT)" defines the relationships among devices used in industries, including transportation and health care. It also provides a blueprint for data privacy, protection, safety, and security, as well as a means to document and mitigate architecture divergence.[8]

    5. ITU

    ● Background:

    - The International Telecommunication Union (ITU) has announced its first standard for big data services, Recommendation ITU-T Y.3600 "Big data - cloud computing based requirements and capabilities", recognizing the need for strong technical standards, given the growth of big data, to ensure that processing tools are able to achieve powerful results in the areas of collection, analysis, visualization, and more.[9]

    ● Objective:

    - Recommendation Y.3600 provides requirements, capabilities and use cases of cloud computing based big data, as well as its system context. Cloud computing based big data provides the capabilities to collect, store, analyze, visualize and manage varieties of large volume datasets, which cannot be rapidly transferred and analysed using traditional technologies.[10]

    - It also outlines how cloud computing systems can be leveraged to provide big-data services.

    ● Status:

    - The standard was released in 2015 and is available here: http://www.itu.int/rec/T-REC-Y.3600-201511-I.

    Smart cities

    1. ISO Standards on Smart Cities

    ● Background:

    - ISO, the International Organization for Standardization, established a strategic advisory group for smart cities in 2014, comprising a wide range of international experts, to advise ISO on how to coordinate current and future Smart City standardization activities, in cooperation with other international standards organizations, to benefit the market.[11]

    - Seven countries, China, Germany, UK, France, Japan, Korea and USA, are currently involved in the research.

    ● Objective:

    - The main aims are to formulate a definition of a Smart City

    - Identify current and future ISO standards projects relating to Smart Cities

    - Examine involvement of potential stakeholders, city requirements, potential interface problems. [12]

    ● Status:

    - ISO/TC 268, which is focused on sustainable development in communities, has one working group developing city indicators and another developing metrics for smart community infrastructures. In early 2016 this committee will be joined by another, an IEC systems committee. The first standard produced by ISO/TC 268 is ISO/TR 37150:2014.

    - ISO/TR 37150:2014 Smart community infrastructures -- Review of existing activities relevant to metrics: this standard provides a review of existing activities relevant to metrics for smart community infrastructures. The concept of smartness is addressed in terms of performance relevant to technologically implementable solutions, in accordance with sustainable development and resilience of communities, as defined in ISO/TC 268. ISO/TR 37150:2014 addresses community infrastructures such as energy, water, transportation, waste and information and communications technology (ICT). It focuses on the technical aspects of existing activities which have been published, implemented or discussed. Economic, political or societal aspects are not analyzed in ISO/TR 37150:2014.[13]

    - ISO 37120:2014 provides city leaders and citizens with a set of clearly defined city performance indicators and a standard approach for measuring each. Though some indicators will be more helpful for cities than others, cities can now consistently apply these indicators and accurately benchmark their city services and quality of life against other cities.[14] This new international standard was developed using the framework of the Global City Indicators Facility (GCIF), which has been extensively tested by more than 255 cities worldwide. This is a demand-led standard, driven and created by cities, for cities. ISO 37120 establishes definitions and methodologies for a set of indicators to steer and measure the performance of city services and quality of life. The standard includes a comprehensive set of 100 indicators - of which 46 are core - that measure a city's social, economic, and environmental performance. [15]

    The GCIF global network supports the newly constituted World Council on City Data - a sister organization of the GCI/GCIF - which allows for independent, third-party verification of ISO 37120 data.[16]

    - ISO/TS 37151 and ISO/TR 37152 Smart community infrastructures -- Common framework for development & operation: outlines 14 categories of basic community needs (from the perspective of residents, city managers and the environment) to measure the performance of smart community infrastructures. These are typical community infrastructures like energy, water, transportation, waste and information and communication technology systems, which have been optimized with sustainable development and resilience in mind. [17] The committee responsible for this document is ISO/TC 268, Sustainable development in communities, Subcommittee SC 1, Smart community infrastructures. The objective is to develop international consensus on harmonised metrics to evaluate the smartness of key urban infrastructure.[18]

    - ISO 37101 Sustainable development of communities -- Management systems -- Requirements with guidance for resilience and smartness: By setting out requirements and guidance to attain sustainability with the support of methods and tools including smartness and resilience, it can help communities improve in a number of areas, such as: developing holistic and integrated approaches instead of working in silos (which can hinder sustainability); fostering social and environmental changes; improving health and wellbeing; encouraging responsible resource use; and achieving better governance. [19] The objective is to develop a Management System Requirements Standard reflecting consensus on an integrated, cross-sector approach drawing on existing standards and best practices.

    - ISO 37102 Sustainable development & resilience of communities - Vocabulary. The objective is to establish a common set of terms and definitions for standardization in sustainable development, resilience and smartness in communities, cities and territories, since there is a pressing need for harmonization and clarification. This would provide a common language for all interested parties and stakeholders at the national, regional and international levels and would lead to an improved ability to conduct benchmarks and to share experiences and best practices.

    - ISO/TR 37121 Inventory & review of existing indicators on sustainable development & resilience in cities: a common set of indicators usable by every city in the world and covering most issues related to sustainability, resilience and quality of life in cities. [20]

    - ISO/TR 12859:2009 gives general guidelines to developers of intelligent transport systems (ITS) standards and systems on data privacy aspects and associated legislative requirements for the development and revision of ITS standards and systems. [21]

    2. International Organisation for Standardization: ISO/IEC JTC 1 Working Group on Smart Cities (WG 11)

    ● Background:

    - The WG serves as the focus of, and proponent for, JTC 1's Smart Cities standardization program, and works on the development of foundational standards for the use of ICT in Smart Cities - including the Smart City ICT Reference Framework and an Upper Level Ontology for Smart Cities - upon which other standards can be developed, to guide Smart Cities efforts throughout JTC 1.[22]

    ● Objective:

    - To develop a set of ICT related indicators for Smart Cities in collaboration with ISO/TC 268.

    - Identify JTC 1 (and other organization) subgroups developing standards and related material that contribute to Smart Cities.

    - Grow the awareness of, and encourage engagement in, JTC 1 Smart Cities standardization efforts within JTC 1.

    ● Status

    - Ms Yuan Yuan is the Convenor of this Working group.

    - The purpose was to provide a report with recommendations to the 2014 JTC 1 Plenary; a preliminary report was submitted. [23]

    3. International Organisation for Standardization: ISO/IEC JTC 1 Study Group (SG1) on Smart Cities

    ● Background:

    - The Study Group (SG) on Smart Cities was established in 2013.[24] SG 1 will explicitly consider the work going on in the following committees: ISO/TMB/AG on Smart Cities, IEC/SEG 1, ITU-T/FG SSC and ISO/TC 268. [25]

    ● Objective :

    - To examine the needs and potentials for standardization in this area.

    ● Status:

    - SG 1 is paying particular attention to monitoring cloud computing activities, which it sees as the key element of the Smart Cities infrastructure. DIN's Information Technology and Selected IT Applications Standards Committee (NIA (www.nia.din.de)) is formally responsible for ISO/IEC JTC1 /SG 1, but an autonomous national mirror committee on Smart Cities does not yet exist and the work is being overseen by DIN's Smart Grid steering body. [26]

    - A preliminary report was released in 2014, available here: http://www.iso.org/iso/smart_cities_report-jtc1.pdf

    4. ITU

    ● Background:

    - ITU members have established an ITU-T Study Group titled "ITU-T Study Group 20: IoT and its applications, including smart cities and communities" [27]

    - ITU-T has also established a Focus Group on Smart Sustainable Cities (FG-SSC).

    ● Objective:

    - The study group will address the standardization requirements of Internet of Things (IoT) technologies, with an initial focus on IoT applications in smart cities.

    - The focus group shall assess the standardization requirements of cities aiming to boost their social, economic and environmental sustainability through the integration of information and communication technologies (ICTs) in their infrastructures and operations.

    - The Focus Group will act as an open platform for smart-city stakeholders - such as municipalities; academic and research institutes; non-governmental organizations (NGOs); and ICT organizations, industry forums and consortia - to exchange knowledge in the interests of identifying the standardized frameworks needed to support the integration of ICT services in smart cities.[28]

    ● Status:

    - The study group will develop standards that leverage IoT technologies to address urban-development challenges.

    - The FG-SSC concluded its work in May 2015 by approving 21 Technical Specifications and Reports. [29]

    - So far, the ITU-T SG 5 FG-SSC has issued the following reports: the technical report "An overview of smart sustainable cities and the role of information and communication technologies"; the technical report "Smart sustainable cities: an analysis of definitions"; the technical report "Electromagnetic field (EMF) considerations in smart sustainable cities"; the technical specification "Overview of key performance indicators in smart sustainable cities"; and the technical report "Smart water management in cities".[30]

    5. PRIPARE Project :

    ● Background:

    - The 7001 - PRIPARE Smart City Strategy aims to ensure that ICT solutions integrated in EIP smart cities will be compliant with future privacy regulation.

    - PRIPARE aims to develop a privacy and security-by-design software and systems engineering methodology, using the combined expertise of the research community and taking into account multiple viewpoints (advocacy, legal, engineering, business).

    ● Objective:

    - The mission of PRIPARE is to facilitate the application of a privacy- and security-by-design methodology that will contribute to the advent of unhindered usage of the Internet, free from disruptions, censorship and surveillance; to support its practice by the ICT research community in preparation for industry practice; and to foster a risk management culture through educational material targeted at a diversity of stakeholders.

    ● Status:

    - Liaison with standards bodies (OASIS and ISO) is currently ongoing so that the methodology becomes a standard.[31]

    6. BSI-UK

    ● Background:

    - In the UK, the British Standards Institution (BSI) has been commissioned by the UK Department of Business, Innovation and Skills (BIS) to conceive a Smart Cities Standards Strategy to identify vectors of smart city development where standards are needed.

    - The standards would be developed through a consensus-driven process under the BSI to ensure good practice is shared between all the actors. [32]

    ● Objective:

    The BIS launched the City's Standards Institute to bring together cities and key industry leaders and innovators:

    - To work together in identifying the challenges facing cities,

    - Providing solutions to common problems, and

    - Defining the future of smart city standards.[33]

    ● Status:

    The following standards and publications help address various issues for a city to become a smart city:

    - The development of a standard on Smart city terminology (PAS 180)

    - The development of a Smart city framework standard (PAS 181)

    - The development of a Data concept model for smart cities (PAS 182)

    - A Smart city overview document (PD 8100)

    - A Smart city planning guidelines document (PD 8101)

    - BS 8904 Guidance for community sustainable development provides a decision-making framework that helps in setting objectives in response to the needs and aspirations of city stakeholders

    - BS 11000 Collaborative relationship management

    - BSI BIP 2228:2013 Inclusive urban design - A guide to creating accessible public spaces.

    7. Spain

    ● Background:

    - AENOR, the Spanish standards developing organization (SDO), has issued two new standards on smart cities: the UNE 178303 and UNE-ISO 37120. These standards joined the already published UNE 178301.

    ● Objective:

    - The texts, prepared by the Technical Committee of Standardization of AENOR on Smart Cities (AEN / CTN 178) and sponsored by the SETSI (Secretary of State for Telecommunications and Information Society of the Ministry of Industry, Energy and Tourism), aim to encourage the development of a new model of urban services management based on efficiency and sustainability.

    ● Status:

    Some of the standards that have been developed are:

    - UNE 178301 on Open Data evaluates the maturity of open data created or held by the public sector, so that it can be reused in the field of Smart Cities.

    - UNE 178303 establishes the requirements for proper management of municipal assets.

    - UNE-ISO 37120 which collects the international urban sustainability indicators.

    - Following the publication of these standards, 12 other draft standards on Smart Cities have just been made public, most of them corresponding to public services such as water, electricity and telecommunications, and multiservice city networks. [34]

    8. China

    ● Background:

    Several national standardization committees and consortia have started standardization work on Smart Cities, including:

    - China National IT Standardization TC (NITS),

    - China National CT Standardization TC,

    - China National Intelligent Transportation System Standardization TC,

    - China National TC on Digital Technique of Intelligent Building and Residence Community of Standardization Administration,

    - China Strategic Alliance of Smart City Industrial Technology Innovation[35]

    ● Objective:

    - In 2014, all the ministries involved in building smart cities in China joined with the Standardization Administration of China to create working groups whose job is to manage and standardize smart city development, though their activities have not been publicized. [36]

    ● Status:

    - China will continue to promote international standards in building smart cities and improve the competitiveness of its related industries in global market.

    - Also, China's Standardization Administration has joined hands with National Development and Reform Commission, Ministry of Housing and Urban-Rural Development and Ministry of Industry and Information Technology in establishing and implementing standards for smart cities.

    - When building smart cities, the country will adhere to ISO 37120, and by 2020 China will establish 50 national standards on smart cities. [37]

    9. Germany

    ● Background :

    - DKE (German Commission for Electrical, Electronic & Information Technologies) and DIN (German Institute for Standardization), members of the European Innovation Partnership (EIP) for Smart Cities and Communities, have developed a joint roadmap and Smart Cities recommendations for action in Germany.

    ● Objective:

    - Its purpose is to highlight the need for standards and to serve as a strategic template for national and international standardization work in the field of smart city technology.

    - The Standardization Roadmap highlights the main activities required to create smart cities. [38]

    ● Status:

    - An updated version of the standardization roadmap was released in 2015. [39]

    10. Poland

    ● Background:

    - A coordination group on Smart and Sustainable Cities and Communities (SSCC) was set up in the beginning of 2014 to monitor any national standardization activities.

    ● Objective:

    - It was decided to put forward a proposal to form a group at the Polish Committee for Standardization (PKN) providing recommendations for smart sustainable city standardization in Poland.

    ● Status:

    It has two thematic groups:

    - GT 1-2 on terminology and Technical Bodies in PKN: its scope covers a collection of English terms and their Polish equivalents related to smart and sustainable development of cities and communities, to allow better communication among various smart city stakeholders. This includes the preparation of the list of Technical Bodies (OT) in PKN involved in standardization activities related to specific aspects of smart and sustainable local development, and making proposals concerning the allocation of standardization work to the relevant OT in PKN.

    - GT 3 for gathering information and the development and implementation of a work programme: its scope includes identifying stakeholders in Poland and gathering information on any national "smart city" initiatives having an impact on environment-friendly development, sustainability and liveability of a city. The group is also tasked with developing a work programme for GZ 1 based on identified priorities for Poland. Finally, its aim is to communicate and disseminate activities to make the results of GZ 1 visible. [40]

    11. Europe

    ● Background:

    - In 2012, the European standardization organizations CEN and CENELEC founded the Smart and Sustainable Cities and Communities Coordination Group (SSCC-CG) to coordinate standardization activities and foster collaboration around standardization work. [41]

    ● Objective:

    - The aim of the CEN-CENELEC-ETSI (SSCC-CG) is to coordinate and promote European standardization activities relating to Smart Cities and to advise the CEN and CENELEC (Technical) and ETSI Boards on standardization activities in the field of Smart and Sustainable Cities and Communities.

    - The scope of the SSCC-CG is to advise on European interests and needs relating to standardization on Smart and Sustainable cities and communities.

    ● Status:

    - Originally conceived to be completed by the end of 2014, SSCC-CG's mandate has been extended by the European standards organizations CEN, CENELEC and ETSI by a further two years and will run until the end of 2016.[42]

    - The SSCC-CG does not develop standards, but reports directly to the management boards of the standardization organizations and plays an advisory role. Current members of the SSCC-CG include representatives of the relevant technical committees, the CEN/CENELEC secretariat, the European Commission, the European associations and the national standardization organizations.[43]

    - CEN/CENELEC/ETSI Joint Working Group on Standards for Smart Grids: The aim of this document is to provide a strategic report which outlines the standardization requirements for implementing the European vision of smart grids, especially taking into account the initiatives by the Smart Grids Task Force of the European Commission. It provides an overview of standards, current activities, fields of action, international cooperation and strategic recommendations.[44]

    12. Singapore

    ● Background:

    - In 2015, SPRING Singapore, the Infocomm Development Authority of Singapore (IDA) and the Information Technology Standards Committee (ITSC), under the purview of the Singapore Standards Council (SSC), laid out an Internet of Things (IoT) Standards Outline in support of Singapore's Smart Nation initiative.

    ● Objective:

    - Recognising the importance of standards in laying the foundation for a nation empowered by big data, analytics technology and sensor networks, in light of Singapore's vision of becoming a Smart Nation.

    ● Status:

    Three types of standards - sensor network standards, IoT foundational standards and domain-specific standards - have been identified under the IoT Standards Outline. Singapore actively participates in the ISO Technical Committee (TC) working on smart city standards.[45]


    [1] ISO/IEC JTC 1, Information Technology, http://www.iso.org/iso/jtc1_home.html

    [2] The InterNational Committee for Information Technology Standards, JTC 1 Working Group on Big Data, http://www.incits.org/committees/big-data

    [3] ISO/IEC JTC 1 Forms Two Working Groups on Big Data and Internet of Things, 27th January 2015, https://www.ansi.org/news_publications/news_story.aspx?menuid=7&articleid=5b101d27-47b5-4540-bca3-657314402591

    [4] JTC 1 November 2014 Resolution 28 - Establishment of a Working Group on Big Data, and Call for Participation, 20th January 2015, http://jtc1sc32.org/doc/N2601-2650/32N2625-J1N12445_JTC1_Big_Data-call_for_participation.pdf

    [5] SD-3: Study Group Organizational Information, https://isocpp.org/std/standing-documents/sd-3-study-group-organizational-information

    [6] ISO/IEC JTC 1 Study Group on Big Data (BD-SG), http://jtc1bigdatasg.nist.gov/home.php

    [7] NIST Released V1.0 Seven Volumes of Big Data Interoperability Framework (September 16, 2015), http://bigdatawg.nist.gov/home.php

    [8] Standards That Support Big Data, Monica Rozenfeld, 8th September 2014, http://theinstitute.ieee.org/benefits/standards/standards-that-support-big-data

    [9] ITU releases first ever big data standards, Madolyn Smith, 21st December 2015, http://datadrivenjournalism.net/news_and_analysis/itu_releases_first_ever_big_data_standards#sthash.m3FBt63D.dpuf

    [10] ITU-T Y.3600 (11/2015) Big data - Cloud computing based requirements and capabilities, http://www.itu.int/itu-t/recommendations/rec.aspx?rec=12584

    [11] ISO Strategic Advisory Group on Smart Cities - Demand-side survey, March 2015, http://www.platform31.nl/uploads/media_item/media_item/41/62/Toelichting_ISO_Smart_cities_Survey-1429540845.pdf

    [12] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

    [13] ISO/TR 37150:2014 Smart community infrastructures -- Review of existing activities relevant to metrics, http://www.iso.org/iso/catalogue_detail?csnumber=62564

    [14] Dissecting ISO 37120: Why this new smart city standard is good news for cities, 30th July 2014, http://smartcitiescouncil.com/article/dissecting-iso-37120-why-new-smart-city-standard-good-news-cities

    [15] World Council for City Data, http://www.dataforcities.org/wccd/

    [16] Global City Indicators Facility, http://www.cityindicators.org/

    [17] How to measure the performance of smart cities, Maria Lazarte, 5th October 2015

    http://www.iso.org/iso/home/news_index/news_archive/news.htm?refid=Ref2001

    [18] http://iet.jrc.ec.europa.eu/energyefficiency/sites/energyefficiency/files/files/documents/events/slideslairoctober2014.pdf

    [19] A standard for improving communities reaches final stage, Clare Naden, 12th February 2015,

    http://www.iso.org/iso/news.htm?refid=Ref1932

    [20] http://iet.jrc.ec.europa.eu/energyefficiency/sites/energyefficiency/files/files/documents/events/slideslairoctober2014.pdf

    [21] ISO/TR 12859:2009 Intelligent transport systems -- System architecture -- Privacy aspects in ITS standards and systems, http://www.iso.org/iso/catalogue_detail.htm?csnumber=52052

    [22] ISO/IEC JTC 1 Information technology, WG 11 Smart Cities, http://www.iec.ch/dyn/www/f?p=103:14:0::::FSP_ORG_ID,FSP_LANG_ID:12973,25

    [23] Work of ISO/IEC JTC1 Smart Cities Study Group, https://interact.innovateuk.org/documents/3158891/17680585/2+JTC1+Smart+Cities+Group/e639c7f6-4354-4184-99bf-31abc87b5760

    [24] JTC1 SAC - Meeting 13 , February 2015, http://www.finance.gov.au/blog/2015/08/05/jtc1-sac-meeting-13-february-2015/

    [25] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

    [26] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

    [27] ITU standards to integrate Internet of Things in Smart Cities, 10th June 2015, https://www.itu.int/net/pressoffice/press_releases/2015/22.aspx

    [28] ITU-T Focus Group Smart Sustainable Cities, https://www.itu.int/dms_pub/itu-t/oth/0b/04/T0B0400004F2C01PDFE.pdf

    [29] Focus Group on Smart Sustainable Cities, http://www.itu.int/en/ITU-T/focusgroups/ssc/Pages/default.aspx

    [30] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

    [31] 7001 - PRIPARE Smart City Strategy, https://eu-smartcities.eu/commitment/7001

    [32] Financing Tomorrow's Cities: How Standards Can Support the Development of Smart Cities, http://www.longfinance.net/groups7/viewdiscussion/72-financing-financing-tomorrow-s-cities-how-standards-can-support-the-development-of-smart-cities.html?groupid=3

    [33] BSI-Smart Cities, http://www.bsigroup.com/en-GB/smart-cities/

    [34] New Set of Smart Cities Standards in Spain, https://eu-smartcities.eu/content/new-set-smart-cities-standards-spain

    [35] Technical Report, M2M & ICT Enablement in Smart Cities, Telecommunication Engineering Centre, Department of Telecommunications, Ministry of Communications and Information Technology, Government of India, November 2015, http://tec.gov.in/pdf/M2M/ICT%20deployment%20and%20strategies%20for%20%20Smart%20Cities.pdf

    [36] Smart City Development in China, Don Johnson, 17th June 2014, http://www.chinabusinessreview.com/smart-city-development-in-china/

    [37] China to continue develop standards on smart cities, 17th December 2015, http://www.chinadaily.com.cn/world/2015wic/2015-12/17/content_22732897.htm

    [38] The German Standardization Roadmap Smart City, April 2014, https://www.dke.de/de/std/documents/nr_smart%20city_en_version%201.0.pdf

    [39] This version of the Smart City Standardization Roadmap, Version 1.1, is an incremental revision of Version 1.0. In Version 1.1, a special focus is placed on giving an overview of current standardization activities and interim results, thus illustrating German ambitions in this area.

    [40] SSCC-CG Final report Smart and Sustainable Cities and Communities Coordination Group, January 2015, https://www.etsi.org/images/files/SSCC-CG_Final_Report-recommendations_Jan_2015.pdf

    [41] Orchestrating infrastructure for sustainable Smart Cities , http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf

    [42] Urbanization- Why do we need standardization?, http://www.din.de/en/innovation-and-research/smart-cities-en

    [43] CEN-CENELEC-ETSI Coordination Group 'Smart and Sustainable Cities and Communities' (SSCC-CG), http://www.cencenelec.eu/standards/Sectors/SmartLiving/smartcities/Pages/SSCC-CG.aspx

    [44] Final report of the CEN/CENELEC/ETSI Joint Working Group on Standards for Smart Grids, https://www.etsi.org/WebSite/document/Report_CENCLCETSI_Standards_Smart%20Grids.pdf

    [45] SPRING Singapore Supported Close to 600 Companies in Standards Adoption, and Service Excellence Projects , 12th August 2015, http://www.spring.gov.sg/NewsEvents/PR/Pages/Internet-of-Things-(IoT)-Standards-Outline-to-Support-Smart-Nation-Initiative-Unveiled-20150812.aspx

    India Electronics Week 2016 & the IoT Show

    by Vanya Rakesh last modified Feb 12, 2016 03:12 AM
    The India Electronics Week 2016 was held at the Bangalore International Exhibition Centre from 11th-13th January 2016, along with Bangalore's biggest IoT Exhibition and Conference, bringing the global electronics industry together. The event also had the EFY Expo 2016, supported by the Department of Electronics and Information Technology & the Ministry of Communications and Information Technology, Government of India.

    Expo

    The show catered to manufacturers, developers and technology leaders interested in the domestic as well as global markets by displaying their products & services. EFY Expo acted as a catalyst for accelerated growth and value addition, acquisition of technology and joint ventures between Indian and global players, to enable the growth of electronics manufacturing in the country.

    Conference
    CIS had the opportunity to attend the conference on Smart Cities on the 13th, with experts discussing smart governance, risk assessment of IoT, the role of IoT in smart cities, and building smart cities with everything as a service.

    The session started with a talk on building secure and flexible IoT platforms, where the need to focus on risk and security was emphasised. Several issues which require attention from the security perspective were raised: first, the focus must be on end-to-end security, with IoT being present everywhere; secondly, there must be IoT-resilient standards addressing authentication and device management, and industry and government must adopt open standards to make the ecosystem flexible; and finally, platforms must be secured and employ encryption to ensure trusted execution of software.

    This was followed by a session on Smart Governance, discussing the changing nature of society, where we see people moving from being connected with people to being connected to devices. From the perspective of smart governance, the talk was divided into segments such as Government to Government, Government to Business, Government to Employees and Government to Citizens. For smart cities, several e-governance initiatives have been undertaken so far, apart from e-delivery of services. After the Smart Cities Mission was announced, the Central Government sent several indicators of smart governance to the State Governments, such as telecare (for example, Karnataka had a telejob portal), smart parking, smart grids, etc. From the business point of view, areas to be considered for building the in-house competence companies need to build efficient and successful smart cities were suggested, some of them being smarter education, buildings, environment, transportation, etc. It was suggested that smart governance can be ensured by regular measurement of outcomes, identification of gaps and analysis of these gaps against clearly laid-down policies. The key challenges to implementation of smart governance include:

    • The inherent IoT challenges
    • Government departments working in silos
    • Lack of clarity in objectives
    • Lack of transparency
    • No standardized platforms
    • Data privacy- the issue of personal data being stored in Government repository
    • Scalable infrastructure
    • Growing population

    A survey was done to study the success rate of e-governance projects in India; it found that 50% of them were complete failures, while 35% were partial failures. It therefore becomes important to consider these challenges, which may create roadblocks to smart governance and raise concerns about projects like smart cities.

    RIOT (Risk Assessment of Internet of Things) - a session to understand security issues in IoT and discuss secure IoT implementation. In smart cities, IoT has huge potential, which may face roadblocks due to the lack of open platforms, the lack of an ecosystem of sensors, gateways and platforms, and the challenges of integration with existing systems. IoT security issues, on the other hand, such as the absence of set standards, lack of motivation for security and little awareness about such issues, need due attention. This requires levels of checks, for example at the IoT surface level: in devices, the cloud or the mobile. Another important area here is data privacy and security for IoT implementation.

    Everything as a service (EaaS) - an insight into what it takes to build a smart city with EaaS, and into the various components that go into this, how they interact and how it can be implemented. This session highlighted the importance of data in a city, as it becomes very useful for providing information about disasters (enabling the Government to plan and act accordingly), traffic in the city, waste levels, a city health map, etc. With multiple actors using the same data, the use of such information in a smart city varies across sectors such as:

    • Smart Government - transparency, accountability and better decision-making
    • Smart Mobility - intelligent traffic management, safer roads
    • Smart Healthcare - health maps, better emergency services
    • Smart Living - safety and security, better quality of life
    • Smart Utilities - resource conservation, resilience
    • Smart Environment - better waste management, air quality monitoring.

    Everything as a service is treated as an attribute or state in which there is a nexus between the users and the state. For this, information is drawn from data already captured, from new data obtained by opening up existing sources such as telecom operators, machines, citizens and hospitals, or from new sensors installed to generate new data. Here, the need for data privacy and government policy was emphasized. For EaaS, there is an urgent need to standardize the interfaces between the sensor network, data publishers, insight providers and service providers in a smart city.

    The conference gave an insight into the industry's perspective on smart cities, along with the actors involved and the issues and challenges envisioned by private companies in the development of smart cities in India. The companies see IoT as an integral part of the project, with data security, privacy and the need to formulate or adopt standards for implementing IoT in new as well as existing structures being key for the Smart Cities Mission in India.

    There is No Such Thing as Free Basics

    by Subhashish Panigrahi last modified Feb 14, 2016 11:37 AM
    India will no longer see the rain of Free Basics advertisements on billboards, with images of farmers and common people explaining how much they could benefit from this Facebook project, because the Telecom Regulatory Authority of India (TRAI) has taken a historic step by banning discriminatory pricing of data services.

    The article was published in Bangalore Mirror on February 9, 2016.


    In its notes, TRAI has explained, "In India, given that a majority of the population are yet to be connected to the Internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users' Internet experience." Not just that: violating this ban would cost Rs 50,000 every day.

    Facebook's earlier plan was to launch Free Basics in India by making a few websites, mostly Facebook's partners, available for free. The company not only advertised heavily on billboards and in commercials across the nation, it also embedded a campaign inside Facebook asking users to vote in support of Free Basics.

    TRAI criticised Facebook for such a manipulative public provocation. Facebook was also heavily criticised by many policy and Internet advocates, including non-profit groups like the Free Software Movement of India and the Savetheinternet.in campaign.

    The latter two collectives strongly discouraged Free Basics by mobilising public opinion; the Savetheinternet.in platform was used to send over 10 lakh emails to TRAI asking it to disallow Free Basics.

    Furthermore, on Republic Day, 500 startups, including major ones like Cleartrip, Zomato, Practo, Paytm and Cleartax, wrote to Prime Minister Narendra Modi requesting continued support for net neutrality, a concept that advocates equal treatment of websites.

    Stand-up comedy groups like AIB and East India Comedy created humorous but informative videos, which went viral, explaining the regulatory debate and supporting net neutrality.

    Technology critic and Quartz writer Alice Truong reacted saying: "Zuckerberg almost portrays net neutrality as a first-world problem that doesn't apply to India because having some service is better than no service."

    On differential pricing, news portal Medianama's founder Nikhil Pahwa, in his opinion piece in the Times of India, emphasised the way Aircel in India, Grameenphone in Bangladesh and Orange in Africa were providing free access to the Internet with the sole motive of expanding access, and criticised Facebook's walled Internet that confines users inside Facebook only.

    Had differential pricing been allowed, it would have adversely affected startups and smaller content-based companies, as they could never have managed to pay the high price to a partner service provider to make their services available for free.

    On the other hand, tech giants like Facebook could have easily managed to capture the entire market. Since its inception, the Facebook-run non-profit Internet.org has run into a lot of controversy because of the hidden motive behind its claimed support for a social cause.

    The regulator's decision has been widely welcomed in the country and outside.

    In support of the move, Renata Avila, Web We Want programme manager at the World Wide Web Foundation, said:

    "As the country with the second largest number of Internet users worldwide, this decision will resonate around the world.

    "It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can't create a two-tier Internet — one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web."

    A Case for Greater Privacy Paternalism?

    by Amber Sinha — last modified Feb 20, 2016 07:28 AM
    This is the second part of a series of three articles exploring the issues with the privacy self management framework and potential alternatives.
     

    The first part of the series can be accessed here.

     

    Background

    The current data privacy protection framework across most jurisdictions is built around a rights-based approach which entrusts the individual with the wherewithal to make informed decisions about her interests and well-being.[1] In his book, The Phantom Public, published in 1925, Walter Lippmann argues that the rights-based approach rests on the idea of a sovereign and omnicompetent citizen who can direct public affairs; however, this idea is a mere phantom or an abstraction. [2] Jonathan Obar, Assistant Professor of Communication and Digital Media Studies in the Faculty of Social Science and Humanities at the University of Ontario Institute of Technology, states that Lippmann's thesis remains equally relevant in the context of current models of self-management, particularly for privacy.[3] In the previous post, Scott Mason and I had looked at the limitations of a 'notice and consent' regime for privacy governance. Having established the deficiencies of the existing framework for data protection, I will now look at some of the alternatives proposed that may serve to address these issues.

    In this article, I will look at paternalistic solutions posed as alternatives to the privacy self-management regime. I will look at theories of paternalism and libertarianism in the context of privacy, with reference to the works of some of the leading philosophers of jurisprudence and political science. The article will attempt to clarify the main concepts and the arguments put forward by both the proponents and opponents of privacy paternalism. The first alternative solution draws on Anita Allen's thesis in her book, Unpopular Privacy,[4] which deals with the question of whether individuals have a moral obligation to protect their own privacy. Allen expands the idea of rights to protect one's own interests and duties towards others into the notion that we may have certain duties not only towards others but also towards ourselves, because of their overall impact on society. In the next section, we will look at the idea of 'libertarian paternalism' as put forth by Cass Sunstein and Richard Thaler[5] and what its impact could be on privacy governance.

    Paternalism

    Gerald Dworkin, Professor Emeritus at the University of California, Davis, defines paternalism as "interference of a state or an individual with another person, against their will, and defended or motivated by a claim that the person interfered with will be better off or protected from harm." [6] Any act of paternalism will involve some limitation on the autonomy of the subject of the regulation, usually without the subject's consent, and premised on the belief that such an act will either improve the subject's welfare or prevent it from diminishing.[7] Seana Shiffrin, Professor of Philosophy and Pete Kameron Professor of Law and Social Justice at UCLA, takes a broader view of paternalism and includes within its scope not only matters which are aimed at improving the subject's welfare, but also the replacement of the subject's judgement about matters which may otherwise have lain legitimately within the subject's control.[8] In that sense, Shiffrin's view is interesting, for it dispenses with both the requirement for active interference and the requirement that such an act be premised on the subject's well-being.

    The central premise of John Stuart Mill's On Liberty is that the only justifiable purpose for exerting power over the will of an individual is to prevent harm to others. "His own good, either physical or moral," according to Mill, "is not a sufficient warrant." However, various scholars over the years have found Mill's absolute prohibition problematic and support some degree of paternalism. John Rawls' Principle of Fairness, for instance, has been argued to be inherently paternalistic. In a nutshell, the aspect of paternalism that makes it controversial is that it involves coercion or interference, which in any theory of normative ethics or political science needs to be justified on the basis of certain identified criteria. Staunch opponents of paternalism believe that this justification can never be met. Most scholars, however, do not argue that all forms of paternalism are untenable, and the bulk of scholarship on paternalism is devoted to formulating the conditions under which this justification is satisfied.

    Paternalism interferes with self-autonomy in two ways, according to Peter de Marneffe, Professor of Philosophy at the School of Historical, Philosophical and Religious Studies, Arizona State University.[9] The first is the prohibition principle, under which a person's autonomy is violated by his being prohibited from making a choice. The second is the opportunity principle, which undermines a person's autonomy by reducing his opportunities to make a choice. Both cases should be predicated upon a finding that the paternalistic act will lead to welfare or greater autonomy. According to de Marneffe, there are three conditions under which such acts of paternalism are justified: the benefits to welfare should be substantial, they should be evident, and they must outweigh the benefits of self-autonomy.[10]

    There are two main strands of argument against paternalism.[11] The first argues that interference with the choices of informed adults will always be an inferior option to letting them decide for themselves, as each person is the 'best judge' of his or her interests. The second strand does not engage with the question of whether paternalism can produce better decisions for individuals, but states that any benefit derived from the paternalist act is outweighed by the harm of violating self-autonomy. Most proponents of soft paternalism build on this premise by trying to demonstrate that not all paternalistic acts violate self-autonomy. There are various forms of paternalism that we do not question despite their interfering with our autonomy - seat belt laws and restrictions on tobacco advertising being a few of them. If we try to locate arguments for self-autonomy in the Kantian framework, autonomy refers not just to the ability to do what one chooses, but to rational self-governance.[12] This theory automatically "opens the door for justifiable paternalism."[13] In this article, I assume that certain forms of paternalism are justified. In the remaining two sections, I will look at two different theories advocating greater paternalism in the context of privacy governance and try to examine the merits of and issues with such measures.

    A moral obligation to protect one's privacy

    Modest Paternalism

    In her book, Unpopular Privacy,[14] Anita Allen states that people do not place enough emphasis on the value of privacy. The right of individuals to exercise their free will and, under the 'notice and consent' regime, give up their rights to privacy as they deem fit is, according to her, problematic. Data protection law in most jurisdictions is designed to be largely value-neutral, in that it does not sit in judgement on the nature of the information being revealed or how the collector uses it. Its primary emphasis is on providing the data subject with information about the above and allowing him to make informed decisions. In my previous post, Scott Mason and I had discussed that as online connectivity becomes increasingly important to participation in modern life, the choice to withdraw completely is becoming less and less of a genuine option.[15] Lamenting that people put little emphasis on privacy and often give away information which, upon retrospection and due consideration, they would feel they ought not to have disclosed, Allen proposes what she calls 'modest paternalism', in which regulations mandate that individuals do not waive their privacy in certain limited circumstances.

    Allen acknowledges the tension between her arguments in favor of paternalism and her avowed support for the liberal ideals of autonomy and of government interference being limited to the extent possible. However, she tries to make a case for greater paternalism in the context of privacy. She begins by categorizing privacy as a "primary good" essential for "self respect, trusting relationships, positions of responsibility and other forms of flourishing." In another article, Allen states that this "technophilic generation appears to have made disclosure the default rule of everyday life."[16] Relying on various anecdotes and examples of individuals' disregard for privacy, she argues that privacy is so "neglected in contemporary life that democratic states, though liberal and feminist, could be justified in undertaking a rescue mission that includes enacting paternalistic privacy laws for the benefit of un-eager beneficiaries." She does state that in most cases it may be more advantageous to educate and incentivise individuals towards making choices that favor greater privacy protection. However, in exceptional cases, paternalism would be justified as a tool to ensure greater privacy.

    A duty towards oneself

    In an article for the Harvard Symposium on Privacy in 2013, Allen states that laws generally provide a framework built around rights of individuals that enable self-protection and duties towards others. G. A. Cohen describes Robert Nozick's views, which represent this libertarian philosophy, as follows: "The thought is that each person is the morally rightful owner of himself. He possesses over himself, as a matter of moral right, all those rights that a slaveholder has over a chattel slave as a matter of legal right, and he is entitled, morally speaking, to dispose over himself in the way such a slaveholder is entitled, legally speaking, to dispose over his slave."[17] As per the libertarian philosophy espoused by Nozick, everyone is licensed to abuse themselves in the same manner slaveholders abused their slaves.

    Allen asks whether there is a duty towards oneself and, if such a duty exists, whether it should be reflected in policy or law. She accepts that a range of philosophers consider the idea of duties to oneself illogical or untenable. [18] Allen, however, relies on the works of scholars such as Lara Denis, Paul Eisenberg and Daniel Kading who have located such a duty. She develops a schematic of two kinds of duties - first-order duties that require that we protect ourselves for the sake of others, and second-order, derivative duties to protect ourselves. Throughout the essay, she relies on the Kantian framework of the categorical imperative to build the moral thrust of her arguments. The Kantian view of paternalism would justify those acts which interfere with an individual's autonomy in order to prevent her from exercising her autonomy irrationally, and to draw her towards rational ends that agree with her conception of the good.[19] However, Allen goes one step further and locates the genesis of duties to both others (perfect duties) and oneself (imperfect duties) in the categorical imperative. Her main thesis is that there are certain situations where we have a moral duty to protect our own privacy, where failure to do so would have an impact on either specific others or society at large.

    Issues

    Having built this interesting and somewhat controversial premise, Allen does not sufficiently expand upon it to present a nuanced solution. She provides a number of anecdotes but does not formulate any criteria for when privacy duties could be self-regarding. Her test for what kinds of paternalistic acts are justified is also extremely broad. She argues for paternalism where it protects privacy rights that "enhance liberty, liberal ways of life, well-being and expanded opportunity." She does not clearly define the threshold at which policy should move from incentives to regulatory mandate, nor does she elaborate upon what forms of paternalism would both serve the purpose of protecting privacy and ensure that there is no unnecessary interference with the rights of individuals.[20]

    Nudge and libertarian paternalism

    What is nudge?

    In 2008, Richard Thaler and Cass Sunstein published their book Nudge: Improving Decisions About Health, Wealth and Happiness. [21] The central thesis of the book is that in order to make most of our decisions, we rely on a menu of options made available to us, and the order and structure of these choices is characterised by Thaler and Sunstein as "choice architecture." According to them, the choice architecture has a significant impact on the choices that we make. The book looks at examples from a food cafeteria, the position of restrooms, and how opt-in versus opt-out choices influence which retirement plans are chosen. This choice architecture influences our behavior without coercion or a set of incentives, as conventional public policy theory would have us expect. The book draws on work done by cognitive scientists such as Daniel Kahneman[22] and Amos Tversky[23] as well as Thaler's own research in behavioral economics. [24] The key takeaway from cognitive science and behavioral economics used in this book is that choice architecture influences our actions in anticipated ways and leads to predictably irrational behavior. Thaler and Sunstein believe that this presents great potential for policy makers: they can tweak the choice architecture in their specific domains to influence the decisions made by its subjects and nudge them towards behavior that is beneficial to them and/or society.

    The great attraction of the argument made by Thaler and Sunstein is that it offers a compromise between forbearance and mandatory regulation. If we identify the two ends of the policy spectrum as a) paternalists, who believe in maximum interference through legal regulations that coerce behavior to meet the stated goals of the policy, and b) libertarians, who believe in free market theory that relies on individuals making decisions in their best interests, 'nudging' falls somewhere in the middle, leading to the oxymoronic yet strangely apt phrase "libertarian paternalism." The idea is to design choices in such a way that they influence decision-making so as to increase individual and societal welfare. In his book, The Laws of Fear, Cass Sunstein argues that the anti-paternalistic position is incoherent, as "there is no way to avoid effects on behavior and choices."

    The proponents of libertarian paternalism answer the commonly posed question about who decides the optimal and desirable results of choice architecture by stating that this form of paternalism does not promote a perfectionist standard of welfare but an individualistic and subjective one. According to them, choices are not prohibited, cordoned off or made to carry significant barriers. However, it is often difficult to conclude what is better for the welfare of people, even from their own point of view. The claim that nudges lead to choices that make people better off by their own standards seems more and more untenable. What nudges do is lead people towards certain broad welfare outcomes which the choice architects believe make people's lives better in the longer term.[25]

    How could nudges apply to privacy?

    Our previous post echoes the assertion made by Thaler and Sunstein that the traditional rational choice theory, which assumes that individuals will make rationally optimal choices in their self-interest when provided with a set of incentives and disincentives, is largely a fiction. We have argued that this assertion holds true in the context of the privacy protection principles of notice and informed consent. Daniel Solove has argued that using insights from cognitive science, particularly the theory of nudge, would be an acceptable compromise between the inefficacy of privacy self-management and the dangers of paternalism.[26] His rationale is that while nudges influence choice, they are not overly paternalistic, in that they still give the individual the option of making choices contrary to those sought by the choice architect. This is an important distinction, and it demonstrates that 'nudging' is less coercive than paternalistic policies as we generally understand them.

    One of the nudging techniques that makes a lot of sense in the context of data protection policy is the use of defaults. It relies on the oft-mentioned status quo bias.[27] Thaler and Sunstein discuss it with respect to encouraging retirement savings plans and organ donation, but it would apply equally to privacy. A number of data collectors have maximum disclosure as their default setting, and users rarely make the effort to understand and change these settings. A rule mandating that data collectors set privacy-protective defaults, so that the most sensitive information is subject to the least degree of disclosure unless the user chooses otherwise, would ensure greater privacy protection.
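
    To make the defaults idea concrete, here is a minimal sketch in Python; the setting names and values are invented for illustration. It contrasts the disclosure-maximising defaults many services ship with against the privacy-protective defaults such a rule would mandate. The user remains free to change any setting, so the nudge lies only in the starting state.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # Hypothetical settings; "everyone"/"public"/True mirror the
    # disclosure-maximising defaults many data collectors ship with.
    location_sharing: str = "everyone"
    profile_visibility: str = "public"
    ad_personalisation: bool = True

def privacy_protective_defaults() -> PrivacySettings:
    """Defaults of the kind such a rule would mandate: the most sensitive
    information gets the least disclosure unless the user opts out."""
    return PrivacySettings(
        location_sharing="nobody",
        profile_visibility="friends_only",
        ad_personalisation=False,
    )

# The nudge lies entirely in which state applies when the user does nothing;
# every setting remains freely changeable, so choice is preserved.
settings = privacy_protective_defaults()
settings.location_sharing = "friends_only"  # an explicit, informed opt-in
print(settings)
```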

    Ryan Calo and Dr. Victoria Groom explored an alternative to the traditional notice and consent regime at the Center for Internet and Society at Stanford Law School.[28] They conducted a two-phase experimental study. In the first phase, a standard privacy notice was compared with a control condition and a simplified notice to see whether improving readability affected users' responses. In the second phase, the notice was compared with five notice strategies, four of which were intended to enhance privacy-protective behavior and one of which was intended to lower it. Shara Monteleone and her team used a similar approach but with a much larger sample size.[29] One of the primary behavioral insights used was that when we perform repetitive activities, including accepting online terms and conditions or privacy notices, we tend to use our automatic or fast thinking instead of our reflective or slow thinking.[30] Changing such behavior requires leveraging individuals' automatic responses.

    Alessandro Acquisti, Professor of Information Technology and Public Policy at the Heinz College, Carnegie Mellon University, has studied the application of methodologies from behavioral economics to investigate privacy decision-making.[31] He highlights a variety of factors that distort decision-making, such as "inconsistent preferences and frames of judgment; opposing or contradictory needs (such as the need for publicity combined with the need for privacy); incomplete information about risks, consequences, or solutions inherent to provisioning (or protecting) personal information; bounded cognitive abilities that limit our ability to consider or reflect on the consequences of privacy-relevant actions; and various systematic (and therefore predictable) deviations from the abstractly rational decision process." Taking the example of social networking sites collecting sensitive information, Acquisti looks at three kinds of policy solutions: a) a hard paternalistic approach, which bans making certain kinds of information visible on the site; b) a usability approach, which entails designing the system in a way that makes it most intuitive and easy for users to decide whether to provide the information; and c) a soft paternalistic approach, which seeks to aid decision-making by providing additional information, such as how many people would have access to the information if provided, and by setting defaults so that the information is not visible to others unless the user explicitly makes it so. The last two approaches are typically cited as examples of nudging approaches to privacy.
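
    As a rough illustration of Acquisti's option (c), the soft paternalistic approach, here is a minimal Python sketch, with invented audience figures and function names, of a flow that keeps information invisible by default and, when the user chooses a wider setting, shows how many people could see it before asking for confirmation.

```python
# Hypothetical audience sizes for each visibility level on a social site.
AUDIENCE = {
    "only_me": 1,
    "friends": 350,
    "friends_of_friends": 42_000,
    "public": 1_500_000_000,
}

def confirm_disclosure(field: str, visibility: str = "only_me") -> bool:
    """Default is the least visible setting; if the user widens it, surface
    the estimated audience before the information is actually saved."""
    if visibility == "only_me":
        return True  # nothing to warn about under the default
    reply = input(
        f"About {AUDIENCE[visibility]:,} people could see your {field}. Share anyway? [y/N] "
    )
    return reply.strip().lower() == "y"

if __name__ == "__main__":
    if confirm_disclosure("phone number", visibility="public"):
        print("Saved with the chosen visibility.")
    else:
        print("Kept private (the default).")
```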

    Another method is to use tools that lead to decreased disclosure of information. For example, tools like the Social Media Sobriety Test[32] or Mail Goggles[33] block selected sites during hours set by the user, during which the user expects to be at their most vulnerable, and the online services remain blocked unless the user can pass a dexterity examination.[34] Rebecca Balebako and her team are building privacy-enhancing tools for Facebook and Twitter that nudge users towards restricting with whom they share their location on Facebook and towards restricting their tweets to a smaller group of people.[35] Ritu Gulia and Dr. Sapna Gambhir have suggested nudges for social networking websites that randomly display pictures of the people who will have access to the information, to emphasise the public or private setting of a post.[36] These approaches try to address the myopia bias, where we choose immediate access to a service over protection from long-term privacy harms.
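
    To show how such a blocking tool might work, here is a minimal Python sketch under assumed details (the blocked hours, the arithmetic challenge and the function names are all invented): posting inside the user's self-declared vulnerable window is allowed only after a small test, which adds friction at the very moment the myopia bias would otherwise operate.

```python
import random
from datetime import datetime, time

# Hypothetical "vulnerable hours" chosen by the user, e.g. late night.
BLOCK_START = time(23, 0)
BLOCK_END = time(5, 0)

def in_blocked_window(now: datetime) -> bool:
    """True if the current time falls inside the user-set window
    (which here wraps around midnight)."""
    t = now.time()
    return t >= BLOCK_START or t <= BLOCK_END

def passes_sobriety_test() -> bool:
    """A stand-in for the dexterity/arithmetic challenges such tools use:
    the user must answer a simple sum correctly to proceed."""
    a, b = random.randint(10, 99), random.randint(10, 99)
    answer = input(f"To post now, solve {a} + {b} = ")
    return answer.strip() == str(a + b)

def allow_post(now: datetime) -> bool:
    if not in_blocked_window(now):
        return True
    # Inside the window: add friction before disclosure.
    return passes_sobriety_test()

if __name__ == "__main__":
    if allow_post(datetime.now()):
        print("Post published.")
    else:
        print("Post held back until outside your blocked hours.")
```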

    The use of nudges as envisioned in the examples above is in some ways an extension of existing research that advocates design standards to make privacy notices more easily intelligible.[37] However, studies show only an insignificant improvement from these methods. Nudging, in that sense, goes one step further: instead of trying to make notices more readable and enable informed consent, the design standard is intended simply to lead to choices that the architects deem optimal.

    Issues with nudging

    One of the primary justifications that Thaler and Sunstein put forward for nudging is that choice architecture is ubiquitous. The manner in which options are presented to us affects how we make decisions, whether it was intended to do so or not, and there is no such thing as a neutral architecture. This inevitability, according to them, makes a strong case for nudging people towards choices that will lead to their well-being. However, this assessment does not support their further argument that libertarian paternalism nudges people towards choices that are better from their own point of view. It is my contention that various examples of libertarian paternalism, as put forth by Thaler and Sunstein, do in fact interfere with our self-autonomy, as the choice architecture leads us not to the options we would choose for ourselves in a fictional neutral environment, but to those options that the architects believe are good for us. This substitution of judgment would satisfy the definition offered by Seana Shiffrin. Second, the fact that there is no such thing as a neutral architecture is not, by itself, justification enough for nudging. If we view the issue only from the point of view of normative ethics, assuming that coercion and interference are undesirable, intentional interference is much worse than unintentional interference.

    However, there are certain nudges that rely primarily on providing information, dispensing advice and rational persuasion.[38] The freedom of choice is preserved in these circumstances. Libertarians may argue that even in these circumstances the shaping of choice is problematic. This issue, J. S. Blumenthal-Barby argues, is adequately addressed by the publicity condition, a concept borrowed by Thaler and Sunstein from John Rawls.[39] The principle states that officials should never use a technique they would be uncomfortable defending to the public; nudging is no exception. However, this seems like a simplistic solution to a complex problem. Nudges are meant to rely on inherent psychological tendencies, leveraging the theories about automatic and subconscious thinking described by Daniel Kahneman in his book Thinking, Fast and Slow.[40] In that sense, while transparency is desirable, it may not be very effective.

    Other commentators also note that while behavioral economics can show why people make certain decisions, it may not be able to reliably predict how people will behave in different circumstances. The burden of extrapolating these observations into meaningful nudges may prove too heavy.[41] However, the most oft-quoted criticism of nudging is that it relies on officials to formulate the desired goals towards which the choice architecture will lead us.[42] The judgments of these officials could be flawed and subject to influence by large corporations.[43] These concerns echo the best judge argument made against all forms of paternalism, mentioned earlier in this essay. J. S. Blumenthal-Barby, Assistant Professor at the Center for Medical Ethics and Health Policy, Baylor College of Medicine, also examines the claim that choice architects will be susceptible to the same biases while designing the choice environment.[44] The first argument in response is that experts who extensively study decision-making may be less prone to these errors. The second is that, even with errors and biases, a choice architecture that attempts to right the wrongs of a random and unstructured choice environment is a preferable option.[45]

    Conclusion

    Most libertarians will find problematic the notion that individuals should be prevented from sharing some information about themselves. Anita Allen's idea of self-regarding duties is at odds with how we understand rights and duties in most jurisdictions. Her attempt to locate an ethical duty to protect one's own privacy, while interesting, is not backed by a formulation of how such a duty would work. While she relies largely on a Kantian framework, her definition of paternalism, as can be drawn from her writing, is broader than that articulated by Kant himself. On the other hand, Thaler and Sunstein's book Nudge and their related writings do attempt to build a framework of how nudging would work, and answer some of the questions they anticipate would be raised against the idea of libertarian paternalism.

    By and large, I feel that Thaler and Sunstein's idea of libertarian paternalism could be justified in the context of privacy and data protection governance. It would be fair to say that the first two of de Marneffe's conditions under which such acts of paternalism are justified[46] are largely satisfied by nudges that ensure greater privacy protection: if nudges can ensure greater privacy protection, their benefits are both substantial and evident. The larger question, however, is whether these purported benefits outweigh the cost of the loss of self-autonomy. Given the numerous ways in which the 'notice and consent' framework is ineffective and leads to very little informed consent, it can be argued that there is little exercise of autonomy to begin with, and hence the loss of self-autonomy is not substantial. Some of the conceptual doubts about the ability of nudges to solve complex problems remain unanswered, and we will have to wait for more analysis by both cognitive scientists and policy-makers. However, given the growing inefficacy of the existing privacy protection framework, it would be a good idea to begin using some insights from cognitive science and behavioral economics to ensure greater privacy protection.

    The current value-neutrality of data protection law with respect to the kind of data collected and its use, and its complete reliance on the data subject to make an informed choice, is, in my opinion, an idea that has run its course. Rather than focussing solely on controls at the stage of data collection, I believe we need a more robust theory of how to govern the subsequent uses of data. This will be the focus of the next part of this series, in which I will look at the greater use of a risk-based approach to privacy protection.



    [1] With invaluable inputs from Scott Mason.

    [2] Walter Lippmann, The Phantom Public, Transaction Publishers, 1925.

    [3] Jonathan Obar, Big Data and the Phantom Public: Walter Lippmann and the fallacy of data privacy self management, Big Data and Society, 2015, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2239188

    [4] Anita Allen, Unpopular Privacy: What we must hide?, Oxford University Press USA, 2011.

    [5] Richard Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness, Yale University Press, 2008.

    [7] Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 29.

    [8] Seana Shiffrin, Paternalism, Unconscionability Doctrine, and Accommodation, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2682745

    [9] Peter de Marneffe, Self Sovereignty and Paternalism, from Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 58.

    [10] Id .

    [11] Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 74.

    [12] Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 115.

    [13] Ibid at 116.

    [14] Anita Allen, Unpopular Privacy: What we must hide?, Oxford University Press USA, 2011.

    [15] Janet Vertasi, My Experiment Opting Out of Big Data Made Me Look Like a Criminal, 2014, available at http://time.com/83200/privacy-internet-big-data-opt-out/

    [16] Anita Allen, Privacy Law: Positive Theory and Normative Practice, available at http://harvardlawreview.org/2013/06/privacy-law-positive-theory-and-normative-practice/ .

    [17] G A Cohen, Self ownership, world ownership and equality, available at http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=3093280

    [19] Michael Cholbi, Kantian Paternalism and suicide intervention, from Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013.

    [20] Eric Posner, Liberalism and Concealment, available at https://newrepublic.com/article/94037/unpopular-privacy-anita-allen

    [21] Richard Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness, Yale University Press, 2008.

    [22] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

    [23] Daniel Kahneman, Paul Slovic and Amos Tversky, Judgment under uncertainty: heuristics and biases, Cambridge University Press, 1982; Daniel Kahneman and Amos Tversky, Choices, Values and Frames, Cambridge University Press, 2000.

    [24] Richard Thaler, Advances in behavioral finance, Russell Sage Foundation, 1993.

    [25] Thaler, Sunstein and Balz, Choice Architecture, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583509.

    [26] Daniel Solove, Privacy self-management and consent dilemma, 2013 available at http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

    [27] Frederik Borgesius, Behavioral sciences and the regulation of privacy on the Internet, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2513771.

    [28] Ryan Calo and Dr. Victoria Groom, Reversing the Privacy Paradox: An experimental study, available at http://ssrn.com/abstract=1993125

    [29] Shara Monteleone et al, Nudges to Privacy Behavior: Exploring an alternative approach to privacy notices, available at http://publications.jrc.ec.europa.eu/repository/bitstream/JRC96695/jrc96695.pdf

    [30] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

    [31] Alessandro Acquisti, Nudging Privacy, available at http://www.heinz.cmu.edu/~acquisti/papers/acquisti-privacy-nudging.pdf

    [34] Rebecca Balebako et al, Nudging Users towards privacy on mobile devices, available at https://www.andrew.cmu.edu/user/pgl/paper6.pdf.

    [35] Id .

    [36] Ritu Gulia and Dr. Sapna Gambhir, Privacy and Privacy Nudges for OSNs: A Review, available at http://www.ijircce.com/upload/2014/march/14L_Privacy.pdf

    [37] Annie I. Anton et al., Financial Privacy Policies and the Need for Standardization, 2004 available at https://ssl.lu.usi.ch/entityws/Allegati/pdf_pub1430.pdf; Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

    [38] Daniel Hausman and Bryan Welch argue that these cases are mistakenly characterized as nudges. They believe that nudges do not try to inform the automatic system, but manipulate the inherent cognitive biases. Daniel Hausman and Bryan Welch, Debate: To Nudge or Not to Nudge, Journal of Political Philosophy 18(1).

    [39] Ryan Calo, Code, Nudge or Notice, available at

    [40] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

    [41] Evan Selinger and Kyle Powys Whyte, Nudging cannot solve complex policy problems.

    [42] Mario J. Rizzo & Douglas Glen Whitman, The Knowledge Problem of New Paternalism, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1310732; Pierre Schlag, Nudge, Choice Architecture, and Libertarian Paternalism, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1585362.

    [43] Edward L. Glaeser, Paternalism and Psychology, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=917383.

    [44] J. S. Blumenthal-Barby, Choice Architecture: A mechanism for improving decisions while preserving liberty?, from Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice, Cambridge University Press, 2013.

    [45] Id .

    [46] According to de Marneffe, there are three conditions under which such acts of paternalism are justified - the benefits of welfare should be substantial, evident and must outweigh the benefits of self-autonomy. Peter de Marneffe, Self Sovereignty and Paternalism, from Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 58.

    Internet Freedom

    by Sunil Abraham and Vidushi Marda — last modified Feb 15, 2016 02:51 AM
    The modern medium of the web is an open-source, democratic world in which equality is an ideal, which is why Internet freedom is what matters most.

    The article by Sunil Abraham and Vidushi Marda was published by Asian Age on February 14, 2016.


    What would have gone wrong if India’s telecom regulator Trai had decided to support programmes like Facebook’s Free Basics and Airtel’s Zero Rating instead of issuing the regulation that prohibits discriminatory tariffs? Here are some possible scenarios, had discriminatory tariffs been allowed, as they still are in some countries.

    Possible impact on elections

    Facebook would have continued to amass its product — eyeballs. Indian eyeballs would be more valuable than others for three reasons. First, Facebook would have an additional layer of surveillance thanks to the Free Basics proxy server, which stores the time, the site URL and the data transferred for all the other destinations featured in the walled garden. Second, as part of Digital India, most government entities would set up Facebook pages, and a majority of the interaction with citizens would happen on social media rather than on the websites of government entities; consequently, Facebook would know what is and what is not working in governance. Third, given the financial disincentive to leave the walled garden, the surveillance would be total.

    What would this mean for democracies? Eight years ago, Facebook began to engineer the News Feed to show more posts of a user’s friends voting in order to influence voting behavior. It introduced the “I’m Voting” button into 61 million users’ feeds during the 2010 US congressional elections to increase voter turnout and found that this kind of social pressure caused people to vote. Facebook has also admitted to populating feeds with posts from friends with similar political views. During the 2012 presidential elections, Facebook was able to increase voter turnout by altering 1.9 million news feeds.

    Indian eyeballs may not be that lucrative in terms of advertising. But these users are extremely valuable to political parties and others interested in influencing elections. Facebook’s notifications to users when their friends signed on to the “Support Free Basics” campaign were configured so that users were informed more often than with other campaigns. In other words, Facebook is not just another player on its own platform. Given that electoral margins are often slim, would Facebook be tempted to try and install a government of its choice in India during the 2019 general elections?

    In times of disasters

    Most people defending Free Basics and defending forbearance as the regulatory response in 2015/16 make the argument that “95 per cent of Internet users in developing countries spend 95 per cent of their time on Facebook”.

    This is not too far from the truth: as LirneAsia demonstrated in 2012, most people using Facebook in Indonesia did not even know they were using the Internet. In other words, they argue that regulators should ignore the fringe user and fringe usage and focus only on the mainstream. The cognitive bias they are appealing to is that smaller numbers are less important.

    Since all the sublime analogies in the Net Neutrality debate have been taken, forgive us for using the scatological. That is the same as arguing that since we spend only 5% of our day in toilets, only 5% of our home’s real estate should be devoted to them.

    Everyone agrees that it is far easier to live in a house without a bedroom than a house without a toilet. Even extremely low probabilities or ‘Black Swan’ events can be terribly important! Imagine you are an Indian at the bottom of the pyramid. You cannot afford to pay for data on your phone and, as a result, you rarely and nervously stray out of the walled garden of Free Basics.

    During a natural disaster you are able to use the Facebook Safety Check feature to mark yourself safe but the volunteers who are organising both offline and online rescue efforts are using a wider variety of platforms, tools and technologies.

    Since you are unfamiliar with the rest of the Internet, you are ill-equipped when you try to organise a rescue for yourself and your loved ones.

    Content and carriage converge

    Some people argue that TRAI should have stayed off the issue since the Competition Commission of India (CCI) is sufficient to tackle Net Neutrality harms. However, it is unclear whether predatory pricing by Reliance, which has only 9 per cent market share, would cross the competition law threshold for market dominance. Interestingly, just before the Trai notification, the Ambani brothers signed a spectrum-sharing pact, and they have been sharing optic fibre since 2013.

    Will a content sharing pact follow these carriage pacts? As media diversity researcher, Alam Srinivas, notes “If their plans succeed, their media empires will span across genres such as print, broadcasting, radio and digital. They will own the distribution chains such as cable, direct-to-home (DTH), optic fibre (terrestrial and undersea), telecom towers and multiplexes.”

    What does this convergence vision of the Ambani brothers mean for media diversity in India? In the absence of net neutrality regulation, could they use their dominance in broadcast media to reduce choice on the Internet? Could they use a non-neutral provisioning of the Internet to increase their dominance in broadcast media? When a single wire or the very same radio spectrum delivers radio, TV, games and Internet to your home, what under competition law will be considered a substitutable product? What would be the relevant market? At the Centre for Internet and Society (CIS), we argue that competition law principles with a lower threshold should be applied to networked infrastructure, through infrastructure-specific non-discrimination regulations like the one that Trai just notified, in order to protect digital media diversity.

    Was an absolute prohibition the best response for TRAI? With only two possible exemptions, namely closed communication networks and emergencies, the regulation is very clear and brief. However, as our colleague Pranesh Prakash has said, TRAI has over-regulated and used a sledgehammer where a scalpel would have sufficed. In CIS’ official submission, we had recommended a series of tests to determine whether a particular type of zero rating should be allowed or forbidden. That test may be legally sophisticated, but as TRAI argues, it is clear and simple rules that result in regulatory equity. A possible alternative to a complicated multi-part legal test is the leaky walled garden proposal. Remember, it is only in the case of very dangerous technologies, where the harms are large-scale and irreversible, that an absolute prohibition based on the precautionary principle is merited.

    However, as far as network neutrality harms go, it may be sufficient to insist that for every MB that is consumed within Free Basics, Reliance be mandated to provide a data top up of 3MB.

    This would have three advantages. One, it would be easy to articulate in a brief regulation and would therefore reduce the possibility of litigation. Two, it would be easy for the consumer who is harmed to monitor the mitigation measure. And three, based on empirical data, the regulator could increase or decrease the proportion of the mitigation measure.
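
    As a back-of-the-envelope illustration of how such an obligation could be monitored, here is a minimal Python sketch using the 1:3 ratio proposed above (the ratio comes from this article; the account figures and function names are hypothetical): for every MB consumed inside the walled garden, the operator owes 3 MB of open-internet data, and the shortfall is easy for both the user and the regulator to compute.

```python
TOP_UP_RATIO = 3  # MB of open-internet data owed per MB used inside the walled garden

def mandated_top_up(walled_garden_mb: float) -> float:
    """Open-internet data the operator must credit, under the 1:3 proposal."""
    return TOP_UP_RATIO * walled_garden_mb

def compliance_shortfall(walled_garden_mb: float, credited_mb: float) -> float:
    """Positive value = MB the operator still owes; zero means compliant."""
    return max(0.0, mandated_top_up(walled_garden_mb) - credited_mb)

# Example: a user consumes 120 MB inside the walled garden in a month
# and the operator credits 300 MB of open-internet data.
owed = mandated_top_up(120)             # 360 MB owed
short = compliance_shortfall(120, 300)  # 60 MB still outstanding
print(f"Owed: {owed} MB, shortfall: {short} MB")
```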

    This is an example of what Prof Christopher T. Marsden calls positive, forward-looking network neutrality regulation: positive in the sense that, instead of prohibitions and punitive measures, the emphasis is on obligations; and forward-looking in the sense that no new technology or business model is prohibited outright.

    What is Net neutrality?

    According to this principle, service providers and governments should not discriminate between different data on the Internet and should treat all data equally. They cannot give preference to one set of apps or websites while restricting others.

    • 2006: TRAI invites opinions regarding the regulation of net neutrality from various telecom industry bodies and stakeholders
    • Feb. 2012: Sunil Bharti Mittal, CEO of Bharti Airtel, suggests services like YouTube should pay an interconnect charge to network operators, saying that if telecom operators are building highways for data then there should be a tax on the highway
    • July 2012: Bharti Airtel’s Jagbir Singh suggests large Internet companies like Facebook and Google should share revenues with telecom companies.
    • August 2012: Data from M-Lab said You Broadband, Airtel, BSNL were throttling traffic of P2P services like BitTorrent
    • Feb. 2013: Killi Kiruparani, Minister for state for communications and technology says government will look into legality of VoIP services like Skype
    • June 2013: Airtel starts offering select Google services to cellular broadband users for free, fixing a ceiling of 1GB on the data
    • Feb. 2014: Airtel operations CEO Gopal Vittal says companies offering free messaging apps like Skype and WhatsApp should be regulated
    • August 2014: TRAI rejects proposal from telecom companies to make messaging application firms share part of their revenue with the carriers/government
    • Nov. 2014: Trai begins investigation on Airtel implementing preferential access with special packs for WhatsApp and Facebook at rates lower than standard data rates
    • Dec. 2014: Airtel launches 2G, 3G data packs with VoIP data excluded in the pack, later launches VoIP pack.
    • Feb. 2015: Facebook launches Internet.org with Reliance communications, aiming to provide free access to 38 websites through single app
    • March 2015: Trai publishes consultation paper on regulatory framework for over the top services, explaining what net neutrality in India will mean and its impact, invited public feedback
    • April 2015: Airtel launches Airtel Zero, a scheme where apps sign up with Airtel to get their content displayed free across the network. Flipkart, which was in talks to join the scheme, had to pull out after users started giving its app poor ratings on hearing the news
    • April 2015: Ravi Shankar Prasad, Communication and information technology minister announces formation of a committee to study net neutrality issues in the country
    • 23 April 2015: Many organisations under the Free Software Movement of India protest in various parts of the country. As a counter-measure, the Cellular Operators Association of India launches a campaign, saying its aim is to connect the unconnected citizens, and demanding that VoIP apps be treated as cellular operators
    • 27 April 2015: Trai releases the names and email addresses of the millions of users who responded to the consultation paper. The Anonymous India group claims to have taken down Trai’s website in retaliation, which the government could not confirm
    • Sept. 2015: Facebook rebrands Internet.org as Free Basics, launches in the country with massive ads across major newspapers in the country. Faces huge backlash from public
    • Feb. 2016: Trai rules in favour of net neutrality, barring telecom operators from charging different rates for data services.

    The writers work at the Centre for Internet and Society, Bengaluru. CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world

    Free Speech and the Law on Sedition

    by Siddharth Narrain — last modified Feb 17, 2016 09:13 AM
    Siddharth Narrain explains how the law in India has addressed sedition.

    Sedition is an offence that criminalizes speech that is construed to be disloyal to or threatening to the state. The main legal provision in India is section 124A of the Indian Penal Code, which criminalizes speech that “brings or attempts to bring into hatred or contempt, or excites or attempts to excite disaffection” towards the government. The law makes a distinction between “disapprobation” (lawful criticism of the government) and “disaffection” (expressing disloyalty or enmity, which is proscribed).

    The British introduced this law in 1870, as a part of their efforts to curb criticism of colonial rule and to stamp out dissent. Many famous nationalists, including Bal Gangadhar Tilak and Mahatma Gandhi, were tried and imprisoned for sedition. After a spirited debate, the Constituent Assembly decided not to include ‘sedition’ as a specific exception to Article 19(1)(a). However, section 124A IPC remained on the statute book. After the First Amendment to the Constitution and the introduction of the words “in the interests of public order” to the exceptions to Article 19(1)(a), it became extremely difficult to challenge the constitutionality of section 124A.

    In 1962, the Supreme Court upheld the constitutionality of the law in the Kedarnath Singh case, but narrowed the scope of the law to acts involving intention or tendency to create disorder, or disturbance of law and order, or incitement to violence. Thus the Supreme Court provided an additional safeguard to the law: not only was constructive criticism or disapprobation allowed, but if the speech concerned did not have an intention or tendency to cause violence or a disturbance of law and order, it was permissible.

    However, even though the law allows for peaceful dissent and constructive criticism, over the years various governments have used section 124A to curb dissent. The trial and conviction of the medical doctor and human rights activist Binayak Sen led to a renewed call for the scrapping of this law. In the Aseem Trivedi case, where a cartoonist was arrested for his work around the theme of corruption, the Bombay High Court laid down guidelines to be followed by the government in making arrests under section 124A. The court reaffirmed the law laid down in Kedarnath Singh, and held that for a prosecution under section 124A, a legal opinion in writing must be obtained from the law officer of the district (it did not specify who this was), followed within two weeks by a legal opinion in writing from the state public prosecutor. This adds to the existing procedural safeguard under section 196 of the Code of Criminal Procedure (CrPC), which says that courts cannot take cognizance of offences punishable under section 124A IPC unless the Central or State government has given sanction or permission to proceed.

    The serious nature of section 124A is seen in the light of the punishment associated with it. Section 124A is a cognizable (arrests can be made without a warrant), non-bailable and non-compoundable offence. Punishment for the offence can extend up to life imprisonment. Because of the seriousness of the offence, courts are often reluctant to grant bail. Sedition law is seen as an anachronism in many countries including the United Kingdom, and it has been repealed in most Western democracies.

    IMPORTANT CASE LAW

    Kedarnath Singh v. State of Bihar, AIR 1962 SC 955, Supreme Court, 5 judges

    Medium: Offline

    Brief Facts: Kedarnath Singh, a member of the Forward Communist Party, was prosecuted for sedition related to a speech that he made criticising the government for its capitalist policies. Singh challenged the constitutionality of the sedition law. The Supreme Court bunched Singh’s case with other similar incidents where persons were prosecuted under the sedition law.

    Held: The law is constitutional and covers written or spoken words that carry the implicit idea of subverting the government by violent means. However, this section would not cover words used as disapprobation of measures of the government that are meant to improve or alter the policies of the government through lawful means. Citizens can criticize the government as long as they are not inciting people to violence against the government with an intention to create public disorder. The court drew upon the Federal Court’s decision in Niharendu Dutt Majumdar, where that court held that the offence of sedition is the incitement to violence, or the tendency or the effect of bringing a government established by law into hatred or contempt, or creating disaffection in the sense of disloyalty to the state. While the Supreme Court upheld the validity of section 124A, it limited its application to acts involving intention or tendency to create disorder, or a disturbance of law and order, or incitement to violence.

    Balwant Singh and Anr v. State of Punjab, AIR 1995 SC 1785

    Brief Facts: The accused had raised the slogan “Khalistan Zindabad” outside a cinema hall just after the assassination of Prime Minister Indira Gandhi.

    Held: The slogans raised by the accused had no impact on the public. Two individuals casually raising slogans could not be said to be exciting disaffection towards the government. Section 124A would not apply to the facts and circumstances of this case.

    Sanskar Marathe v. State of Maharashtra & Ors, Criminal Public Interest Litigation No. 3 of 2015, Bombay High Court, 2 judges

    Medium: Online and Offline

    Brief Facts: The case arose out of the arrest of Aseem Trivedi, a political cartoonist who was involved with the India Against Corruption movement. Trivedi was arrested in 2012 in Mumbai for sedition and for insulting national emblems. The court considered the question of how it could intervene to prevent the misuse of section 124A.

    Held: The cartoons were in the nature of political satire, and there was no allegation of incitement to violence, or tendency or intention to create public disorder. The Court issued guidelines to all police personnel in the form of preconditions for prosecutions under section 124A: Words, signs or representations must bring the government into hatred or contempt, or must cause, or attempt to cause, disaffection, enmity or disloyalty to the government. The words, signs or representations must also be an incitement to violence, or must be intended, or tend, to create public disorder or a reasonable apprehension of public disorder. Words, signs or representations, just by virtue of being against politicians or public officials, cannot be said to be against the government; they must show the public official as a representative of the government. Disapproval or criticism of the government in order to bring about a change in government through lawful means does not amount to sedition. Obscenity or vulgarity by itself is not a factor to be taken into account while deciding whether a word, sign or representation violates section 124A. In order to prosecute under section 124A, the government has to obtain a legal opinion in writing from the law officer of the district (the judgment does not specify who this is) and, within the following two weeks, a legal opinion in writing from the public prosecutor of the state.

    Free Speech and Public Order

    by Gautam Bhatia — last modified Feb 18, 2016 06:23 AM
    In this post, Gautam Bhatia has explained the law on public order as a reasonable restriction to freedom of expression under Article 19(2) of the Constitution of India.

    Article 19(2) of the Constitution authorises the government to impose, by law, reasonable restrictions upon the freedom of speech and expression “in the interests of… public order.” To understand the Supreme Court’s public order jurisprudence, it is important to break down the sub-clause into its component parts, and focus upon their separate meanings. Specifically, three terms are important: “reasonable restrictions”, “in the interests of”, and “public order”.

    The Supreme Court’s public order jurisprudence can be broadly divided into three phases. Phase One (1949 – 1950), which we may call the pre-First Amendment Phase, is characterised by a highly speech-protective approach and a rigorous scrutiny of speech-restricting laws. Phase Two (1950 – 1960), which we may call the post-First Amendment Expansionist Phase, is characterised by a judicial hands-off approach towards legislative and executive action aimed at restricting speech. Phase Three (1960 - present day), which we may call the post-First Amendment Protectionist phase, is characterised by a cautious, incremental move back towards a speech-protective, rigorous-scrutiny approach. This classification is broad-brush and generalist, but serves as a useful explanatory device.

    Before the First Amendment, the relevant part of Article 19(2) allowed the government to restrict speech that “undermines the security of, or tends to overthrow, the State.” The scope of the restriction was examined by the Supreme Court in Romesh Thappar vs State of Madras and Brij Bhushan vs State of Delhi, both decided in 1950. Both cases involved the ban of newspapers or periodicals, under state laws that authorised the government to prohibit the entry or circulation of written material, ‘in the interests of public order’. A majority of the Supreme Court struck down the laws. In doing so, they invoked the concept of “over-breadth”: according to the Court, “public order” was synonymous with public tranquility and peace, while undermining the security of, or tending to overthrow the State, referred to acts which could shake the very foundations of the State. Consequently, while acts that undermined or tended to overthrow the State would also lead to public disorder, not all acts against public order would rise to the level of undermining the security of the State. This meant that the legislation proscribed acts that, under Article 19(2), the government was entitled to prohibit, as well as those that it wasn’t. This made the laws “over-broad”, and unconstitutional. In a dissenting opinion, Fazl Ali J. argued that “public order”, “public tranquility”, “the security of the State” and “sedition” were all interchangeable terms, that meant the same thing.

    In Romesh Thappar and Brij Bhushan, the Supreme Court also held that the impugned legislations imposed a regime of “prior restraint” – i.e., by allowing the government to prohibit the circulation of newspapers in anticipation of public disorder, they choked off speech before it even had the opportunity to be made. Following a long-established tradition in common law as well as American constitutional jurisprudence, the Court held that a legislation imposing prior restraint bore a heavy burden to demonstrate its constitutionality.

    The decisions in Romesh Thappar and Brij Bhushan led to the passage of the First Amendment, which substituted the phrase “undermines the security of, or tends to overthrow, the State” with “public order”, added an additional restriction in the interests of preventing incitement to an offence, and, importantly, added the word “reasonable” before “restrictions”.

    The newly-minted Article 19(2) came to be interpreted by the Supreme Court in Ramji Lal Modi vs State of UP (1957). At issue was a challenge to S. 295A of the Indian Penal Code, which criminalised insulting religious beliefs with an intent to outrage religious feelings of any class. The challenge made an over-breadth argument: it was contended that while some instances of outraging religious beliefs would lead to public disorder, not all would, and consequently, the Section was unconstitutional. The Court rejected this argument and upheld the Section. It focused on the phrase “in the interests of”, and held that being substantially broader than a term such as “for the maintenance of”, it allowed the government wide leeway in restricting speech. In other words, as long as the State could show that there was some connection between the law, and public order, it would be constitutional. The Court went on to hold that the calculated tendency of any speech or expression aimed at outraging religious feelings was, indeed, to cause public disorder, and consequently, the Section was constitutional. This reasoning was echoed in Virendra vs State of Punjab (1957), where provisions of the colonial era Press Act, which authorised the government to impose prior restraint upon newspapers, were challenged. The Supreme Court upheld the provisions that introduced certain procedural safeguards, like a time limit, and struck down the provisions that didn’t. Notably, however, the Court upheld the imposition of prior restraint itself, on the ground that the phrase “in the interests of” bore a very wide ambit, and held that it would defer to the government’s determination of when public order was jeopardised by speech or expression.

    In Ramji Lal Modi and Virendra, the Court had rejected the argument that the State can only impose restrictions on the freedom of speech and expression if it demonstrates a proximate link between speech and public order. The Supreme Court had focused closely on the breadth of the phrase “in the interests of”, but had not subjected the reasonableness requirement to any analysis. In earlier cases such as State of Madras vs V.G. Row, the Court had stressed that in order to be “reasonable”, a restriction would have to take into account the nature and scope of the right, the extent of the infringement, and proportionality. This analysis failed to figure in Ramji Lal Modi and Virendra. However, in Superintendent, Central Prison vs Ram Manohar Lohia, the Supreme Court changed its position, and held that there must be a “proximate” relationship between speech and public disorder, and that it must not be remote, fanciful or far-fetched. Thus, for the first time, the breadth of the phrase “in the interests of” was qualified, presumably from the perspective of reasonableness. In Lohia, the Court also stressed again that “public order” was of narrower ambit than mere “law and order”, and would require the State to discharge a high burden of proof, along with evidence.

    Lohia marks the start of the third phase in the Court’s jurisprudence, where the link of proximity between speech and public disorder has gradually been refined. In Babulal Parate vs State of Maharashtra (1961) and Madhu Limaye vs Sub-Divisional Magistrate (1970), the Court upheld prior restraints under S. 144 of the CrPC, while clarifying that the Section could only be used in cases of an emergency. Section 144 of the CrPC empowers executive magistrates (i.e., high-ranking police officers) to pass very wide-ranging preventive orders, and is primarily used to prohibit assemblies at certain times in certain areas, when it is considered that the situation is volatile and could lead to violence. In Babulal Parate and Madhu Limaye, the Supreme Court upheld the constitutionality of Section 144, but also clarified that its use was restricted to situations where there was a proximate link between the prohibition and the likelihood of public disorder.

    In recent years, the Court has further refined its proximity test. In S. Rangarajan vs P. Jagjivan Ram (1989), the Supreme Court required proximity to be akin to a “spark in a powder keg”. Most recently, in Arup Bhuyan vs State of Assam (2011), the Court read down a provision in the TADA criminalizing membership of a banned association to only apply to cases where an individual was responsible for incitement to imminent violence (a standard borrowed from the American case of Brandenburg).

    Lastly, in 2015, we saw the first instance of the application of Section 144 of the CrPC to online speech. The wide wording of the section was used in Gujarat to pre-emptively block mobile Internet services in the wake of Hardik Patel’s Patidar agitation for reservations. Despite the fact that website blocking is specifically provided for by Section 69A of the IT Act and its accompanying rules, the Gujarat High Court upheld the state action.

    The following conclusions emerge:

    (1)  “Public Order” under Article 19(2) is a term of art, and refers to a situation of public tranquility/public peace, that goes beyond simply law-breaking

    (2)  Prior restraint in the interests of public order is justified under Article 19(2), subject to a test of proximity; by virtue of the Gujarat High Court judgment in 2015, prior restraint extends to the online sphere as well

    (3)  The proximity test requires the relationship between speech and public order to be imminent, or like a spark in a powder keg

    World Trends in Freedom of Expression and Media Development

    by Pranesh Prakash last modified Feb 17, 2016 04:41 PM

    PDF document icon WTR Global - FINAL 27 Feb.pdf — PDF document, 1592 kB (1630530 bytes)

    World Trends in Freedom of Expression and Media Development

    by Pranesh Prakash last modified Feb 17, 2016 05:03 PM
    The United Nations Educational, Scientific and Cultural Organisation (UNESCO) published a book in 2014 that examines free speech, expression and media development. The book contains a Foreword by Irina Bokova, Director General, UNESCO. Pranesh Prakash contributed to the Independence: Introduction - Global Media chapter. The book was edited by Courtney C. Radsch.

    Foreword

    Tectonic shifts in technology and economic models have vastly expanded the opportunities for press freedom and the safety of journalists, opening new avenues for freedom of expression for women and men across the world. Today, more and more people are able to produce, update and share information widely, within and across national borders. All of this is a blessing for creativity, exchange and dialogue.

    At the same time, new threats are arising. In a context of rapid change, these are combining with older forms of restriction to pose challenges to freedom of expression, in the shape of controls not aligned with international standards for protection of freedom of expression and rising threats against journalists.

    These developments raise issues that go to the heart of UNESCO’s mandate “to promote the flow of ideas by word and image” between all peoples, across the world. For UNESCO, freedom of expression is a fundamental human right that underpins all other civil liberties, that is vital for the rule of law and good governance, and that is a foundation for inclusive and open societies. Freedom of expression stands at the heart of media freedom and the practice of journalism as a form of expression aspiring to be in the public interest.

    At the 36th session of the General Conference (November 2011), Member States mandated UNESCO to explore the impact of change on press freedom and the safety of journalists. For this purpose, the Report has adopted four angles of analysis, drawing on the 1991 Windhoek Declaration, to review emerging trends through the conditions of media freedom, pluralism and independence, as well as the safety of journalists. At each level, the Report has also examined trends through the lens of gender equality.

    The result is the portrait of change -- across the world, at all levels, featuring as much opportunity as challenge. The business of media is undergoing a revolution with the rise of digital networks, online platforms, internet intermediaries and social media. New actors are emerging, including citizen journalists, who are redrawing the boundaries of the media. At the same time, the Report shows that the traditional news institutions continue to be agenda-setters for media and public communications in general – even as they are also engaging with the digital revolution. The Report highlights also the mix of old and new challenges to media freedom, including increasing cases of threats against the safety of journalists.

    The pace of change raises questions about how to foster freedom of expression across print, broadcast and internet media and how to ensure the safety of journalists. The Report draws on a rich array of research and is not prescriptive -- but it sends a clear message on the importance of freedom of expression and press freedom on all platforms.

    To these ends, UNESCO is working across the board, across the world. This starts with global awareness raising and advocacy, including through World Press Freedom Day. It entails supporting countries in strengthening their legal and regulatory frameworks and in building capacity. It means standing up to call for justice every time a journalist is killed, to eliminate impunity. This is the importance of the United Nations Plan of Action on the Safety of Journalists and the Issue of Impunity, spearheaded by UNESCO and endorsed by the UN Chief Executives Board in April 2012. UNESCO is working with countries to take this plan forward on the ground. We also seek to better understand the challenges that are arising – most recently, through a Global Survey on Violence against Female Journalists, with the International News Safety Institute, the International Women’s Media Foundation, and the Austrian Government.

    Respecting freedom of expression and media freedom is essential today, as we seek to build inclusive, knowledge societies and a more just and peaceful century ahead. I am confident that this Report will find a wide audience, in Member States, international and regional organizations, civil society and academia, as well as with the media and journalists, and I wish to thank Sweden for its support to this initiative. This is an important contribution to understanding a world in change, at a time when the international community is defining a new global sustainable development agenda, which must be underpinned and driven by human rights, with particular attention to freedom of expression.

    Executive Summary

    Freedom of expression in general, and media development in particular, are core to UNESCO’s constitutional mandate to advance ‘the mutual knowledge and understanding of peoples, through all means of mass communication’ and promoting ‘the free flow of ideas by word and image.’ For UNESCO, press freedom is a corollary of the general right to freedom of expression. Since 1991, the year of the seminal Windhoek Declaration, which was endorsed by the UN General Assembly, UNESCO has understood press freedom as designating the conditions of media freedom, pluralism and independence, as well as the safety of journalists.  It is within this framework that this report examines progress as regards press freedom, including in regard to gender equality, and makes sense of the evolution of media actors, news media institutions and journalistic roles over time.

    This report has been prepared on the basis of a summary report on the global state of press freedom and the safety of journalists, presented to the General Conference of UNESCO Member States in November 2013, on the mandate of the decision by Member States taken at the 36th session of the General Conference of the Organization.[*]

    The overarching global trend with respect to media freedom, pluralism, independence and the safety of journalists over the past several years is that of disruption and change brought on by technology, and to a lesser extent, the global financial crisis. These trends have impacted traditional economic and organizational structures in the news media, legal and regulatory frameworks, journalism practices, and media consumption and production habits. Technological convergence has expanded the number of and access to media platforms as well as the potential for expression. It has enabled the emergence of citizen journalism and spaces for independent media, while at the same time fundamentally reconfiguring journalistic practices and the business of news.

    The broad global patterns identified in this report are accompanied by extensive unevenness within the whole.  The trends summarized above, therefore, go hand in hand with substantial variations between and within regions as well as countries.

    Download the PDF


    [*]. 37 C/INF.4 16 September 2013 “Information regarding the implementation of decisions of the governing bodies”. http://unesdoc.unesco.org/images/0022/002230/223097e.pdf; http://unesdoc.unesco.org/images/0022/002230/223097f.pdf

    Net Neutrality Advocates Rejoice As TRAI Bans Differential Pricing

    by Subhashish Panigrahi last modified Feb 23, 2016 02:10 AM
    India would not see any more Free Basics advertisements on billboards with images of farmers and common people explaining how much they benefited from this Facebook project.

    The article by Subhashish Panigrahi was published by Odisha TV on February 9, 2016.


    Because the Telecom Regulatory Authority of India (TRAI) has taken a historic step by banning differential pricing for data services. In its notes, TRAI explained, “In India, given that a majority of the population are yet to be connected to the internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users’ internet experience.” Not just that: violating this ban would cost Rs. 50,000 for every day of contravention.

    Facebook planned to launch Free Basics in India by making a few websites (mostly Facebook partners) available for free. The company not only advertised aggressively on billboards and in commercials across the nation, it also embedded a campaign inside Facebook asking users to vote in support of Free Basics. TRAI criticized Facebook’s attempt to manipulate public opinion. Facebook was also heavily challenged by many policy and internet advocates, including non-profits like the Free Software Movement of India and the Savetheinternet.in campaign. The two collectives strongly discouraged Free Basics by moulding public opinion against it, with Savetheinternet.in alone helping send over 2.4 million emails to TRAI asking it to disallow Free Basics. Furthermore, 500 Indian start-ups, including major names like Cleartrip, Zomato, Practo, Paytm and Cleartax, wrote to India’s Prime Minister Narendra Modi on Republic Day requesting continued support for Net Neutrality, a concept that advocates equal treatment of websites. Stand-up comedians like Abish Mathew and groups like All India Bakchod and East India Comedy created humorous but informative videos explaining the regulatory debate and supporting net neutrality. These videos went viral.

    Technology critic and Quartz writer Alice Truong reacted to Free Basics saying; “Zuckerberg almost portrays net neutrality as a first-world problem that doesn’t apply to India because having some service is better than no service.”

    The decision of the Indian government has been largely welcomed in the country and outside. In support of the move, Web We Want programme manager at the World Wide Web Foundation Renata Avila has said; “As the country with the second largest number of Internet users worldwide, this decision will resonate around the world. It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can’t create a two-tier Internet – one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web.”

    There have been mixed responses on social media, both in support of and in opposition to the TRAI decision. Josh Levy, Advocacy Director at Accessnow, welcomed it, saying: “India is now the global leader on #NetNeutrality. New rules are stronger than those in EU and US.”

    Had differential pricing been allowed, it would have adversely affected start-ups and smaller content-based companies, which could never have managed to pay the high price a partner service provider would charge to make their services available for free. Tech giants like Facebook, on the other hand, could easily have captured the entire market. Since its inception, the Facebook-led Internet.org initiative has run into controversy over the motives behind its claimed support for a social cause.

    ‘A Good Day for the Internet Everywhere': India Bans Differential Data Pricing

    by Subhashish Panigrahi last modified Feb 25, 2016 01:21 AM
    India distinguished itself as a global leader on network neutrality on February 8, when regulators officially banned “differential pricing”, the practice through which telecommunications service providers could offer or charge discriminatory tariffs for data services based on content.

    The article was published by Global Voices on February 9, 2016


    In short, this means that Internet access in India will remain an open field, where users should be guaranteed equal access to any website they want to visit, regardless of how they connect to the Internet.

    In its ruling, the Telecom Regulatory Authority of India (TRAI) commented:

    In India, given that a majority of the population are yet to be connected to the internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users’ internet experience.

    The decision of the Indian government has been largely welcomed in the country and outside. In support of the move, the World Wide Web Foundation's Renata Avila, also a Global Voices community member, wrote:

    As the country with the second largest number of Internet users worldwide, this decision will resonate around the world. It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can’t create a two-tier Internet – one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web.

    A blow for Facebook's “Free Basics”

    While the new rules should long outlast this moment in India's Internet history, the ruling should immediately force Facebook to cancel the local deployment of “Free Basics”, a smartphone application that offers free access to Facebook, Facebook-owned products like WhatsApp, and a select suite of other websites for users who do not pay for mobile data plans.

    Facebook's efforts to deploy and promote Free Basics as what it described as a remedy to India's lack of “digital equality” have encountered significant backlash. Last December, technology critic and Quartz writer Alice Truong reacted to Free Basics, saying:

    Zuckerberg almost portrays net neutrality as a first-world problem that doesn’t apply to India because having some service is better than no service.

    When TRAI solicited public comments on the matter of differential pricing, Facebook responded with an aggressive advertising campaign on billboards and in television commercials across the nation. It also embedded a campaign inside Facebook, asking users to write to TRAI in support of Free Basics.

    TRAI criticized Facebook for what it seemed to regard as manipulation of the public. Facebook was also heavily challenged by many policy and open Internet advocates, including non-profits like the Free Software Movement of India and the Savetheinternet.in campaign. These two collectives strongly discouraged Free Basics by mobilising public opinion against it; Savetheinternet.in alone facilitated a campaign in which citizens sent over 2.4 million emails to TRAI urging the agency to put a stop to differential pricing.

    Alongside these efforts, 500 Indian startups, including major ones like Cleartrip, Zomato, Practo, Paytm and Cleartax, also wrote to India's Prime Minister Narendra Modi on the Indian Republic Day, January 26, requesting continued support for net neutrality.

    Stand-up comedians like Abish Mathew and groups like All India Bakchod and East India Comedy created humorous and informative videos explaining the regulatory debate and supporting net neutrality, which went viral.

    Had differential pricing been officially legalized, it would have adversely affected startups and smaller content-based companies, which most likely could never have managed to pay higher prices to partner with service providers to make their services available for free. This would have paved the way for tech giants like Facebook to capture the entire market. And this would be no small gain for a company like Facebook: India represents the world's largest market of Internet users after the US and China (where Facebook remains blocked).

    The Internet responds

    There have been mixed responses on social media, both supporting and opposing the ruling. Among open Internet advocates in India and the US, the response was celebratory.

    There are also those like Panuganti Rajkiran who opposed the ruling:

    A terrible decision.. The worst part here is the haves deciding for the have nots what they can have and what they cannot.

    Soumya Manikkath says:

    So all is not lost in the world, for the next two years at least. Do come back with a better plan, dear Facebook, and we'll rethink, of course.

    The ruling leaves an open pathway for companies to offer consumers free access to the Internet, provided that this access is truly open and does not limit one's ability to browse any site of her choosing.

    Bangalore-based Internet policy expert Pranesh Prakash noted that this work must continue until India is truly and equally connected.

    Comments by the Centre for Internet and Society on the Report of the Committee on Medium Term Path on Financial Inclusion

    by Vipul Kharbanda last modified Mar 01, 2016 01:53 PM
    Apart from item-specific suggestions, CIS would like to make one broad comment with regard to the suggestions dealing with the linking of Aadhaar numbers with bank accounts. Aadhaar is increasingly being used by various government departments as a means to prevent fraud; however, there is a serious dearth of evidence to suggest that Aadhaar linkage actually prevents leakages in government schemes. The same argument applies when Aadhaar numbers are sought to be utilized to prevent leakages in the banking sector.

     

    The Centre for Internet and Society (CIS) is a non-governmental organization which undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives.

    In the course of its work CIS has also extensively researched and written about the Aadhaar Scheme of the Government of India, especially from a privacy and technical point of view. CIS was part of the Group of Experts on Privacy constituted by the Planning Commission under the chairmanship of Justice A.P. Shah and was instrumental in drafting a major part of the Group's report. Against this background, CIS would like to mention that it is not an expert on banking policy in general, nor does it wish to comment upon the purely banking-related recommendations of the Committee. We would like to limit our recommendations to the areas in which we have some expertise and would therefore be commenting only on certain Recommendations of the Committee.

    Before giving our individual comments on the relevant recommendations, CIS would like to make one broad comment with regard to the suggestions dealing with the linking of Aadhaar numbers with bank accounts. Aadhaar is increasingly being used by various government departments as a means to prevent fraud; however, there is a serious dearth of evidence to suggest that Aadhaar linkage actually prevents leakages in government schemes. The same argument applies when Aadhaar numbers are sought to be utilized to prevent leakages in the banking sector.

    Another problem with linking bank accounts with Aadhaar numbers, even if it is not mandatory, is that when the RBI issues an advisory to (optionally) link Aadhaar numbers with bank accounts, a number of banks may implement the advisory too strictly and refuse service to customers (especially marginal customers) whose bank accounts are not linked to their Aadhaar numbers, perhaps due to technical problems in the registration procedure. This would deny those individuals access to the banking sector, which is contrary to the aims and objectives of the Committee and to the stated policy of the RBI to improve access to banking.

    Individual Comments

    Recommendation 1.4 - Given the predominance of individual account holdings, the Committee recommends that a unique biometric identifier such as Aadhaar should be linked to each individual credit account and the information shared with credit information companies. This will not only be useful in identifying multiple accounts, but will also help in mitigating the overall indebtedness of individuals who are often lured into multiple borrowings without being aware of its consequences.

    CIS Comment: The discussion of the committee before making this recommendation revolves around the total incidence of indebtedness in rural areas and their Debt-to-Asset ratio representing payment capacity. However, the committee has not discussed any evidence which indicates that borrowing from multiple banks leads to greater indebtedness for individual account holders in the rural sector. Without identifying the problem through evidence the Committee has suggested linking bank accounts with Aadhaar numbers as a solution.

    Recommendation 2.2 - On the basis of cross-country evidence and our own experience, the Committee is of the view that to translate financial access into enhanced convenience and usage, there is a need for better utilization of the mobile banking facility and the maximum possible G2P payments, which would necessitate greater engagement by the government in the financial inclusion drive.

    CIS Comment: The drafting of the recommendation suggests that the RBI is batting for the direct benefit transfer (DBT) model rather than the subsidy model. An examination of the discussion in the report, however, shows that the Committee has not discussed or examined the subsidy model vis-à-vis the DBT model here (though it does recommend DBT in the chapter on G2P payments). All the Committee is saying is that where government-to-person money transfers have to take place, they should take place using mobile banking, payment wallets or other such technologies, which have been known to be successful in various countries across the world.

    Recommendation 3.1 - The Committee recommends that in order to increase formal credit supply to all agrarian segments, the digitization of land records should be taken up by the states on a priority basis.

    Recommendation 3.2 - In order to ensure actual credit supply to the agricultural sector, the Committee recommends the introduction of Aadhaar-linked mechanism for Credit Eligibility Certificates. For example, in Andhra Pradesh, the revenue authorities issue Credit Eligibility Certificates to Tenant Farmers (under ‘Andhra Pradesh Land Licensed Cultivators Act No 18 of 2011'). Such tenancy /lease certificates, while protecting the owner’s rights, would enable landless cultivators to obtain loans. The Reserve Bank may accordingly modify its regulatory guidelines to banks to directly lend to tenants / lessees against such credit eligibility certificates.

    CIS Comment: In its discussion before recommendation 3.2, the Committee has discussed the problems faced by landless farmers; however, there is no discussion or evidence suggesting that an Aadhaar-linked Credit Eligibility Certificate is the best solution, or even a solution, to the problem. The concern being expressed here is not with the system of a Credit Eligibility Certificate, but with the insistence on linking it to an Aadhaar number, and whether the system can be put in place without such linkage.

    Recommendation 6.11 - Keeping in view the indebtedness and rising delinquency, the Committee is of the view that the credit history of all SHG members would need to be created, linking it to individual Aadhaar numbers. This will ensure credit discipline and will also provide comfort to banks.

    CIS Comment: There is no discussion in the Report of the reasons for the increase in indebtedness of SHGs. While the recommendation of creating credit histories for SHG members is laudable and very welcome, the Report brings out no logical reason why these need to be linked to individual Aadhaar numbers or how such linkage will solve any problems.

    Recommendation 6.13 - The Committee recommends that bank credit to MFIs should be encouraged. The MFIs must provide credit information on their borrowers to credit bureaus through Aadhaar-linked unique identification of individual borrowers.

    CIS Comment: Since the discussion before this recommendation clearly identifies multiple lending as one of the problems in the microfinance sector and also suggests better credit information on borrowers as a possible solution, this recommendation per se seems sound. However, we would still like to point out that the RBI may consider alternative means of obtaining borrower credit history rather than relying on Aadhaar numbers alone.

    Recommendation 7.3 - Considering the widespread availability of mobile phones across the country, the Committee recommends the use of application-based mobiles as PoS for creating necessary infrastructure to support the large number of new accounts and cards issued under the PMJDY. Initially, the FIF can be used to subsidize the associated costs. This will also help to address the issue of low availability of PoS compared to the number of merchant outlets in the country. Banks should encourage merchants across geographies to adopt such application-based mobiles as PoS through some focused education and PoS deployment drives.

    Recommendation 7.5 - The Committee recommends that the National Payments Corporation of India (NPCI) should ensure faster development of a multi-lingual mobile application for customers who use non-smart phones, especially for users of NUUP; this will address the issue of linguistic diversity and thereby promote its popularization and quick adoption.

    Recommendation 7.8 - The Committee recommends that pre-paid payment instrument (PPI) interoperability may be allowed for non-banks to facilitate ease of access to customers and promote wider spread of PPIs across the country. It should however require non-bank PPI operators to enhance their customer grievance redressal mechanism to deal with any issues thereof.

    Recommendation 7.9 - The Committee is of the view that for non-bank PPIs, a small-value cashout may be permitted to incentivize usage with the necessary safeguards including adequate KYC and velocity checks.

    CIS Comments: While CIS supports the effort to use technology and mobile phones to increase banking penetration and improve access to the formal financial sector for rural and semi-rural areas, sufficient security mechanisms should be put in place while rolling out these services keeping in mind the low levels of education and technical sophistication that are prevalent in rural and semi-rural areas.

    Recommendation 8.1 - The Committee recommends that the deposit accounts of beneficiaries of government social payments, preferably all deposit accounts across banks, including the ‘in-principle’ licensed payments banks and small finance banks, be seeded with Aadhaar in a time-bound manner so as to create the necessary eco-system for cash transfer. This could be complemented with the necessary changes in the business correspondent (BC) system (see Chapter 6 for details) and increased adoption of mobile wallets to bridge the ‘last mile’ of service delivery in a cost-efficient manner at the convenience of the common person. This would also result in significant cost reductions for the government besides promoting financial inclusion.

    CIS Comment: While the report of the Committee has already given several examples of how cash transfer directly into bank accounts (rather than requiring the beneficiaries to be at a particular place at a particular time) could be more efficient as well as economical, the Committee is making the same point again here under the chapter that deals specifically with government-to-person payments. However, even before this recommendation, there has been no discussion of the need for linking or “seeding” the deposit accounts of beneficiaries with Aadhaar numbers, let alone of how such linking would solve any problems.

    Recommendation 10.6 - Given the focus on technology and the increasing number of customer complaints relating to debit/credit cards, the National Payments Corporation of India (NPCI) may be invited to SLBC meetings. They may particularly take up issues of Aadhaar-linkage in bank and payment accounts.

    CIS Comment: There is no discussion of why this recommendation has been made; more particularly, there is no discussion at all of why issues of Aadhaar linkage in bank and payment accounts need to be taken up.

    NN_Conference Report.pdf

    by Prasad Krishna last modified Feb 27, 2016 08:07 AM

    PDF document icon NN_Conference Report.pdf — PDF document, 1049 kB (1075119 bytes)

    Adoption of Standards in Smart Cities - Way Forward for India

    by Vanya Rakesh last modified Apr 11, 2016 03:04 AM
    With a paradigm shift towards the concept of “Smart Cities” globally, as well as in India, such cities have been defined by several international standardization bodies and countries; however, no uniform definition has been adopted globally. Standards are the glue that allows infrastructures to link and operate efficiently, as they make technologies interoperable and efficient.

    Click here to download the full file

    Globally, the pace of urbanization is increasing exponentially. The world’s urban population is projected to rise from 3.6 billion to 6.3 billion between 2011 and 2050. One response has been the development of sustainable cities that improve efficiency and integrate infrastructure and services [1]. It has been estimated that during the next 20 years, 30 Indians will leave rural India for urban areas every minute, necessitating smart and sustainable cities to accommodate them [2]. The Smart Cities Mission of the Ministry of Urban Development was announced in the year 2014, followed by the selection of 100 cities in 2015 and of 20 of them for the first phase of the project in 2016. The Mission [3] lists the “core infrastructural elements” that a smart city would incorporate, such as adequate water supply, assured electricity, sanitation, efficient public transport, affordable housing (especially for the poor), robust IT connectivity and digitisation, e-governance and citizen participation, a sustainable environment, safety and security for citizens, and health and education.

    With a paradigm shift towards the concept of “Smart Cities” globally, as well as in India, such cities have been defined by several international standardization bodies and countries; however, no uniform definition has been adopted globally. The envisioned modern and smart city promises delivery of high-quality services to citizens and will harness data capture and communication management technologies. The performance of such cities would be monitored on the basis of the physical as well as the social structure, comprising smart approaches and solutions to utilities and transport.

    Standards are the glue that allows infrastructures to link and operate efficiently, as they make technologies interoperable and efficient. Interoperability is essential, and to ensure smart integration of the various systems in a smart city, internationally agreed standards that include technical specifications and classifications must be adhered to. The development of international standards ensures seamless interaction between components from different suppliers and technologies [4].

    Standardized indicators within standards benefit smart cities in the following ways:

    1. Effective governance and efficient delivery of services.
    2. International and Local targets, benchmarking and planning.
    3. Informed decision making and policy formulation.
    4. Leverage for funding and recognition in international entities.
    5. Transparency and open data for investment attractiveness.
    6. A reliable foundation for use of big data and the information explosion to assist cities in building core knowledge for city decision-making, and enable comparative insight.

    The adoption of standards for smart cities has been advocated across the world as they are perceived to be an effective tool to foster the development of cities. The Director of the ITU Telecommunication Standardization Bureau, Chaesub Lee, is of the view that “Smart cities will employ an abundance of technologies in the family of the Internet of Things (IoT) and standards will assist the harmonized implementation of IoT data and applications, contributing to effective horizontal integration of a city’s subsystems” [5].

    Smart Cities standards in India

    The National Association of Software and Services Companies (NASSCOM) partnered with Accenture [6] to prepare a report called ‘Integrated ICT and Geospatial Technologies Framework for 100 Smart Cities Mission’ [7] to explore the role of ICT in developing smart cities [8], after the announcement of the Mission by the Indian Government. The report, released in May 2015, lists 55 global standards, keeping in view several city sub-systems such as urban planning, transport, governance, energy, and climate and pollution management, which could be applicable to smart cities in India.

    While NASSCOM is working closely with the Ministry of Urban Development to create a sustainable model for smart cities [9], in the absence of regulatory standards for smart cities the Bureau of Indian Standards (BIS) has undertaken the task of formulating standardised guidelines for central and state authorities in the planning, design and construction of smart cities, by setting up a technical committee under the Civil Engineering Department of the Bureau. However, adoption of the standards by implementing agencies would be voluntary, and the standards are intended to complement internationally available documents in this area [10].

    Developing national standards in line with these international standards would enable interoperability (i.e. devices and systems working together) and provide a roadmap to address key issues like data protection, privacy and other inherent risks in the digital delivery and use of public services in the envisioned smart cities, which call for comprehensive data management standards in India to instill public confidence and trust [11].

    Key International Smart Cities Standards

    Following are the key internationally accepted and recognized Smart Cities standards developed by leading organisations and the national standardization bodies of several countries, which India could adopt or use as a basis for developing national standards.

    The International Organization for Standardization (ISO) - Smart Cities Standards

    ISO is an instrumental body advocating for and developing standards for smart cities that safeguard the rights of people and a liveable and sustainable environment. The ISO Smart Cities Strategic Advisory Group uses the following working definition: A ‘Smart City’ is one that dramatically increases the pace at which it improves its social, economic and environmental (sustainability) outcomes, responding to challenges such as climate change, rapid population growth, and political and economic instability by fundamentally improving how it engages society, how it applies collaborative leadership methods, how it works across disciplines and city systems, and how it uses data information and modern technologies in order to transform services and quality of life for those in and involved with the city (residents, businesses, visitors), now and for the foreseeable future, without unfair disadvantage of others or degradation of the natural environment. [For details see ISO/TMB Smart Cities Strategic Advisory Group Final Report, September 2015 (ISO Definition, June 2015)].

    The ISO Technical Committee 268 works on standardization in the field of Sustainable Development in Communities [12] to encourage the development and implementation of holistic, cross-sector and area-based approaches to sustainable development in communities. The Committee comprises three Working Groups [13]:

    • Working Group 1: System Management (ISO 37101) - This standard sets requirements, guidance and supporting techniques for sustainable development in communities. It is designed to help all kinds of communities manage their sustainability, smartness and resilience to improve the contribution of communities to sustainable development and assess their performance in this area [14].
    • Working Group 2: City Indicators - The key Smart Cities Standards developed by ISO TC 268 WG 2 (City Indicators) are:

    ISO 37120 Sustainable Development of Communities — Indicators for City Services and Quality of Life

    One of the key standards and an important step in this regard was ISO 37120:2014, developed under ISO Technical Committee 268 (which works on standardization in the field of Sustainable Development in Communities). It provides clearly defined city performance indicators (divided into core and supporting indicators) as a benchmark for city services and quality of life, along with a standard approach for measuring each, for city leaders and citizens [15]. The standard is global in scope and can help cities prioritize city budgets, improve operational transparency and support open data and applications [16]. It follows the principles set out in ISO 37101 and can be used in conjunction with it [17].

    ISO 37120, published in 2014, was the first ISO Standard on Global City Indicators. It was developed on the basis of a set of indicators created and extensively tested by the Global City Indicators Facility (GCIF, a project of the University of Toronto) and its 250+ member cities globally. The GCIF is committed to building standardized city indicators for performance management, including a database of comparable statistics that allow cities to track their effectiveness on everything from planning and economic growth to transportation, safety and education [18].

    The World Council on City Data (WCCD) [19], a sister organization of the GCI/GCIF, was established in 2014 to operationalize ISO 37120 across cities globally. The standard encompasses 100 indicators organized around 17 themes covering city services and quality of life, and the reported data is accessible through the WCCD Open City Data Portal, which allows for cutting-edge visualizations and comparisons. Indian cities are not yet listed with the WCCD [20].

    The indicators are listed under the following heads [21]:

    1. Economy
    2. Education
    3. Environment
    4. Energy
    5. Finance
    6. Fire and Emergency Responses
    7. Governance
    8. Health
    9. Safety
    10. Shelter
    11. Recreation
    12. Solid Waste
    13. Telecommunication and innovation
    14. Transportation
    15. Urban Planning
    16. Waste water
    17. Water and Sanitation

    This International Standard is applicable to any city, municipality or local government that undertakes to measure its performance in a comparable and verifiable manner, irrespective of size, location or level of development. City indicators have the potential to be used as critical tools for city managers, politicians, researchers, business leaders, planners, designers and other professionals [22]. The WCCD forum highlights the need for cities to have a set of globally standardized indicators to [23]:

    1. Manage and make informed decisions through data analysis
    2. Benchmark and target
    3. Leverage Funding with senior levels of government
    4. Plan and establish new frameworks for sustainable urban development
    5. Evaluate the impact of infrastructure projects on the overall performance of a city.
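
    To make the benchmarking idea concrete, the short sketch below shows one way ISO 37120-style indicator records could be represented and compared across cities in Python. It is purely illustrative: the city names, indicator wording and values are invented for the example, and the WCCD Open City Data Portal publishes data in its own formats rather than this one.

        # Illustrative only: hypothetical cities, indicator names and values.
        from dataclasses import dataclass

        @dataclass
        class IndicatorValue:
            city: str
            theme: str        # one of the 17 ISO 37120 themes, e.g. "Transportation"
            indicator: str    # the standardized indicator being reported
            value: float
            unit: str

        records = [
            IndicatorValue("City A", "Transportation", "Km of public transit per 100,000 population", 45.2, "km"),
            IndicatorValue("City B", "Transportation", "Km of public transit per 100,000 population", 61.7, "km"),
            IndicatorValue("City A", "Water and Sanitation", "Percentage of population with potable water supply", 88.0, "%"),
            IndicatorValue("City B", "Water and Sanitation", "Percentage of population with potable water supply", 94.5, "%"),
        ]

        def benchmark(indicator: str) -> None:
            """Rank all cities that report the same standardized indicator."""
            rows = [r for r in records if r.indicator == indicator]
            for r in sorted(rows, key=lambda r: r.value, reverse=True):
                print(f"{r.city}: {r.value} {r.unit}")

        benchmark("Km of public transit per 100,000 population")

    Because every city reports against the same indicator definitions, the comparison step reduces to a simple filter and sort; without a shared standard, each pair of cities would first have to reconcile differing definitions and units.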

    ISO/DTR 37121- Inventory and Review of Existing Indicators on Sustainable Development and Resilience in Cities

    The second standard under ISO TC 268 WG 2 is ISO 37121, which defines additional indicators related to sustainable development and resilience in cities. Some of the indicators include: Smart Cities, Smart Grid, Economic Resilience, Green Buildings, Political Resilience, Protection of biodiversity, etc. The complete list can be viewed on the Resilient Cities website [24].

    Working Group 3: Terminology - There are no publicly available documents so far giving details about the status of this group's activities. The ISO Technical Committee 268 also includes Sub-Committee 1 (Smart Community Infrastructure) [25], comprising the following Working Groups: 1) WG 1 Infrastructure Metrics, and 2) WG 2 Smart Community Infrastructure.

    The key Smart Cities Standards developed by ISO under this are:

    • ISO 37151:2015 Smart community infrastructures — Principles and Requirements for Performance Metrics
      In 2015, a new ISO technical specification for smart cities, ISO/TS 37151:2015 (Principles and requirements for performance metrics), was released. The purpose of standardization in the field of smart community infrastructures, such as energy, water, transportation, waste, and information and communications technology (ICT), is to promote the international trade of community infrastructure products and services and to improve sustainability in communities by establishing harmonized product standards [26]. The metrics in this standard will support city and community managers in planning and measuring performance, and in comparing and selecting procurement proposals for products and services geared at improving community infrastructures [27].
      This Technical Specification gives principles and specifies requirements for the definition, identification, optimization and harmonization of community infrastructure performance metrics, and gives recommendations for analysis regarding the interoperability, safety and security of community infrastructures [28]. It supports the use of ISO 37120 [29].

    • ISO/TR 37150:2014 Smart Community Infrastructures - Review of Existing Activities Relevant to Metrics
      This standard addresses community infrastructures such as energy, water, transportation, waste and information and communications technology (ICT). Smart community infrastructures take into consideration environmental impact, economic efficiency and quality of life by using information and communications technology (ICT) and renewable energies to achieve integrated management and optimized control of infrastructures. Integrating smart community infrastructures for a community helps improve the lifestyles of its citizens by, for example: reducing costs, increasing mobility and accessibility, and reducing environmental pollutants.
      ISO/TR 37150 reviews relevant metrics for smart community infrastructures and provides stakeholders with a better understanding of the smart community infrastructures available around the world, to help promote international trade of community infrastructure products and give information about leading-edge technologies to improve sustainability in communities [30]. This standard, along with the above-mentioned standards [31], supports the multi-billion dollar smart cities technology industry.

    Several other ISO Working Groups developing standards applicable to smart and sustainable cities are listed on our website [32].

    The International Telecommunications Union (ITU)

    The ITU is another global body working on development of standards regarding smart cities.

    A study group was formed in 2015 to tackle standardization requirements for the Internet of Things (IoT), with an initial focus on IoT applications in smart cities, and to enable the coordinated development of IoT technologies, including machine-to-machine communications and ubiquitous sensor networks [33]. The group, titled “ITU-T Study Group 20: IoT and its applications, including smart cities and communities”, was established to develop standards that leverage IoT technologies to address urban-development challenges, as well as mechanisms for the interoperability of IoT applications and of the datasets employed by various vertically oriented industry sectors [34].

    ITU-T also ran a focus group on smart sustainable cities, which concluded its work in May 2015. It acted as an open platform for smart city stakeholders to exchange knowledge in the interest of identifying the standardized frameworks needed to support the integration of ICT services in smart cities. Its parent group, ITU-T Study Group 5, has agreed on the following definition of a Smart Sustainable City:
    "A smart sustainable city is an innovative city that uses information and communication technologies (ICTs) and other means to improve quality of life, efficiency of urban operation and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social, environmental as well as cultural aspects".

    UK - British Standards Institution

    Apart from the global standard-setting organisations, many countries have been looking at developing standards to address the growth of smart cities across the globe. In the UK, the British Standards Institution (BSI) has been commissioned by the UK Department for Business, Innovation and Skills (BIS) to conceive a Smart Cities Standards Strategy to identify the vectors of smart city development where standards are needed. The standards would be developed through a consensus-driven process under the BSI to ensure good practice is shared between all the actors. The BIS launched the City's Standards Institute to bring together cities and key industry leaders and innovators to work together in identifying the challenges facing cities, providing solutions to common problems and defining the future of smart city standards [35].

    • PAS 181 Smart city framework - Guide to establishing strategies for smart cities and communities establishes a good practice framework for city leaders to develop, agree and deliver smart city strategies that can help transform their city's ability to meet future challenges and deliver its goals. The smart city framework (SCF) does not intend to describe a one-size-fits-all model for the future of UK cities, but focuses on the enabling processes by which the innovative use of technology and data, together with organizational change, can help deliver the diverse visions for future UK cities in more efficient, effective and sustainable ways [36].

    • PD 8101 Smart cities - Guide to the role of the planning and development process gives guidance on how the planning and development of new developments can support smart city plans, and provides an overview of the key issues to be considered and prioritized. The document is for use by local authority planning and regeneration officers, to identify good practice in a UK context and the tools they could use to implement this good practice. It aims to enable new developments to be built in a way that will support smart city aspirations at minimal cost [37].

    • PAS 182 Smart city concept model - Guide to establishing a model for data establishes an interoperability and data-sharing framework between agencies in smart cities, for the following purposes:

      1. to enable information to be shared and understood between organizations and people at each level;
      2. to allow the derivation of data in each layer to be linked back to data in the previous layer;
      3. to allow the impact of a decision to be observed in operational data.

      The smart city concept model (SCCM) provides a framework that can normalize and classify information from many sources so that data sets can be discovered and combined to gain a better picture of the needs and behaviours of a city's citizens (residents and businesses), helping to identify issues and devise solutions. PAS 182 is aimed at organizations that provide services to communities in cities and manage the resulting data, as well as decision-makers and policy developers in cities [38].
    • PAS 180 Smart cities - Vocabulary helps build a strong foundation for future standardization and good practices by providing an industry-agreed understanding of smart city terms and definitions to be used in the UK. It provides a working definition of a smart city: “Smart Cities” is a term denoting the effective integration of physical, digital and human systems in the built environment to deliver a sustainable, prosperous and inclusive future for its citizens [39]. This aims to help improve communication and understanding of smart cities by providing a common language for developers, designers, manufacturers and clients. The standard also defines smart city concepts across the different infrastructure and system elements used across all service delivery channels, and is intended for city authorities and planners, buyers of smart city services and solutions [40], as well as product and service providers.

     

    Endnotes

    [1] See: http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf.

    [2] See: http://www.ibm.com/smarterplanet/in/en/sustainable_cities/ideas/.

    [3] See: http://www.thehindubusinessline.com/economy/smart-cities-mission-welcome-to-tomorrows-world/article8163690.ece.

    [4] See: http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf.

    [5] See: http://www.iso.org/iso/news.htm?refid=Ref2042.

    [6] See: http://www.livemint.com/Companies/5Twmf8dUutLsJceegZ7I9K/Nasscom-partners-Accenture-to-form-ICT-framework-for-smart-c.html.

    [7] See: http://www.nasscom.in/integrated-ict-and-geospatial-technologies-framework-100-smart-cities-mission.

    [8] See: http://www.cxotoday.com/story/nasscom-creates-framework-for-smart-cities-project/.

    [9] See: http://www.livemint.com/Companies/5Twmf8dUutLsJceegZ7I9K/Nasscom-partners-Accenture-to-form-ICT-framework-for-smart-c.html.

    [10] See: http://www.business-standard.com/article/economy-policy/in-a-first-bis-to-come-up-with-standards-for-smart-cities-115060400931_1.html.

    [11] See: http://www.longfinance.net/groups7/viewdiscussion/72-financing-financing-tomorrow-s-cities-how-standards-can-support-the-development-of-smart-cities.html?groupid=3.

    [12] See: http://www.iso.org/iso/iso_technical_committee?commid=656906.

    [13] See: http://cityminded.org/wp-content/uploads/2014/11/Patricia_McCarney_PDF.pdf.

    [14] See: http://www.iso.org/iso/news.htm?refid=Ref1877.

    [15] See: http://smartcitiescouncil.com/article/new-iso-standard-gives-cities-common-performance-yardstick.

    [16] See: http://smartcitiescouncil.com/article/dissecting-iso-37120-why-new-smart-city-standard-good-news-cities.

    [17] See: http://www.iso.org/iso/catalogue_detail?csnumber=62436.

    [18] See: http://www.cityindicators.org/.

    [19] See: http://www.dataforcities.org/.

    [20] See: http://news.dataforcities.org/2015/12/world-council-on-city-data-and-hatch.html.

    [21] See: http://news.dataforcities.org/2015/12/world-council-on-city-data-and-hatch.html.

    [22] See: http://www.iso.org/iso/37120_briefing_note.pdf.

    [23] See: http://www.dataforcities.org/wccd/.

    [24] See: http://resilient-cities.iclei.org/fileadmin/sites/resilient-cities/files/Webinar_Series/HERNANDEZ_-_ICLEI_Resilient_Cities_Webinar__FINAL_.pdf.

    [25] See: http://www.iso.org/iso/iso_technical_committee?commid=656967.

    [26] See: https://www.iso.org/obp/ui/#iso:std:iso:ts:37151:ed-1:v1:en.

    [27] See: http://www.iso.org/iso/home/news_index/news_archive/news.htm?refid=Ref2001&utm_medium=email&utm_campaign=ISO+Newsletter+November&utm_content=ISO+Newsletter+November+CID_4182720c31ca2e71fa93d7c1f1e66e2f&utm_source=Email%20marketing%20software&utm_term=Read%20more.

    [28] See: http://www.iso.org/iso/37120_briefing_note.pdf.

    [29] See: http://standardsforum.com/isots-37151-smart-cities-metrics/.

    [30] See: http://www.iso.org/iso/executive_summary_iso_37150.pdf.

    [31] See: http://standardsforum.com/isots-37151-smart-cities-metrics/.

    [32] See: http://cis-india.org/internet-governance/blog/database-on-big-data-and-smart-cities-international-standards.

    [33] See: http://smartcitiescouncil.com/article/itu-takes-internet-things-standards-smart-cities.

    [34] See: https://www.itu.int/net/pressoffice/press_releases/2015/22.aspx.

    [35] See: http://www.bsigroup.com/en-GB/smart-cities/.

    [36] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-181-smart-cities-framework/.

    [37] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PD-8101-smart-cities-planning-guidelines/.

    [38] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-182-smart-cities-data-concept-model/.

    [39] See: http://www.iso.org/iso/smart_cities_report-jtc1.pdf.

    [40] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-180-smart-cities-terminology/.

    Flaws in the UIDAI Process

    by Hans Varghese Mathews last modified Mar 06, 2016 10:40 AM
    The accuracy of biometric identification depends on the chance of a false positive: the probability that the identifiers of two persons will match. Individuals whose identifiers match might be termed duplicands. When very many people are to be identified success can be measured by the (low) proportion of duplicands. The Government of India is engaged upon biometrically identifying the entire population of India. An experiment performed at an early stage of the programme has allowed us to estimate the chance of a false positive: and from that to estimate the proportion of duplicands. For the current population of 1.2 billion the expected proportion of duplicands is 1/121, a ratio which is far too high.

    The article was published in Economic & Political Weekly, Journal » Vol. 51, Issue No. 9, 27 Feb, 2016.


    A legal challenge is being mounted in the Supreme Court, currently, to the programme of biometric identification that the Unique Identification Authority of India (UIDAI) is engaged upon: an identification preliminary and a requisite to providing citizens with “Aadhaar numbers” that can serve them as “unique identifiers” in their transactions with the state. What follows will recount an assessment of their chances of success. We shall be using data that was available to the UIDAI and shall employ only elementary ways of calculation. It should be recorded immediately that an earlier technical paper by the author (Mathews 2013) has been of some use to the plaintiffs, and reference will be made to that in due course.

    The Aadhaar numbers themselves may or may not derive, in some way, from the biometrics in question; the question is not material here. For our purposes a biometric is a numerical representation of some organic feature: like the iris or the retina, for instance, or the inside of a finger, or the hand taken whole even. We shall consider them in some more detail later. The UIDAI is using fingerprints and iris images to generate a combination of biometrics for each individual. This paper bears on the accuracy of the composite biometric identifier. How well those composites will distinguish between individuals can be assessed, actually, using the results of an experiment conducted by the UIDAI itself in the very early stages of its operation; and our contention is that, from those results themselves, the UIDAI should have been able to estimate how many individuals would have their biometric identifiers matching those of some other person, under the best of circumstances even, when any good part of the population has been identified.
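
    As a rough illustration of the kind of estimate involved (this is not the paper's derivation, which is set out in the article and its technical supplement), suppose each pairwise comparison of two people's biometric identifiers has a small false-match probability p. Under an independence assumption, a person compared against the other N - 1 enrolled people fails to match all of them with probability (1 - p)^(N - 1), so the expected proportion of duplicands is about 1 - (1 - p)^(N - 1). The sketch below evaluates this; the value of p used is hypothetical, chosen only so that the toy calculation reproduces roughly the 1/121 proportion quoted above, and is not the estimate obtained in the paper.

        # Back-of-the-envelope sketch, not the paper's method.
        import math

        def duplicand_proportion(p: float, n: int) -> float:
            """Expected fraction of people whose identifier falsely matches at
            least one other person, assuming independent pairwise comparisons
            with per-pair false-match probability p."""
            # 1 - (1 - p)^(n - 1), computed via log1p to avoid precision loss
            return 1.0 - math.exp((n - 1) * math.log1p(-p))

        n = 1_200_000_000   # population being enrolled
        p = 6.9e-12         # hypothetical per-pair false-match probability
        q = duplicand_proportion(p, n)
        print(f"expected duplicand proportion ~ 1/{1 / q:.0f}")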


    Read the full article here.


    The author thanks Nico Temme of the Centrum Wiskunde & Informatica in The Netherlands for the bounds he derived on the chance of a false positive. He is particularly grateful to the anonymous referee of this journal who, through two rounds of comment, has very much improved the presentation of the results. A technical supplement to this paper is placed on the EPW website along with this paper.

    Flaws_in_the_UIDAI_Process_0.pdf

    by Prasad Krishna last modified Mar 06, 2016 10:37 AM

    PDF document icon Flaws_in_the_UIDAI_Process_0.pdf — PDF document, 731 kB (749514 bytes)

    Aadhaar Bill fails to incorporate suggestions by the Standing Committee

    by Amber Sinha last modified Mar 10, 2016 03:58 PM
    In 2011, a standing committee report led by Yashwant Sinha had been scathing in its indictment of the Aadhaar Bill introduced by the UPA government. Five years later, the NDA government has introduced a new bill which is a rehash of the same. I look at the concerns raised by the committee report, none of which have been addressed by the new bill.

    The article was published by The Wire on March 10, 2016

    In December 2010, the UPA Government introduced the National Identification Authority of India Bill, 2010 in Parliament. It was subsequently referred to the Standing Committee on Finance by the Speaker of the Lok Sabha under Rule 331E of the Rules of Procedure and Conduct of Business in Lok Sabha. This Committee, headed by BJP leader Yashwant Sinha, took evidence from the Minister of Planning and the UIDAI on the government side, and also sought the views of parties such as the National Human Rights Commission, the Indian Banks Association and researchers like Dr Reetika Khera and Dr Usha Ramanathan. In 2011, having heard the various parties and considered the concerns and apprehensions about the UID scheme, the Committee deemed the bill unacceptable and suggested a reconsideration of the UID scheme as well as the draft legislation.

    The Aadhaar programme has so far been implemented under the Unique Identification Authority of India, a Central Government agency created through an executive order. The programme has been shrouded in controversy over issues of privacy and security, resulting in a Public Interest Litigation filed by Justice (Retd.) K.S. Puttaswamy in the Supreme Court. While the BJP had criticised the project as well as the draft legislation when it was in opposition, once it came to power, and particularly after it launched various welfare schemes like Digital India and the Jan Dhan Yojna, it decided to continue with it and use Aadhaar as the identification technology for these projects. In the last year, the Supreme Court has passed orders prohibiting the government from making Aadhaar mandatory for availing services. One of the questions that the government has had to answer, both inside and outside the court, is the lack of a legislative mandate for a project of this size. About five years later, the new BJP-led government has come back with a rehash of the same old draft, and no comments made by the standing committee have been taken into account.

    The Standing Committee on the old bill had taken great exception to the continued collection of data and issuance of Aadhaar numbers while the Bill was pending in Parliament. The report said that implementing the provisions of the Bill and continuing to incur expenditure from the exchequer was a circumvention of the prerogative powers of Parliament. Nevertheless, the project has continued without interruption since its inception in 2009. I am listing below some of the issues that the Committee identified with the UID project and the draft legislation, none of which have been addressed in the current Bill.

    One of the primary arguments made by proponents of Aadhaar has been that it would be useful in providing services to marginalized sections of society who currently do not have identification documents and consequently are not able to receive state-sponsored services, benefits and subsidies. The report points out that the project would not be able to achieve this, as no statistical data on the marginalized sections of society is being used by the UIDAI to provide coverage to them. The introducer system, which was supposed to provide Aadhaar numbers to those without any form of identification, has been used to enroll only 0.03% of the total number of people registered. Further, the Biometrics Standards Committee of the UIDAI has itself acknowledged the issues caused by the high number of manual labourers in India, which would lead to sub-optimal fingerprint scans. A report by 4G Identity Solutions estimates that while in any population approximately 5% of people have unreadable fingerprints, in India this could lead to a failure to enroll up to 15% of the population. In this manner, the project could actually end up excluding more people.

    The Report also pointed to the lack of a cost-benefit analysis done before going ahead with a scheme of this scale. It makes a reference to the report by the London School of Economics on the UK Identity Project, which was shelved due to a) the huge costs involved in the project, b) the complexity of the exercise and the unavailability of reliable, safe and tested technology, c) risks to the security and safety of registrants, d) security measures at a scale that would result in substantially higher implementation and operational costs, and e) extreme dangers to the rights of registrants and the public interest. The Committee Report insisted that such global experiences remained relevant to the UID project and needed to be considered. However, the new Bill has not been drafted with a view to addressing any of these issues.

    The Committee came down heavily on the irregularities in data collection by the UIDAI. It raised doubts about the ability of the Registrars to effectively verify registrants, and noted the lack of any security audit mechanism that could identify issues in enrolment. Pointing to news reports about irregularities in the process followed by the Registrars appointed by the UIDAI, the Committee deemed the MoUs signed between the UIDAI and the Registrars toothless. The involvement of private parties has already been under question, with concerns raised over the lack of appropriate safeguards in the contracts with private contractors.

    Perhaps the most significant observation of the Committee was that any scheme that facilitates the creation of such a massive database of personal information of the people of the country, and its linkage with other databases, should be preceded by a comprehensive data protection law. By stating this, the Committee acknowledged that in the absence of a privacy law governing the collection, use and storage of personal data, the UID project will lead to abuse, surveillance and profiling of individuals. It makes a reference to the Privacy Bill, which is still only at the draft stage. The current data protection framework in the Section 43A rules under the Information Technology Act, 2000 is woefully inadequate and far too limited in its scope. While there are some protections built into Chapter VI of the new bill, these are nowhere near as comprehensive as the ones articulated in the Privacy Bill. Additionally, these protections are subject to broad exceptions which could significantly dilute their impact.

     

    A comparison of the 2016 Aadhaar Bill, and the 2010 NIDAI Bill

    by Vanya Rakesh — last modified Mar 09, 2016 04:08 AM
    This blog post does a clause-by-clause comparison of the provisions of the National Identification Authority of India Bill, 2010 and the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016
    • Title

     

    2010 Bill: The Bill was titled as the National Identification Authority of India Bill, 2010.

    2016 Bill : The Bill has been titled as the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016.

     

    • Purpose/Object Clause

     

    2010 Bill: The purpose of the Bill was stated as providing for the establishment of the National Identification Authority of India to issue identification numbers to residents of India, as well as certain other classes of individuals, to facilitate access to the benefits and services to which they are entitled.

    2016 Bill : The purpose of this Bill has been stated to ensure targeted delivery of subsidies, benefits and services to residents of India in an efficient and transparent manner by assigning unique identity numbers to such individuals.

     
    • Definitions

     
    1. 2010 Bill: “Authentication” was defined as the process in which the Aadhaar number, along with other attributes (including biometrics) are submitted to the Central Identities Data Repository for verification, done on the basis of information, data or documents available with the Repository.

      2016 Bill : “Authentication” has been defined as the process by which the Aadhaar number, along with demographic or biometric information of an individual is submitted to the Central Identities Data Repository for the purpose of verification, done on the basis of the correctness of (or lack of) information available with it.
     
    2. 2010 Bill: “Authentication Record” was not defined in the previous Bill.

      2016 Bill : “Authentication Record” has been defined under clause 2(d) as the record of the time of authentication, the identity of the requesting entity and the response provided by the Authority for this purpose.

     

    3. 2010 Bill: “Authority” was defined under clause 2(d) as the National Identification Authority of India established under the provisions of the Bill.

      2016 Bill : “Authority” has been defined under clause 2(e) as the Unique Identification Authority of India established under the provisions of the Bill.

     

    4. 2010 Bill: “Benefit” was not defined in the previous Bill.

      2016 Bill : “Benefit” has been defined under clause 2(f) as any advantage, gift, reward, relief, or payment (either in cash or kind), or such other benefits, which is provided to an individual or a group of individuals, as notified by the Central Government.

     

    5. 2010 Bill: “Biometric Information” was defined under clause 2(e) as a set of biological attributes of an individual as may be specified by regulations.

      2016 Bill : “Biometric Information” has been defined under clause 2(g) as biological attributes of an individual, like photograph, fingerprint, iris scan, or other such biological attributes as may be specified by regulations.

     

    6. 2010 Bill: “Core Biometric Information” was not defined in the previous Bill.

      2016 Bill : “Core Biometric Information” has been defined under clause 2(j) as a biological attribute of an individual, like fingerprint, iris scan, or such other biological attribute as may be specified by regulations.

     

    7. 2010 Bill: “Demographic Information” was defined under clause 2(h) as information specified in the regulations for the purpose of issuing an Aadhaar number, like information relating to the name, age, gender and address of an individual (other than race, religion, caste, tribe, ethnicity, language, income or health), and such other information.

      2016 Bill : “Demographic Information” has been defined under clause 2(k) as information of an individual as may be specified by regulations for the purpose of issuing an Aadhaar number like information relating to the name, date of birth, address and other relevant information, excluding race, religion, caste, tribe, ethnicity, language, records of entitlement, income or medical history of an individual.

     

    8. 2010 Bill: “Enrolling Agency” was defined under clause 2(i) as an agency appointed by the Authority or the Registrars for collecting information under the Act.

      2016 Bill : “Enrolling Agency” has been defined under clause 2(l) as an agency appointed by the Authority or a Registrar for collecting demographic and biometric information of individuals under this Act.

     

    1. 2010 Bill: “Member” was defined under clause 2(l) to include the Chairperson and a part-time Member of the Authority appointed under the provisions of the Bill.

      2016 Bill : “Member” has been defined under clause 2(o)  to include the Chairperson and Member of the Authority appointed under the provisions of the Bill.

     

    1. 2010 Bill: “Records of Entitlement” was not defined under the previous Bill.

      2016 Bill :  “Records of Entitlement” has been defined under clause 2(r) as the records of benefits, subsidies or services provided to, or availed by, any individual under any programme.

     

    1. 2010 Bill: “Requesting Entity” was not defined under the previous Bill.

      2016 Bill: “Requesting Entity” has been defined under clause 2(u) as an agency or person that submits information of an individual, comprising the Aadhaar number and demographic or biometric information, to the Central Identities Data Repository for the purpose of authentication.

     

    1. 2010 Bill: “Resident” was defined under clause 2(q) as an individual usually residing in a village, rural area, town, ward, demarcated area (demarcated by the Registrar General of Citizen Registration) within a ward in a town or urban area in India.

      2016 Bill : “Resident” has been defined under clause 2(v) as an individual who has resided in India for a period or periods amounting in all to one hundred and eighty-two days or more in the twelve months immediately preceding the date of application for enrolment.

     

    1. 2010 Bill:  “Review Committee” was defined under clause 2(r) as the Identification Review Committee constituted under the provisions of the Bill.

      2016 Bill : “Review Committee” has not been defined under the Bill.

     

    1. 2010 Bill: “Service” was not defined in the previous Bill.

      2016 Bill : “Service” has been defined under clause 2 (w) as any provision, facility, utility or any other assistance provided in any form to an individual or a group of individuals as may be notified by the Central Government.

     

    1. 2010 Bill: “Subsidy” was not defined in the previous Bill.

      2016 Bill : “Subsidy” has been defined under clause 2(x) as any form of aid, support, grant, subvention, or appropriation (either in cash or kind), as may be notified by the Central Government, given to an individual or a group of individuals.

     

    • Enrolment

     

    1. Aadhaar Numbers

    2016 Bill: Under clause 3(2) of the Bill, it is stated that at the time of enrolment, the enrolling agency shall inform the individual undergoing enrolment of the following details:

    (a) the manner in which the information so collected shall be used;

    (b) the nature of recipients with whom the information is intended to be shared during authentication; and

    (c) the existence of a right to access information, the procedure for making such requests for access, and details of the person/department in-charge to whom such requests can be made.

     

    1. Properties of Aadhaar Number 

    2010 Bill : Clause 4 (3) stated that subject to authentication, the Aadhaar number shall be accepted as a proof of identity of the Aadhaar number holder.

    2016 Bill: Clause 4 (3) states that subject to authentication, the Aadhaar number (either in physical or electronic form) shall be accepted as a proof of identity of the Aadhaar number holder.

    The Explanation under this clause states that for the purpose of this provision, “electronic form” shall have the same meaning as assigned to it in section 2 (1) (r) of the Information Technology Act, 2000.

     

    • Authentication

     

    1. Proof of Aadhaar number necessary for receipt of certain subsidies, benefits and services, etc. 

    2016 Bill: Under clause 7 of the Bill it is provided that, for the purpose of establishing an individual's identity as a condition for receipt of a subsidy, benefit or service, the Central or State Government (as the case may be) may require that such individual undergo authentication, or furnish proof of possession of an Aadhaar number. In case an Aadhaar number has not been assigned to an individual, such individual must make an application for enrolment.

    The proviso states that if an Aadhaar number is not assigned to an individual, the individual shall be offered alternate and viable means of identification for delivery of the subsidy, benefit or service.

     

    1. Authentication of Aadhaar number 

    2010 Bill: Clause 5 of the Bill stated that authentication of the Aadhaar number shall be performed by the Authority, in relation to the holders’ biometric and demographic information, subject to such conditions and on payment of the prescribed fees. Also, it was provided that the Authority shall respond to an authentication query with a positive, negative or other appropriate response (excluding any demographic and biometric information).

    2016 Bill : The Bill states that authentication of the Aadhaar number shall be performed by the Authority, in relation to the holders’ biometric and demographic information, subject to such conditions and on payment of the prescribed fees.

    Clause 8 (2) provides that unless otherwise provided in the Act, the requesting entity shall— 

    1. For the purpose of authentication, obtain the consent of an individual before collecting his identity information, and

    2. ensure that the identity information of an individual is only used for submission to the Central Identities Data Repository for authentication.

    Clause 8 (3) provides that the following details shall be informed by the requesting entity to the individual submitting his identity information for the purpose of authentication: 

       a. the nature of information that may be shared upon authentication;

       b. the uses to which the information received during authentication may be put by the requesting entity; and

       c. alternatives to submission of identity information to the requesting entity.

    Clause 8(4) states that the Authority shall respond to an authentication query with a positive, negative or other appropriate response (excluding any core biometric information).

     

    1. Prohibition on requiring certain information. 

    2010 Bill: Clause 9 of the Bill prohibited the Authority from requiring an individual to give information pertaining to his race, religion, caste, tribe, ethnicity, language, income or health.

    2016 Bill : This provision has been removed from the 2016 Bill.

     

    • Unique Identification Authority of India

     

    1. Establishment of Authority 

    2010 Bill: Clause 11(1) of the Bill stated that the Central Government shall establish an Authority called the National Identification Authority of India, to exercise the powers conferred on it and to perform the functions assigned to it under this Act. Also, clause 11(3) provided that the head office of the Authority shall be in the National Capital Region, referred to in section 2(f) of the National Capital Region Planning Board Act, 1985.

    2016 Bill: Clause 11(1) of the Bill states that the Central Government shall establish an Authority called the Unique Identification Authority of India, responsible for the processes of enrolment and authentication and for performing such other functions assigned to it under this Act. Also, clause 11(3) provides that the head office of the Authority shall be in New Delhi.

     

    1. Composition of Authority

    2010 Bill: Clause 12 provided that the Authority shall consist of a Chairperson and two part-time Members, to be appointed by the Central Government.  

    2016 Bill: Clause 12 of the Bill provides that the Authority shall consist of a Chairperson (appointed on a part-time or full-time basis), two part-time Members, and the chief executive officer (who shall be the Member-Secretary of the Authority), to be appointed by the Central Government.

     

    1. Qualifications for appointment of Chairperson and Members of Authority

    2010 Bill: Clause 13 provided that the Chairperson and Members of the Authority shall be persons of ability, integrity and outstanding calibre having experience and knowledge in the matters relating to technology, governance, law, development, economics, finance, management, public affairs or administration. 

    2016 Bill : Clause 13 provides that the Chairperson and Members of the Authority shall be persons of ability and integrity having experience and knowledge of at least ten years in matters relating to technology, governance, law, development, economics, finance, management, public affairs or administration.

     

    1. Term of office and other conditions of service of Chairperson.

    2010 Bill: The proviso to Clause 14 (1) stated that the Chairperson of the Unique Identification Authority of India appointed before the commencement of this Act by notification A-43011/02/2009-Admn.I (Vol.II), dated the 2nd July, 2009, shall continue as the Chairperson of the Authority for the term for which he had been appointed. Clause 14(4) prohibited the Chairperson from holding any other office during the period of holding his office in the Authority. The proviso to clause 14 (5) stated that the salary, allowances and other terms and conditions of service of the Chairperson shall not be varied to his disadvantage after his appointment.

    2016 Bill : These provisions have not been included in the Bill.

     

    1. Removal of Chairperson and Members

    2010 Bill:  Clause 15 (2) stated that unless a reasonable opportunity of being heard has been duly provided, the Chairperson or a Member shall not be removed under clauses (d) or (e) of sub-section (1).

    2016 Bill: Clause 15 (2) states that unless a reasonable opportunity of being heard has been duly provided, the Chairperson or a Member shall not be removed under clauses (b), (d) or (e) of sub-section (1).

     

    1. Restrictions on Chairperson or Members on employment after cessation of office

    2010 Bill: Clause 16 (a) provided that the Chairperson or a member, who ceases to hold office, shall not accept any employment in, or connected with the management or administration of, any person which has been associated with any work under the Act, for a period of three years from the date on which they cease to hold office, without previous approval of the Central Government. 

    The proviso to this clause stated that this provision shall not apply to any employment under the Central Government, State Government, local authority, any statutory authority or any corporation established by or under any Central, State or provincial Act or a Government Company, as defined in section 617 of the Companies Act, 1956.

    2016 Bill: Clause 16 (a) provides that the Chairperson or a member, who ceases to hold office, shall not accept any employment in, or connected with the management of any organisation, company or any other entity which has been associated with any work done or contracted out by the Authority (whether directly or indirectly), during his tenure as Chairperson or Member, as the case may be, for a period of three years from the date on which he ceases to hold office, without previous approval of the Central Government. 

    The proviso to this clause states that this provision shall not apply to any employment under the Central Government, State Government, local authority, any statutory authority or any corporation established by or under any Central, State or provincial Act or a Government Company, as defined in clause (45) of section 2 of the Companies Act, 2013.

     

    1. Functions of Chairperson

    2010 Bill: Clause 17 of the Bill provided that the Chairperson shall have powers of general superintendence, direction in the conduct of the affairs of the Authority, preside over the meetings of the Authority, and exercise and discharge such other powers and functions of the Authority as prescribed, without prejudice to any of the provisions of the Act. 

    2016 Bill : Clause 17 of the Bill states that the Chairperson shall preside over the meetings of the Authority, and exercise and discharge such other powers and functions of the Authority as prescribed, without prejudice to any of the provisions of the Act.

     

    1. Chief Executive Officer

    2010 Bill: Clause 20 (1) of the Bill stated that a chief executive officer, not below the rank of the Additional Secretary to the Government of India, who shall be the Member-Secretary of the Authority, shall be appointed by the Central Government.

    2016 Bill: Clause 18 (1) states that a chief executive officer, not below the rank of the Additional Secretary to the Government of India, shall be appointed by the Central Government. In the list of its responsibilities, clause 18 (2) (e) additionally provides for performing such other functions, or exercising such other powers, as may be specified by regulations.

     

    1. Meetings 

    2010 Bill: Clause 18 (4) provided that all decisions of the Authority shall be authenticated by the signature of the Chairperson or any other Member who is authorised by the Authority for this purpose.

    2016 Bill: Clause 19 (4) provides that all decisions of the Authority shall be signed by the Chairperson, any other Member or the Member-Secretary authorised by the Authority.

     

    1. Vacancies, etc., not to invalidate proceedings of Authority

    2010 Bill: Clause 19 (b) of the Bill stated that no act or proceeding of the Authority shall be invalid merely by reason of any defect in the appointment of a person as a Member of the Authority.

    2016 Bill: Clause 20 (b) of the Bill states that no act or proceeding of the Authority shall be invalid merely by reason of any defect in the appointment of a person as the Chairperson or a Member of the Authority.

     

    1. Powers and functions of Authority

     Clause 23 (2) (k)

    2010 Bill: Clause 23 (2) (k) provided that the powers and functions of the Authority may include sharing the information of Aadhaar number holders, with their written consent, with such agencies engaged in delivery of public benefits and public services as the Authority may by order direct, in a manner as specified by regulations. 

    2016 Bill : Clause 23 (2) (k) provides that the powers and functions of the Authority may include sharing the information of Aadhaar number holders, subject to the provisions of this Act.

     

    Clause 23 (2) (r) 

    2010 Bill : Clause 23 (2) (r) stated that the powers and functions of the Authority may include specifying, by regulation, the policies and practices for Registrars, enrolling agencies and other service providers.

    2016 Bill : Clause 23 (2) (r) states that the powers and functions of the Authority may include evolving of, and specifying, by regulation, the policies and practices for Registrars, enrolling agencies and other service providers.

     

    • Grants, Accounts and Audit and Annual Report

     

    2010 Bill: Clause 25 provided that the fees or revenue collected by the Authority shall be credited to the Consolidated Fund of India and the entire amount so credited be transferred to the Authority.

    2016 Bill: Clause 25 states that the fees or revenue collected by the Authority shall be credited to the Consolidated Fund of India.

     

    • Identity Review Committee

     

    2010 Bill: Clause 28 of the Bill provided for establishment of the Identity Review Committee, consisting of three members (including the chairperson) who are persons of eminence, ability, integrity and having knowledge and experience in the fields of technology, law, administration and governance, social service, journalism, management or social sciences. Clause 29 of the Bill enlisted several functions to be undertaken by the Review Committee so constituted.

    2016 Bill: These provisions have been removed from the Bill.

     

    • Protection of Information

     

    1. Security and confidentiality of information

    2010 Bill: Clause 30 (2) of the Bill stated that the Authority shall take measures (including security safeguards) to ensure security and protection of information in possession/control of the Authority (including information stored in the Central Identities Data Repository), against any loss, unauthorised access, use or unauthorised disclosure of the same.

    2016 Bill: Clause 28 (3) states that the Authority shall take measures to ensure security and protection of information in possession/control of the Authority (including information stored in the Central Identities Data Repository), against access, use or disclosure not permitted under this Act or regulations made thereunder, and against accidental or intentional destruction, loss or damage.

    A new provision, clause 28(4), states that the Authority shall undertake the following additional measures for protection of information:

    (a) adopt and implement appropriate technical and organisational security measures,

    (b) ensure that the agencies, consultants, advisors or other persons appointed or engaged for performing any function of the Authority under this Act, have in place appropriate technical and organisational security measures for the information, and

    (c) ensure that the agreements or arrangements entered into with such agencies, consultants, advisors or other persons, impose obligations equivalent to those imposed on the Authority under this Act, and require such agencies, consultants, advisors and other persons to act only on instructions from the Authority.

     

    1. Restriction on sharing information 

    2010 Bill: The Bill did not provide for restrictions on sharing of information.

    2016 Bill: This new provision under Clause 29 states that no core biometric information, collected or created under this Act, shall be—

    (a) shared with anyone for any reason whatsoever; or

    (b) used for any purpose other than generation of Aadhaar numbers and authentication under this Act.

    Also, identity information other than core biometric information, collected or created under this Act, may be shared only in accordance with the provisions of this Act and as specified by regulations.

    Clause 29 (3) prohibits the use of identity information available with a requesting entity for any purpose other than that specified to the individual at the time of submitting identity information for authentication, and its further disclosure, except with the prior consent of the individual to whom such information relates.

    Clause 29 (4) prohibits publication, display or public posting of the Aadhaar number or core biometric information collected or created under this Act in respect of an Aadhaar number holder, except for such purposes as may be specified by regulations.

     

    1. Biometric information deemed to be sensitive personal information.

     2010 Bill: The Bill did not contain provisions stating that the biometric information shall be deemed to be sensitive personal information for the purpose of this Act. 

    2016 Bill: Clause 30 states that the biometric information collected and stored in electronic form shall be deemed to be an “electronic record” and “sensitive personal data or information”, and the provisions contained in the Information Technology Act, 2000 and the rules made thereunder shall apply to such information, to the extent not in derogation of the provisions of this Act.

    The Explanation defines:

    (a) “electronic form” - as defined under section 2 (1) (r) of the Information Technology Act, 2000;

    (b) “electronic record” - as defined under section 2 (1) (t) of the Information Technology Act, 2000; and

    (c) “sensitive personal data or information” - as defined under clause (iii) of the Explanation to section 43A of the Information Technology Act, 2000.

     


    1. Alteration of demographic information or biometric information. 

    2016 Bill: Clause 31 (4) prohibits alteration of identity information in the Central Identities Data Repository, except in the manner provided in this Act or regulations made thereunder.

     

    1. Access to own information and records of requests for authentication.

    2016 Bill : Clause 32 (3) provides that the Authority shall not collect, keep or maintain any information about the purpose of authentication, either by itself or through any entity under its control.

     

    1. Disclosure of information in certain cases 

    2010 Bill: The provision creates an exception under Clause 33 for the purposes of disclosure of information in certain cases like disclosure (including identity information or details of authentication) made pursuant to an order of a competent court; or disclosure (including identity information) made in the interests of national security in pursuance of directions issued by an officer(s) not below the rank of Joint Secretary or equivalent in the Central Government specifically authorised in this behalf by an order of the Central Government.

    2016 Bill: The provision creates an exception under Clause 33 for the purposes of disclosure of information in certain cases, such as disclosure (including identity information or details of authentication) made pursuant to an order of a court not inferior to that of a District Judge (provided that the court order shall be made only after giving an opportunity of hearing to the Authority); or disclosure (including identity information or authentication records) made in the interests of national security in pursuance of directions issued by an officer not below the rank of Joint Secretary to the Government of India, authorised in this behalf by an order of the Central Government.

    The proviso to Clause 33 (2) states that every direction so issued shall be reviewed by an Oversight Committee consisting of the Cabinet Secretary and the Secretaries to the Government of India in the Department of Legal Affairs and the Department of Electronics and Information Technology, before it takes effect.

    The second proviso states that any such direction so issued shall be valid for a period of three months from the date of its issue, which may be extended for a further period of three months after the review by the Oversight Committee.

     

    • Offences and Penalties

     

    1. Penalty for impersonation at time of enrolment. 

    2010 Bill: The penalty for impersonation was prescribed under Clause 34 as imprisonment for a term which may extend to three years and a fine which may extend to ten thousand rupees.

    2016 Bill: The penalty for impersonation is prescribed under Clause 34 as imprisonment for a term which may extend to three years, or a fine which may extend to ten thousand rupees, or both.

     

    1. Penalty for unauthorised access to the Central Identities Data Repository

    2010 Bill: Clause 38 (g) stated that any person not authorised by the Authority who provides any assistance to any person to do any of the acts mentioned under sub-clauses (a)-(f) shall be punishable. Anyone not authorised by the Authority who performs any activity listed under (a)-(i) shall be punishable with imprisonment for a term which may extend to three years and shall also be liable to a fine which shall not be less than one crore rupees.

    2016 Bill: Clause 38 (g) states that any person not authorised by the Authority who reveals any information in contravention of section 28 (5), or shares, uses or displays information in contravention of section 29, or assists any person in any of the acts mentioned under sub-clauses (a)-(f), shall be punishable. Anyone not authorised by the Authority who performs any activity listed under (a)-(i) shall be punishable with imprisonment for a term which may extend to three years and shall also be liable to a fine which shall not be less than ten lakh rupees. Additionally, the Explanation states that the expression “computer source code” shall have the meaning assigned to it in the Explanation to section 65 of the Information Technology Act, 2000.

     

    1. Penalty for unauthorised use by requesting entity and noncompliance with intimation requirements

    2010 Bill: Clause 40 of the Bill prescribed penalty for manipulating biometric information and stated that a person who gives/attempts to give any biometric information which does not pertain to him for the purpose of getting an Aadhaar number, authentication or updating his information, shall be punishable with imprisonment for a term which may extend to three years or with a fine which may extend to ten thousand rupees or with both.

    2016 Bill: Clause 40 prescribes a penalty for a requesting entity that uses the identity information of an individual in contravention of clause 8(3): such an entity shall be punishable with imprisonment which may extend to three years or with a fine which may extend to ten thousand rupees or, in the case of a company, with a fine which may extend to one lakh rupees, or with both. Clause 41 of the Bill states that whoever, being an enrolling agency or a requesting entity, fails to comply with the requirements of clause 3(2) (the details to be informed to the individual undergoing enrolment) or clause 8(3) (the details to be informed to the individual submitting identity information for authentication), shall be punishable with imprisonment which may extend to one year, or with a fine which may extend to ten thousand rupees or, in the case of a company, with a fine which may extend to one lakh rupees, or with both.

     

    1. General Penalty

    2010 Bill: For an offence committed under the Act or rules made thereunder, for which no specific penalty was provided, the penalty was prescribed as imprisonment for a term which may extend to three years, or fine as prescribed.

    2016 Bill: For an offence committed under the Act or rules made thereunder, for which no specific penalty is provided, the penalty is prescribed as imprisonment for a term which may extend to one year, or fine as prescribed.

     

    • Miscellaneous

     

    1. Power of Central Government to supersede Authority.

    2010 Bill: Clause 47(1)(c) stated that if at any time the Central Government is of the opinion that circumstances exist which render it necessary in the public interest to supersede the Authority, it may do so in the manner prescribed under this provision.

    2016 Bill: Clause 48(1)(c) states that if at any time the Central Government is of the opinion that a public emergency exists, it may supersede the Authority in the manner prescribed under this provision.

     

    1. Power to remove difficulties.

    2010 Bill: The proviso to Clause 56(1) stated that no order by the Central Government, which may appear necessary to remove a difficulty in giving effect to the provisions of this Act, shall be made under this section after the expiry of two years from the commencement of this Act.

    2016 Bill: The proviso to Clause 58(1) states that no order by the Central Government, which may appear necessary to remove a difficulty in giving effect to the provisions of this Act, shall be made under this section after the expiry of three years from the commencement of this Act.

     

    1. Savings

    2010 Bill: Clause 57 provided that any action taken by the Central Government under the Resolution of the Government of India, Planning Commission bearing notification number A-43011/02/ 2009-Admin.I, dated the 28th January, 2009, shall be deemed to have been done or taken under the corresponding provisions of this Act.

    2016 Bill: Clause 59 states that any action taken by the Central Government under the Resolution of the Government of India, Planning Commission bearing notification number A-43011/02/2009-Admin.I, dated the 28th January, 2009, or by the Department of Electronics and Information Technology under the Cabinet Secretariat Notification bearing notification number S.O. 2492(E), dated the 12th September, 2015, as the case may be, shall be deemed to have been validly done or taken under this Act.

     

    • Statement of Objects and Reasons

     

    2010 Bill: The Bill stated that the Central Government decided to issue unique identification numbers to all residents in India, which involves collection of demographic as well as biometric information. The Unique Identification Authority of India was constituted as an executive body by the Government, vide its notification dated the 28th January, 2009. The Bill addressed and enlisted several issues with the issuance of unique identification numbers which should be addressed by law and attract penalties, such as security and confidentiality of information, imposition of an obligation of disclosure of the information so collected in certain cases, impersonation at the time of enrolment, unauthorised access to the Central Identities Data Repository, manipulation of biometric information, investigation of certain acts constituting offences, and unauthorised disclosure of the information collected for the purposes of issuance of the numbers. To make the said Authority a statutory one, the National Identification Authority of India Bill, 2010 was proposed to establish the National Identification Authority of India to issue identification numbers and authenticate the Aadhaar number, so as to facilitate access to benefits and services to which such individuals are entitled, and for matters connected therewith or incidental thereto. Apart from the above-mentioned purposes, the National Identification Authority of India Bill, 2010 also sought to provide for the Authority to exercise the powers and discharge the functions so prescribed, to ensure that the Authority does not require any individual to give information pertaining to his race, religion, caste, tribe, ethnicity, language, income or health, to engage entities to establish and maintain the Central Identities Data Repository and to perform any other functions as may be specified by regulations, to constitute the Identity Review Committee, and to take measures to ensure that the information in the possession or control of the Authority is secured and protected against any loss, unauthorised access or use, or unauthorised disclosure thereof.

    2016 Bill: The Bill states that correct identification of targeted beneficiaries for delivery of subsidies, services, grants, benefits, etc. has become a challenge for the Government and has proved to be a major hindrance to successful implementation of these programmes. In the absence of a credible system to authenticate the identity of beneficiaries, it is difficult to ensure that subsidies, benefits and services reach the intended beneficiaries. The Unique Identification Authority of India was established by a resolution of the Government of India, Planning Commission, vide notification number A-43011/02/2009-Admin.I, dated the 28th January, 2009, to lay down policies and implement the Unique Identification Scheme of the Government, under which residents of India were to be provided unique identity numbers. Upon successful authentication, this number would serve as proof of identity for identification of beneficiaries for transfer of benefits, subsidies, services and other purposes. With increased use of the Aadhaar number, steps need to be taken to ensure the security of such information, and offences pertaining to certain unlawful actions need to be created. It has been felt that the processes of enrolment, authentication, security, confidentiality and use of Aadhaar-related information must be made statutory. For this purpose, the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 seeks to provide for issuance of Aadhaar numbers to individuals on provision of their demographic and biometric information to the Unique Identification Authority of India; requiring of Aadhaar numbers for identifying an individual for delivery of benefits, subsidies and services; authentication of the Aadhaar number; establishment of the Unique Identification Authority of India; maintenance and updating of the information of individuals in the Central Identities Data Repository; measures pertaining to security, privacy and confidentiality of information in the possession or control of the Authority, including information stored in the Central Identities Data Repository; and offences and penalties for contravention of the relevant statutory provisions.

     

     

    Aadhaar Bill 2016 & NIAI Bill 2010 - Comparing the Texts

    by Sumandro Chattapadhyay last modified Mar 09, 2016 11:25 AM
    This is a quick comparison of the texts of the Aadhaar Bill 2016 and the National Identification Authority of India Bill 2010. The new sections in the former are highlighted, and the deleted sections (that were part of the latter) are struck out.

    The New Aadhaar Bill in Plain English

    by Amber Sinha, Vanya Rakesh and Vipul Kharbanda — last modified Mar 11, 2016 04:41 AM
    We have put together a plain English version of the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016.

    The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016

     

    Chapter I. PRELIMINARY

     

    Section 1

    1. This Act is called the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016.

    2. It will be applicable in the whole of India (except the state of Jammu and Kashmir).

    3. It will become applicable on a date to be notified by the Central Government.

     

    Section 2

    1. “Aadhaar number” is the identification number issued to an individual under the Act;

    2. “Aadhaar number holder” is the person who has been given an Aadhaar number;

    3. “authentication” is the process of verifying the Aadhaar number, demographic information and biometric information of any person by the Central Identities Data Repository (CIDR);

    4. “authentication record” is the record of the authentication which will contain the identity of the requesting entity and the response of the CIDR;

    5. “Authority”  or “UIDAI” refers to the Unique Identification Authority of India established under this Act;

    6. “benefit” means any relief or payment which may be notified by the Central Government;

    7. “biometric information” means photograph, fingerprint, Iris scan, or any other biological attributes specified by regulations;

    8. “Central Identities Data Repository” or “CIDR” means a centralised database containing all Aadhaar numbers, demographic information and biometric information and other related information;

    9. “Chairperson” means the Chairperson of the UIDAI;

    10. “core biometric information” means fingerprint, Iris scan, or any biological attributes specified by regulations;

    11. “demographic information” includes information relating to the name, date of birth, address and other relevant information as specified by regulations. This information will not include race, religion, caste, tribe, ethnicity, language, records of entitlement, income or medical history;

    12. “enrolling agency” means an agency appointed by the UIDAI or a Registrar for collecting demographic and biometric information of individuals for issuing Aadhaar numbers;

    13. “enrolment” means the process of collecting demographic and biometric information from individuals for the purpose of issuing Aadhaar numbers;

    14. “identity information” in respect of an individual, includes his Aadhaar number, his biometric information and his demographic information;

    15. “Member” includes the Chairperson and Member of the Authority appointed under section 12;

    16. “notification” means a notification published in the Official Gazette and the expression “notified” with its cognate meanings and grammatical variations will be construed accordingly;

    17. “prescribed” means prescribed by rules made by the Central Government under this Act;

    18. “records of entitlement” means the records of benefits, subsidies or services provided to any individual under any government programme;

    19. “Registrar” means any person authorized by the UIDAI to enroll individuals under the Act;

    20. “regulations” means the regulations made by the UIDAI under this Act;

    21. “requesting entity” means an agency that submits the Aadhaar number and other information of an individual to the CIDR for authentication;

    22. “resident” means a person who has resided in India for at least 182 days in the last twelve months before the date of application for enrolment;

    23. “service” means any facility, utility or other assistance provided in any form to an individual or a group of individuals, as notified by the Central Government;

    24. “subsidy” means any form of aid, support, grant, etc. in cash or kind as notified by the Central Government.

     

    Chapter II. ENROLMENT

     

    Section 3

    1. Every resident is entitled to get an Aadhaar number.

    2. At the time of enrollment, the enrolling agency will inform the individual of the following details—

      1. how their information will be used;

      2. what type of entities the information will be shared with; and

      3. that they have a right to see their information and also tell them how they can see their information.

    3. After collecting and verifying the information given by the individuals, the UIDAI will issue an Aadhaar number to each individual.

     

    Section 4

    1. Once an Aadhaar number has been issued to a person, it will not be re-assigned to any other person.

    2. An Aadhaar number will be a random number and will not contain any attributes or identity of the Aadhaar number holder.

    3. If adopted by a service provider, an Aadhaar number may be accepted as proof of identity of the person.

     

    Section 5

    The UIDAI will take special measures to issue Aadhaar number to women, children, senior citizens, persons with disability, unskilled and unorganised workers, nomadic tribes or to such other persons who do not have any permanent residence and similar categories of individuals.

     

    Section 6

    The UIDAI may require Aadhaar number holders to update their Aadhaar information, so that it remains accurate.

     

    Chapter III. AUTHENTICATION

     

    Section 7

    As a condition for receiving a subsidy, benefit or service for which the expenditure is incurred from the Consolidated Fund of India, the Government may require that a person be authenticated or give proof of the Aadhaar number to establish his/her identity. In case a person does not have an Aadhaar number, he/she should make an application for enrolment. If an Aadhaar number is not assigned, the person will be offered viable and alternate means of identification for receiving the subsidy, benefit or service.

     

    Section 8

    1. The UIDAI will authenticate the Aadhaar information of people as per the conditions prescribed by the government and may also charge a fees for doing so.

    2. Any requesting entity will— (a) take consent from the individual before collecting his/her Aadhaar information; (b) use the information only for authentication with the CIDR;

    3. The entity requesting authentication will also inform the individual of the following— (a) what type of information will be shared for authentication; (b) what will the information be used for; and (c) whether there is any alternative to submitting the Aadhaar information to the requesting entity.

    4. The UIDAI will respond to the authentication request with yes, no, or other appropriate response and share identity information about the Aadhaar number holder, but will not share any core biometric information.
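    To make the request and response flow described in Section 8 easier to picture, here is a minimal illustrative sketch in Python. It is not an actual UIDAI or CIDR interface: the class, function and field names (AuthRequest, authenticate, consent_obtained, and so on) and the toy data are assumptions invented for this example.

from dataclasses import dataclass

@dataclass
class AuthRequest:
    aadhaar_number: str              # identity information submitted by the requesting entity
    demographic_or_biometric: dict   # attributes offered for verification
    consent_obtained: bool           # Section 8(2): consent must precede collection
    purpose_disclosed: bool          # Section 8(3): uses of the information must be disclosed

def authenticate(request: AuthRequest, cidr: dict) -> str:
    """Return a yes/no style response; never return stored core biometric data (Section 8(4))."""
    if not (request.consent_obtained and request.purpose_disclosed):
        return "error: requesting entity has not met its Section 8(2)/8(3) obligations"
    record = cidr.get(request.aadhaar_number)
    if record is None:
        return "negative"
    # Verification happens against information available with the repository;
    # only the outcome travels back to the requesting entity.
    matches = all(record.get(k) == v for k, v in request.demographic_or_biometric.items())
    return "positive" if matches else "negative"

# Toy repository with one enrolled resident (illustrative data only).
cidr = {"9999-0000-1111": {"name": "A. Resident", "year_of_birth": "1980"}}
req = AuthRequest("9999-0000-1111", {"name": "A. Resident", "year_of_birth": "1980"}, True, True)
print(authenticate(req, cidr))  # prints: positive

    The point of the sketch is only the shape of the exchange: the requesting entity sends identity information with consent and a disclosed purpose, and what comes back is a yes/no style answer, never the stored core biometric data.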

     

    Section 9

    The Aadhaar number or its authentication will not be a proof of citizenship or domicile.

     

    Section 10

    The UIDAI may engage any number of entities to establish and maintain the CIDR and to perform any other functions specified by the regulations.


    Chapter IV. UNIQUE IDENTIFICATION AUTHORITY OF INDIA


    Section 11

    1. The UIDAI will be established by the Central Government to be responsible for the processes of enrolment and authentication of Aadhaar numbers.

    2. The UIDAI will be a body corporate with the power to buy and sell property, to enter into contracts and to sue or be sued.

    3. The head office of the UIDAI will be in New Delhi.

    4. The UIDAI may establish its offices at other places in India.

    Section 12

    The UIDAI will have a Chairperson, two part-time Members and a chief executive officer, all to be appointed by the Central Government.

    Section 13

    The Chairperson and Members will be competent people with at least 10 years experience and knowledge in technology, governance, law, development, economics, finance, management, public affairs or administration.

    Section 14

    1. The Chairperson and the Members will be appointed for 3 years and can be re-appointed after their term. But no Member or Chairperson will be more than 65 years of age.

    2. The Chairperson and Members will take an oath of office and of secrecy.

    3. The Chairperson or Member may— (a) resign from office, by giving an advance written notice of at least 30 days; or (b) be removed from his office because she/he gets disqualified on any of the grounds mentioned in section 15.

    4. The salaries and allowances of the Members and Chairperson will be as prescribed by the Central Government.

    Section 15

    1. The Central Government may remove a Chairperson or Member, who—
      (a) has gone bankrupt;
      (b) is physically or mentally unable to do his/her job;
      (c) has been convicted of an offence involving moral turpitude;
      (d) has a financial conflict of interest in performing his/her functions; or
      (e) has abused his/her position so that the government needs to remove him/her in public interest.

    2. The Chairperson or a Member will be given a chance to present his/her side of the story before being removed, unless he/she is being removed on the grounds of bankruptcy or criminal conviction.

    Section 16

    An Ex-Chairperson or Ex-Member will have to take the approval of the Central Government,—

    1. to accept any job in any entity (other than a government organization) which was associated with any work done for the UIDAI while that person was a Chairperson or Member, for a period of three years after ceasing to hold office;

    2. to act or advise any entity on any particular transaction for which that person had provided advice to the UIDAI while he/she was the Chairperson or a Member;

    3. to give advice to any person using information which was obtained as the Chairperson or a Member which is not available to the public in general; or

    4. to accept any offer of employment or appointment  as a director of any company with which he/she had direct and significant official dealings during his/her term of office, for a period of three years.

    Section 17

    The Chairperson will preside over the meetings of the UIDAI and have the powers and perform the functions of the UIDAI.

    Section 18

    1. The chief executive officer (CEO) of the UIDAI will not be below the rank of Additional Secretary to the Government of India.

    2. The chief executive officer will be responsible for— (a) the day-to-day administration of the UIDAI; (b) implementing the programmes and decisions of the UIDAI; (c) making proposals for the UIDAI; (d) preparation of the accounts and budget of the UIDAI; and (e) performing any other functions prescribed in the regulations.

    3. The CEO will annually submit the following things to the UIDAI for its approval — (a) a general report covering all the activities of the Authority in the previous year; (b) programmes of work; (c) the annual accounts for the previous year; and (d) the budget for the coming year.

    4. The CEO will have administrative control over the officers and other employees of the Authority.


    Section 19

    1. The time and place of the meetings of the UIDAI and the rules and procedures of those meetings will be prescribed by regulations.

    2. The meetings will be presided over by the Chairperson and, in his/her absence, by the senior-most Member of the UIDAI.

    3. All decisions at the meetings of the UIDAI will be taken by a majority vote. In case of a tie, the person presiding the meeting will have the casting vote.

    4. All decisions of the UIDAI will be signed by the Chairperson or any other Member or the Member-Secretary authorised by the UIDAI in this behalf.

    5. Any Member who is a director of a company and, because of that, has a financial interest in a matter coming up for consideration at a meeting should disclose the financial interest and not take any further part in the discussions and decision on that matter.

    Section 20

    No actions or proceeding of the UIDAI will become invalid merely because of—

    1. any vacancy in, or any defect in the constitution of, the UIDAI;

    2. any defect in the appointment of a person as Chairperson or Member of the Authority; or

    3. any irregularity in the procedure of the Authority not affecting the merits of the case.

     

    Section 21

    1. The UIDAI, with the approval of the Government, can decide on the number and types of officers and employees that it would require.

    2. The salaries and allowances of the employees, officers and chief executive officer will be as prescribed by the Central Government.

    Section 22

    Once the UIDAI is established—

    1. all the assets and liabilities of the existing Unique Identification Authority of India, established by the Government of India through notification dated the 28th January, 2009, will stand transferred to the new UIDAI.

    2. all data and information collected during enrolment and all details of authentication performed by the existing Unique Identification Authority of India will be deemed to have been collected or performed by the UIDAI, and all debts and liabilities incurred and all contracts entered into by it will be deemed to have been incurred or entered into by the UIDAI;

    3. all money due to the existing Unique Identification Authority of India will be deemed to be due to the UIDAI; and

    4. all suits and other legal proceedings instituted by or against such Unique Identification Authority of India may be continued by or against the UIDAI.

    Section 23

    The UIDAI will develop the policy, procedure and systems for issuing Aadhaar numbers to individuals and perform their authentication. The powers and functions of the UIDAI include—

    1. specifying the demographic information and biometric information required for enrolment and the processes for collection and verification of that information;

    2. collecting demographic information and biometric information from people seeking Aadhaar numbers;

    3. appointing of one or more entities to operate the CIDR;

    4. generating and assigning Aadhaar numbers to individuals;

    5. performing authentication of Aadhaar numbers;

    6. maintaining and updating the information of individuals in the CIDR;

    7. omitting and deactivating an Aadhaar number;

    8. specifying the manner of use of Aadhaar numbers for the purposes of providing or availing of various subsidies and other purposes for which Aadhaar numbers may be used;

    9. specifying the terms and conditions for appointment of Registrars, enrolling agencies and service providers and revocation of their appointments;

    10. establishing, operating and maintaining of the CIDR;

    11. sharing the information of Aadhaar number holders;

    12. calling for information and records, conducting inspections, inquiries and audit of the operations of the CIDR, Registrars, enrolling agencies and other agencies appointed under this Act;

    13. specifying processes relating to data management, security protocols and other technology safeguards under this Act;

    14. specifying the conditions/procedures for issuance of new Aadhaar number to existing Aadhaar number holder;

    15. levying and collecting the fees or authorising the Registrars, enrolling agencies or other service providers to collect fees for the services provided by them under this Act;

    16. appointing committees necessary to assist the Authority in discharge of its functions;

    17. promoting research and development for advancement in biometrics and related areas;

    18. making and specifying policies and practices for Registrars, enrolling agencies and other service providers;

    19. setting up facilitation centres and grievance redressal mechanisms;

    20. other powers and functions as prescribed.

    The Authority may,— (a) enter into agreements with various state governments and Union Territories for collecting, storing, securing or processing of information or delivery of Aadhaar numbers to individuals or performing authentication; (b) appoint Registrars, engage and authorize agencies to collect, store, secure, process information or do authentication or perform other functions under this Act. The Authority may engage consultants, advisors and other persons required for efficient discharge of its functions.

    Chapter V. GRANTS, ACCOUNTS AND AUDIT AND ANNUAL REPORT

     

    Section 24

    The Central Government may grant money to the UIDAI as it may decide, upon due appropriation by Parliament.

     

    Section 25

    Fees/revenue collected by the UIDAI will be credited to the Consolidated Fund of India.

     

    Section 26

    1. The UIDAI will prepare an annual statement of accounts in the format prescribed by the Central Government.

    2. The Comptroller and Auditor-General will audit the account of the UIDAI annually at intervals decided by him, at the UIDAI’s expense.

    3. The Comptroller and Auditor-General or his appointees will have the same powers of audit they usually have to audit Government accounts.

    4. The UIDAI will forward the statement of accounts certified by the Comptroller and Auditor-General and the audit report, to the Central Government who will lay it before both houses of Parliament.

     

    Section 27

    1. The UIDAI will provide returns, statements and particulars as sought, to the Central Government, as and when required.

    2. The UIDAI will prepare an annual report containing the description of work for previous years, annual accounts of previous year, and the programmes of work for coming year.

    3. The copy of the annual report will be laid before both houses of Parliament by the Central Government.

     

    Chapter VI. PROTECTION OF INFORMATION

     

    Section 28

    1. The UIDAI will ensure the security and confidentiality of identity information and authentication records.

    2. The UIDAI will take measures to ensure that all information with the UIDAI, including CIDR records is secured and protected against access, use or disclosure and against destruction, loss or damage.

    3. The UIDAI will adopt and implement appropriate technical and organisational security measures, and ensure the same are imposed through agreements/arrangements with its agents, consultants, advisors or other persons.

    4. Unless otherwise provided, the UIDAI or its agents will not reveal any information in the CIDR to anyone.

    5. An Aadhaar number holder may request the UIDAI to provide access to his/her information (excluding the core biometric information) as per the regulations specified.

     

    Section 29

    1. The core biometric information collected will not be a) shared with anyone for any reason, and b) used for any purpose other than generation of Aadhaar numbers and authentication.

    2. Identity information, other than core biometric information, may be shared only as per this Act and regulations specified under it.

    3. Identity information available with a requesting entity will not be used for any purpose other than what is specified to the individual, nor will it be shared further without the individual’s consent.

    4. Aadhaar numbers or core biometric information will not be made public except as specified by regulations.
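    As a rough illustration of how the sharing rules in Section 29 operate, the sketch below filters a record before it is shared. It is not real code used by any agency: the field names (fingerprint, iris_scan) and the permitted_by_regulations flag are assumptions standing in for whatever the regulations eventually specify.

# Core biometric fields are never shared and never reused (point 1 above).
CORE_BIOMETRIC_FIELDS = {"fingerprint", "iris_scan"}

def shareable_identity_information(identity_info: dict, permitted_by_regulations: bool) -> dict:
    """Return only what Section 29 would allow to be shared: core biometric fields are
    always dropped, and the remainder is shared only if the Act/regulations permit it."""
    if not permitted_by_regulations:
        return {}
    return {k: v for k, v in identity_info.items() if k not in CORE_BIOMETRIC_FIELDS}

record = {
    "aadhaar_number": "9999-0000-1111",
    "name": "A. Resident",
    "fingerprint": "<template>",
    "iris_scan": "<template>",
}
print(shareable_identity_information(record, permitted_by_regulations=True))
# prints: {'aadhaar_number': '9999-0000-1111', 'name': 'A. Resident'}

    Whatever the permission flag says, the core biometric fields never leave the repository in this model, which mirrors the absolute bar in point 1 above.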

     

    Section 30

    All biometric information collected and stored in electronic form will be deemed to be “electronic record” and “sensitive personal data or information” under Information Technology Act, 2000 and its provisions and rules will apply to it in addition to this Act.

     

    Section 31

    1. If the demographic or biometric information about any Aadhaar number holder changes, is lost or is found to be incorrect, they may request the UIDAI to make changes to their record in the CIDR, as necessary.

    2. The identity information in the CIDR will not be altered, except as provided in this Act.

     

    Section 32

    1. The UIDAI will maintain the authentication records in the manner and for as long as specified by regulations.

    2. Every Aadhaar number holder may obtain his authentication record as specified by regulations.

    3. The UIDAI will not collect, keep or maintain any information about the purpose of authentication.

     

    Section 33

    1. The UIDAI may reveal identity information, authentication records or any information in the CIDR following a court order by a District Judge or higher. Any such order may only be made after UIDAI is allowed to appear in a hearing.

    2. The confidentiality provisions in Sections 28 and 29 will not apply with respect to disclosure made in the interest of national security following directions by a Joint Secretary to the Government of India, or an officer of a higher rank, authorised for this purpose.

    3. An Oversight Committee comprising the Cabinet Secretary and the Secretaries of two departments, the Department of Legal Affairs and DeitY, will review every direction issued under point 2 above before it takes effect.

    4. Any direction issued under point 2 above is valid for 3 months, after which it may be extended for a further 3 months following a review by the Oversight Committee.

     

    Chapter VII. OFFENCES AND PENALTIES

     

    Section 34

    Impersonating or attempting to impersonate another person by providing false demographic or biometric information will be punishable with imprisonment of up to three years and/or a fine of up to ten thousand rupees.

     

    Section 35

    Changing or attempting to change any demographic or biometric information of an Aadhaar number holder by impersonating another person (or attempting to do so), with the intent of i) causing harm or mischief to an Aadhaar number holder, or ii) appropriating the identity of an Aadhaar number holder, is punishable with imprisonment up to three years and fine up to ten thousand rupees.

     

    Section 36

    Collection of identity information by a person not authorised under this Act, while pretending to be so authorised, is punishable with imprisonment up to three years or a fine up to ten thousand rupees (in case of an individual), and a fine up to one lakh rupees (in case of a company).

     

    Section 37

    Intentional disclosure or dissemination of identity information, to any person not authorised under this Act, or in violation of any agreement entered into under this Act, will be punishable with imprisonment up to three years or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).

     

    Section 38

    The following intentional acts, when not authorised by the UIDAI, will be punishable with imprisonment up to three years and a fine not less than ten lakh rupees:

    1. accessing or securing access to the CIDR;

    2. downloading, copying or extracting any data from the CIDR;

    3. introducing or causing any virus or other contaminant into the CIDR;

    4. damaging or causing damage to the data in the CIDR;

    5. disrupting or causing disruption to access to CIDR;

    6. causing denial of access to the CIDR to a person authorised to access it;

    7. revealing information in breach of (D) in Section 28, or Section 29;

    8. destruction, deletion or alteration of any files in the CIDR;

    9. stealing, destruction, concealment or alteration of any source code used by the UIDAI.

     

    Section 39

    Tampering with data in the CIDR or a removable storage medium, with the intention to modify or discover information relating to an Aadhaar number holder, will be punishable with imprisonment up to three years and a fine up to ten thousand rupees.

     

    Section 40

    Use of identity information in violation of Section 8 (3) by a requesting entity will be punishable with imprisonment up to three years and/or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).


    Section 41

    Violation of Section 8 (3) or Section 3 (2) by a requesting entity or enrolling agency will be punishable with imprisonment up to one year and/or a fine up to ten thousand rupees (in case of an individual), and fine up to one lakh rupees (in case of a company).

     

    Section 42

    Any offence against this Act or regulations made under it, for which no specific penalty is provided, will be punishable with imprisonment up to one year and/or a fine up to twenty five thousand rupees (in case of an individual), and a fine up to one lakh rupees (in case of a company).

     

    Section 43

    1. In case of an offence under this Act committed by a company, all persons in charge of and responsible for the conduct of the company will also be held guilty and liable for punishment, unless they can prove lack of knowledge of the offence or that they exercised all due diligence to prevent it.

    2. In case an offence is committed by a Company with the consent, connivance or neglect of a director, manager, secretary or other officer of a company, they will also be held guilty of the offence.

     

    Section 44

    This Act will also apply to offences committed outside of India by any person, irrespective of their nationality, if the offence involves any data in the CIDR.

     

    Section 45

    Offences under this Act will not be investigated by police officers below the rank of Inspector of Police.

     

    Section 46

    Penalties imposed under this Act will not prevent imposition of any other penalties or punishment under any other law in force.

     

    Section 47

    1. Courts will take cognizance of offences under this Act only upon complaint being made by the UIDAI or any officer authorised by it.

    2. No court inferior to that of a Chief Metropolitan Magistrate or a Chief Judicial Magistrate will try any offence under this Act.

     

    Chapter VIII. MISCELLANEOUS

     

    Section 48

    1. The Central Government has the power to supersede the UIDAI, through a notification, for not longer than six months, in the following circumstances: i) circumstances beyond the control of the UIDAI; ii) default by the UIDAI in complying with directions of the Central Government, affecting the financial position of the UIDAI; iii) public emergency.

    2. Upon publication of the notification, the Chairperson and Members of the UIDAI must vacate office.

    3. Powers, functions and duties will be performed by person(s) authorised by the President.

    4. Properties controlled and owned by UIDAI will vest in the Central Government.

    5. Central Government will reconstitute the UIDAI upon expiration of supersession, with fresh appointment of Chairperson and Members.

     

    Section 49

    Chairperson, members, employees etc. are deemed to be public servants within the meaning of section 21 of the Indian Penal Code.

     

    Section 50

    1. The Central Government has the power to issue directions to the UIDAI on questions of policy (whether a question is one of policy is to be decided by the Government), except on technical and administrative matters, and the UIDAI will be bound by them.

    2. The UIDAI will be given an opportunity to express its views before a direction is given.

     

    Section 51

    The UIDAI may delegate its powers and functions to a Member or officer of the UIDAI.

     

    Section 52

    No suit, prosecution or other legal proceedings will lie against the Central Government, UIDAI, Chairperson, any Member, officer, or other employees of the UIDAI for an act done in good faith.

     

    Section 53

    The Central Government has the power to make Rules for matters prescribed under this provision.

     

    Section 54

    UIDAI has the power to make regulations for matters prescribed under this provision.

     

    Section 55

    Rules and regulations under this Act will be laid before each House of Parliament for a total period of thirty days; if both Houses agree in making any modification, or agree that they should not be made, the rules and regulations will have effect only in that modified form or not at all.

     

    Section 56

    Provisions of this Act are in addition to, and not in derogation of any other law currently in effect.

     

    Section 57

    This Act will not prevent use of Aadhaar number for other purposes under law by the State or other bodies.

     

    Section 58

    The Central Government may pass an order to remove a difficulty in giving effect to the provisions of this Act, not beyond three years from the commencement of this Act.

     

    Section 59

    Action taken by the Central Government under the Resolution of the Government of India setting up the UIDAI, or by the Department of Electronics and Information Technology under the notification placing the UIDAI under the Ministry of Communications and Information Technology, will be deemed to have been validly done or taken.

     

    STATEMENT OF OBJECTS AND REASONS
    1. Correct identification of targeted beneficiaries for delivery of subsidies, services, grants, benefits, etc. has become a challenge for the Government.

    2. This has proved to be a major hindrance for successful implementation of these programmes.

    3. In the absence of a credible system to authenticate the identity of beneficiaries, it is difficult to ensure that subsidies, benefits and services reach the intended beneficiaries.

    4. The UIDAI was established to lay down policies and implement the Unique Identification Scheme of the Government, by which residents of India were to be provided a unique identity number.

    5. Upon successful authentication, this number would serve as proof of identity for identification of beneficiaries for transfer of benefits, subsidies, services and other purposes.

    6. With increased use of the Aadhaar number, steps to ensure security of such information need to be taken and offences pertaining to certain unlawful actions, created.

    7. It has been felt that the processes of enrolment, authentication, security, confidentiality and use of Aadhaar related information must be made statutory.

    8. The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 seeks to provide for: issuance of Aadhaar numbers to individuals on provision of their demographic and biometric information to the UIDAI; requiring Aadhaar numbers for identifying an individual for delivery of benefits, subsidies and services; authentication of the Aadhaar number; establishment of the UIDAI; maintenance and updating of the information of individuals in the CIDR; measures pertaining to security, privacy and confidentiality of information in the possession or control of the UIDAI, including information stored in the Central Identities Data Repository; and offences and penalties for contravention of the relevant statutory provisions.

     

    An Urgent Need for the Right to Privacy

    by Sumandro Chattapadhyay last modified Mar 17, 2016 07:40 AM
    Along with a group of individuals and organisations from academia and civil society, we have drafted and are signatories to an open letter addressed to the Union government, urging it to "urgently take steps to uphold the constitutional basis to the right to privacy and fulfil its constitutional and international obligations." Here we publish the text of the open letter. Please follow the link below to support it by joining the signatories.

     

    Read and sign the open letter.

     

    Text of the Open Letter

    As our everyday lives are conducted increasingly through electronic communications, the necessity for privacy protections has also increased. While several countries across the globe have recognised this by furthering the right to privacy of their citizens, the Union Government has adopted a regressive attitude towards this core civil liberty. We urge the Union Government to take urgent measures to safeguard the right to privacy in India.

    Our concerns are based on a continuing pattern of disregard for the right to privacy by several governments in the past. This trend has increased, as can be plainly seen from the following developments.

    In 2015, the Attorney General, in the case of *K.S. Puttaswamy v. Union of India*, argued before the Hon’ble Supreme Court that there is no right to privacy under the Constitution of India. The Hon'ble Court was persuaded to re-examine the basis of the right to privacy, upsetting 45 years of judicial precedent. This has thrown into doubt the constitutional right to privacy and the several judgements that have been given under it, including the 1997 PUCL Telephone Tapping judgement. We urge the Union Government to take whatever steps are necessary and urge the Supreme Court to hold that a right to privacy exists under the Constitution of India.

    Recently, Mr. Arun Jaitley, Minister for Finance, introduced the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016. This bill was passed on March 11, 2016, in the middle of the budget discussion, at short notice, as a money bill in the Lok Sabha, when only 73 of 545 members were present. Its timing and introduction as a money bill prevent necessary scrutiny, given the large privacy risks that arise under it. This version of the bill was never put up for public consultation and is being rushed through without adequate discussion. Even substantively, it fails to give accountable privacy safeguards while making Aadhaar mandatory for availing any government subsidy, benefit, or service.

    We urge the Union Government to urgently take steps to uphold the constitutional basis to the right to privacy and fulfil its constitutional and international obligations. We encourage the Government to have extensive public discussions on the Aadhaar Bill before notifying it. We further call upon them to constitute a drafting committee with members of civil society to draft a comprehensive statute, as suggested by the Justice A.P. Shah Committee Report of 2012.

    Signatories:

    • Amber Sinha, the Centre for Internet and Society
    • Japreet Grewal, the Centre for Internet and Society
    • Joshita Pai, Centre for Communication Governance, National Law University
    • Raman Jit Singh Chima, Access Now
    • Sarvjeet Singh, Centre for Communication Governance, National Law University
    • Sumandro Chattapadhyay, the Centre for Internet and Society
    • Sunil Abraham, the Centre for Internet and Society
    • Vanya Rakesh, the Centre for Internet and Society

     

    Press Release, March 11, 2016: The Law cannot Fix what Technology has Broken!

    by Japreet Grewal and Sunil Abraham — last modified Mar 16, 2016 10:10 AM
    We published and circulated the following press release on March 11, 2016, as the Lok Sabha passed the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016. This Bill was proposed by finance minister, Mr. Arun Jaitley to give legislative backing to Aadhaar, being implemented by the Unique Identification Authority of India (UIDAI).

     

    The Lok Sabha passed the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 today. This Bill was proposed by finance minister, Mr. Arun Jaitley to give legislative backing to Aadhaar, being implemented by the Unique Identification Authority of India (UIDAI).

    The Bill was introduced as a money bill and there was no public consultation to evaluate the provisions therein even though there are very serious ramifications for the Right to Privacy and the Right to Association and Assembly. The Bill has made it compulsory for an individual to enrol under Aadhaar in order to receive any subsidy, benefit or service from the Government. Biometric information that is required for the purpose of enrolment has been deemed "sensitive personal information" and restrictions have been imposed on use, disclosure and sharing of such information for purposes other than authentication, disclosure made pursuant to a court order or in the interest of national security. Here, the Bill has acknowledged the standards of protection of sensitive personal information established under Section 43A of the Information Technology Act, 2000. The Bill has also laid down several penal provisions for acts that include impersonation at the time of enrolment, unauthorised access to the Central Identities Data Repository, unauthorised use by requesting entity, noncompliance with intimation requirements, etc.

    Key Issues

    1. Identification without Consent

    Before the Aadhaar project, it was not possible for the Indian government to identify citizens without their consent. But once the government has created a national centralized biometric database, it will be possible for the government to identify any citizen without their consent. High-resolution photography and videography make it trivial for governments, and indeed any other actor, to harvest biometrics remotely. In other words, the technology makes consent irrelevant. A German minister's fingerprints were captured by hackers as she spoke using hand gestures at a conference. In a similar manner, the government can now identify us both as individuals and as groups without requiring our cooperation. This has direct implications for the right to privacy, as we will be under constant government surveillance in the future as CCTV camera resolutions improve, and there will be chilling effects on the right to free speech and the freedom of association. The only way to fix this is to change the technology configuration and architecture of the project. The law cannot be used as a band-aid on badly designed technology.

    2. Fallible Technology

    The technology used for collection and authentication has been said to be fallible. It is understood that the technology has been found feasible only for a population of 200 million. The Biometrics Standards Committee of UIDAI has acknowledged the lack of data on how a biometric authentication technology will scale up where the population is about 1.2 billion. Further, a report by 4G Identity Solutions estimates that while in any population approximately 5% of the people have unreadable fingerprints, in India this could lead to a failure to enroll up to 15% of the population.

    Aadhaar numbers have reportedly been issued to dogs and trees (with the Aadhaar letter containing the photo of a tree). There have been slip-ups in the Aadhaar card enrolment process: some cards have ended up with pictures of an empty chair, a tree or a dog instead of the actual applicants. An RTI application has revealed that the Unique Identification Authority of India (UIDAI) had identified more than 25,000 duplicate Aadhaar numbers in the country till August 2015.

    At the stage of authentication, the accuracy of biometric identification depends on the chance of a false positive, that is, the probability that the identifiers of two different persons will match. For the current population of 1.2 billion, the expected proportion of duplicates is 1/121, a ratio which is far too high. A recent paper in EPW by Hans Mathews, a mathematician with CIS, shows that, as per UIDAI's own statistics on failure rates, the programme would badly fail to uniquely identify individuals in India. [1]
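
    To make the scaling concern concrete, the short Python sketch below computes the expected number of falsely matched pairs for a given per-pair false-match probability. The probability used is a hypothetical placeholder, not a figure published by UIDAI, and this is not the calculation used in the EPW paper (which works from UIDAI's own reported statistics); it only illustrates that, because de-duplication compares every enrolee against every other, expected duplicates grow roughly with the square of the population.

        # Illustrative sketch: why de-duplication errors grow roughly with the
        # square of the population. The per-pair false-match probability below
        # is a hypothetical placeholder, not a figure published by UIDAI.

        def expected_false_matches(population: int, pair_false_match_prob: float) -> float:
            """Expected number of falsely matched pairs among `population` enrolees,
            assuming independent pairwise comparisons during de-duplication."""
            pairs = population * (population - 1) / 2
            return pairs * pair_false_match_prob

        P_FALSE_MATCH = 1e-12  # hypothetical per-pair false-match probability

        for n in (200_000_000, 1_200_000_000):
            print(f"population {n:>13,}: ~{expected_false_matches(n, P_FALSE_MATCH):,.0f} expected false matches")

    A six-fold increase in population multiplies the number of pairwise comparisons, and hence the expected false matches at a fixed error rate, by roughly thirty-six; a rate that looks tolerable in a 200-million pilot can therefore be untenable at 1.2 billion.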

    Endnote

    [1] See: http://cis-india.org/internet-governance/blog/epw-27-february-2016-hans-varghese-mathews-flaws-in-uidai-process

     

    Press Release, March 15, 2016: The New Bill Makes Aadhaar Compulsory!

    by Amber Sinha — last modified Mar 16, 2016 10:11 AM
    We published and circulated the following press release on March 15, 2016, to highlight the fact that the Section 7 of the Aadhaar Bill, 2016 states that authentication of the person using her/his Aadhaar number can be made mandatory for the purpose of disbursement of government subsidies, benefits, and services; and in case the person does not have an Aadhaar number, s/he will have to apply for Aadhaar enrolment.

     

    Nandan Nilekani, the former chairperson of the Unique Identification Authority of India, had repeatedly stated that Aadhaar is not mandatory. However, in the last few years, various agencies and departments of the government, both at the central and state level, had made it mandatory for availing beneficiary schemes or for salary and provident fund disbursals, promotions, scholarships, opening bank accounts, and marriage and property registrations. In August 2015, the Supreme Court passed an order mandating that the Aadhaar number shall remain optional for welfare schemes, stating that no person should be denied any benefit for reason of not having an Aadhaar number, barring a few specified services.

    The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016, however, has not followed this mandate. Section 7 of the Bill states that “a person should be authenticated or give proof of the Aadhaar number to establish his/her identity” “as a condition for receiving subsidy, benefit or service”. Further, it reads, “In the case a person does not have an Aadhaar number, he/she should make an application for enrollment.” The language of the provision is very clear in making enrollment in Aadhaar mandatory in order to be entitled to welfare services. Section 7 also says that “the person will be offered viable and alternate means of identification for receiving the subsidy, benefit or service”. However, these unspecified alternate means will be made available only in the event “an Aadhaar number is not assigned”. This language is vague, and it is not clear whether it mandates alternate means of identification for those who choose not to apply for an Aadhaar number for any reason. The fact that it does make it mandatory for persons without an Aadhaar number to apply for one may lead to the presumption that the alternate means are to be made available only to those who may have applied for an Aadhaar number but have not been assigned one for any reason. It is also noteworthy that the draft legislation is silent on what the “viable and alternate means of identification” could be. There are a number of means of identification recognised by the state, and a schedule with an inclusive list could have gone a long way in reducing the ambiguity in this provision.

    Another aspect of Section 7 which is at odds with the Supreme Court order is that it allows making an Aadhaar number mandatory “for receipt of a subsidy, benefit or service for which the expenditure is incurred” from the Consolidated Fund of India. The Supreme Court had been very specific in articulating that having an Aadhaar number could not be made compulsory, and that it shall not be used for “any purpose other than the PDS Scheme and in particular for the purpose of distribution of foodgrains, etc. and cooking fuel, such as kerosene”, or for the purpose of the LPG scheme. The restriction in the Supreme Court order was with respect to the welfare schemes; however, instead of specifying the schemes, Section 7 specifies the source of expenditure from which subsidies, benefits and services can be funded, making the scope much broader. Section 7, in effect, allows the Central Government to circumvent the Supreme Court order if it chooses to tie more subsidies, benefits and services to the Consolidated Fund of India.

    These provisions run counter to the government's repeated claims over the last six years that Aadhaar is not compulsory; nor is the Supreme Court's specification restricting the use of Aadhaar to a few services reflected anywhere in the Bill. The “viable and alternate means” clause is too vague and inadequate to prevent denial of benefits to those without an Aadhaar number. The sum effect of these factors is to give the Central Government the power to make Aadhaar mandatory, for all practical purposes.

     

    List of Recommendations on the Aadhaar Bill, 2016 - Letter Submitted to the Members of Parliament

    by Amber Sinha, Sumandro Chattapadhyay, Sunil Abraham, and Vanya Rakesh — last modified Mar 21, 2016 08:50 AM
    On Friday, March 11, the Lok Sabha passed the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016. The Bill was introduced as a money bill and there was no public consultation to evaluate the provisions therein even though there are very serious ramifications for the Right to Privacy and the Right to Association and Assembly. Based on these concerns, and numerous others, we submitted an initial list of recommendations to the Members of Parliament to highlight the aspects of the Bill that require immediate attention.

     

    Download the submission letter: PDF.

     

    Text of the Submission

    On Friday, March 11, the Lok Sabha passed the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016. The Bill was introduced as a money bill and there was no public consultation to evaluate the provisions therein even though there are very serious ramifications for the Right to Privacy and the Right to Association and Assembly. The Bill has made it compulsory for all Indians to enroll for Aadhaar in order to receive any subsidy, benefit, or service from the Government whose expenditure is incurred from the Consolidated Fund of India. Apart from the issue of centralisation of the national biometric database leading to a deep national vulnerability, the Bill also leaves unaddressed two serious concerns regarding the technological framework concerned:

    • Identification without Consent: Before the Aadhaar project it was not possible for the Indian government or any private entity to identify citizens (and all residents) without their consent. But biometrics allow for non-consensual and covert identification and authentication. The only way to fix this is to change the technology configuration and architecture of the project. The law cannot be used to correct the problems in the technological design of the project.

    • Fallible Technology: The Biometrics Standards Committee of UIDAI has acknowledged the lack of data on how a biometric authentication technology will scale up where the population is about 1.2 billion. The technology has been tested and found feasible only for a population of 200 million. Further, a report by 4G Identity Solutions estimates that while in any population, approximately 5% of the people have unreadable fingerprints, in India it could lead to a failure to enroll up to 15% of the population. For the current Indian population of 1.2 billion the expected proportion of duplicates is 1/121, a ratio which is far too high. [1]

    Based on these concerns, and numerous others, we sincerely request you to ensure that the Bill is rigorously discussed in Rajya Sabha, in public, and, if needed, also by a Parliamentary Standing Committee, before considering its approval and implementation. Towards this, we humbly submit an initial list of recommendations to highlight the aspects of the Bill that require immediate attention:

    1. Implement the Recommendations of the Shah and Sinha Committees: The report by the Group of Experts on Privacy chaired by the Former Chief Justice A P Shah [2] and the report by the Parliamentary Standing Committee on Finance (2011-2012) chaired by Shri Yashwant Sinha [3] have suggested a rigorous and extensive range of recommendations on the Aadhaar / UIDAI / NIAI project and on the National Identification Authority of India Bill, 2010, from which the majority of the sections of the Aadhaar Bill, 2016, are drawn. We request that these recommendations be seriously considered and incorporated into the Aadhaar Bill, 2016.

    2. Authentication using the Aadhaar number for receiving government subsidies, benefits, and services cannot be made mandatory: Section 7 of the Aadhaar Bill, 2016, states that authentication of the person using her/his Aadhaar number can be made mandatory for the purpose of disbursement of government subsidies, benefits, and services; and in case the person does not have an Aadhaar number, s/he will have to apply for Aadhaar enrolment. This sharply contradicts the claims made by UIDAI earlier that the Aadhaar number is “optional, and not mandatory”, and more importantly the directive given by the Supreme Court (via order dated August 11, 2015). The Bill must explicitly state that the Aadhaar number is only optional, and not mandatory, and that a person without an Aadhaar number cannot be denied any democratic rights, public subsidies, benefits, and services, or any private services.

    3. Vulnerabilities in the Enrolment Process: The Bill does not address already documented issues in the enrolment process. In the absence of an exhaustive list of information to be collected, some Registrars are permitted to collect extra and unnecessary information. Also, storage of data for extended periods with enrolment agencies creates security risks. These vulnerabilities need to be prevented through specific provisions. All entities, including the Enrolment Agencies, Registrars, the CIDR and the requesting entities, should also be mandated to shift to secure systems such as PKI-based cryptography to ensure a secure method of data transfer (a minimal illustrative sketch of such protection of data in transit follows this list).

    4. Precisely Define and Provide Legal Framework for Collection and Sharing of Biometric Data of Citizens: The Bill defines “biometric information” to include within its scope “photograph, fingerprint, iris scan, or other such biological attributes of an individual.” This definition gives broad and sweeping discretionary power to the UIDAI / Central Government to increase the scope of the term. The definition should be exhaustive in its scope so that a legislative act is required to modify it in any way.

    5. Prohibit Central Storage of Biometrics Data: The presence of central storage of sensitive personal information of all residents in one place creates a grave security risk. Even with the most enhanced security measures in place, the quantum of damage in case of a breach is extremely high. Therefore, storage of biometrics must be allowed only on the smart cards that are issued to the residents.

    6. Chain of Trust Model and Audit Trail: As one of the objects of the legislation is to provide targeted services to beneficiaries and reduce corruption, there should be more accountability measures in place. A chain of trust model must be incorporated in the process of enrolment, where individuals and organisations vouch for individuals, so that when a ghost entry is introduced someone can be held accountable and the blame is not placed simply on the technology. This is especially important in light of the questions already raised about the deduplication technology. Further, there should be a transparent audit trail made available that allows public access to the use of Aadhaar, for combating corruption in the supply chain.

    7. Rights of Residents: There should be specific provisions dealing with cases where an individual is not issued an Aadhaar number or is denied access to benefits due to any other factor. Additionally, the Bill should make provisions for residents to access and correct information collected from them, and to be notified of data breaches and of legal access to their information by the Government or its agencies, as a matter of right. Further, along with the obligations in Section 8, it should also be mandatory for all requesting entities to notify individuals of any changes in privacy policy and to provide a mechanism to opt out.

    8. Establish Appropriate Oversight Mechanisms: Section 33 currently specifies a procedure for oversight by a committee; however, there are no substantive provisions laid down to act as guiding principles for such oversight. The provision should include data minimisation and “necessity and proportionality” as guiding principles for any exceptions to Section 29.

    9. Establish Grievance Redressal and Review Mechanisms: Currently, there is no grievance redressal mechanism created under the Bill. The power to set up such a mechanism is delegated to the UIDAI under Section 23 (2) (s) of the Bill. However, making the entity administering a project also responsible for providing the frameworks to address grievances arising from the project severely compromises the independence of the grievance redressal body. An independent national grievance redressal body, with state and district level bodies under it, should be set up. Further, the NIAI Bill, 2010, provided for establishing an Identity Review Committee to monitor the usage pattern of Aadhaar numbers. This has been removed in the Aadhaar Bill, 2016, and must be restored.
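
    As a purely illustrative sketch of the PKI-based protection recommended in point 3 above, the Python fragment below (using the third-party "cryptography" package) shows hybrid public-key encryption of an enrolment record in transit: the record is encrypted with a fresh symmetric key, and that key is wrapped with the recipient's RSA public key so that only the holder of the corresponding private key (here imagined, for the sketch, to be the CIDR operator) can read it. All names, key sizes and record fields are assumptions for illustration; nothing here describes UIDAI's actual systems.

        # Illustrative only: hybrid PKI-based encryption of an enrolment record
        # in transit, using the third-party "cryptography" package. The key
        # sizes, names and the idea that the CIDR operator holds the private
        # key are assumptions for this sketch, not a description of UIDAI systems.
        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        # In practice the recipient's key pair would be provisioned out of band.
        recipient_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        recipient_public_key = recipient_private_key.public_key()

        record = b'{"name": "A. Resident", "dob": "1990-01-01"}'  # hypothetical enrolment record

        # Sender: encrypt the record with a fresh symmetric key, then wrap that
        # key with the recipient's public key (RSA-OAEP).
        data_key = Fernet.generate_key()
        encrypted_record = Fernet(data_key).encrypt(record)
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        wrapped_key = recipient_public_key.encrypt(data_key, oaep)

        # Recipient: unwrap the symmetric key with the private key, then decrypt.
        recovered_key = recipient_private_key.decrypt(wrapped_key, oaep)
        assert Fernet(recovered_key).decrypt(encrypted_record) == record

    In a real deployment the transfer would additionally be signed and carried over an authenticated channel; the point of the sketch is only that, under such a scheme, intermediaries handling the data in transit never see the plaintext record.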

     

    Endnotes

    [1] See: http://cis-india.org/internet-governance/blog/Flaws_in_the_UIDAI_Process_0.pdf.

    [2] See: http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf.

    [3] See: http://164.100.47.134/lsscommittee/Finance/15_Finance_42.pdf.

    Are we Losing the Right to Privacy and Freedom of Speech on Indian Internet?

    by Amber Sinha — last modified Mar 16, 2016 02:44 PM
    The article was published in DNA on March 10, 2016.

    Last month, it was reported that the National Security Council Secretariat (NSCS) had proposed the setting up of a National Media Analytics Centre (NMAC). This centre's mandate would be to monitor blogs, media channels, news outlets and social media platforms. Sources were quoted as stating that the centre would rely upon a tracking software built by Ponnurangam Kumaraguru, an Assistant Professor at the Indraprastha Institute of Information Technology in Delhi. The NMAC seems to mirror similar efforts in countries such as the US, Canada, Australia and the UK to monitor online content for reasons as varied as prevention of terrorist activities, disaster relief and criminal investigation.

    The NSCS, the parent body that this centre will fall under, is a part of the National Security Council, India’s highest agency looking to integrate policy-making and intelligence analysis, and advising the Prime Minister’s Office on strategic issues as well as domestic and international threats. The NSCS represents the Joint Intelligence Committee and its duties include the assessment of intelligence from the Intelligence Bureau, Research and Analysis Wing (R&AW) and Directorates of Military, Air and Naval Intelligence, and the coordination of the functioning of intelligence agencies.

    From the limited reports available, it appears that the tracking software used by NMAC will generate tags to classify posts and comments on social media into negative, positive and neutral categories, paying special attention to “belligerent” comments. The reports say that the software will also try to determine whether the comments are factually correct. The idea of a government agency systematically tracking social media, blogs and news outlets and categorising content as desirable and undesirable is bound to create a chilling effect on free speech online. The most disturbing part of the report suggested that a writer's past pattern of posts would be analysed to see how often her posts fell under the negative category and whether she was attempting to create trouble or disturbance, and that appropriate feedback would be sent to security agencies based on it. Viewed alongside recent events where actors critical of the government and holding divergent views have expressed concerns about attempts to suppress dissenting opinions, this initiative sounds even more dangerous, putting at risk individuals categorised as “negative” or “belligerent” for exercising their constitutionally protected right to free speech.
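
    The design of the NMAC software has not been made public. Purely for illustration, the minimal Python sketch below shows what lexicon-based sentiment tagging of the kind described in the reports might look like; the word lists and scoring rule are invented. Even such a crude scheme makes the concern plain: lawful but critical speech is mechanically labelled "negative".

        # Hypothetical illustration of lexicon-based sentiment tagging; the word
        # lists are invented and the actual NMAC software's design is not public.
        NEGATIVE_WORDS = {"scam", "corrupt", "failure", "betrayal"}
        POSITIVE_WORDS = {"progress", "reform", "success", "welfare"}

        def tag_post(text: str) -> str:
            """Return 'positive', 'negative' or 'neutral' for a post or comment."""
            words = set(text.lower().split())
            score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
            return "positive" if score > 0 else "negative" if score < 0 else "neutral"

        print(tag_post("This reform is real progress"))           # positive
        print(tag_post("Yet another scam and a policy failure"))  # negative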


    It has been argued that the Internet is a public space, and should be treated as subject to monitoring by the government like any other space. Further, this kind of analysis does not concern itself with private communication between two or more parties but only with publicly available information. Why must we raise eyebrows if the government is accessing and analysing it for the purposes of legitimate state interests? There are two problems with this argument. First, any surveillance of communication must always be limited in scope, specific to individuals, necessary and proportionate, and subject to oversight. There are no laws passed by Parliament in India which allow for mass surveillance measures. Such activities are being conducted through bodies like the NSC, which came into existence through an Executive Order and has no clear oversight mechanisms built into its functioning. A quick look at the history of intelligence and surveillance agencies in India will show that none of them have been created through legislation. A host of surveillance agencies have come up in the last few years, including the Central Monitoring System, which was set up to monitor telecommunications, and the absence of legislative pedigree translates into a lack of appropriate controls and safeguards, and zero public accountability.

    The second and larger issue is that the scale and level of granularity of personal information available now is unprecedented. Earlier, our communications with friends and acquaintances, our movements, and our associations, political or otherwise, were not observable in the manner they are today. It would be remiss to underestimate the importance of personal information merely because it exists in the public domain. The ability to act without being subject to monitoring and surveillance is key to the right to free speech and expression. While we accept the importance of free speech and the value of an open internet and newer technologies to enable it, we do not give sufficient importance to how these technologies are affecting the right to privacy.


    In the last few years, the social media scene in India has been characterised by extreme polemic with epithets such as ‘bhakt’, ‘sanghi’, ‘sickular’ and ‘presstitutes’ thrown around liberally, turning political discussions into a mess of ugliness. It remains to be seen whether the NMAC intends to deal with the professional trolls who rely on a barrage of abuse to disrupt public conversations online. However, the appropriate response would not be greater surveillance, let alone a body like NMAC, with a sweeping mandate and little accountability.

    Link to the original here.

    Privacy Concerns Overshadow Monetary Benefits of Aadhaar Scheme

    by Pranesh Prakash and Amber Sinha — last modified Mar 17, 2016 04:12 PM
    Since its inception in 2009, the Aadhaar system has been shrouded in controversy over issues of privacy, security and viability. It has been implemented without a legislative mandate and has resulted in a PIL in the Supreme Court, which referred it to a Constitution bench. On Friday, it kicked up more dust when the Lok Sabha passed a Bill to give statutory backing to the unique identity number scheme.

    A villager goes through the process of eye scanning for Unique Identification (UID) database system at an enrolment centre at Merta district in the desert Indian state of Rajasthan. (REUTERS)

    The article was published in the Hindustan Times on March 12, 2016.


    There was an earlier attempt to give legislative backing to this project by the UPA government, but a parliamentary standing committee, led by BJP leader Yashwant Sinha, had rejected the bill in 2011 on multiple grounds. In an about-turn, the BJP-led NDA government decided to continue with Aadhaar despite most of those grounds still remaining.

    Separately, there have been orders passed by the Supreme Court that prohibit the government from making Aadhaar mandatory for availing government services whereas this Bill seeks to do precisely that, contrary to the government’s argument that Aadhaar is voluntary.

    In some respects, the new Aadhaar Bill is a significant improvement over the previous version. It places stringent restrictions on when and how the UID Authority (UIDAI) can share the data, noting that biometric information — fingerprint and iris scans — will not be shared with anyone. It seeks prior consent for sharing data with third parties. These are very welcome provisions.

    But a second reading reveals the loopholes.

    The government will get sweeping power to access the data collected, ostensibly for “efficient, transparent, and targeted delivery of subsidies, benefits and services” as it pleases “in the interests of national security”, thus confirming the suspicions that the UID database is a surveillance programme masquerading as a project to aid service delivery.

    The safeguards related to accessing the identification information can be overridden by a district judge. Even the core biometric information may be disclosed in the interest of national security on directions of a joint secretary-level officer. Such loopholes nullify the privacy-protecting provisions.

    Amongst the privacy concerns raised by the Aadhaar system are the powers it provides private third parties to use one’s UID number. This concern, which wouldn’t exist without a national ID, squarely relates to Aadhaar and needs a more comprehensive data protection law to fix it. The supposed data protection under the Information Technology Act is laughable and inadequate.

    The Bill was introduced as a Money Bill, normally reserved for matters related to taxation, borrowing and the Consolidated Fund of India (CFI), and it would be fair to question whether this was done to circumvent the Rajya Sabha.

    None of the above arguments even get to the question of implementation.

    Aadhaar hasn’t been working. When looking into reasons why 22% of PDS cardholders in Andhra Pradesh didn’t collect their rations, it was found that there was fingerprint authentication failure in 290 of the 790 cardholders, and in 93 instances there was an ID mismatch. A recent paper in the Economic and Political Weekly by Hans Mathews, a mathematician with the CIS, shows the programme would fail to uniquely identify individuals in a country of 1.2 billion.

    The debate shouldn’t be only about the Aadhaar Bill being passed off as a Money Bill and about the robustness of its privacy provisions, but about whether the Aadhaar project can actually meet its stated goals.

    Analysis of Aadhaar Act in the Context of A.P. Shah Committee Principles

    by Vipul Kharbanda — last modified Mar 17, 2016 07:43 PM
    Whilst there are a number of controversies relating to the Aadhaar Act, including the fact that it was introduced in a manner so as to circumvent the majority of the opposition in the upper house of Parliament and that it was rushed through the Lok Sabha in a mere eight days, in this paper we shall discuss the substantive aspects of the Act in relation to the privacy concerns which have been raised by a number of experts. In October 2012, the Group of Experts on Privacy constituted by the Planning Commission under the chairmanship of Justice A.P. Shah submitted its report, which listed nine principles of privacy that all legislations, especially those dealing with personal information, should adhere to. In this paper, we shall discuss how the Aadhaar Act fares vis-à-vis these nine principles.

     

    Introduction

    The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (the “Aadhaar Act”) was introduced in the Lok Sabha (lower house of the Parliament) by the Minister of Finance, Mr. Arun Jaitley, on March 3, 2016, and was passed by the Lok Sabha on March 11, 2016. It was sent back by the Rajya Sabha with suggestions, but the Lok Sabha rejected those suggestions, which means that, since it was introduced as a Money Bill, the Act is now deemed to have been passed by both houses in the form passed by the Lok Sabha. Whilst there are a number of controversies relating to the Aadhaar Act, including the fact that it was introduced in a manner so as to circumvent the majority of the opposition in the upper house of Parliament and that it was rushed through the Lok Sabha in a mere eight days, in this paper we shall discuss the substantive aspects of the Act in relation to the privacy concerns which have been raised by a number of experts. In October 2012, the Group of Experts on Privacy constituted by the Planning Commission under the chairmanship of Justice A.P. Shah submitted its report, which listed nine principles of privacy that all legislations, especially those dealing with personal information, should adhere to. In this paper, we shall discuss how the Aadhaar Act fares vis-à-vis these nine principles.

    In order for the reader to better understand the frame of reference on which we shall analyse the Aadhaar Act, the nine principles contained in the report of the Group of Experts on Privacy are explained in brief below:

    • Principle 1: Notice - Does the legislation/regulation require that entities governed by the Act give simple to understand notice of its information practices to all individuals, in clear and concise language, before any personal information is collected from them.
    • Principle 2: Choice and Consent - Does the legislation/regulation require that entities governed under the Act provide the individual with the option to opt in/opt out of providing their personal information.
    • Principle 3: Collection Limitation - Does the legislation/regulation require that entities governed under the Act collect personal information from individuals only as is necessary for a purpose identified.
    • Principle 4: Purpose Limitation - Does the legislation/regulation require that personal data collected and processed by entities governed by the Act be adequate and relevant to the purposes for which they are processed.
    • Principle 5: Access and Correction - Does the legislation/regulation allow individuals: access to personal information about them held by an entity governed by the Act; the ability to seek correction, amendments, or deletion of such information where it is inaccurate, etc.
    • Principle 6: Disclosure - Does the legislation ensure that information is only disclosed to third parties after notice and informed consent is obtained. Is disclosure allowed for law enforcement purposes done in accordance with laws in force.
    • Principle 7: Security - Does the legislation/regulation ensure that information that is collected and processed under that Act, is done so in a manner that protects against loss, unauthorized access, destruction, etc.
    • Principle 8: Openness - Does the legislation/regulation require that any entity processing data take all necessary steps to implement practices, procedures, policies and systems in a manner proportional to the scale, scope, and sensitivity to the data that is collected and processed and is this information made available to all individuals in an intelligible form, using clear and plain language?
    • Principle 9: Accountability - Does the legislation/regulation provide for measures that ensure compliance of the privacy principles? This would include measures such as mechanisms to implement privacy policies; including tools, training, and education; and external and internal audits.

     

    Analysis of the Aadhaar Act

    The Aadhaar Act has been brought about to give legislative backing to the most ambitious individual identity programme in the world, which aims to provide a unique identity number to the entire population of India. The rationale behind this scheme is to correctly identify the beneficiaries of government schemes and subsidies so that leakages in government subsidies may be reduced. In furtherance of this rationale, the Aadhaar Act gives the Unique Identification Authority of India (“UIDAI”) the power to enroll individuals by collecting their demographic and biometric information and issuing an Aadhaar number to them. Below is an analysis of the Act based on the privacy principles enumerated in the A.P. Shah Committee Report.

    Collection Limitation

    Collection of Biometric and Demographic Information: The Aadhaar Act entitles every “resident” [1] to obtain an Aadhaar number by submitting his/her biometric (photograph, fingerprint, iris scan) and demographic information (name, date of birth, address [2]) [3]. It must be noted that the Act leaves scope for further information to be included in the collection process if so specified by regulations. Moreover, although the Act specifically provides what information can be collected, it does not specifically prohibit the collection of further information. This becomes relevant because it makes it possible for enrolling agencies to collect extra information relating to individuals without any legal consequences for such an act.

    Authentication Records: The UIDAI is mandated to maintain authentication records for a period which is yet to be specified (and shall be specified in the regulations) but it cannot collect or keep any information regarding the purpose for which the authentication request was made [4].

    Unauthorized Collection: Any person who is not authorized to collect information under the Act, and pretends that he is authorized to do so, shall be punishable with imprisonment for a term which may extend to three years or with a fine which may extend to Rs. 10,000/- or both. In the case of companies, the maximum fine would be increased to Rs. 10,00,000/- [5]. It must be noted that the section, as it is currently worded, seems to criminalize the act of impersonation of authorized individuals, and the actual collection of information is not required to complete this offence. It is not clear whether this section will apply if a person who is authorized to collect information under the Act in general collects some information that he/she is not authorized to collect.

    Notice

    Notice during Collection: The Aadhaar Act requires that the agencies enrolling people for distribution of Aadhaar numbers should give people notice regarding: (a) the manner in which the information shall be used; (b) the nature of recipients with whom the information is intended to be shared during authentication; and (c) the existence of a right to access information, the procedure for making requests for such access, and details of the person or department in-charge to whom such requests can be made [6]. A failure to comply with this requirement will make the agency liable for imprisonment of up to 3 years or a fine of Rs. 10,000/- or both. In the case of companies, the maximum fine would be increased to Rs. 10,00,000/- [7]. It must be noted that the Act leaves the manner of giving such notice to regulations and does not specify how this notice is to be provided, which leaves important specifics to the executive.

    Notice during Authentication: The Aadhaar Act requires that authenticating agencies shall give information to the individuals whose information is to be authenticated regarding (a) the nature of information that may be shared upon authentication; (b) the uses to which the information received during authentication may be put by the requesting entity; and (c) alternatives to submission of identity information to the requesting entity [8]. A failure to comply with this requirement will make the agency liable for imprisonment of up to 3 years or a fine of Rs. 10,000/- or both. In the case of companies, the maximum fine would be increased to Rs. 10,00,000/- [9]. Just as in the case of notice during collection, the manner in which the notice is required to be given is left to regulations, leaving an unclear picture as to how comprehensive, accessible, and frequent this notice must be.

    Access and Correction

    Updating Information: The Aadhaar Act gives the UIDAI the power to require residents to update their demographic and biometric information from time to time so as to maintain its accuracy [10].

    Access to Information: The Aadhaar Act provides that Aadhaar number holders may request the UIDAI to provide access to their identity information except their core biometric information [11]. It is not clear why access to the core biometric information [12] is not provided to an individual. Further, since Section 6 seems to place the responsibility for updation and accuracy of biometric information on the individual, it is not clear how a person is supposed to know that the biometric information contained in the database has changed if he/she does not have access to it. It may also be noted that the Aadhaar Act provides only for a request to the UIDAI for access to the information and does not make access to the information a right of the individual; this would mean that it is entirely at the discretion of the UIDAI whether to grant access to the information once a request has been made.

    Alteration of Information: The Aadhaar Act gives individuals the right to request the UIDAI to alter their demographic information if it is incorrect or has changed, and their biometric information if it is lost or has changed. Upon receipt of such a request, if the UIDAI is satisfied, it may make the necessary alteration and inform the individual accordingly. The Act also provides that no identity information in the central database shall be altered except as provided in the regulations [13]. This section provides for alteration of identity information only in the circumstances given in the section; for example, demographic information cannot be changed if it has been lost, and similarly biometric information cannot be changed if it is inaccurate. Further, the section does not give the individual a right to get the information altered but only entitles him/her to request the UIDAI to make a change, and the final decision is left to the “satisfaction” of the UIDAI.

    Access to Authentication Record: Every individual is given the right to obtain his/her authentication record in a manner to be specified by regulations. [14]

    Disclosure

    Sharing during Authentication: The UIDAI is entitled to respond to any authentication query with a positive, negative or any other appropriate response, and may share identity information, except core biometric information, with the requesting entity [15]. The language in this provision is ambiguous, and it is unclear what 'identity information' may be shared and why it would be necessary to share such information, as Aadhaar is meant to be only a means of authentication so as to remove duplication.

    Potential Disclosure during Maintenance of CIDR: The UIDAI has been given the power to appoint one or more entities to establish and maintain the Central Identities Data Repository (CIDR) [16]. If a private entity is involved in the maintenance and establishment of the CIDR, it can be presumed that it would, to some degree, have access to the information stored in the CIDR, yet there are no clear standards in the Act regarding this potential access or the process for appointing such entities. The fact that the UIDAI has been given the freedom to appoint an outside entity to maintain a sensitive asset such as the CIDR raises security concerns.

    Restriction on Sharing Information: The Aadhaar Act creates a blanket prohibition on the usage of core biometric information for any purpose other than the generation of Aadhaar numbers and also prohibits its sharing for any reason whatsoever [17]. Other identity information is allowed to be shared in the manner specified under the Act or as may be specified in the regulations [18]. The Act further provides that requesting entities shall not disclose the identity information except with the prior consent of the individual to whom the information relates [19]. There is also a prohibition on publicly displaying Aadhaar numbers or core biometric information except as specified by regulations [20]. Officers of the UIDAI or the employees of the agencies employed to maintain the CIDR are prohibited from revealing the information stored in the CIDR or authentication records to anyone [21]. It is not clear why an exception has been carved out and what circumstances would require publicly displaying Aadhaar numbers and core biometric information, especially since the reasons for which such important information may be displayed have been left to regulations, which have relatively less oversight. The section also provides requesting entities with the option to further disclose information if they take the consent of the individuals. This may lead to a situation where a requesting entity, perhaps the provider of an essential service, takes the consent of the individual to disclose his/her information in a standard form contract, without the option of saying no to such a request. It may lead to situations where the choice is between giving consent to disclosure or denial of service altogether. For this reason, there should be an opt-in and opt-out provision wherever a requesting entity has the power to ask for disclosure of information, so that people are not coerced into giving consent.

    Disclosure in Specific Cases: The prohibition on disclosure of information (except for core biometric information) does not apply in the case of any disclosure made pursuant to an order of a court not below that of a District Judge [22]. There is another exception to the prohibition on disclosure of information (including core biometric information) in the interest of national security, if so directed by an officer not below the rank of a Joint Secretary to the Government of India specially authorised in this behalf by an order of the Central Government. Before any such direction can take effect, it will be reviewed by an oversight committee consisting of the Cabinet Secretary and the Secretaries to the Government of India in the Department of Legal Affairs and the Department of Electronics and Information Technology. Any such direction shall be valid for a period of three months and may be extended by another three months after review by the Oversight Committee [23]. Although this provision has been criticized, and rightly so, for its lack of accountability, since the entire process is handled within the executive and there is no independent oversight, it must be mentioned that the level of oversight provided here is similar to that provided for interception requests, which involve a much graver, if not the same, level of invasion of privacy.

    Penalty for Disclosure: Any person who intentionally and in an unauthorized manner discloses, transmits, copies or otherwise disseminates any identity information collected in the course of enrolment or authentication shall be punishable with imprisonment of up to 3 years or a fine of Rs. 10,000/- or both. In the case of companies, the maximum fine would be increased to Rs. 10,00,000/- [24]. Further, any person who intentionally and in an unauthorised manner accesses information in the CIDR [25], downloads, copies or extracts any data from the CIDR [26], or reveals, shares or distributes any identity information, shall be punishable with imprisonment of up to 3 years and a fine of not less than Rs. 10,00,000/-.

    Consent

    Consent for Authentication: A requesting entity has to take the consent of the individual before collecting his/her identity information for the purposes of authentication, and also has to inform the individual of the alternatives to submission of the identity information [27]. Although this provision requires entities to take consent from individuals before collecting information for authentication, how useful this requirement will be in practice remains to be seen. There may be instances where a requesting entity takes the individual's consent in a standard form contract, without the individual realizing what he/she is consenting to.

    Note: The Aadhaar Act provides no requirement or standard for the form of consent that must be taken during enrolment. This is significant, as enrolment is the point at which individuals provide raw biometric material; during previous enrolment, consent has been a point of weakness and an enabler of function creep, as it allows the UIDAI to share information with agencies engaged in the delivery of welfare services [28].

    Purpose

    Use of Information: Authenticating entities are allowed to use identity information only for the purpose of submission to the CIDR for authentication [29]. Further, the Act specifies that identity information available with a requesting entity shall not be used for any purpose other than that specified to the individual at the time of submitting the information for authentication [30]. The Act also provides that any authenticating entity which uses the information for a purpose not already specified will be liable to punishment of imprisonment of up to 3 years or a fine of Rs. 10,000/-, or both. In the case of companies, the maximum fine is increased to Rs. 10,00,000/- [31].

    Security

    Security and Confidentiality of Information: It is the responsibility of the UIDAI to ensure the security and confidentiality of the identity and authentication information and it is required to take all necessary action to ensure that the information in the CIDR is protected against unauthorized access, use or disclosure and against accidental or intentional destruction, loss or damage [32]. The UIDAI is required to adopt and implement appropriate technical and organisational security measures and also ensure that its contractors do the same [33]. It is also required to ensure that the agreements entered into with its contractors impose the same conditions as are imposed on the UIDAI under the Act and that they shall act only upon the instructions of the UIDAI [34].

    Biometric Information to be Electronic Record: The biometric information collected by the UIDAI has been deemed to be an “electronic record” as well as “sensitive personal data or information”, which means that in addition to the provisions of the Aadhaar Act, the provisions of the Information Technology Act, 2000 will also apply to such information [35]. It must be noted that while the Act lays down the principle that the UIDAI is required to ensure the security of the information, it does not lay down any guidelines on the minimum security standards to be implemented by the Authority. Through this section, however, the legislature has linked the security standards contained in the IT Act to the information covered by this Act. While this is a clean way of dealing with the issue, some may argue that the extremely sensitive nature of the information contained in the CIDR requires security standards much stricter than those provided in the IT Act. However, a perusal of Rule 8 of the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 shows that the Rules themselves provide that the standard of security must be commensurate with the information assets being protected. The Act thus seems to provide enough room to protect such important information, but perhaps leaves too much room for interpretation on such an important issue.

    Penalty for Unauthorised Access: Apart from the security provisions included in the legislation, the Aadhaar Act also provides for punishment of imprisonment of up to 3 years and a fine of not less than Rs. 10,00,000/- in case of the following offences:

    1. introduction of any virus or other computer contaminant in the CIDR [36];
    2. causing damage to the data in the CIDR [37];
    3. disruption of access to the CIDR [38];
    4. denial of access to any person who is authorised to access the CIDR [39];
    5. destruction, deletion or alteration of any information stored in any removable storage media or in the CIDR or diminishing its value or utility or affecting it injuriously by any means [40];
    6. stealing, concealing, destroying or altering any computer source code used by the Authority with an intention to cause damage [41].

    Further, unauthorised use of, or tampering with, the data in the CIDR or in any removable storage medium with the intent of modifying information relating to an Aadhaar number holder, or discovering any information thereof, is also punishable with imprisonment for a term which may extend to 3 years and a fine which may extend to Rs. 10,000/- [42].

    Accountability

    Inspections and Audits: One of the functions listed in the powers and functions of the UIDAI is the power to call for information and records, conduct inspections, inquiries and audit of the operations of the CIDR, Registrars, enrolling agencies and other agencies appointed under the Aadhaar Act [43].

    Grievance Redressal: Another function of the UIDAI is to set up facilitation centres and grievance redressal mechanisms for redressal of grievances of individuals, Registrars, enrolling agencies and other service providers [44]. Considering the importance that the government has given, and intends to give, to Aadhaar in the future, an essential task such as grievance redressal should not be left entirely to the discretion of the UIDAI; some grievance redressal mechanism should be incorporated into the Act itself.

    Openness

    There does not seem to be any provision in the Aadhaar Act which requires the UIDAI to make its privacy policies and procedures available to the public, even though the UIDAI has the responsibility to maintain the security and confidentiality of the information.

     

    Endnotes

    [1] A resident is defined as any person who has resided in India for a period of at least 182 days in the previous 12 months.

    [2] It has been specified that demographic information will not include race, religion, caste, tribe, ethnicity, language, records of entitlement, income or medical history.

    [3] Section 3(1) of the Aadhaar Act.

    [4] Section 32(1) and 32(3) of the Aadhaar Act.

    [5] Section 36 of the Aadhaar Act.

    [6] Section 3(2) of the Aadhaar Act.

    [7] Section 41 of the Aadhaar Act.

    [8] Section 8(3) of the Aadhaar Act.

    [9] Section 41 of the Aadhaar Act.

    [10] Section 6 of the Aadhaar Act.

    [11] Section 28, proviso of the Aadhaar Act.

    [12] Core biometric information is defined as fingerprints, iris scan or other biological attributes which may be specified by regulations.

    [13] Section 31 of the Aadhaar Act.

    [14] Section 32(2) of the Aadhaar Act.

    [15] Section 8(4) of the Aadhaar Act.

    [16] Section 10 of the Aadhaar Act.

    [17] Section 29(1) of the Aadhaar Act.

    [18] Section 29(2) of the Aadhaar Act.

    [19] Section 29(3)(b) of the Aadhaar Act.

    [20] Section 29(4) of the Aadhaar Act.

    [21] Section 28(5) of the Aadhaar Act.

    [22] Section 33(1) of the Aadhaar Act.

    [23] Section 33(2) of the Aadhaar Act.

    [24] Section 37 of the Aadhaar Act.

    [25] Section 38(a) of the Aadhaar Act.

    [26] Section 38(b) of the Aadhaar Act.

    [27] Section 8(2)(a) and (c) of the Aadhaar Act.

    [28] For example, see: http://www.karnataka.gov.in/aadhaar/Downloads/Application%20form%20-%20English.pdf.

    [29] Section 8(2)(b) of the Aadhaar Act.

    [30] Section 29(3)(a) of the Aadhaar Act.

    [31] Section 37 of the Aadhaar Act.

    [32] Section 28(1), (2) and (3) of the Aadhaar Act.

    [33] Section 28(4)(a) and (b) of the Aadhaar Act.

    [34] Section 28(4)(c) of the Aadhaar Act.

    [35] Section 30 of the Aadhaar Act.

    [36] Section 38(c) of the Aadhaar Act.

    [37] Section 38(d) of the Aadhaar Act.

    [38] Section 38(e) of the Aadhaar Act.

    [39] Section 38(f) of the Aadhaar Act.

    [40] Section 38(h) of the Aadhaar Act.

    [41] Section 38(i) of the Aadhaar Act.

    [42] Section 39 of the Aadhaar Act.

    [43] Section 23(2)(l) of the Aadhaar Act.

    [44] Section 23(2)(s) of the Aadhaar Act.

     

    Vulnerabilities in the UIDAI Implementation Not Addressed by the Aadhaar Bill, 2016

    by Pooja Saxena and Amber Sinha — last modified Mar 21, 2016 08:33 AM
    In this infographic, we document the various issues in the Aadhaar enrolment process implemented by the UIDAI, and highlight the vulnerabilities that the Aadhaar Bill, 2016 does not address. The infographic is based on Vidushi Marda’s article 'Data Flow in the Unique Identification Scheme of India,' and is designed by Pooja Saxena, with inputs from Amber Sinha.

     

    Download the infographic: PDF and PNG.

     

    Credits: The illustration uses the following icons from The Noun Project - Thumpbrint created by Daouna Jeong, Duplicate created by Pham Thi Dieu Linh, Copy created by Mahdi Ehsaei.

    License: It is shared under Creative Commons Attribution 4.0 International License.

     


     

    The National Privacy Principles

    by Pooja Saxena and Amber Sinha — last modified Mar 21, 2016 09:48 AM
    In this infographic, we try to break down the National Privacy Principles developed by the Group of Experts on Privacy led by the Former Chief Justice A.P. Shah in 2012.

    License: It is shared under Creative Commons Attribution 4.0 International License.

    CIS' Statement on Sexual Harassment at ICANN55

    by Vidushi Marda — last modified Mar 21, 2016 03:22 PM

    The Centre for Internet and Society

    Statement on Sexual Harassment at ICANN55


    The Centre for Internet and Society (“CIS”) strongly condemns the acts of sexual harassment that took place against one of our representatives, Ms. Padmini Baruah, during ICANN 55 in Marrakech. It is completely unacceptable that an event the scale of an ICANN meeting does not have in place a formal redressal system, a neutral point of contact or even a policy for complainants who have been put through the ordeal of sexual harassment. ICANN cannot claim to be inclusive or diverse if it does not formally recognise a specific procedure or recourse under such instances.


    Ms. Baruah is by no means the first young woman to be subject to such treatment at an ICANN event, but she is the first to raise a formal complaint. Following the incident, she was given no immediate remedy or formal recourse, which has left her with no option but to make the incident publicly known in the interim. The ombudsman’s office has been in touch with her, but this administrative process is simply inadequate for rights violations.


    Ms. Baruah has received support from various community, staff, and board members. While we are thankful for their support, we believe that this situation can be better dealt with through some positive measures. We ask that ICANN carry out the following steps in order to make its meetings a truly safe and inclusive space:


    1. Institute a formal redressal system and policy with regard to sexual harassment within ICANN. The policy must be displayed on the ICANN website, at the venue of meetings and made available in delegate kits.

    2. Institute an Anti Sexual Harassment Committee that is neutral and approachable. Merely having an ombudsman who is a white male, however well intentioned, is inadequate and completely unhelpful to the complainant. The present situation is one where the ombudsman has no effective power and only advises the board.

    3. Conduct periodic gender and anti sexual harassment training of the ICANN board to help them better understand, recognise and address instances of sexual harassment.

    4. Conduct periodic gender and anti sexual harassment training for the ombudsman, even if he/she will not be the exclusive point of contact for complainants, as the ombudsman forms an important part of community and participant engagement.

    5. Conduct periodic gender sensitisation for the ICANN community.

     

    Too Clever By Half: Strengthening India’s Smart Cities Plan with Human Rights Protection

    by Vanya Rakesh last modified Mar 22, 2016 01:49 PM
    The data involved in planning for urbanized and networked cities are currently flawed and politically-inflected. Therefore, we must ensure that basic human rights are not violated in the race to make cities “smart”.

    Data-driven urban cities have drawn criticism as the initiative tends to homogenize Indian culture and treat cities alike in terms of their political economy, culture, and governance. Photo Credit: Highways Agency, CC BY 2.0/Flickr

    The article was published in the Wire on March 21, 2016


    As Indian cities reposition themselves to play a significant role in development through urban transformation, the government has envisioned building 100 smart cities across the country. In the absence of a precise definition of what exactly constitutes a smart city, the consensus that has evolved is that modern technology will be harnessed to deliver smart outcomes.

    Here, Big Data and analytics will play a predominant role, by way of cloud, mobile and other social technologies that gather data in order to ascertain, and accordingly address, the concerns of people.

    Role of Big Data

    Leveraging city data and using geographical information systems (GIS) to collect valuable information about stakeholders are techniques commonly used in smart cities to run emergency systems, create dynamic parking areas, name streets, and develop monitoring. Other sources of such data include fire alarms, disaster management systems and energy-saving mechanisms, which sense, communicate, analyze and combine information across platforms to generate data that facilitates decision-making and the management of services.

    According to the Department of Electronics and Information Technology, the government’s plan to develop smart cities in the country could lead to a massive expansion of the IoT (Internet of Things) ecosystem within the country. The revised draft IoT policy aims at developing IoT products in this domain, using Big Data to support government decision-making processes. For example, a key opportunity identified in India concerns traffic management and congestion. Here, collecting data during peak hours, processing information in real time and using GPS history from mobile phones can give insight into the routes taken and modes of transportation preferred by commuters, and thus help deal with traffic woes. The Bengaluru Transport Information System (BTIS) was an early adopter of big data technology, aggregating data streams from multiple sources to enable planning of travel routes, avoidance of traffic congestion, car-pooling, and so on.

    Challenges

    The idea of a data-driven urban city has drawn criticism as the initiative tends to homogenize Indian culture and change the fabric of cities by treating them alike in terms of their political economy, culture, and governance.

    While the idea of a smart city rests on the assumption that technology-based solutions and techniques are a viable answer to city problems in India, it is pertinent to note that the collection of personal real-time data may blur the line between personal data and the larger datasets collected from multiple sources, leaving open questions around privacy, and around the use and reuse of such data, especially by companies and businesses providing services in legally and morally grey areas.

    Privacy concerns also cloud the dependence on big data for the functioning of smart cities, as it may lead to an erosion of privacy in different forms, for example if the data is used for surveillance, for identification and disclosure without consent, or to draw discriminatory inferences.

    Apart from the right to privacy, a number of an individual's rights, such as the right to access and security rights, would be at risk, as big data may enable algorithmic social sorting (whether people get a loan, a tenancy, a job, etc.) and anticipatory governance using predictive profiling (wherein data precedes how a person is policed and governed). Dataveillance raises concerns around access to and use of data, owing to the growth of digital footprints (data people themselves leave behind) and data shadows (information about them generated by others). The challenges of obtaining access to correct and standardized data, and of proper communication, also remain hurdles to be overcome.

    The huge, yet untapped, amount of data available in India requires proper categorization, and this makes a robust and reliable data management system a prerequisite for the realization of the country’s smart city vision. Cooperation between agencies in Indian cities, and a holistic technology-based approach using ICT and geospatial technologies (GT) to resolve issues arising from the wide use of technology, is the need of the hour. The skills to manage, analyze and develop insights for effective policy decisions are still being developed, particularly in the public sector. Recognizing this, Nasscom has announced the setting up of a Centre of Excellence (CoE) in India to create a quality workforce.

    Though it is apparent that data will play a considerable role in the smart city mission, the peril is the lack of planning in terms of policies to govern big data mechanics and the use of data. This calls for the development of suitable standards and policies to guide technology providers and administrators in managing and interpreting data in a secure environment.

    Legal hurdles

    The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011 deal with accountability regarding data security and protection as they apply to ‘body corporates’ and digital data. A ‘body corporate’ is defined under the IT Act as “any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities”. It can therefore be ascertained that government bodies or individuals collecting and using Big Data for smart cities in India would be excluded from the scope of these Rules. This highlights the lack of a suitable regulatory framework to take into account potential privacy challenges, which currently seem to be underestimated by our planners and administrators.

    Regarding access to open data, though the National Data Sharing and Accessibility Policy 2012 recognizes sensitive data, the term has not been clearly defined under it. However, the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 clearly define sensitive personal data or information. The open data framework should therefore refer to or adopt a clear definition drawing from the section 43A Rules to bring clarity in this regard.

    Way forward

    As India moves toward a digital transformation, highlighted by flagship programmes like the Smart Cities Mission, Digital India and the UID project, data regulation and the recognition of data use will change the nature of the relationship between the state and the individual. However, this seems to have been overlooked. Policies that regulate the country's digital environment will intertwine with urban policies because of the smart cities mission. The use of ICTs in the form of IoT and Big Data entails access to open data, bringing another policy area into its ambit which needs consideration. The identification and development of open standards for IoT, particularly for interoperability of cross-sector data, must also be looked at.

    To address the privacy concerns arising from the use of big data techniques, nuanced data legislation is required. For a conducive, technologically equipped big data environment, the government must increase efforts to create awareness about the risks involved and provide assurance about the responsible use of data.

    Additionally, a lack of skilled and educated manpower to deal with such data effectively must also be duly considered.

    The concept note produced by the government reflects how it visualizes smart cities as a product of marrying the physical form of cities and their infrastructure to a wider discourse on the use of technology and big data in city governance. This makes the role of big data quite indispensable, almost synonymous with the very notion of a smart city. The important thing to understand, however, is that data analytics is only a part of the idea. What is additionally required is an effective governance mechanism and political will. Collaboration and co-operation are the glue that will make this idea work, and it is important to merge urban development policies with principles of democracy. The data involved in planning for urbanized and networked cities are currently flawed and politically inflected. Therefore, collective efforts must go into minimizing their pernicious effects, to ensure that basic human rights are not violated in the race to make cities “smart”.


    Vanya Rakesh is Programme Officer, The Centre for Internet & Society (CIS), Bangalore. Elonnai Hickok, Policy Director of CIS, also provided inputs for this story.

    Surveillance Project

    by Sunil Abraham last modified Apr 05, 2016 03:21 PM
    The Aadhaar project’s technological design and architecture is an unmitigated disaster and no amount of legal fixes in the Act will make it any better.

    A gummy finger to fool a biometric scanner can be produced using glue and a candle. Picture by K. Murali Kumar

    The article will be published in Frontline, April 15, 2016 print edition.


    Zero. The probability of some evil actor breaking into the central store of authentication factors (such as keys and passwords) for the Internet. Why? That is because no such store exists. And, what is the probability of someone evil breaking into the Central Identities Data Repository (CIDR) of the Unique Identification Authority of India (UIDAI)? Greater than zero. How do we know this? One, the central store exists and two, the Aadhaar Bill lists breaking into this central store as an offence. Needless to say, it would be redundant to have a law that criminalises a technological impossibility. What is the consequence of someone breaking into the central store? Remember, biometrics is just a fancy word for non-consensual and covert identification technology. High-resolution cameras can capture fingerprints and iris information from a distance.

    In other words, on March 16, when Parliament passed the Bill, it was as if Indian lawmakers wrote an open letter to criminals and foreign states saying, “We are going to collect data to non-consensually identify all Indians and we are going to store it in a central repository. Come and get it!” Once again, how do I know that the CIDR will be compromised at some date in the future? How can I make that policy prediction with no evidence to back it up? To quote Sherlock Holmes, “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” If a back door to the CIDR exists for the government, then the very same back door can be used by an enemy within or from outside. In other words, the principle of decentralisation in cybersecurity does not require repeated experimental confirmation across markets and technologies.

    Zero. The chances that you can fix with the law what you have broken with poor technological choices and architecture. And, to a large extent vice versa. Aadhaar is a surveillance project masquerading as a development intervention because it uses biometrics. There is a big difference between the government identifying you and you identifying yourself to the government. Before UID, it was much more difficult for the government to identify you without your knowledge and conscious cooperation. Tomorrow, using high-resolution cameras and the power of big data, the government will be able to remotely identify those participating in a public protest. There will be no more anonymity in the crowd. I am not saying that law-enforcement agencies and intelligence agencies should not use these powerful technologies to ensure national security, uphold the rule of law and protect individual rights. I am only saying that this type of surveillance technology is inappropriate for everyday interactions between the citizen and the state.

    Some software engineers believe that there are technical fixes for these concerns; they point to the consent layer in the India stack developed through a public-private partnership with the UIDAI. But this is exactly what Evgeny Morozov has dubbed “technological solutionism”—fundamental flaws like this cannot be fixed by legal or technical band-aid. If you were to ask the UIDAI how do you ensure that the data do not get stolen between the enrolment machine and the CIDR, the response would be, we use state-of-the-art cryptography. If cryptography is good enough for the UIDAI why is it not good enough for citizens? That is because if citizens use cryptography [on smart cards] to identify themselves to the state, the state will need their conscious cooperation each time. That provides the feature that is required for better governance without the surveillance bonus. If you really must use biometrics, it could be stored on the smart card after being digitally signed by the enrolment officer. If there is ever a doubt whether the person has stolen the smart card, a special machine can be used to read the biometrics off the card and check that against the person. This way the power of biometrics would be leveraged without any of the accompanying harms.
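    To make the smart-card alternative sketched above concrete, here is a minimal illustration in Python, assuming an Ed25519 signature and the third-party 'cryptography' package; the key handling and data shown are hypothetical, and this is not UIDAI's or SCOSTA's actual design.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Enrolment officer's key pair. In a real deployment this key would be issued
    # and certified by the enrolling authority, not generated on the spot.
    officer_key = ed25519.Ed25519PrivateKey.generate()
    officer_pub = officer_key.public_key()

    # The captured biometric template is signed once, at enrolment, and written
    # to the smart card together with the signature; no central store is needed.
    biometric_template = b"<fingerprint or iris template bytes>"
    signature = officer_key.sign(biometric_template)

    # Later, a verification machine reads the template and signature off the card,
    # re-captures the holder's biometric, matches it locally against the template,
    # and checks that the template was signed at enrolment and not altered since.
    try:
        officer_pub.verify(signature, biometric_template)
        print("template authentic: signed at enrolment and unmodified")
    except InvalidSignature:
        print("template or signature has been tampered with")

    Because verification happens offline against the card, the citizen's conscious cooperation is needed for each transaction and no central repository of raw biometrics has to exist.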

    Zero. This time, for the utility of biometrics as a password or authentication factor. There are two principal reasons for which the Act should have prohibited the use of biometrics for authentication. First, biometric authentication factors are irrevocable unlike passwords, PINs, digital signatures, etc. Once a biometric authentication factor has been compromised, there is no way to change it. The security of a system secured by biometrics is permanently compromised. Second, our biometrics is so easy to steal; we leave our fingerprints everywhere.

    Also, if I upload my biometric data onto the Internet, I can then plausibly deny all transactions against my name in the CIDR. In order to prevent me from doing that, the government will have to invest in CCTV cameras [with large storage] as it does at passport-control borders and as banks do at ATMs. If you anyway have to invest in CCTV cameras, then you might as well stick with digital signatures on smart cards, as the previous National Democratic Alliance (NDA) government proposed with the SCOSTA (Smart Card Operating System Standard for Transport Application) standard for the MNIC (Multipurpose National ID Card). Leveraging smart card standards like EMV would harness greater network effects, thanks to the global financial infrastructure of banks. These network effects will drive down the cost of equipment and afford Indians greater global mobility. And most importantly, when a digital signature is compromised the user can be issued a new smart card. As Rufo Guerreschi, executive director of Open Media Cluster, puts it, “World leaders and IT experts should realise that citizen freedoms and states’ ability to pursue suspects are not an ‘either or’ but a ‘both or neither’.”

    Near zero. We now move to biometrics as the identification factor. The rate of potential duplicates, or “False Positive Identification Rate”, is, according to the UIDAI, only 0.057 per cent, which it says means that only “570 resident enrolments will be falsely identified as duplicate for every one million enrolments.” However, according to an article published in Economic & Political Weekly by my colleague at the Centre for Internet and Society, Hans Verghese Mathews, this will result in one out of every 146 people being rejected during enrolment when total enrolment reaches one billion people. In its rebuttal, the UIDAI disputes the conclusion but offers no alternative extrapolation or mathematical assumptions. “Without getting too deep into the mathematics”, it offers an account of “a manual adjudication process to rectify the biometric identification errors”.
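    As a back-of-the-envelope restatement of the figures quoted above (and not Mathews' actual extrapolation), the two headline numbers translate into absolute terms as follows:

    # Python: restating the quoted figures in absolute terms.
    fpir = 0.057 / 100                 # UIDAI's quoted False Positive Identification Rate
    print(round(fpir * 1_000_000))     # 570 enrolments falsely flagged as duplicates per million

    # Mathews' reported estimate of roughly 1 in 146 enrolees wrongly rejected
    # once total enrolment reaches one billion, expressed as a head count.
    print(round(1_000_000_000 / 146))  # about 6.8 million people

    The disagreement, in other words, is over how the error rate scales as the gallery grows to a billion records, and that is precisely the extrapolation the UIDAI's rebuttal does not supply.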

    This manual adjudication determines whether you exist and has none of the elements of natural justice, such as notice to the affected party and an opportunity to be heard. Elimination of ghosts is impossible if only machines and unaccountable humans perform this adjudication, because there is zero skin in the game. There are free tools available on the Internet, such as SFinGe (Synthetic Fingerprint Generator), which allow you to create fake biometrics. The USB cables on the UIDAI-approved enrolment setup can be intercepted using generic hardware that can be bought online. With a little bit of clever programming, countless ghosts can be created which will easily clear the manual adjudication process that the UIDAI claims will ensure that “no one is denied an Aadhaar number because of a biometric false positive”.

    Near zero. This time for surveillance, which I believe should be used like salt in cooking. Essential in small quantities but counterproductive even if slightly in excess. There is a popular misconception that privacy researchers such as myself are opposed to surveillance. In reality, I am all for surveillance. I am totally convinced that surveillance is good anti-corruption technology.

    But I also want good returns on investment for my surveillance tax rupee. According to Julian Assange, transparency requirements should be directly proportionate to power; in other words, the powerful should be subject to more surveillance. And conversely, I add, privacy protections must be inversely proportionate to power—or again, in other words, the poor should be spared from intrusions that do not serve the public interest. The UIDAI makes the exact opposite design assumption; it assumes that the poor are responsible for corruption and that technology will eliminate small-ticket or retail corruption. But we all know that politicians and bureaucrats are responsible for most large-ticket corruption.

    Why does not the UIDAI first assign UID numbers to all politicians and bureaucrats? Then using digital signatures why do not we ensure that we have a public non-repudiable audit trail wherein everyone can track the flow of benefits, subsidies and services from New Delhi to the panchayat office or local corporation office? That will eliminate big-ticket or wholesale corruption. In other words, since most of Aadhaar’s surveillance is targeted at the bottom of the pyramid, there will be limited bang for the buck. Surveillance is the need of the hour; we need more CCTVs with microphones turned on in government offices than biometric devices in slums.

    Instantiation technology

    One. And zero. In the contemporary binary and digital age, we have lost faith in the old gods. Science and its instantiation technology have become the new gods. The cult of technology is intolerant to blasphemy. For example, Shekhar Gupta recently tweeted saying that part of the opposition to Aadhaar was because “left-libs detest science/tech”. Technology as ideology is based on some fundamental articles of faith: one, new technology is better than old technology; two, expensive technology is better than cheap technology; three, complex technology is better than simple technology; and four, all technology is empowering or at the very least neutral. Unfortunately, there is no basis in science for any of these articles of faith.

    Let me use a simple story to illustrate this. I was fortunate to serve as a member of a committee that the Department of Biotechnology established to finalise the Human DNA Profiling Bill, 2015, which was to be introduced in Parliament in the last monsoon session. Aside: the language of the Act also has room for the database to expand into a national DNA database circumventing 10 years of debate around the controversial DNA Profiling Bill, 2015. The first version of this Bill that I read in January 2013 said that DNA profiling was a “powerful technology that makes it possible to determine whether the source of origin of one body substance is identical to that of another … without any doubt”. In other words, to quote K.P.C. Gandhi, a scientist from Truth Labs, “I can vouch for the scientific infallibility of using DNA profiling for carrying out justice.”

    Unfortunately, though, the infallible science is conducted by fallible humans. During one of the meetings, a scientist described the process of generating a biometric profile. The first step after the laboratory technician generated the profile was to compare the generated profile with her or his own profile, because during the process of loading the machine with the DNA sample, some of the laboratory technician’s DNA could have contaminated the sample. This error would not be a possibility in a much older, cheaper and more rudimentary biometric technology, for example photography. A photographer developing a photograph in a darkroom does not have to ensure that his or her own image has not accidentally ended up on the negative. But the UIDAI is filled with die-hard techno-utopians; if you tell them that fingerprints will not work for those who are engaged in manual labour, they will say, then we will use iris-based biometrics. But again, complex technologies are more fragile and often come with increased risks. They may provide greater performance and features, but sometimes they are easier to circumvent. A gummy finger to fool a biometric scanner can be produced using glue and a candle, but to fake a passport takes a lot of sophisticated technology. Therefore, it is important for us as a nation to give up our unquestioning faith in technology and start to debate the exact technological configurations of surveillance technology for different contexts and purposes.

    One. This time representing a monopoly. Prior to the UID project, nobody got paid when citizens identified themselves to the state. While the Act says that the UIDAI will get paid, it does not specify how much. Sooner or later, this cost of identification will be passed on to the citizens and residents. There will be a consumer-service provider relationship established between the citizen and the state when it comes to identification. The UIDAI will become the monopoly provider of government-trusted identification and authentication services in India. That sounds like a centrally planned communist state to me. Should not the right-wing oppose the Act because it prevents the free market from working? Should not the free market pick the best technology and business model for identification and authentication? Will not that drive the cost of identification and authentication down and ensure higher quality of service for citizens and residents?

    Competing providers

    Competing providers can also publish transparency reports regarding their compliance with data requests from law-enforcement and intelligence agencies, and if this is important to consumers, providers that fare poorly will be punished by the market. The government can use mechanisms such as permanent and temporary bans and price regulation as disincentives for the creation of ghosts. There will be a clear financial incentive to keep the database clean. Just as the government established a regulatory framework for digital certificates in the Information Technology Act, allowing for e-commerce and e-governance, the Aadhaar Bill should ideally have done something similar and established an ecosystem for multiple actors to provide services in this two-sided market. For it is impossible for a “small government” to have the expertise and experience to run one of the world’s largest databases of biometric and transaction records securely in perpetuity.

    To conclude, I support the use of biometrics. I support government use of identification and authentication technology. I support the use of ID numbers in government databases. I support targeted surveillance to reduce corruption and protect national security. But I believe all these must be put in place with care and thought so that we do not end up sacrificing our constitutional rights or compromising the security of our nation state. Unfortunately, the Aadhaar project’s technological design and architecture is an unmitigated disaster and no amount of legal fixes in the Act will make it any better. Our children will pay a heavy price for our folly in the years to come. To quote the security guru Bruce Schneier, “Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy.”

    Will Aadhaar Act Address India’s Dire Need For a Privacy Law?

    by Nehaa Chaudhari last modified Apr 05, 2016 04:01 PM

    The article was published by Quint on March 31, 2016.


    Snapshot

    The passage of the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (hereafter “the Act”) has drawn flak for the government from privacy advocates, academia and civil society, to name a few.

    To my mind, the opposition deserves its fair share of criticism (lacking so far), for its absolute failure to engage with and act as a check on the government in the passage of the Act, and the events leading up to it.

    The government’s introduction of the Act as a ‘money bill’ under Article 110 of the Constitution of India (“this/the Article”) is a mockery of the constitutional process. It renders redundant, the role of the Rajya Sabha as a check on the functioning of the Lower House.

    Article 110 limits a ‘money bill’ to only six specific instances, covering tax, the government’s financial obligations, receipts and payments to and from the Consolidated Fund of India, and connected matters.

    The Act lies well outside the confines of the Article; the government’s action may attract the attention of the courts.

    Political One-Upmanship

    Finance Minister Arun Jaitley (left) listens to Reserve Bank of India (RBI) Governor Raghuram Rajan. (Photo: Reuters)

    In the past, the Supreme Court (“the Court”) has stepped into the domain of the Parliament or the Executive when there was a complete and utter disregard for India’s constitutional scheme. In recent constitutional history, this is perhaps most noticeable in the anti-defection cases, (beginning with Kihoto Hollohan in 1992); and, in the SR Bommai case in 1994, on the imposition of the President’s rule in states.

    In hindsight, although India has benefited from the Court’s action in the Bommai and Hollohan cases, it is unlikely that the passage of the Aadhaar Act as a ‘money bill’, reprehensible as it is, meets the threshold required for the Court’s intervention in Parliamentary procedure.

    Besides the manner of its passage, the Act warrants serious scrutiny on its merits.

    Instead, a part of the Aadhaar debate has involved political one-upmanship between the Congress and the BJP, pitting the former’s NIDAI Bill against the latter’s Aadhaar Act.

    While an academic comparison between the two is welcome, its use as a tool for political supremacy would be laughable, were it not deeply problematic, given the many serious concerns highlighted above.

    Better Than UPA Bill?

    The Act may have more privacy safeguards than the earlier UPA Bill. (Photo: iStockphoto)

    And while the Act may have more privacy safeguards than the earlier UPA Bill, critics have argued that they are not up to international standards and that they are, instead, plagued by opacity.

    Additionally, despite claims that the Act is a significant improvement over the UPA Bill, it fails to address concerns, including around the centralised storage of information, that were raised by civil society members and others.

    Perhaps most problematically, however, the Act takes away an individual’s control of her own information. Subsidies, government benefits and services are linked to the mandatory possession of an Aadhaar number (Section 7 of the Act), effectively negating the ‘freedom’ of voluntary enrollment (Section 3 of the Act). This directly contradicts the recommendations of the Justice AP Shah Committee, before which the Unique Identification Authority of India had earlier stated that enrollment in Aadhaar was voluntary.

    To make matters worse, the individual does not have the authority to correct, modify or alter her information; this lies, instead, with the UIDAI alone (Section 31 of the Act). And the sharing of such personal information does not require a court order in all cases.

    Kanhaiya Kumar speaking in JNU on 3 March 2016. (Photo: PTI)

     

    It may be authorised by Executive authorities under the vague, ill-understood concept of ‘national security’ (Section 33(2) of the Act), which the Act does not define. We would do well to learn the dangers of leaving ‘national security’ open to interpretation, in the aftermath of the recent events at JNU.


    These recent events around Aadhaar have only underscored the dire urgency of comprehensive privacy legislation in India and the need to overhaul our data protection laws to meet our constitutional commitments along with international standards.

    Meanwhile, constitutional challenges to the Aadhaar scheme are currently pending in the Supreme Court. The Court’s verdict may well decide the future of the Aadhaar Act, with the stage already set for a constitutional challenge to the legislation. The BJP’s victory in this case may be short-lived.

    Sexual Harassment at ICANN

    by Padmini Baruah last modified Apr 06, 2016 02:40 PM
    Padmini Baruah represented the Centre for Internet & Society at ICANN in March 2016. In a submission to ICANN, she calls upon the ICANN board to implement a system for investigating cases of sexual harassment.

    On Sunday, the 6th of March, 2016, at about 10 am, in the gNSO working session being conducted in the room Diamant, I was sexually harassed by someone from the private sector constituency named Khaled Fattal. He approached me, pulled at my name tag, and passed inappropriate remarks. I felt that my space and safety as a young woman in the ICANN community were at stake.

    I had incidentally been in discussion with the ICANN Ombudsman on developing a clear and coherent sexual harassment policy and procedure for the specific purposes of ICANN’s public meetings. Needless to say, this incident pushed me to take forward, with increased vigour, what had hitherto been a mere academic interest. I was amazed, firstly, that the office of the ombudsman had only two white male members manning it. I was initially inhibited by that very fact, but made two points before them:

    1. With respect to action on my individual case.
    2. With respect to the development of policy in general.

    I would like to put on record that the ombudsman office was extremely sympathetic and gave me a thorough hearing. They assured me that my individual complaint would be recorded, and sought to discuss the possibility of me raising a public statement with respect to policy, as they believed that the Board would be likely to take this suggestion up from a member of the community. I was also informed, astoundingly, that this was the first harassment case reported in the history of ICANN.

    I then, as a newcomer to the community, ran this idea of making a public statement (by no means an easy task, given the stigma that comes with being branded a victim of a sexual crime) by certain senior people within ICANN who had assured me that they would take my side in this regard. To my dismay, I faced two strong strands of victim blaming and intimidation. I was told, in some cases by extremely senior, well respected and prominent women in the ICANN community, that raising this issue would demean my credibility, status and legitimacy in ICANN, that my work would lose importance, and that I would “...forever be branded as THAT woman.” My incident was also trivialised in offhand casual remarks such as “This happened because you are so pretty” and “Oh you filed a complaint, not against me I hope, ha ha”, all of which came from people who are very high up in the ICANN hierarchy. I was also asked if I was looking for money out of this. Click to read the full statement made to ICANN here.


    Aadhaar: Still Too Many Problems

    by Pranesh Prakash last modified Apr 06, 2016 03:31 PM
    While one wishes to welcome the government’s attempt to bring Aadhaar within a legislative framework, the fact is there are too many problems that still remain unaddressed for one to be optimistic.

    The article was published by Livemint on March 7, 2016.


    The Aadhaar Bill has been introduced as a money bill, even though it doesn’t qualify as such under Article 110 of the Constitution. If the Speaker agrees to this, it will render the Rajya Sabha toothless in this matter, and will weaken our democracy. The government should reintroduce it as an ordinary legislative bill, which is what it is.

    While the government has in the past argued before the Supreme Court that Aadhaar is voluntary, Section 7 of the bill allows the government to mandate an Aadhaar number (or application for an Aadhaar number) as a prerequisite for obtaining some subsidies, benefits, services, etc. This undermines its arguments before the Supreme Court, which led the court to pass orders holding that Aadhaar should not be made mandatory. This move to make it mandatory will now need the government to argue that rather than contravene the apex court order, it has instead removed the rationale for it.

    Interestingly, the Bharatiya Janata Party (BJP)-led National Democratic Alliance (NDA) government seems to have done a U-turn on the issue of the unique identification number not being proof of citizenship or domicile. The previous Congress-led United Progressive Alliance (UPA) government never meant the Aadhaar number to be proof of citizenship or domicile. This was attacked by the Yashwant Sinha-chaired standing committee on finance, which feared that illegal immigrants would get Aadhaar numbers. Now, the BJP and the NDA seem to be in agreement with the original UPA vision of Aadhaar.

    Importantly, there is very strong language when it comes to the issue of privacy and confidentiality of the information that is held by the Unique Identification Authority of India (UIDAI). Section 29 (1), for instance, says that no biometric information will be shared for any reason whatsoever, or used for any purpose other than Aadhaar number generation and authentication. However, that provision is undermined wholly by Section 33, which says that “in the interest of national security”, the biometric info may be accessed if authorized by a joint secretary. This will only fan the fears of those who have argued that the real rationale for Aadhaar was not, in fact, delivery of services, but to create a national database of biometric data available to government snoops.



    Further, there are no remedies available for governmental abuse of this provision.

    Lastly, in terms of privacy, the concern of those people who have been opposing Aadhaar is not just that the biometric and other identity information may be leaked to private parties, but also that having a unique Aadhaar number helps private parties to combine and use other databases that are linked with Aadhaar numbers in a manner that is not within the subject’s control. This is not at all addressed in this bill, and we need a robust data protection law in order to do that.

    There are some other crucial details that the law doesn’t address: Is user consent, to be taken by third parties that use the UID database for authentication, needed for each instance of authentication, or would a general consent hold forever? How can consent be revoked?

    There were many other objections that were raised against the Aadhaar scheme that have not been addressed by the government. For instance, in a recent article in the Economic and Political Weekly, Hans Varghese Mathews points out that going by the test data UIDAI made available in 2012, for a population of 1.3 billion people, the incidence of false positives—the probability of the identities of two people matching—is 1/112.

    This is far too high a ratio to be acceptable.

    Actual data from the field in Andhra Pradesh—of people who were unable to claim rations under the public distribution system (PDS)—paints a worse picture. A survey commissioned by the Andhra Pradesh government said 48% of respondents pointed to Aadhaar-related failures as the cause of their inability to claim rations.

    So, even if Aadhaar numbers were no longer issued to Lord Hanuman (Rajasthan), to dogs (e.g., Tommy Singh, a mutt in Madhya Pradesh), or on the basis of photos of a tree (New Delhi), the system might not prove usable in a country of India’s size, given the capabilities of the fingerprint machines. As my colleague Sunil Abraham notes, the law cannot fix technological flaws.

    So, while one wishes one could welcome the government’s attempt to bring Aadhaar within a legislative framework, the fact is there are too many problems that still remain unaddressed for one to be optimistic.

    Pranesh Prakash is policy director at the Centre for Internet and Society, a think tank.

    Adoption of Standards in Smart Cities

    by Prasad Krishna last modified Apr 11, 2016 03:03 AM

    PDF document icon Adoption of Standards in Smart Cities.pdf — PDF document, 285 kB (292641 bytes)

    FAQ on the Aadhaar Project and the Bill

    by Elonnai Hickok, Vanya Rakesh, and Vipul Kharbanda — last modified Apr 13, 2016 02:06 PM
    This FAQ attempts to address the key questions regarding the Aadhaar/UIDAI project and the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 (henceforth, Bill). This is neither a comprehensive list of questions, nor does it contain fully developed answers. We will continue to add questions to this list, and edit/expand the answers, based on our ongoing research. We will be grateful to receive your comments, criticisms, evidences, edits, suggestions for new answers, and any other responses. These can either be shared as comments in the document hosted on Google Drive, or via tweets sent to the information policy team at @CIS_InfoPolicy.

     

    To comment on and/or download the file, click here.


     

    Aadhaar Act and its Non-compliance with Data Protection Law in India

    by Vanya Rakesh last modified Apr 18, 2016 11:43 AM
    This post compares the provisions of the Aadhaar Act, 2016, with India's data protection regime as articulated in the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

     

    Download the file: PDF.


    Amidst all the hue and cry, the Aadhaar Act 2016, which was introduced with the aim of providing statutory backing to the use of Aadhaar, was passed by the Lok Sabha in its original form on March 16, 2016, after the House rejected the recommendations made by the Rajya Sabha. Though the Act has been vehemently opposed on several grounds, one of the concerns voiced relates to the privacy and protection of the demographic and biometric information collected for the purpose of issuing the Aadhaar number.

    In India, for the purpose of data protection, a body corporate is subject to section 43A of the Information Technology Act, 2000 ("IT Act") and the Rules framed under it, i.e. the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 ("IT Rules"). Section 43A of the IT Act holds a body corporate that is possessing, dealing with or handling any sensitive personal data or information, and is negligent in implementing and maintaining reasonable security practices, resulting in wrongful loss or wrongful gain to any person, liable to compensate the affected person and pay damages.

    Rule 3 of the IT Rules lists the kinds of personal information that amount to sensitive personal data or information of a person, and these include biometric information. The Aadhaar Act itself states, under section 30, that the biometric information collected shall be deemed to be "sensitive personal data or information", which shall have the same meaning as assigned to it in clause (iii) of the Explanation to section 43A of the IT Act; this reflects that biometric data collected under the Aadhaar scheme will receive the same level of protection as is provided to other sensitive personal data under Indian law. This implies that the agencies contracted by the UIDAI (and not the UIDAI itself) to perform functions like collection, authentication, etc., such as the Registrars, Enrolling Agencies and Requesting Entities, which meet the criteria of being a 'body corporate' as defined in section 43A, could be held responsible under this provision, as well as under the Rules, for ensuring the security of the data and information of Aadhaar number holders, and could potentially be held liable for a breach of information that results in loss to an individual, if it can be proven that they failed to implement reasonable security practices and procedures.

    In light of the fact that some actors in the Aadhaar scheme could be held accountable and liable under section 43A and associated Rules, this article compares the regulations regarding data security as found in section 43A and IT Rules 2011 with the provisions of Aadhaar Act 2016, and discusses the implications of the differences, if any.

    1. Compensation and Penalty

    Section 43A: Section 43A of the IT Act, 2000 (Amended in 2008) provides for compensation for failure to protect data. It states that a body corporate, which is possessing, dealing or handling any sensitive personal data or information, and is negligent in implementing and maintaining reasonable security practices resulting in wrongful loss or wrongful gain to any person, is liable to compensate the affected person and pay damages not exceeding five crore rupees.

    Aadhaar Act: Chapter VII of the Act provides for offences and penalties, but does not provide for damages to the affected party.

    • Section 37 states that intentional disclosure or dissemination of identity information to any person not authorised under the Aadhaar Act, or in violation of any agreement entered into under the Act, is punishable with imprisonment of up to three years or a fine of up to ten thousand rupees (in the case of an individual), and a fine of up to one lakh rupees (in the case of a company).
    • Section 38 prescribes imprisonment of up to three years and a fine of not less than ten lakh rupees where any of the acts listed under the provision is performed without authorisation from the UIDAI.
    • Section 39 prescribes imprisonment for a term which may extend to three years and a fine which may extend to ten thousand rupees for tampering with data in the Central Identities Data Repository.
    • Section 40 holds a requesting entity liable for use of identity information in violation of Section 8(3), punishable with imprisonment of up to three years and/or a fine of up to ten thousand rupees (in the case of an individual), and a fine of up to one lakh rupees (in the case of a company).
    • Section 41 holds a requesting entity or enrolling agency liable for violation of Section 8(3) or Section 3(2), punishable with imprisonment of up to one year and/or a fine of up to ten thousand rupees (in the case of an individual), and a fine of up to one lakh rupees (in the case of a company).
    • Section 42 provides a general penalty for any offence against the Act or the regulations made under it for which no specific penalty is provided: imprisonment of up to one year and/or a fine of up to twenty-five thousand rupees (in the case of an individual), and a fine of up to one lakh rupees (in the case of a company).

    Though the Aadhaar Act prescribes penalties for unauthorised access, use or any other act contravening its provisions and regulations, it does not guarantee protection of the information and does not provide for compensation to the affected individual in case of a violation.

    2. Privacy Policy

    IT Rules: Rule 4 requires a body corporate to publish an easily accessible privacy policy on its website, stating the types of personal and sensitive personal information collected and used, the purposes of such collection and use, and the reasonable security practices and procedures followed.

    Aadhaar Act: Though in practice the contracting agencies (the body corporates in the Aadhaar ecosystem) may maintain a privacy policy on their websites, the Aadhaar Act does not require a privacy policy from the UIDAI or other actors.

    Implications: Because contracting agencies will be covered by the IT Rules if they are 'body corporates', the requirement to maintain a privacy policy will be applicable to them.

    3. Consent

    IT Rules: Rule 5 requires the body corporate to obtain consent in writing (through letter, fax or email) from the provider of the information regarding the purpose of use, before collecting sensitive personal data or information.

    Aadhaar Act: The Act is silent on consent being obtained by the enrolling agencies or registrars. However, section 8 provides that a requesting entity shall obtain the consent of the individual before collecting his/her identity information for authentication, though it does not specify the mode of consent (written, fax, etc.).

    Implications: If the enrolling agency is a body corporate, it would also be required to take consent prior to collecting and processing biometrics. It is possible that a consent provision was deliberately left out because the Aadhaar Act envisages a scheme that is quasi-compulsory in nature. This would give enrolling agencies an argument against taking consent: that the Aadhaar Act is a specific legislation, later in time than the IT Rules, and that the deliberate omission of consent, coupled with the compulsory nature of the Aadhaar scheme, means they are not required to take individuals' consent before enrolment.

    4. Collection Limitation

    IT Rules: Rule 5 (2) requires that a body corporate should only collect sensitive personal data if it is connected to a lawful purpose and is considered necessary for that purpose.

    Aadhaar Act: Section 3(1) of the Act states that every resident shall be entitled to obtain an Aadhaar number by submitting his demographic information and biometric information and undergoing the process of enrolment.

    5. Notice

    IT Rules: Rule 5(3) requires that while collecting information directly from an individual, the body corporate must provide the following information:

    • The fact that information is being collected
    • The purpose for which the information is being collected
    • The intended recipients of the information
    • The name and address of the agency that is collecting the information
    • The name and address of the agency that will retain the information

    Aadhaar Act: Section 3 of the Act states that at the time of enrolment and collection of information, the enrolling agency shall inform the individual of how their information will be used, the nature of the entities with whom the information will be shared, and their right to access their information and the manner in which they can do so. However, the Act is silent regarding notice of the name and address of the agency collecting and retaining the information.

    6. Retention Limitation

    IT Rules: Rule 5(4) requires that a body corporate retain sensitive personal data only for as long as is required to fulfil the stated purpose or as otherwise required under law.

    Aadhaar Act: The Act is silent regarding this and does not mention the duration for which the personal information of an individual shall be retained by the bodies/organisations contracted by UIDAI.

    7. Purpose Limitation

    IT Rules: Rule 5(5) requires that information must be used for the purpose that it was collected for.

    Aadhaar Act: Section 57 runs counter to this principle, stating that the Act will not prevent the use of the Aadhaar number for other purposes under law by the State or other bodies. Section 8 of the Act states that, for the purpose of authentication, a requesting entity is required to take consent before collecting Aadhaar information and to use it only for authentication with the CIDR. Section 29 of the Act states that core biometric information collected will not be shared with anyone for any reason, and must not be used for any purpose other than the generation of Aadhaar numbers and authentication. Also, the identity information available with a requesting entity will not be used for any purpose other than what is specified to the individual, nor will it be shared further without the individual's consent.


    8. Right to Access and Correct

    IT Rules : Rule 5(6) requires a body corporate to provide individuals with the ability to review the information they have provided and access and correct their personal or sensitive personal information.

    Aadhaar Act: The Act provides under section 3 that at the time of enrolment, the individual must be informed about the existence of a right to access information, the procedure for making requests for such access, and details of the person or department in-charge to whom such requests can be made. Section 28 of the Act provides that every Aadhaar number holder may access his identity information, except core biometric information. Section 32 provides that every Aadhaar number holder may obtain his authentication record. Also, if the demographic or biometric information of any Aadhaar number holder changes, is lost or is found to be incorrect, they may request the UIDAI to make changes to their record in the CIDR.

    9. Right to 'Opt Out' and Withdraw Consent

    IT Rules: Rule 5(7) requires that the individual must be provided with the option of 'opting out' of providing data or information sought by the body corporate. Also, they must have the right to withdraw consent at any point of time.

    Aadhaar Act: The Aadhaar Act does not provide an opt-out provision and does not provide an option to withdraw consent at any point of time. Section 7 of the Aadhaar Act in fact implies that once the Central or State government makes Aadhaar authentication mandatory for receiving a benefit, the individual has no option but to apply for an Aadhaar number. The only concession is that if an Aadhaar number has not been assigned to an individual, s/he is to be offered an alternative and viable means of identification for receiving the benefit.

    10. Grievance Officer

    IT Rules: Rule 5(9) requires a body corporate to designate a grievance officer for the redressal of grievances, whose details must be published on the body corporate's website; grievances must be redressed within one month of receipt.

    Aadhaar Act: The Aadhaar Act does not provide for any such grievance redressal mechanism for the registrars, enrolling agencies or requesting entities. However, since the contracting agencies will also be covered by the IT Rules if they are 'body corporates', the requirement to designate a grievance officer would apply to them as well.

    11. Disclosure with Consent, Prohibition on Publishing and Further Disclosure

    IT Rules: Rule 6 requires a body corporate to obtain consent before disclosing sensitive personal data to any third party, except where disclosure is made to Government agencies, on receipt of a written request, for the purpose of verification of identity, or for the prevention, detection, investigation, prosecution and punishment of offences. Further, the body corporate or any person on its behalf shall not publish the sensitive personal information, and a third party receiving sensitive personal information from the body corporate or any person on its behalf shall not disclose it further.

    Aadhaar Act: Regarding the requesting entities, the Act provides that they shall not disclose the identity information except with the prior consent of the individual to whom the information relates. The Act also states that the Authority shall take necessary measures to ensure confidentiality of information against disclosures. However, as an exception under section 33, the UIDAI may reveal identity information, authentication records or any information in the CIDR following a court order by a District Judge or higher. The Act also allows disclosure made in the interest of national security following directions by a Joint Secretary to the Government of India, or an officer of a higher rank, authorised for this purpose. The Act is silent on the issue of obtaining consent of the individual under these exceptions. Additionally, the Act also states that the Aadhaar number or any core biometric information collected or created regarding an individual under the Act shall not be published, displayed or posted publicly, except for the purposes specified by regulations.

    12. Requirements for Transfer of Sensitive Personal Data

    IT Rules: Rule 7 permits a body corporate to transfer sensitive personal data to another jurisdiction only if that jurisdiction ensures the same level of data protection, and only where the transfer is necessary for the performance of a lawful contract between the body corporate (or any person on its behalf) and the provider of information, or where the provider has consented to the transfer.

    Aadhaar Act: The Act is silent regarding the transfer of personal data to another jurisdiction by any of the contracting bodies like the Registrars, enrolling agencies or requesting entities. If these agencies satisfy the requirement of being 'body corporates' as defined under section 43A, then the IT Rules' requirement regarding transfer of data to another jurisdiction would apply to them. Nevertheless, considering the sensitive nature of the data involved, the absence of any prohibition on transferring data to another jurisdiction under the Aadhaar Act appears to be a serious lacuna.

    13. Security of Information

    IT Rules: Rule 8 requires a body corporate to secure information in accordance with the ISO 27001 standard or any other best practices notified by the Central Government. These practices must be audited annually, or whenever the body corporate undertakes a significant upgradation of its processes and computer resources.

    Aadhaar Act: Section 28 of the Act states that the UIDAI must ensure the security and confidentiality of identity information and authentication records. It also states that the Authority shall adopt and implement appropriate technical and organisational security measures, and ensure that the same are imposed through agreements or arrangements with its agents, consultants, advisors or other persons. However, it does not specify which standards or measures are to be adopted by all the actors in the Aadhaar ecosystem for ensuring the security of information, though it can be argued that if the contractors employed by the UIDAI are body corporates, the standards prescribed under the IT Rules would be applicable to them.

    Implications of the Differences for Body Corporates in Aadhaar Ecosystem

    An analysis of the Rules in comparison to the data protection measures under the Aadhaar Act shows that the requirements regarding protection of personal or sensitive personal information differ and are not completely in line with each other.

    Though the Aadhaar Act takes into account provisions regarding consent of the individual, notice, restrictions on sharing, etc., it is silent on many core measures, such as sharing of information across jurisdictions, taking consent before collection of information, and adoption of security measures for the protection of information, which a body corporate in the Aadhaar ecosystem must adopt to be in compliance with section 43A of the IT Act. It is therefore important that the bodies collecting, handling and sharing personal information under the Aadhaar Act also adhere to section 43A and the IT Rules 2011. However, the simultaneous applicability of the Aadhaar Act, section 43A and the IT Rules 2011 would lead to ambiguity in the interpretation and implementation of the law. The differences must be duly taken into account, and more clarity is required to make all the bodies under this legislation, like the enrolling agencies, Registrars and Requesting Entities, accountable under the correct provisions of law. The implications of having two separate legislations governing data protection standards in the Aadhaar scheme seem to have been overlooked. A harmonised and overarching privacy legislation is critical to avoid a lack of clarity in the applicability of data protection standards, and would also address many of the privacy concerns associated with the scheme.

    Appendix I

    The Rajya Sabha had proposed five amendments to the Aadhaar Bill, 2016, which are as follows:

    i. Opt-out clause: A provision to allow a person to "opt out" of the Aadhaar system, even if already enrolled.

    ii. Voluntary: To ensure that if a person chooses not to be part of the Aadhaar system, he/she would be provided "alternate and viable" means of identification for purposes of delivery of government subsidy, benefit or service.

    iii. Amendment restricting the use of Aadhaar numbers only for targeting of government benefits or service and not for any other purpose.

    iv. Amendment seeking change of the term "national security" to "public emergency or in the interest of public safety" in the provision specifying situations in which disclosure of identity information of an individual to certain law enforcement agencies can be allowed.

    v. Oversight Committee: The oversight committee, which would oversee the possible disclosure of information, should include either the Central Vigilance Commissioner or the Comptroller and Auditor-General.


    Appendix II - Section 43A: Compensation for Failure to Protect Data

    Where a body corporate, possessing, dealing or handling any sensitive personal data or information in a computer resource which it owns, controls or operates, is negligent in implementing and maintaining reasonable security practices and procedures and thereby causes wrongful loss or wrongful gain to any person, such body corporate shall be liable to pay damages by way of compensation to the person so affected.

    For the purposes of this section:

    • "body corporate" means any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities;
    • "reasonable security practices and procedures" means security practices and procedures designed to protect such information from unauthorised access, damage, use, modification, disclosure or impairment, as may be specified in an agreement between the parties or as may be specified in any law for the time being in force and in the absence of such agreement or any law, such reasonable security practices and procedures, as may be prescribed by the Central Government in consultation with such professional bodies or associations as it may deem fit;
    • "sensitive personal data or information" means such personal information as may be prescribed by the Central Government in consultation with such professional bodies or associations as it may deem fit.


    Aadhaar Act 43A & IT Rules

    by Vanya Rakesh last modified Apr 17, 2016 02:23 PM

    PDF document icon Aadhaar Act, 43A & IT Rules_final draft.pdf — PDF document, 331 kB (339414 bytes)

    The Last Chance for a Welfare State Doesn’t Rest in the Aadhaar System

    by Sumandro Chattapadhyay last modified Apr 19, 2016 01:18 PM
    Boosting welfare is the message; that is how Aadhaar is being presented in India. The Aadhaar system as a medium, however, is one that enables tracking, surveillance, and data monetisation. This piece by Sumandro Chattapadhyay was published in The Wire on April 19, 2016.

     

    Originally published in and cross-posted from The Wire.


    Once upon a time, a king desired that his parrot should be taught all the ancient knowledge of the kingdom. The priests started feeding the pages of the great books to the parrot with much enthusiasm. One day, the king asked the priests if the parrot’s education was complete. The priests poked the belly of the parrot, but it made no sound; only the rustle of undigested pages inside the belly could be heard. The priests declared that the parrot was indeed a learned one now.

    The fate of the welfare system in our country is quite similar to this parrot from Tagore’s parable. It has been forcefully fed identification cards and other official documents (often four copies of the same) for years, and always with the same justification of making it more effective and fixing the leaks. These identification regimes are in effect killing off the welfare system. And some may say that that has been the actual plan in any case.

    The Aadhaar number has been recently offered as the ‘last chance’ for the ailing welfare system – a last identification regime that it needs to gulp down to survive. This argument wilfully overlooks the acute problems with the Aadhaar project.

    Firstly, the ‘last chance’ for a welfare state in India is not provided by implementing a new and improved identification regime (Aadhaar numbers or otherwise), but by enabling citizens to effectively track, monitor, and ensure delivery of welfare, services, and benefits. This ‘opening up’ of the welfare bureaucracy has been most effectively initiated by the Right to Information Act. Instead of a centralised biometrics-linked identity verification platform, which gives the privilege of tracking and monitoring welfare flows only to a few expert groups, an effective welfare state requires the devolution of such privilege and responsibility.

    We should harness the tracking capabilities of electronic financial systems to disclose how money belonging to the Consolidated Fund of India travels across state agencies and departmental levels. Instead, the Aadhaar system effectively stacks up a range of entry barriers to accessing welfare – from malfunctioning biometric scanners, to connectivity problems, to the burden of keeping one’s fingerprints digitally legible under all labouring and algorithmic circumstances.

    Secondly, authentication of welfare recipients by Aadhaar number neither makes the welfare delivery process free of techno-bureaucratic hurdles, nor does it exorcise corruption. Anumeha Yadav has recently documented the emerging ‘unrest at the ration shop’ across Rajasthan, as authentication processes face technical and connectivity delays, people get ‘locked out’ of public services for not having an Aadhaar number or for having one with incorrect demographic details, and no mechanisms exist to provide rapid and definitive recourse.

    RTI activists at the Satark Nagrik Sangathan have highlighted that the Delhi ration shops, using Aadhaar-based authentication, maintain only two columns of data to describe people who have come to the shop – those who received their ration, and those who did not (without any indication of the reason). This leads to erasure-by-design of evidence of the number of welfare-seekers who are excluded from welfare services when the Aadhaar-based authentication process fails (for valid reasons, or otherwise).

    Reetika Khera has made it very clear that using Aadhaar Payments Bridge to directly transfer cash to a beneficiary’s account, in the best case scenario, may only take care of one form of corruption: deception (a different person claiming to be the beneficiary). But it does not address the other two common forms of public corruption: collusion (government officials approving undue benefits and creating false beneficiaries) and extortion (forceful rent seeking after the cash has been transferred to the beneficiary’s account). Evidently, going after only deception does not make much sense in an environment where collusion and extortion are commonplace.

    Thirdly, the ‘relevant privacy question’ for Aadhaar is not limited to how the UIDAI protects the data collected by it, but extends to the usage of Aadhaar numbers across the public and private sectors. The privacy problem created by Aadhaar numbers begins with, but certainly does not end at, the internal data management procedures and responsibilities of the UIDAI.

    On one hand, the Aadhaar Bill 2016 has reduced the personal data sharing restrictions of the NIAI Bill 2010, and has allowed for sharing of all data except core biometrics (fingerprints and iris scan) with all agencies involved in authentication of a person through her/his Aadhaar number. These agencies have been asked to seek consent from the person who is being authenticated, and to inform her/him of the ways in which the provided data (by the person, and by UIDAI) will be used by the agency. In careful wording, the Bill only asks the agencies to inform the person about “alternatives to submission of identity information to the requesting entity” (Section 8.3) but not to provide any such alternatives. This facilitates and legalises a much wider collection of personal demographic data for offering of services by public agencies “or any body corporate or person” (Section 57), which is way beyond the scope of data management practices of UIDAI.

    On the other hand, the Aadhaar number is being seeded into all government databases – from lists of HIV patients, of rural citizens being offered 100 days of work, and of students getting scholarships meant for specific social groups, to lists of people with bank accounts. In some sectors, such as banking, inter-agency sharing of data about clients is strictly regulated. But we increasingly have non-financial agencies playing crucial roles in the financial sector – from mobile wallets to peer-to-peer transactions to innovative credit ratings. Seeding of Aadhaar numbers into all government and private databases would allow easy and direct joining up of these databases by anyone who has access to them, and not only by security agencies, as the illustrative sketch below shows.
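
    To make the linking concern concrete, the sketch below is a minimal, purely illustrative example (written in Python; all field names and records are invented, and it does not represent any agency's actual system) of how two unrelated databases, each seeded with an Aadhaar-style number, can be joined on that common identifier by anyone who holds both datasets.

    # Hypothetical illustration: two unrelated record sets, each "seeded" with an
    # Aadhaar-style identifier, can be joined trivially on that shared key.
    # All field names and values are invented for illustration only.

    ration_records = [
        {"aadhaar": "XXXX-1111", "ration_card": "RC-042", "district": "Jaipur"},
        {"aadhaar": "XXXX-2222", "ration_card": "RC-107", "district": "Ajmer"},
    ]

    health_records = [
        {"aadhaar": "XXXX-1111", "clinic_visit": "2016-02-14", "diagnosis_code": "Z21"},
        {"aadhaar": "XXXX-3333", "clinic_visit": "2016-03-02", "diagnosis_code": "J45"},
    ]

    def join_on_identifier(left, right, key="aadhaar"):
        """Join two lists of records on a shared identifier (a plain inner join)."""
        index = {row[key]: row for row in right}
        return [
            {**l, **index[l[key]]}  # merge the two records describing the same person
            for l in left
            if l[key] in index
        ]

    # One shared identifier is enough to build a combined profile across databases.
    for profile in join_on_identifier(ration_records, health_records):
        print(profile)

    The point of the sketch is simply that no special access to the CIDR is needed for such linking; possession of two seeded databases is sufficient.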

    When it becomes publicly acceptable that the money bill route was a ‘remedial’ instrument to put the Rajya Sabha ‘back on track’, one cannot but wonder what was being remedied by avoiding a public debate on the draft bill before it was presented in the Lok Sabha. The answer is simple: welfare is the message, surveillance is the medium.

    Acceptance and adoption of any medium requires a message, a content. The users are interested in the message. The message, however, is not the business. Think of Free Basics. Facebook wants people with no or limited access to the internet to enjoy parts of the internet at zero data cost. Facebook does not provide the content that the users consume on such an internet. The content is created by the users themselves, and also provided by other companies. Facebook owns and controls the medium, and makes money out of all content, including interactions, passing through it.

    The UIDAI has set up a biometric data bank and related infrastructure to offer authentication-as-a-service. As the Bill clarifies, almost all agencies (public or private, national or global) can use this service to verify the identity of Indian residents. Unlike Facebook, the content of these services does not flow through the Aadhaar system. Nonetheless, Aadhaar keeps track of all ‘authentication records’, that is, records of whose identity was authenticated by whom, when, and where. This database is a gold (data) mine for security agencies in India and elsewhere. Further, as more agencies use authentication based on Aadhaar numbers, it becomes easier for them to combine and compare databases with other agencies doing the same, by linking each line of transaction across databases using Aadhaar numbers.

    Welfare is the message that the Aadhaar system is riding on. The message is only useful for the medium as far as it ensures that the majority of the user population are subscribing to it. Once the users are enrolled, or on-boarded, the medium enables flow of all kinds of messages, and tracking and monetisation (perhaps not so much in the case of UIDAI) of all those flows. It does not matter if the Aadhaar system is being introduced to remedy the broken parliamentary process, or the broken welfare distribution system. What matters is that the UIDAI is establishing the infrastructure for a universal surveillance system in India, and without a formal acknowledgement and legal framework for the same.

     

    RTI regarding Smart Cities Mission in India

    by Vanya Rakesh last modified Apr 21, 2016 02:12 AM

    PDF document icon RTI.pdf — PDF document, 11552 kB (11830139 bytes)

    RTI regarding Smart Cities Mission in India

    by Paul Thottan — last modified Apr 21, 2016 02:25 AM
    Centre for Internet & Society (CIS) filed an RTI application on 3 February 2016 before the Ministry of Urban Development (MoUD) regarding the Smart Cities Mission in India. The RTI sought information regarding the role of the various foreign governments, private industry players and multilateral bodies that will provide technical and financial assistance for this project, and information on Government agreements regarding PPPs for financing the project.

    A response to the RTI is here. The application sought the following information:


    1. The various government, private industry and civil society actors involved in the Smart Cities Mission.
    2. The various agreements the Government has undertaken through PPPs for financing the mission.
    3. Role of private companies in this project.
    4. The process for selecting the cities for this mission and the ministry responsible for this task.
    5. The various international organisations, foreign governments and multilateral bodies that will provide technical and financial assistance for this project.

    The MoUD sent its reply to the RTI application and the response is as follows:

    • With reference to the first query, the answer provided was that the mission statement and guidelines are available on the Mission's website - smartcities.gov.in. This mission statement essentially envisages the role of citizens and citizen groups such as Resident Welfare Associations, Taxpayers Associations, Senior Citizens and Slum Dwellers Associations etc., apart from the Government of India, States, Union Territories and urban local bodies.
    • Regarding information about agreements for the purpose of financing the project, the response states that the Ministry would facilitate the execution of MoUs between foreign agencies and States/UTs for assistance under this mission. The agreements executed so far include MoUs with the United States Trade and Development Agency (USTDA) and with the French Agency for Development (AFD), covering the States/UTs of Andhra Pradesh, Uttar Pradesh, Rajasthan, Maharashtra, Chandigarh and Puducherry. We were also provided with copies of the same, which are summarised below. The response further states that various countries like Spain, Canada, Germany, China, Singapore, the UK and South Korea have also shown interest in collaborating with the Ministry for the development of Smart Cities.
    • CIS sought documents relating to the role of private actors in this field. This information could not be provided by the Department since it was not available with them; the application has been forwarded to the SC-III Division for providing the information directly to us.
    • As regards the fourth query, the information provided states that the role of the government, States/UT’s and Urban Local Bodies has been envisaged in para 13 of the Smart Cities Mission Statement - smartcities.gov.in
    • With respect to the query regarding the foreign actors involved, the information provided states that the documents relating to the involvement of the same are scattered in different files. Compilation of such information would divert the limited resources of the Public Authority disproportionately. Another application must be filed if any specific information is required.

    Copies of several MoUs signed between Foreign Development Agencies and States (for the respective cities) that were shared with us are:

    • Memorandum of Understanding between the United States Trade and Development Agency(USTDA) and the Government of Andhra Pradesh of the Republic of India on Cooperation to support the development of Smart Cities in Andhra Pradesh-namely Visakhapatnam.
    • Memorandum of Understanding between the United States Trade and Development Agency (USTDA) and the Government of Rajasthan of the Republic of India on Cooperation to support the development of Smart Cities in Rajasthan- namely Ajmer.
    • Memorandum of Understanding between the United States Trade and Development Agency (USTDA) and the Government of Uttar Pradesh of the Republic of India on Cooperation to support the development of Smart Cities in Uttar Pradesh- namely Allahabad.
    • Memorandum of Understanding between the Agence Francaise De Developpement and the Government of the Union Territory of Chandigarh of the Republic of India on Technical Cooperation in the field of Sustainable Urban Development.
    • Memorandum of Understanding between the Agence Francaise De Developpement and the Government of Maharashtra on Technical Cooperation in the field of Sustainable Urban Development.
    • Memorandum of Understanding between the Agence Francaise De Developpement and the Government of the Union Territory of Puducherry of the Republic of India on Technical Cooperation in the field of Sustainable Urban Development.

    Key clauses under the MoU between the United States Trade and Development Agency (USTDA) and the governments of Andhra Pradesh, Rajasthan and Uttar Pradesh are:

    • The MoU undertaken by the USTDA for the development of Visakhapatnam, Allahabad and Ajmer clearly establishes that the document only cements the intention of the body to assist in the development of these cities and funding must be addressed separately.
    • The USTDA intends to contribute specific funding for feasibility studies, study tours, workshops/training, and any other projects mutually determined, in furtherance of this interest. The USTDA will also fund advisory services for the same.
    • The USTDA will seek to bring in other US government agencies such as the Department of Commerce, the US Export Import Bank and other trade and economic agencies to encourage US-India infrastructure development cooperation and support the development of smart cities in Vishakhapatnam, Allahabad and Ajmer.
    • One of the key points the USTDA stresses on is the creation of a Smart Solutions for Smart Cities Reverse Trade Mission, where Indian delegates will get a chance to showcase their methodologies and inventions in the United States.
    • The MoU also talks about involving industry organisations in the development of Smart Cities, to address important aviation and energy related infrastructure connected to developing smart cities.
    • The respective State Governments of the cities will provide resources for the development of these smart cities, including technical information and data related to smart cities planning; staff, logistical and travel support, and state budgetary resources will be allocated accordingly.

    Key clauses under the MoU between the Agence Francaise De Developpement (AFD) and the governments of Maharashtra, Chandigarh and Puducherry are:

    • The MoU with AFD is along the same lines, but with more detail provided in the field of research in sustainable urban development. It comprises four articles dealing with implementation, research, resource allocation and cooperation.
    • The AFD clearly states that it will adopt an active role in managing and implementing the project.
    • The AFD will equip the respective state governments with a technical cooperation programme which will include a pool of French experts from the public sector, complemented by experts from the private sector.
    • The MoU goes on to state the various vectors of sustainable urban development that will be the focal point of this project – urban transport, water and waste management, integrated development and urban planning, architecture and heritage, renewable energy, energy efficiency etc.
    • Apart from strategizing, the AFD looks to provide technical support as well. This technical expertise would be used to strengthen strategy and management of urban services in the city.
    • They would also play a key role in management through the creation of a Special Purpose Vehicle (SPV) to build strategic management (human resources, finance, potential market assessment) and capacity building for financial management.
    • As per Article II of the MoU, this support framework will be accompanied by annual reviews, a policy similar to the USTDA Smart Solutions for Smart Cities Reverse Mission with Indian and French counterparts, collaboration between academic and research institutions for the exchange of information, documentation and results of research in the field of smart cities (a key policy to establish firm research groundwork and increase cooperation and innovation), capacity building research and development.
    • Article III of the MoU deals with resource allocation wherein the respective State Governments will assist AFD by providing technical information and data related to smart cities planning, and also meet their logistical requirements.

    Can the Aadhaar Act 2016 be Classified as a Money Bill?

    by Pooja Saxena — last modified Apr 25, 2016 01:48 PM
    In this infographic, we show if the Aadhaar Act 2016, recently tabled in and passed by the Lok Sabha as a money bill, can be classified as a money bill. The infographic is designed by Pooja Saxena, based on information compiled by Amber Sinha and Sumandro Chattapadhyay.

     

    Download the infographic: PDF and JPG.

     

    License: It is shared under Creative Commons Attribution 4.0 International License.

     

    Does Aadhaar Act satisfy the conditions for a money bill?

     

    Can the Matters Dealt with in the Aadhaar Act be the Objects of a Money Bill?

    by Pooja Saxena — last modified Apr 24, 2016 02:15 PM
    In this infographic, we highlight the matters dealt with in the Aadhaar Act 2016, recently tabled in and passed by the Lok Sabha as a money bill, and consider if these can be objects of a money bill. The infographic is designed by Pooja Saxena, based on information compiled by Sumandro Chattapadhyay and Amber Sinha.

     

    Download the infographic: PDF and JPG.

     

    License: It is shared under Creative Commons Attribution 4.0 International License.

     

    Can the matters dealt with in the Aadhaar Act be the objects of a money bill?

     

    The Aadhaar Act is Not a Money Bill

    by Amber Sinha — last modified Apr 25, 2016 10:51 AM
    While the authority of the Lok Sabha Speaker is final and binding, Jairam Ramesh’s writ petition may allow the Supreme Court to question an incorrect application of substantive principles. This article by Amber Sinha was published by The Wire on April 24, 2016.

     

    Originally published by The Wire on April 24, 2016.


    Since its introduction as a money bill in the Lok Sabha in the first week of March [1], the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Bill, 2016 has been embroiled in controversy. The Lok Sabha rejected the five recommendations of the Rajya Sabha and adopted the bill on March 16, and only presidential assent was required for it to become valid law. However, former Union Minister Jairam Ramesh filed a writ petition contesting the decision to treat the Aadhaar Bill as a money bill. The petition is due to be heard before the Supreme Court on April 25, and should the court decide to entertain the petition, it could have far-reaching implications for the Aadhaar project and the manner in which money bills are passed by Parliament.

    There are three broad categories of bills (all legislations or Acts are known as ‘bills’ till they are passed by Parliament) that Parliament can pass. The first kind, Constitution Amendment Bills, are those that seek to amend a provision in the Constitution of India. The second are financial bills, which contain provisions on matters of taxation and expenditure. Money bills are a subset of financial bills that contain provisions related only to taxation, financial obligations of the government, expenditure from or receipts into the Consolidated Fund of India, and any matters incidental to the above. The third category is ordinary bills, which includes all other bills. The process for the enactment of each of these bills is different. Money bills are peculiar in that they can only be introduced in the Lok Sabha, where they can be passed by a simple majority. Following this, the bill is transmitted to the Rajya Sabha. The Rajya Sabha’s powers are restricted to giving recommendations on the bill and sending it back to the Lok Sabha, which the Lok Sabha is under no obligation to accept. The decision to introduce the Aadhaar Bill as a money bill has been widely seen as an attempt to circumvent the Rajya Sabha, where the ruling party is in a minority.

    Article 110 (1) of the Constitution defines a money bill as one containing provisions only regarding the matters enumerated or any matters incidental to them. These are a) imposition, regulation and abolition of any tax, b) borrowing or other financial obligations of the Government of India, c) custody, withdrawal from or payment into the Consolidated Fund of India (CFI) or the Contingency Fund of India, d) appropriation of money out of the CFI, e) expenditure charged on the CFI, or f) receipt, custody or audit of money of the CFI or the public account of India. Article 110 is modelled on Section 1(2) of the (UK) Parliament Act, 1911, which also defines money bills as those dealing only with certain enumerated matters. The use of the word “only” was brought up by Ghanshyam Singh Gupta during the Constituent Assembly Debates. He pointed out that the use of the word “only” limits the scope of money bills to those legislations which do not deal with other matters. His amendment to delete the word “only” was rejected, clearly establishing the intent of the framers of the Constitution to keep the ambit of money bills extremely narrow.

    While the Aadhaar Bill does make references to benefits, subsidies and services funded by the Consolidated Fund of India (CFI), even a cursory reading of the bill reveals its main objectives as creating a right to obtain a unique identification number and providing for a statutory apparatus to regulate the entire process. The mere fact of establishing the Aadhaar number as the identification mechanism for benefits and subsidies funded by the CFI does not give it the character of a money bill. The bill merely speaks of facilitating access to unspecified subsidies and benefits, rather than their creation and provision being the primary object of the legislation. Erskine May’s seminal textbook, ‘Parliamentary Practice’, is instructive in this respect and makes it clear that a legislation which simply makes a charge on the Consolidated Fund does not become a money bill if its character is otherwise not that of one.

    PDT Achary, former secretary general of the Lok Sabha, has expressed concern about the use of money bills as a means to circumvent the Rajya Sabha. He has written here [2] and here [3] on what constitutes a money bill and how attempts to pass off financial bills like the Aadhaar Bill as money bills could erode the supervisory role the Rajya Sabha is supposed to play. This is especially true in the case of a legislation like the Aadhaar Bill, which has far-reaching implications for individual privacy, as it governs the identification system conceptualised to provide a unique and lifelong identity to residents of India in their dealings with both the analog and digital machinery of the state and, by virtue of Section 57, with private entities. Already over 1 billion people have been enrolled under this identification scheme, and the project has been the subject of much debate and of a petition before the Supreme Court. The project has been portrayed both as the last hope for a welfare state and as surveillance infrastructure. Regardless of which of the two ends of the spectrum one leans towards, it is undeniable that the law governing the Aadhaar project deserved a proper debate in Parliament. Even those who are strong proponents of the project must accept that the decision to pass it off as a money bill undermines the importance of democratic processes, and is a travesty of the Constitution and a blatant abrogation of the constitutional duties of the Speaker.

    The petition by Jairam Ramesh would hinge largely on the powers of the judiciary to question the decision of the Speaker of the Lok Sabha. Article 110 (3) is very clear in pronouncing the authority of the Speaker as final and binding. Additionally, Article 122 prohibits the courts from questioning the validity of any proceedings in Parliament on the ground of any alleged irregularity of procedure. The powers of privilege that Parliamentarians enjoy are integral to the principle of separation of powers. However, the courts may be able to make a fine distinction between inquiring into a procedural irregularity, which is prohibited by the Constitution, and questioning an incorrect application of substantive principles, which, I would argue, is the case with the Speaker’s decision.

    References

    [1] See: http://thewire.in/2016/03/07/arun-jaitley-introduces-money-bill-on-aadhar-in-lok-sabha-24115/.

    [2] See: http://indianexpress.com/article/opinion/columns/show-me-the-money-4/.

    [3] See: http://www.thehindu.com/opinion/lead/circumventing-the-rajya-sabha/article7531467.ece.

     

    Cyber Security of Smart Grids in India

    by Elonnai Hickok and Vanya Rakesh — last modified Apr 28, 2016 03:34 PM
    An integral component of the Indian Government's ambitious flagship programme, Digital India, which paves the way for a digital data avalanche in the country, is a well-designed digital infrastructure ensuring high connectivity and integration of services; the potential areas include smart cities, smart homes, smart energy and smart grids, to list a few. Likewise, the 100 Smart Cities Mission envisions changing the face of urbanization in India, managing the exponential growth of population in the cities by creating smart cities with ICT-driven solutions and big data analytics. Smart grid technologies are key to both these schemes.

    The article by Elonnai Hickok and Vanya Rakesh was published by Dataquest on April 25, 2016.


    The smart grid is a promising power delivery infrastructure, integrated with communication and information technologies, which enables monitoring, prediction and management of energy usage. Establishing smart grids is highly important for the Indian economy, as present grid losses are among the highest in the world at up to 50%, costing India up to 1.5% of its GDP. India operates one of the largest synchronous grids in the world – covering an area of over 3 million sq km, with 260 GW of capacity and over 200 million customers – and India's demand is estimated to increase four times by the year 2032.

    In the year 2013, the Ministry of Power (MoP), in consultation with India Smart Grid Forum and India Smart Grid Task Force released a smart grid vision and roadmap for India, a key policy document aligned to MoP’s overarching objectives of “Access, Availability and Affordability of Power for All”. It lays plans for a framework to address cyber security concerns in smart grids as well. To achieve goals envisaged in the roadmap, the Government of India established the National Smart Grid Mission in the year 2015 for planning, monitoring and implementation of policies and programs related to Smart Grid activities.

    A number of smart grid projects have been introduced and are currently underway. In 2011, KEPCO established a smart meter and intelligent power transmission and distribution system in Kerala, with smart grid operations focused on peak reduction, load standardization, reduction in transmission and distribution losses, response to new and renewable energy, and reduction in black-out time. Gujarat was introduced to India's first modernized electrical grid in 2014, to study consumer behaviour of electricity usage and propose a tariff structure based on usage and load on the power utility, by installing new meters embedded with SIM cards to monitor the data. The Bangalore Electricity Supply Company Ltd. (BESCOM) Smart Grid Pilot Project in Bangalore envisaged the integration of renewable and distributed energy resources into the grid, which is vital to meet the growing electricity demands of the country, curb power losses, and enhance access to quality power.

    Cybersecurity challenges

    At the same time, the introduction of a smart grid brings with it certain security risks and concerns, particularly for a nation's cyber security. Increased interconnection and integration may render the grids vulnerable to cyber threats, putting stored data and computers at great risk. With sufficient cyber security measures, policies and frameworks in place, a smart grid can be made more efficient, reliable and secure; failure to address these problems will hinder the modernization of the existing power system. Smart grids, comprising numerous communication, intelligent, monitoring and electrical elements employed in the power grid, have a greater exposure to cyber attacks that can potentially disrupt power supply in a city.

    Cyber security and data privacy are among the key challenges for smart grids in India, as the establishment of digital electricity infrastructure entails the challenges of communication security and data management. Digital networks and systems are highly prone to malicious attacks from hackers, which can lead to misuse of consumers' data, making cyber security the key issue to be addressed. Vulnerabilities allow an attacker to break into a system, corrupt user privacy, acquire unauthorized access to control software, and modify load conditions to destabilize the grid. An attacker who compromises a smart meter can immediately alter its energy cost calculations or change the generated meter readings, and monetize this remotely. Inserting false information could also mislead the electric utility into making incorrect decisions about local usage and capacity.

    Initiatives in India

    As cybersecurity is critical for Digital India, and the Smart City concept note requires smart grids to be resilient to cyber attacks, the Indian Government is establishing a National Cyber Coordination Centre. The National Cyber Safety and Security Standards initiative has also been started with a vision to safeguard the nation from current threats in cyberspace, undertaking research to understand the nature of cyber threats and cyber crimes and facilitating a common platform where experts can provide effective solutions to the complex and alarming problems in the cyber security domain. Innovative strategies and compliance procedures are being developed to address the increasing complexity of the global cyber threats faced by countries at large.

    The National Cyber Security Policy 2013, released by the Department of Electronics and Information Technology (DeitY), provides an umbrella framework for guiding actions related to the security of cyberspace. The Working Group on Information Technology established under the Planning Commission has also published a 12-year plan on IT development in India with a road map for cyber security, identifying six key priority and focus areas: an enabling legal framework; security policy, compliance and assurance; security R&D; security incident early warning and response; security awareness, skill development and training; and collaboration.

    In case of Bangalore, to ensure smooth implementation of BESCOM’s vision, the company realised the need to put a cyber-security system in place to protect the smart grid installations in Bangalore city. To ensure security, BESCOM has come out with a separate IT security policy and dedicated trained IT cadre to safeguard its data and servers, becoming one of the few Discoms in India to take such measures for safeguarding the servers and data network from cyber crimes and threats.

    Way forward

    An electric system like the smart grid has enormous and far-reaching economic and social benefits. However, increased interconnection and integration tend to introduce cyber vulnerabilities into the grid. As cyber threats and attacks evolve over time, there remain many challenges to implementing cyber security in Indian smart grids. Considering the importance of secure smart grid networks for India's flagship projects, the existing regulatory framework does not seem to adequately take the cyber security implications into consideration.

    In light of this, the government must aim to develop and adopt a high-level cybersecurity policy capable of withstanding cyber attacks. India must also focus on skills development in this domain and build a capable workforce to achieve the targets set by the Government. The country must look to develop an overall intelligence framework that brings together industry, governments and individuals with specific capabilities for this purpose.

    The National Cyber Security Policy 2013, which seeks to protect public and private infrastructure from cyber attacks, along with all kinds of information such as personal information of web users and banking and financial information, is yet to be properly implemented by the Government. In the Indian power sector, cyber security regulations or mandates are absent: neither the National Electricity Policy (NEP) nor the Electricity Act 2003 and its 2007 amendment make any reference to cyber security concerns. These key legislations must be amended to take into account the growing challenges arising from the increased use of ICT in the power sector.

    As the concept of smart grids is still evolving in India, professional intervention from various domains has pushed for the adoption and development of standard processes and products. Many international standard-setting organisations like IEC, IEEE, NIST and CENELEC are engaged in standardization activities for smart grids, and in India the Bureau of Indian Standards (BIS) has been rolling out several varieties of standards targeting various technologies. BIS must therefore also develop standards that take into account the security challenges in cyberspace.

    Apart from policy and regulatory measures, the systems on which smart grids are built and networked must be made architecturally strong and secure. One of the areas requiring due attention is securing Supervisory Control and Data Acquisition (SCADA), a system that operates with coded signals to provide control of remote equipment and is entirely based on computer systems and networks. Numerous systems also employ Public Key Infrastructure (PKI) to secure smart grids and address security challenges by enabling identification, verification, validation and authentication of connected meters for network access. This can be leveraged for securing data integrity, revenue streams and service continuity. The key areas vulnerable to cyber attacks on information transmission are network information, data integrity and privacy of information. The information transmission networks must be well designed, as network unavailability may result in the loss of real-time monitoring of critical smart grid infrastructure and in power system disasters.
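
    As a rough illustration of the kind of PKI-based meter authentication referred to above, the sketch below (in Python, assuming the widely used third-party 'cryptography' package, RSA keys, and a pre-distributed utility CA certificate; the file names and scheme are hypothetical, not any utility's or standard's actual protocol) shows how a head-end system might accept a meter reading only if it is signed by a key certified by the utility.

    # Minimal sketch: accept a smart meter reading only if (1) the meter's
    # certificate was issued by the utility's CA and (2) the reading is signed
    # by the key in that certificate. Assumes RSA keys and the third-party
    # 'cryptography' package; all names and file paths are hypothetical.
    from cryptography import x509
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def load_cert(path):
        with open(path, "rb") as f:
            return x509.load_pem_x509_certificate(f.read())

    def reading_is_authentic(reading: bytes, signature: bytes,
                             meter_cert_path: str, ca_cert_path: str) -> bool:
        meter_cert = load_cert(meter_cert_path)
        ca_cert = load_cert(ca_cert_path)
        try:
            # 1. Verify the meter certificate was signed by the utility CA.
            ca_cert.public_key().verify(
                meter_cert.signature,
                meter_cert.tbs_certificate_bytes,
                padding.PKCS1v15(),
                meter_cert.signature_hash_algorithm,
            )
            # 2. Verify the reading itself was signed by the certified meter key.
            meter_cert.public_key().verify(
                signature, reading, padding.PKCS1v15(), hashes.SHA256()
            )
            return True
        except InvalidSignature:
            return False  # reject unauthenticated or tampered readings

    A production deployment would additionally check certificate validity periods, revocation and key usage; the sketch only shows the basic chain of trust that makes tampered or unauthenticated readings detectable.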

    Addressing these fast growing challenges and cyber security needs of the country by adopting suitable regulatory, policy and architectural steps would help achieve the objectives of Digital India and Smart Cities enabling “Access, Availability and Affordability for All”.

    Privacy Issues with DRM

    by Jalaj Pandey — last modified May 03, 2016 02:41 AM
    This post has been written by Jalaj Pandey, interning at CIS. It elaborates upon the various privacy issues with Digital Rights Management. The author talks about the various ways in which content producers use DRM as a tool to infringe the privacy of end users.

    Nehaa Chaudhari provided inputs and also edited the blog post. Click to download the File.


    The ubiquity of the internet in today's world has made content and information sharing an easy task. A media file can be shared and made public with hardly any technical obstacles. Issues like hacking, unauthorized copying and publication, and unlicensed usage have become concerns for content producers, who have employed Digital Rights Management (hereafter DRM) measures to address some of them.

    Several instances of online privacy intrusion by content producers have been recorded. In such a scenario, balancing the rights of content producers and end users becomes important, and it is imperative to find a common ground to safeguard the interests of both parties. In the recent past, DRM has received a lot of flak because of the privacy issues raised by users.

    In its most rudimentary form, privacy can be explained as any information about an individual which he/she does not want to be made public, seen from the perspective of an ordinary reasonable person. The UN Universal Declaration of Human Rights, 1948 recognises privacy as a fundamental right of every human. The functioning of DRM is based on restricting the usage or distribution of content. Since this restriction is only possible after a formal identification of the end user, content producers end up collecting information about users. For example, a DRM system for a music file might allow the file to be accessed only from the computer on which the user first accesses and registers it: the DRM identifies the IP address of the system and ties the file's functioning to that IP address. In this way the producer ends up collecting information about the end user. Different DRM models collect user information in different ways: some collect IP addresses, while others track users through downloads, browsing activity, subscription services, etc. A usage log is generated and becomes a valuable asset for assessing and predicting the preferences of users.
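
    As an illustration of the device-binding described above, the following is a deliberately simplified, hypothetical sketch (in Python; it is not any vendor's actual DRM scheme) of a client that derives a licence token from identifiers of the machine that first activates a file and refuses playback elsewhere. The identifiers it reads are precisely the personal information that ends up being collected.

    # Hypothetical sketch of device-bound DRM activation. The scheme and field
    # names are invented; real DRM systems are far more elaborate, but the
    # privacy-relevant step is the same: the client reads identifiers of the
    # user's machine and retains them to tie the licence to that machine.
    import hashlib
    import json
    import socket
    import uuid

    def device_fingerprint() -> str:
        """Collect machine identifiers (the data a DRM operator ends up holding)."""
        identifiers = {
            "hostname": socket.gethostname(),
            "mac": uuid.getnode(),  # hardware address of a network interface
        }
        blob = json.dumps(identifiers, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def activate(content_id: str) -> dict:
        """First run: bind the licence to this device's fingerprint."""
        return {"content_id": content_id, "bound_to": device_fingerprint()}

    def may_play(licence: dict, content_id: str) -> bool:
        """Later runs: allow playback only on the originally registered device."""
        return (licence["content_id"] == content_id
                and licence["bound_to"] == device_fingerprint())

    licence = activate("song-001")
    print(may_play(licence, "song-001"))  # True only on the activating machine

    Even this toy version makes the trade-off visible: enforcement requires reading and retaining machine identifiers, which is exactly the kind of information whose collection and storage this post questions.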

    Two contentions have been raised regarding the privacy issues of DRM:

    a) What is the accountability of this process and

    b) Whether it puts the content producers in a position where they can control the users.

    The information collected is under the control of content producers, who mostly store it in the form of databases. BEUC (the European Consumer Organisation) has claimed that DRM systems technologically enable content providers to monitor private consumption of content, create reports of consumption, and profile users.

    The information is at the disposal of the content producers. An assessment of DRM applications under Canadian privacy law showed that the firms did not even recognise customers' privacy issues as a priority; in fact, the firms failed to disclose the information that was stored in their databases. This gives an idea of the lack of transparency that exists in collecting information about users. The question of whether users are aware of what information is being collected, and of the extent to which they are being tracked online, remains unanswered. The CEN/ISSS (European Committee for Standardization/Information Society Standardization System) has pointed out that DRMs have a large potential to generate and transmit personal information about users, and DRM has been characterised by unprecedented levels of monitoring by various content producers.

    Further, the objection at the level of principle is that information is collected without any authorisation from the user herself or himself. If any information is collected or stored by producers, it should be done only after taking the consent of the user. Surveillance and compelled disclosure of information about intellectual consumption threaten rights to personal integrity.

    DRM takes away the anonymity of consumption. Since producers can, in practice, monitor a user's content usage, this has enabled wide-scale price discrimination: producers monitor and assess the preferences of a user and subsequently raise the prices of that particular class of products for that user. A report of the FIPR (Foundation for Information Policy Research) found that Microsoft had been trying to implement its DRM systems in its products using a similar approach to gain a monopoly position, as in its strategy for browser implementation.
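    As a purely hypothetical illustration of the mechanism described in the paragraph above, the sketch below shows how a vendor holding a usage log could infer a user's preferred category of content and quote that user a higher price for it. The data, field names and pricing rule are invented for illustration and do not describe any actual vendor's practice.

        from collections import Counter

        # Hypothetical vendor-side usage log: which user consumed which genre.
        usage_log = [
            {"user": "u-1001", "genre": "jazz"},
            {"user": "u-1001", "genre": "jazz"},
            {"user": "u-1001", "genre": "rock"},
            {"user": "u-1002", "genre": "rock"},
        ]

        BASE_PRICE = 100.0   # list price of a track, in arbitrary units
        MARKUP = 1.25        # assumed markup applied to a user's inferred favourite genre

        def favourite_genre(user: str):
            # Infer the genre the user consumes most often, if any.
            counts = Counter(e["genre"] for e in usage_log if e["user"] == user)
            return counts.most_common(1)[0][0] if counts else None

        def personalised_price(user: str, genre: str) -> float:
            # Charge more for the genre the log suggests the user values most.
            return BASE_PRICE * MARKUP if genre == favourite_genre(user) else BASE_PRICE

        print(personalised_price("u-1001", "jazz"))  # 125.0: inferred preference, higher price
        print(personalised_price("u-1002", "jazz"))  # 100.0: no inferred preference for jazz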

    The Sony BMG copy protection rootkit scandal in 2005 brought much criticism to DRM. It was found that Sony BMG had introduced illegal and harmful copy protection measures in its CDs. The rootkit element of the software was used to hide virtually all traces of the copy protection software's presence on a PC, so that an ordinary computer user would have no way of finding it. Beyond the DRM function itself, the software also exposed the user's system to a number of malware threats and created vulnerabilities in it. Sony was eventually made to compensate consumers for the costs involved. However, the question of whether databases in the hands of companies can be used in an arbitrary manner was intensely discussed after this episode.

    It is essential that an effective framework is brought into effect which caters to the privacy interests of users. Privacy is a basic human right, and the onus is on the State to protect and safeguard it. The State must not compromise this right or support mechanisms which promote the welfare of content producers over that of users. The balance between users and producers becomes all the more important in a developing country like ours, where a lack of awareness and knowledge, coupled with increasing internet usage, can lead to the exploitation of many. It is essential that States see through these problems and collectively find an all-encompassing solution to them.



    [1] K. G. Coffman and A. M. Odlyzko, Growth of the Internet, AT&T Labs - Research, July 6, 2001, available at www.dtc.umn.edu/~odlyzko//doc/oft.internet.growth.pdf (hereinafter Growth).

    [2] The Daily Source, The Growing Impact of the Internet, April 4, 2016, available at https://www.dailysource.org/about/impact.

    [3] Corynne McSherry, Adobe Spyware Reveals (Again) the Price of DRM: Your Privacy and Security, Electronic Frontier Foundation, October 17, 2014, available at https://www.eff.org/deeplinks/2014/10/adobe-spyware-reveals-again-price-drm-your-privacy-and-security.

    [4] Digital Rights Management: A Failure in the Developed World, a Danger to the Developing World, Electronic Frontier Foundation, March 23, 2005, available at https://www.eff.org/wp/digital-rights-management-failure-developed-world-danger-developing-world.

    [5] R. Subramanya and Byung K. Yi, Digital Rights Management, available at https://www.academia.edu/8054608/Digital_Rights_Management (hereinafter Digital Rights Management).

    [6] Global Internet Liberty Campaign, Privacy and Human Rights: An International Survey of Privacy Laws and Practice, available at http://gilc.org/privacy/survey/intro.html.

    [7] Ann Cavoukian, Privacy and Digital Rights Management (DRM): An Oxymoron, Information and Privacy Commissioner of Ontario, available at https://www.ipc.on.ca/images/Resources/up-1drm.pdf (hereinafter Oxymoron).

    [8] Varian, H.R. (1985), 'Price Discrimination and Social Welfare', American Economic Review, Vol. 75, available at http://www.economics-ejournal.org/economics/journalarticles/2007-1/references/Varian1985.

    [9] Privacy and Digital Rights Management: A Position Paper for the W3C Workshop on Digital Rights Management, January 2001, available at www.w3.org/2000/12/drm-ws/pp/hp-poorvi.html.

    [10] Growth, supra note 1.

    [11] Digital Rights Management, supra note 5.

    [12] Thierry Rayna, 'Privacy or Piracy, Why Choose? Two Solutions to the Issues of Digital Rights Management and the Protection of Personal Information', Intellectual Property Management, Vol. X, No. Y, available at www.inderscienceonline.com/doi/abs/10.1504/IJIPM.2008.021138.

    [13] Oxymoron, supra note 7.

    [14] BEUC, Consumentenbond, and CLCV at DRM Working Group 1 (2002), available at https://privacy.org.nz/assets/Files/4558510.pdf.

    [15] Natali Helberger, Kristóf Kerényi and Bettina Krings, Digital Rights Management and Consumer Acceptability: A Multi-Disciplinary Discussion of Consumer Concerns and Expectations, available at citeseerx.ist.psu.edu/showciting?cid=733532.

    [16] Knud Bohle, INDICARE, Research into Unfriendly DRM: A Review, December 2004, available at citeseerx.ist.psu.edu/showciting?cid=733532 (hereinafter Indicare).

    [17] European Committee for Standardization/Information Society Standardisation System (CEN/ISSS), DRM Report, 2003.

    [18] Indicare, supra note 16.

    [19] News Release, "Forrester Technographics Finds Online Consumers Fearful of Privacy Violations" (October 27, 1999), available at www.forrester.com/ER/Press/Release/0,1769,177,FF.html.

    [20] Julie E. Cohen, DRM and Privacy, Georgetown Law Faculty Publications, January 2010, available at https://www.academia.edu/2164013/DRM_and_Privacy.

    [21] Thierry Rayna, 'Privacy or Piracy, Why Choose? Two Solutions to the Issues of Digital Rights Management and the Protection of Personal Information', Intellectual Property Management, available at www.inderscienceonline.com/doi/abs/10.1504/IJIPM.2008.021138 (hereinafter Privacy or Piracy).

    [22] Moe, W. and Fader, P. (2004), 'Dynamic Conversion Behavior at E-Commerce Sites', Management Science, Vol. 50, available at https://www.researchgate.net/publication/227447618_Dynamic_Conversion_Behavior_at_E-Commerce_Sites.

    [23] Privacy or Piracy, supra note 21.

    [24] Sismeiro, C. and Bucklin, R. (2004), 'Modeling Purchase Behavior at an E-Commerce Web Site: A Task Completion Approach', Journal of Marketing Research, available at citeseerx.ist.psu.edu/showciting?cid=906878.

    [25] Ross Anderson, Foundation for Information Policy Research, Consultation Response on DRM (2004), available at www.fipr.org/APIG_DRM_submission.pdf.

    [26] Mark Russinovich, Sony, Rootkits and Digital Rights Management Gone Too Far, October 31, 2005, available at https://blogs.technet.microsoft.com/markrussinovich/2005/10/31.

    [27] Sony BMG Litigation Info, Electronic Frontier Foundation, available at https://www.eff.org/cases/sony-bmg-litigation-info.

    Privacy Issues with DRM

    by Prasad Krishna last modified May 03, 2016 02:39 AM

    ZIP archive icon Privacy issues with DRM.docx — ZIP archive, 26 kB (27141 bytes)

    Identity of the Aadhaar Act: Supreme Court and the Money Bill Question

    by Vanya Rakesh and Sumandro Chattapadhyay — last modified May 09, 2016 11:52 AM
    A writ petition has been filed by former Union minister Jairam Ramesh on April 6 challenging the constitutionality and legality of the treatment of this Act as a money bill. The Supreme Court heard the matter on April 25 and invited the Union government to present its view. It is our view that the Supreme Court can not only review the Lok Sabha speaker’s decision, but should also ask the government to draft the Aadhaar Bill again, this time with greater parliamentary and public deliberation. Vanya Rakesh and Sumandro Chattapadhyay wrote this article on The Wire.

     

    Published by and cross-posted from The Wire.


    The Aadhaar Act 2016, passed in the Lok Sabha on March 16, 2016, faced opposition ever since it was tabled in parliament. In particular, the move to introduce it as a money bill has been vehemently challenged on grounds of this being an attempt to bypass the Rajya Sabha completely. A writ petition has been filed by former Union minister Jairam Ramesh on April 6 challenging the constitutionality and legality of the treatment of this Act as a money bill. The Supreme Court heard the matter on April 25 and invited the Union government to present its view.

    It is our view that the Supreme Court can not only review the Lok Sabha speaker’s decision, but should also ask the government to draft the Aadhaar Bill again, this time with greater parliamentary and public deliberation.

    The money bill question

    M.R. Madhavan has argued that the Aadhaar Act contains matters other than “only” those incidental to expenditure from the consolidated fund, since it not only establishes a biometrics-based unique identification number for beneficiaries of government services and benefits, but also allows the number to be used for other purposes beyond service delivery. While Pratap Bhanu Mehta calls this a subversion of “the spirit of the constitution”, P.D.T. Achary, former secretary general of the Lok Sabha, expressed concern about the attempts to pass off financial bills like Aadhaar as money bills as a means to circumvent and erode the supervisory role of the Rajya Sabha. Arvind Datar has further emphasised that when the primary purpose of a bill is not governed by Article 110(1), then certifying it as a money bill is an unconstitutional act.

    Article 110(1) of the Constitution identifies a bill as a money bill if it contains “only” provisions dealing with the following matters, or those incidental to them:

    1. imposition and regulation of any tax,
    2. financial obligations undertaken by Indian Government,
    3. payment into or withdrawal from the Consolidated Fund of India (CFI) or Contingent Fund of India,
    4. appropriation of money and expenditure charged on the CFI or receipt, and
    5. custody, issue or audit of money into CFI or public account of India.

    However, the link of the Act with the Consolidated Fund of India is rather tenuous, since it depends on the Union or state governments declaring a certain subsidy to be available upon verification of the Aadhaar number. The objectives and validity of the Act would not actually change if the Aadhaar number no longer was directly connected to the delivery of services. The use of the word “if” in section 7 explicitly leaves scope for a situation where the government does not declare an Aadhaar verification as necessary for accessing a subsidy. In such a scenario, the Act will still be valid but without any formal connection with any charges on the Consolidated Fund of India.

    A case of procedural irregularity?

    The constitution of India borrows the idea of providing the speaker with the authority to certify a bill as a money bill from British law, but operationalises it differently. In the UK, though the speaker’s certificate on a money bill is conclusive for all purposes under section 3 of the Parliament Act 1911, the speaker is required to consult two senior members, usually one from either side of the house, appointed by the committee from amongst those senior MPs who chair general committees. In India, the speaker makes the decision on her own.

    Although article 110 (3) of the Indian constitution states that the decision of the speaker of the Lok Sabha shall be final in case a question arises regarding whether a bill is a money bill or not, this does not restrict the Supreme Court from entertaining and hearing a petition contesting the speaker’s decision. As the Aadhaar Act was introduced in the Lok Sabha as a money bill even though it does not meet the necessary criteria for such a classification, this treatment of the bill may be considered as an instance of procedural irregularity.

    There is ample jurisprudence on what happens when the Supreme Court’s power of judicial review comes up against Article 122 – which states that the validity of any proceeding in parliament can (only) be called into question on the grounds of procedural irregularities. In the crucial judgment of Raja Ram Pal vs Hon’ble Speaker, Lok Sabha and Others (2007), the court evaluated the scope of judicial review and observed that, although parliament is supreme, it is not so in the manner of the British parliament, and proceedings found to suffer from substantive illegality or unconstitutionality, as opposed to mere irregularity, cannot be held protected from judicial scrutiny by Article 122. Deciding upon the scope for judicial intervention in respect of exercise of power by the speaker, in Kihoto Hollohan vs Zachillhu and Ors. (1992), the Supreme Court held that though the speaker of the house holds a pivotal position in a parliamentary democracy, the decision of the speaker (while adjudicating on disputed disqualification) is subject to judicial review that may look into the correctness of the decision.

    Several past decisions of the Supreme Court discuss how the tests of legality and constitutionality help decide whether parliamentary proceedings are immune from judicial review or not. In Ramdas Athawale vs Union of India (2010), the case of Keshav Singh vs Speaker, Legislative Assembly (1964) was referred to, in which the judges had unequivocally upheld the judiciary’s power to scrutinise the actions of the speaker and the houses. It was observed that if the parliamentary procedure is illegal and unconstitutional, it would be open to scrutiny in a court of law and could be a ground for interference by courts under Article 32, though the immunity from judicial interference under this article is confined to matters of irregularity of procedure. These observations were reiterated in Mohd. Saeed Siddiqui vs State of Uttar Pradesh (2014) and Yogendra Kumar Jaiswal vs State of Bihar (2016).

    Thus, the decision of the Lok Sabha speaker to pass and certify a bill as a money bill is definitely not immune from judicial review. Additionally, the Supreme Court has the power to issue directions, orders or writs for enforcement of rights under Article 32 of the constitution, therefore, allowing the judiciary to decide upon the manner of introducing the Aadhaar Act in parliament.

    National implications demand public deliberation

    As the provisions of the Aadhaar Act have far reaching implications for the fundamental and constitutional rights of Indian citizens, the Supreme Court should look into the matter of its identification and treatment as a money bill and whether such decisions lead to the thwarting of legislative and procedural justice.

    The Supreme Court may also take this opportunity to reflect on the very decision making process for classification of bills in general. As Smarika Kumar argues, experience with the Aadhaar Act reveals a structural concern regarding this classification process, which may have substantial implications in terms of undermining public and parliamentary deliberative processes. This “trend,” as Arvind Datar notes, of limiting legislative discussions and decisions of national importance within the space of the Lok Sabha must be swiftly curtailed.

    Apart from deciding upon the legality of the nature of the bill, it is vital that the apex court ask the government to categorically respond to the concerns red-flagged by the Standing Committee on Finance, which had taken great exception to the continued collection of data and issuance of Aadhaar numbers in its report, and to the recommendations passed in the Rajya Sabha recently. Further, the repeated violation of the Supreme Court’s interim orders – that the Aadhaar number cannot be made mandatory for availing benefits and services – in contexts ranging from marriages to the guaranteed work programme should also be addressed and responses sought from the Union government.

    Evidently, the substantial implications of the Aadhaar Act for national security and fundamental rights of citizens, primarily privacy and data security, make it imperative to conduct a duly balanced public deliberation process, both within and outside the houses of parliament, before enacting such a legislation.

     

     

    Criminal defamation remains and so does the debate

    by Japreet Grewal — last modified May 23, 2016 06:05 AM

    The judgment on the plea to decriminalise defamation is out and, despite its verbosity and rich vocabulary, is an embarrassment when set against our recent judicial milestones in constitutional challenges. In the case of Subramanian Swamy vs. Union of India, a two-judge bench headed by Justice Dipak Misra has upheld the constitutionality of Sections 499 and 500 of the Indian Penal Code, 1860 (IPC) and Section 199 of the Code of Criminal Procedure, 1973 (CrPC), which criminalise defamation.

    The judgment has not satisfactorily answered several pertinent questions. Various significant issues relating to the existing regime of defamation have been touched upon in the judgment but the bench has skipped the part where it is required to analyse and give its own reasoning for upholding or reading down the law. This post points out what should have been looked at.

    A. Whether defamation is a public or a private wrong?  What is the State’s interest in protecting the reputation of an individual against other private individuals? Is criminal penalty for defamatory statements an appropriate, adequate or disproportionate remedy for loss of reputation?


    At the core of the debate to decriminalise defamation lies the question, whether defamation is a public or a private wrong. The question was raised in the Subramanian Swamy case and the court held that defamation is a public wrong. Our problem with the court’s decision lies in its failure to provide a sound and comprehensive analysis of the issue. In order to understand whether defamation is a public or a private wrong, it is necessary that we look at what reputation means, what happens when reputation is harmed and whose interests are affected by such harm.

    Reputation is not defined in law; however, the Supreme Court has held that reputation is a right to enjoy the good opinion of others and the good name, the credit, honour or character which is derived from such favourable public opinion. The definition reflects several elements that constitute reputation, each of which, when harmed, has a different bearing on the reputation of an individual. Academic Robert C. Post, in his paper The Social Foundations of Defamation Law: Reputation and the Constitution, says that reputation can be understood as a form of intangible property akin to goodwill, or as dignity (the respect, including self-respect, that arises from observance of the rules of society). While reputation seen as property can be estimated in money and thus adequately compensated through a civil action for damages, loss of dignity is not a materially quantifiable loss, and thus monetary compensation appears irrelevant. The purpose of the defamation law could be either to ensure that reputation is not wrongfully deprived of its proper market value or to preserve the respect and acceptance of society. Explanation 4 to Section 499 of the IPC accommodates both such situations and provides that reputation is harmed if an imputation, directly or indirectly, in the estimation of others, lowers the moral or intellectual character of that person, or lowers the character of that person in respect of his caste or of his calling, or lowers the credit of that person, or causes it to be believed that the body of that person is in a loathsome state, or in a state generally considered as disgraceful.

    Post adds that an individual’s reputation is a product of his interaction with the society by following the norms of conduct (which he calls rules of civility) created by the society, thus the society has an interest in enforcing its rules of civility through defamation law by policing breaches of these rules. Criminal defamation acknowledges that loss of reputation is a wrong to the societal interests; however these interests have not been deliberated upon by the courts in India.

    The Subramanian Swamy case was an occasion where it was imperative that the court take up this exercise and explain what interest society has in protecting the reputation of an individual for it to be classified as a public wrong. The court stated, “the law relating to defamation protects the reputation of each individual in the perception of the public at large. It matters to an individual in the eyes of the society. There is a link and connect between individual rights and the society; and this connection gives rise to community interest at large. Therefore, when harm is caused to an individual, the society as a whole is affected and the danger is perceived.” With this reasoning it can be inferred that society has an interest in all private wrongs. Where would that inference land us? This reasoning is ambiguous and inadequate.

    On the other hand, criminal penalty for perfectly private wrongs such as copyright infringement and dishonour of cheques urges us to ask if there is a problem with the rigid distinction of public and private wrongs. Should we be asking the question differently?

    The judgment has provided extremely inadequate answers to this question and has left matters ambiguous.

    B. Can the right to reputation under Article 21 be enforced against another individual’s freedom of expression and are safeguards already built in law so as not to unreasonably restrict and stifle free expression in this regard?

    Defamation finds a place in the list of constitutionally permitted restrictions on freedom of speech under Article 19 (2). Defamation law protects the right to reputation of an individual; free expression is, for this reason, subject to that right. The court has repeatedly observed that the right to reputation is part of the right to life under Article 21 of the Constitution. The question of the enforceability of the right to reputation under Article 21 against the freedom of expression under Article 19 (1) (a) arose in the instant case; it was contended that a fundamental right is enforceable against the State but cannot be invoked to serve the private interest of an individual. On this argument, the right to reputation as manifested in defamation, being a wrong committed against one private person by another, is unconnected with the State and falls outside the scope of Article 19 (2). It is pertinent to note that Article 21 (which includes the right to reputation) is enforceable not only against the State but also against private individuals. What is relevant here is an understanding of the horizontal enforceability of fundamental rights (certain fundamental rights can be enforced against private individuals and non-state actors). This would help explain the dilemma in enforcing the right to reputation of one individual against the free speech of another. It is vaguely mentioned in the judgment (see para 88) but has not been deliberated upon.

    What follows from the discussion of enforceability of right to reputation, is the discussion on how reasonably it restricts speech. The Supreme Court has previously held that while determining reasonableness, the underlying purpose of the restrictions imposed, the extent and urgency of the evil sought to be remedied thereby, the disproportion of the imposition, the prevailing conditions at the time, should all enter into the judicial verdict. We briefly analyse the critical aspects of the regime of criminal defamation on these parameters.

    Underlying purpose

    At the heart of the defamation law is the need to find the most suitable remedy for the loss of reputation of an individual. How does one restore the reputation of an individual in society, and is a criminal penalty an appropriate remedy?

    Extent of restriction

    The extent to which defamation law restricts free speech could be analysed by looking at various aspects, such as what kind of speech is considered defamatory, what procedure is followed to bring action against the alleged wrongdoer, and the scope for abuse of the law. The First Exception to Section 499 of the IPC provides that even a true statement or imputation is defamatory if it is not made for the public good; it is not sufficient to prove that such statement or imputation is in fact true. The idea of public good is at best vague, with no means to evaluate it. Further, Section 199 of the CrPC allows multiple complaints to be filed in different jurisdictions for a single offensive publication. Besides, the use of terms like “some person aggrieved” leaves room for parties other than the person in respect of whom the defamatory material is published to bring action, and the provision also allows public servants the privilege of two sets of procedures for prosecution (in official capacity and in private capacity) without satisfactory reasoning for such discrimination. These provisions have the potential to be used to file frivolous complaints and could be a handy tool for the harassment of journalists or activists, among others.

    Proportionality

    Does the publication or imputation of defamatory material warrant payment of a fine and imprisonment? Earlier in this post, we raised the question of the relevance of such measures to the act of defamation. Assuming that they are relevant, are they harsh or commensurate with the wrongful act? It is necessary to look at the process of prosecution before we determine the proportionality of the restriction. Criminal law assumes that the accused is innocent until proven guilty; until the judiciary determines that the act of defamation was committed, how does the process help the accused maintain the status quo? It is also pertinent to look at the threshold for civil defamation. Under the civil wrong of defamation, truth works as a complete defence, while under criminal defamation a statement, despite being true, could invite penalty if it is not published for the public good. Such a lower threshold for criminal liability upsets the balance of proportionality. These aspects are critical to determining the reasonableness of criminal defamation, and it is unfortunate that a judgment running into hundreds of pages has not evaluated them.

    Conclusion

    The convoluted debate on criminal defamation remains intact after the pronouncement of this judgment. Questions about the competing interests of society and individuals (or between individuals themselves), the ambiguous rationale behind the imposition of liability, and the arbitrariness of the procedure for prosecution have not been examined. Further, the difficulty of compartmentalising free speech, the right to reputation and the right to privacy remains unaddressed.

     

     

     

    Comments on Draft Electronic Health Records Standards

    by Amber Sinha — last modified Dec 15, 2016 08:45 AM
    The Centre for Internet & Society submitted its comments on the Draft Electronic Health Records Standards to the Ministry of Health and Family Welfare.

     

    To,
    Ministry of Health and Family Welfare,
    Room 307 D,
    Nirman Bhavan,
    New Delhi 110108

    Subject: Comments on the Electronic Health Record (EHR) Standards of India

    The Electronic Health Record (EHR) Standards (hereinafter “EHR Standards”) were publicly circulated on March 18, 2016 seeking comments and views from stakeholders and the general public. Having reviewed the EHR Standards and referred to other robust standards dealing with the same subject matter, we wish to submit the following comments on the EHR Standards.

    Standards and Interoperability

    The EHR Standards state that the "primary aim of interoperability standards is to ensure syntactic (structural) and semantic (inherent meaning) interoperability of data amongst systems at all times" [1]. It is mentioned that the set of standards outlined in the document represents an incremental approach to adopting standards, and that the standards need to be flexible and modifiable to adapt to the demographic and resource diversity in India.

    Comments:

    1. The EHR Standards make a reference to syntactic and semantic interoperability without really defining these terms or stipulating clear steps for how they may be achieved. It is suggested that these terms be clearly defined. Syntactic interoperability can be defined as ensuring the preservation of the clinical purpose of the data during transmission among healthcare systems. Similarly, semantic interoperability can be defined as enabling multiple systems to interpret the exchanged information in the same way through a pre-defined shared meaning of concepts [2]; a minimal illustrative sketch of such shared-meaning mapping follows this list.
    2. Inadequate human resource capacity remains a critical challenge to the adoption of e-health standards. The WHO and ITU eHealth Strategy Toolkit [3] recommends the development of effective health ICT workforce, capable of designing, building, operating and supporting e-health services. This workforce could participate in standards development, as well as the localization of international standards to fit a country's specific need. The EHR Standards should also include mechanisms and solutions to address these issues.
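    The sketch below, referred to in point 1, is a minimal illustration of one common route to semantic interoperability: mapping locally used codes to a shared terminology before exchange, so that both systems interpret the data in the same way. The local codes, system names and the shared concept code are invented for this example; they are not taken from the EHR Standards.

        # Two hospitals use different local codes for the same clinical concept.
        # Mapping both to a shared concept code before exchange preserves meaning.
        SHARED_TERMINOLOGY = {
            ("hospital_a", "DM2"): "CONCEPT:diabetes_mellitus_type_2",
            ("hospital_b", "E11"): "CONCEPT:diabetes_mellitus_type_2",
        }

        def to_shared_code(system: str, local_code: str) -> str:
            # Translate a locally used code into the shared terminology, or flag it.
            return SHARED_TERMINOLOGY.get((system, local_code), "CONCEPT:unmapped")

        # Two syntactically different records describing the same condition...
        record_a = {"system": "hospital_a", "code": "DM2"}
        record_b = {"system": "hospital_b", "code": "E11"}

        # ...become semantically comparable after mapping.
        print(to_shared_code(record_a["system"], record_a["code"]) ==
              to_shared_code(record_b["system"], record_b["code"]))  # True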

    Ownership of Data

    The physical or electronic records which are generated by the healthcare provider are held in trust by the provider on behalf of the patient [4]. It is stated that the contained data, which are sensitive personal data or personal information of the patient as per the Information Technology Act, 2000, are owned by the patient; however, the medium for storage or transmission of such data is owned by the healthcare provider.

    Comments:

    1. Currently, the EHR Standards state that the contained data which constitute the sensitive personal data of the patient are owned by the patient. While medical records and history are included within the scope of sensitive personal data under the Information Technology Act, 2000, the definition of "Personal Health Information" under the EHR Standards is more expansive. Therefore, it is recommended that all Personal Health Information be deemed to be owned by the patient.
    2. Currently, the EHR Standards do not clearly specify the bodies and individuals who would be subject to the requirements under this document. A definition similar to that of "covered entities" under the US Health Insurance Portability and Accountability Act (HIPAA) could be used [5].

    Privileges of Patient

    Currently, the privileges of the patient include the rights to inspect and view their medical records. Further, the patient can request a healthcare organization that stores/maintains their medical records to withhold specific information that they do not want disclosed to other organizations or individuals. Also, patients can demand information from a healthcare provider on the details of disclosures performed on the patient's medical records [6].

    Comments:

    1. Currently, the EHR Standards only refer to "medical records" as being available for inspection and review by patients. This should be expanded to also include information about enrollment, payment, claims adjudication, and case or medical management record systems maintained by or for a health plan, as well as other records that are used to make decisions about individuals by healthcare providers or other bodies [7].
    2. The EHR Standards do not currently stipulate that, upon request by a patient, healthcare providers must exercise timeliness in providing the information. A time limit, such as 30 calendar days, within which the healthcare provider must process the request should be clearly stated.
    3. The right of patients to request information from a healthcare provider on the details of disclosures should include within its scope the right to receive the date of the disclosure; the name and address of the entity or person who received the information; a brief description of the medical information disclosed; and a brief summary of the purpose of the disclosure [8].
    4. A right to seek amendment of one's medical records should also be provided to patients in cases where the information is incomplete.

    Patient Identifying Information

    Under the Standards, personal identifiers include the following: name; address (all geographic subdivisions smaller than street address, and PIN code); all elements (except years) of dates related to an individual (including date of birth, date of death, etc.); telephone, cell (mobile) phone and/or fax numbers; email address; bank account and/or credit card number; medical record number; health plan beneficiary number; certificate/license number; any vehicle or any other device identifier or serial number; PAN number; passport number; AADHAAR card; Voter ID card; fingerprints/biometrics; voice recordings that are non-clinical in nature; photographic images that can possibly identify the person individually; and any other unique identifying number, characteristic, or code [9].

    Comments:

    The above-mentioned list is not as adequate and exhaustive as the definition and scope of Protected Health Information under the HIPAA [10]. The following identifiers must be included within the scope of Patient Identifying Information: device identifiers and serial numbers, Web Universal Resource Locators (URLs), and Internet Protocol (IP) address numbers.
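    As a purely illustrative sketch of how such an expanded identifier list could be operationalised, the snippet below removes a configurable set of identifying fields (including the additions recommended above) from a record before any secondary use. The field names are invented for this example and the list is deliberately abbreviated.

        IDENTIFYING_FIELDS = {
            "name", "address", "pin_code", "date_of_birth", "phone", "email",
            "bank_account", "aadhaar_number", "pan_number", "passport_number",
            "voter_id", "biometrics",
            # additions recommended in the comment above:
            "device_serial_number", "url", "ip_address",
        }

        def strip_identifiers(record: dict) -> dict:
            # Return a copy of the record with identifying fields removed.
            return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

        record = {
            "name": "A. Patient",
            "ip_address": "203.0.113.7",
            "diagnosis_code": "E11",
            "visit_year": "2016",
        }
        print(strip_identifiers(record))  # {'diagnosis_code': 'E11', 'visit_year': '2016'}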

    Disclosure of Protected/Sensitive Information

    The EHR Standards state that disclosure of protected/sensitive information for use in treatment, payments and other healthcare operations must be done only after obtaining a general consent of the patient. On the other hand, disclosures for non-routine and most non-healthcare purposes must be made only after obtaining the specific consent of the patient. Only for certain specified national priority activities, such as notifiable/communicable diseases, is it stated that "the health information may be disclosed to appropriate authority as mandated by law without the patient's prior authorization."

    Comments:

    1. The terms "specific consent" and "general consent" need to be clearly defined.
    2. In cases of disclosures for non-routine and most non-healthcare purposes, a written authorisation should be mandatory. It should be clearly specified that a healthcare provider may not condition treatment, payment, enrollment, or benefits eligibility on an individual granting an authorization.
    3. There is confusion due to the use of numerous terms such as "health information", "protected health information", "sensitive personal data", "personal information" and "protected/sensitive information" in the EHR Standards for the same purpose. Some of these above terms are defined while the others are not. In order to remove the ambiguity caused due to this, it is recommended that the term "protected health information" is used throughout the document.
    4. All bodies dealing with medical data should be required to abide by the principle of "data minimisation" in use and disclosure. They must take reasonable efforts to use, disclose, and request only the minimum amount of protected health information needed to accomplish the intended purpose of the use, disclosure, or request.
    5. For internal uses, healthcare providers and other entities must develop and implement policies and procedures that restrict access to and uses of protected health information based on the specific roles of the members of their workforce; a minimal sketch of such role-based, minimum-necessary access follows this list.
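    The sketch below, referred to in point 5, is a minimal illustration of the "minimum necessary" and role-based access ideas in points 4 and 5: each workforce role sees only the fields it needs. The roles and field names are invented for illustration and are not drawn from the EHR Standards.

        ROLE_PERMITTED_FIELDS = {
            "billing_clerk": {"patient_id", "visit_date", "amount_billed"},
            "treating_doctor": {"patient_id", "visit_date", "diagnosis", "medications"},
        }

        def minimum_necessary_view(record: dict, role: str) -> dict:
            # Return only the fields the given role is permitted to access.
            allowed = ROLE_PERMITTED_FIELDS.get(role, set())
            return {k: v for k, v in record.items() if k in allowed}

        record = {
            "patient_id": "P-001",
            "visit_date": "2016-04-01",
            "diagnosis": "E11",
            "medications": ["metformin"],
            "amount_billed": 1200,
        }
        print(minimum_necessary_view(record, "billing_clerk"))
        # {'patient_id': 'P-001', 'visit_date': '2016-04-01', 'amount_billed': 1200}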


    Amber Sinha,
    Centre for Internet and Society,
    No. 194, 2nd 'C' Cross,
    Domlur, 2nd Stage,
    Bengaluru, 560071


     

    [1] Page 7 of the EHR Standards.

    [2] Funmi Adebesin, Rosemary Foster, Paula Kotze, Darelle van Greunen, "A review of interoperability standards in e-Health and imperatives for their adoption in Africa", Research Article - SACJ No. 50, July 2013; L. E. Whitman and H. Panetto. "The missing link: Culture and language barriers to interoperability", Annual Reviews in Control, vol. 30, no. 2, 2006.

    [3] WHO and ITU. "National eHealth Strategy Toolkit", available at http://goo.gl/uxMvE.

    [4] Page 19 of the EHR Standards.

    [5] Covered Entity includes a healthcare provider ( Doctors, Clinics, Psychologists, Dentists, Chiropractors, Nursing Homes, Pharmacies), a health plan (Insurance companies, HMOs, Company Health Plans, Government programs that pay for health care) and Healthcare Clearinghouse.

    [6] Page 20 of the EHR Standards.

    [7] Individuals' Right under HIPAA to Access their Health Information 45 CFR § 164.524, available at http://www.hhs.gov/hipaa/for-professionals/privacy/guidance/access/ .

    [8] Patient Rights Under HIPAA Accounting of Disclosures of Health Information, available at http://uthscsa.edu/hipaa/patientrights/accountingofdisclosures.pdf.

    [9] Page 21 of the EHR Standards.

    Submission by the Centre for Internet and Society on Draft New ICANN By-laws

    by Vidushi Marda last modified May 31, 2016 02:49 AM
    The Centre for Internet & Society sent its comments on the Draft New ICANN Bylaws. The submission was prepared by Pranesh Prakash, Vidushi Marda, Udbhav Tiwari and Swati Muthukumar. Special thanks to Sunil Abraham for his input and feedback.

    We at the Centre for Internet and Society are grateful for the opportunity to comment on the draft new ICANN by-laws. Before we comment on specific aspects of the Draft by-laws, we would like to make a few general observations:

    Broadly, there are significant differences between the final form of the by-laws and what was recommended by the participants in the IANA transition process through the ICG and the CCWG. The by-laws have been shown to be unnecessarily complicated, lopsided, and skewed towards U.S.-based businesses in their past form, and this continues to be reflected in the current form of the draft by-laws.

    The draft by-laws are overwrought, though some of that is the fault not of the by-laws but of the CCWG process itself. Instead of producing a broad constitutional document for ICANN, the by-laws read like the worst of governmental regulations that go into unnecessary minutiae and create more problems than they solve. Things that ought not to be part of fundamental by-laws — such as the incorporating jurisdiction of PTI, on which no substantive agreement emerged in the ICG — have been included as such.

    Simplicity has been seen as a sin and has made participation in this complicated endeavour an even more difficult proposition for those who don’t choose to participate in the dozens of calls held every month. On specific substantive issues, we have the following comments:

    Jurisdiction of ICANN’s Principal Office

    Maintaining by-law Article XVIII, which states that ICANN has its principal office in Los Angeles, California, USA, these Draft by-laws make an assumption that ICANN’s jurisdiction will not change post transition, even though the jurisdiction of ICANN and its subsidiary bodies is one of the key aspects of post transition discussion to be carried out in Work Stream 2 (WS2). Despite repeated calls to establish ICANN as an international community based organisation (such as the International Red Cross or International Monetary Fund), the question of ICANN's future jurisdiction was deferred to WS2 of the CCWG-Accountability process. All of the new proposed by-laws have been drafted with certainty upon ICANN's jurisdiction remaining in California. Examples of this include the various references to the California Civil Code in the by-laws and repeated references to entities and structures (such as public benefit corporations) in the fundamental by-laws of the ICANN that can only be found in California.

    This would make redundant any discussion in WS2 regarding jurisdiction, since any decisions taken there could not be implemented without upending the decisions relating to accountability structures made in WS1 and embedded in the by-laws.

    CIS suggests that a provision be expressly inserted in the by-laws to allow changes to the by-laws in WS2 insofar as matters relating to jurisdiction and other WS2 issues are concerned, to make it clear that there is a shared understanding that WS2 decisions on jurisdiction are not meant to be redundant.

    Jurisdiction of the Post-Transition IANA Authority (PTI)

    The structure of the by-laws and the nature of the PTI in Article 16 make its Californian jurisdiction integral to the very organisation as a whole and control all its operations, rights and obligations. This is so despite this issue not having been included in the CWG report (except for footnote 59 in the CWG report, and as a requirement proposed by ICANN’s lawyers, to be negotiated with PTI’s lawyers, in Annex S of the CWG report).  The U.S. government’s requirement that the IANA Functions Operator be a U.S.-based body is a requirement that has historically been a cause for concern amongst civil society and governments.  Keeping this requirement in the form of a fundamental by-law is antithetical to the very idea of internationalizing ICANN, and is not something that can be addressed in Work Stream 2.

    CIS expressed its disagreement with the inclusion of the U.S.-jurisdiction requirement in Annex S in its comments to the ICG. Nothing in the main text of the CWG or ICG recommendations actually necessitates Californian jurisdiction for the PTI. Thus, the draft by-laws clearly include this as a fundamental by-law despite it not having achieved any form of documented consensus in any prior process. Its status as a fundamental by-law would make shifting the PTI’s registered and principal office almost impossible once the by-laws are passed.

    No reasoning or discussion has been provided to justify the structure, location and legal nature of the PTI. The fact that the revenue structure, by-laws and other details have not even been hinted at in the current document indicates that the true rights and obligations of the PTI have been left to the sole discretion of ICANN while simultaneously being granted fundamental by-law protection. This is not only deeply problematic as a delegation of excessive responsibility for a key ICANN function without due oversight, but also leads to a situation where the community agrees to be bound to a body whose fundamental details have not yet been settled, and which is nonetheless enshrined in a fundamental by-law.

    CIS would therefore suggest that the PTI-related clauses in the by-laws be limited to those on which existing global Internet community consensus can be shown, and the PTI’s jurisdiction is not something on which such consensus can be shown to exist. Therefore the by-laws should be rewritten to make them agnostic to PTI’s jurisdiction. Further, CIS suggests that the law firm appointed for PTI need not be American, since U.S.-based law firms are not the only option: there are capable law firms in Brazil, France, and India as well.

    We would also like to note that we have previously proposed that PTI’s registered office and ICANN’s registered office be in different jurisdictions to increase jurisdictional resilience against governmental and court-based actions.

    Grandfathering Agreements Clause

    A fair amount of discussion has taken place on the CCWG mailing list about Section 1.1(d)(ii), which concerns the inclusion of certain agreements within the scope of protection granted to ICANN from its Mission and Objective statement goals. CIS largely agrees with the positions taken by the IAB and the CCWG in their comments demanding the removal of parts B, C, D, E and F of Section 1.1(d)(ii), as all of these are agreements that were not included in the scope of the CCWG Proposal, and a fair few of them (such as the PTI agreement) have not even been created yet. This leads to practical and legal issues for ICANN as well as the community, as it restricts possible accountability and transparency measures that may be taken in the future.
    CIS therefore agrees with the IAB and the CCWG in this regard and supports their request that these grandfathering provisions be removed.

    Inspection Rights

    Section 22.7 severely limits the transparency of ICANN’s functioning, and we believe it should be amended.

    (a) It limits Inspection Requests to Decisional Participants and does not allow any other interested party to make a request for inspection. While the argument has been made that Californian law requires inspection rights for decisional participants, neither the law nor the CCWG’s recommendations require restricting inspection rights to decisional participants. CIS’s suggestion is to allow any member of the public to make a request for examination, but to require non-decisional participants to declare the nature of the public interest behind their requests, so that an undue number of requests is not made for the purpose of impairing the operations of the organisation.

    (b) The definition of ‘permitted scope’ is unclear yet extremely limited: it does not allow one to question any ‘small or isolated aspect’ of ICANN’s functioning, and there is no explicit definition of what constitutes matters relevant to the operation of ICANN as a whole, leaving a loophole for potential exploitation. CIS suggests the removal of this statement and allowing only the limitations listed in Section 22.7 (b) on Inspection Requests.

    (c) There is no hard deadline provided for the information to be made available to the querying body, thus allowing for inordinate delays on the part of ICANN, which is open to abuse. CIS suggests the removal of the clause ‘or as soon as reasonably practicable thereafter’ in this section.

    (d) The by-laws insist that the material be used only for restricted purposes. CIS suggests that, as a step towards ICANN’s transparency, it is essential that the use of the information be allowed for any reason deemed necessary by the person demanding inspection. There is no clear reason to restrict non-confidential material to use in EC proceedings. This requirement should be removed.

    Work Stream 2 Topics

    Section 27.2, which covers necessary topics for WS2, currently does not include key aspects such as PTI documents, jurisdictional issues, etc. In this light, we suggest that they be included and a clause be inserted to indicate that this list of topics is indicative and the CCWG can expand the scope of items to be worked on in WS2 as well as make changes to work completed in WS1 (such as these by-laws) to meet WS2 needs as well.

    FOI-HR

    Section 27.3 (a) requires the FOI-HR to be approved by “(ii) each of the CCWG-Accountability’s chartering organizations”, which is inconsistent with the CCWG proposal that forms the basis for these by-laws. The requirement of formal approval from every Chartering Organisation in the current draft is inconsistent with Annex 6 of the CCWG proposal, which has no such requirement.

    CIS strongly advocates for a change in the bylaw text to align with the intent of the CCWG Accountability report, and to reflect that the process of developing the FOI-HR shall follow the same procedure as Work Stream 1.

    Contracts with ICANN

    Section 27.5 currently states that “Notwithstanding the adoption or effectiveness of the New by-laws, all agreements, including employment and consulting agreements, entered by ICANN shall continue in effect according to their terms.”

    As the section currently stands, there is a possibility that agreements in contravention of the by-laws may be intentionally brought into effect before the commencement of the operation of ICANN’s Mission statement in the said by-laws. The clause may be updated as follows to avoid this —

    “Notwithstanding the adoption or effectiveness of the New by-laws, all agreements, including employment and consulting agreements, entered by ICANN shall continue in effect according to their terms, provided that they are in accordance with ICANN’s Mission Statement.”

    Criminal Defamation and the Supreme Court’s Loss of Reputation

    by Bhairav Acharya last modified Jun 03, 2016 03:05 AM
    The Supreme Court’s refusal, in Subramanian Swamy v. Union of India, to strike down the anachronistic colonial offence of criminal defamation is wrong. Criminalising defamation serves no legitimate public purpose; the vehicle of criminalisation – sections 499 and 500 of the Indian Penal Code, 1860 (IPC) – is unconstitutional; and the court’s reasoning is woolly at best.

    The article was published in the Wire on May 14, 2016.


    Politics and censorship

    Two kinds of defamation actions have emerged to capture popular attention. First, political interests have adopted defamation law to settle scores and engage in performative posturing for their constituents. And, second, powerful entities such as large corporations have exploited weaknesses in defamation law to threaten, harass, and intimidate journalists and critics.

    The former phenomenon is not new. Colonial India saw an explosion of litigation as traditional legal structures were swept away and native disputes successfully migrated to the colonial courts. These included politically-motivated defamation actions that had little to do with protecting reputations. In fact, defamation litigation has long become an extension of politics, in many cases a new front for political manoeuvring.

    The latter type of defamation action is far more sinister. Powerful elites, both individuals and corporations, have cynically misused the law of defamation to silence criticism and chill the free press. By filing excessive and often unfounded complaints that are dispersed across the country, which threaten journalists with imprisonment, powerful elites frighten journalists into submission and vindictively hound those who refuse to back down. Such actions are called Strategic Lawsuits against Public Participation (SLAPPs) which Rajeev Dhavan warns have created a new system of censorship.

    Petitions and politicians

    Defamation originates from the concept of scandalum magnatum – the slander of great men – which protected the reputations of aristocrats. The crime was linked to sedition, so insulting a lord was akin to treason. In today’s neo-feudal India, political leaders are contemporary aristocrats. Investigating them can invite devastating consequences, even death. Most of the time, they retaliate through defamation law. Since the criminal justice system is most compromised at its base, where the police and magistrates directly interact with people, the misuse of criminal defamation law hurts ordinary citizens.

    This is different from politicians prosecuting each other since they rarely, if ever, suffer punishment. Of all the petitions before the Supreme Court concerning the decriminalisation of defamation, the three that received the most news coverage were those of Subramanian Swamy, Rahul Gandhi, and Arvind Kejriwal. They are all politicians, and their petitions were made in response to defamation complaints filed by rival politicians. On the other hand, there are numerous cases which politicians have filed against private members of civil society to silence them. When presented with these concerns, the Supreme Court simply failed to seriously engage with them.

    The architecture of defamation

    Defamation has many species, a convoluted history, and complex defences. Defamation can be committed by the spoken word, which is slander, or the written word, which is libel. The historical distinction between these two modes of defamation is based on the permanence of written words. Before the invention of the printing press, the law was chiefly concerned with slander. But as written ideas proliferated through mass publication technologies, libel came to be viewed as more malevolent and the law visited serious punishments on writers and publishers.

    Such a distinction presumes a literate readership. In largely illiterate societies, the spoken word was more potent. This is why films and radio have long attracted censorship and state control in India. Before mass publishing forked defamation into libel and slander, there existed only the historical crime of libel. Historical libel had four species: seditious libel, blasphemous libel, obscene libel, and defamatory libel.

    Seditious libel, which has been repealed in Britain, prospers in India as the offence of sedition which is criminalised by section 124A of the IPC. Blasphemous libel, repealed in Britain, fares well in India as the offence of blasphemy under section 295A of the IPC. Obscene libel, as the offence of obscenity, is criminalised by section 294 of the IPC. And defamatory libel, repealed in Britain, which is the offence of criminal defamation that the Subramanian Swamy case upheld, continues to exist under section 499 of the IPC.

    Confusing harms

    Of the many errors that litter the Supreme Court’s May 13, 2016 judgment in the Subramanian Swamy case, perhaps the most egregious is the failure to recognise the harm that criminal defamation poses to a healthy civil society in a free democracy. At the crux of this mistake is the Supreme Court’s failure to distinguish between private injury and social harm. Two people may, in their private capacities, litigate a civil suit to recover damages if one feels the other has injured her reputation. This private action of defamation was not in issue before the court.

    On the other hand, by criminalising defamation, why should the state protect the reputations of individuals while expending public resources to do so? This goes to the concept of crime. When an action is serious enough to harm society it is criminalised. Rape strikes at the root of public safety, human dignity, equality, and peace, so it is a crime. A breach of contract only injures the party who was expecting the performance of contractual duties; it does not harm society, so it is not a crime. Similarly, a loss of reputation, which is by itself difficult to quantify, does no harm to society and so it should not be a crime.

    Truth and the public good

    It may be argued, and the Supreme Court hints, that at its fundament, society is premised on the need for truth; so lies should be penalised. This is where defamation law wanders into moral policing. In Indian and European philosophies, truth is consecrated as a moral good. The Supreme Court quotes from the Bhagavad Gita on the virtue of truth. But while quotes like these are undoubtedly meaningful, they have no utility in a constitutional challenge. In reality, society is composed of truth, lies, untruths, half-truths, rumour, satire, and a lot more. In fact, the more shades of opinion there are, the livelier that society is. So lies should not invite criminal liability.

    If we concede the moral debate and arrive at a consensus that the law must privilege truth over lies, then truth alone should be a complete defence to defamation. If the law criminalises untruth, then it must sanctify truth. That means when tried for the crime of defamation, a journalist must be acquitted if her writing is true. But the law and the Supreme Court require more. In addition to proving the truth, the journalist must prove that her writing serves the public good. So speaking truth is illegal if it does not serve the public good.

    In fact, truth has only recently been recognised as a defence to defamation, albeit not a complete defence. This belies the social foundations of criminal defamation law. The purpose of the offence is not to uphold truth, it is to protect the reputations of the powerful. But what is reputation? The Supreme Court spends 25 pages trying to answer this question with no success. Instead, the court declares that reputation is protected by the right to life guaranteed by Article 21 of the Indian Constitution but it offers no sound reasoning to support this claim. The court also fails to explain why the private civil action of defamation is insufficient to protect reputation.

    The constitution and constitutionalism

    There are two core constitutional questions posed by the Subramanian Swamy case. They are:

    • Does the crime of defamation fall within one of the nine grounds listed in Article 19(2) of the constitution; and
    • Are sections 499 and 500 of the IPC which criminalise and punish defamation reasonable restrictions on the right to free speech?

    Article 19(2) contains nine grounds in the interests of which a law may reasonably restrict the right to free speech. Defamation is one of the nine grounds, but the provision is silent as to which type of defamation, civil or criminal, it considers. However, B.R. Ambedkar’s comments in the Constituent Assembly arguably indicate that criminal defamation was intended to be a ground to restrict free speech.

    The answer to the second question lies in measuring the reasonableness of the restriction criminal defamation places on free speech. If the restriction is proportionate to the social harm caused by defamation, then it is reasonable. However, restating an earlier point, criminalising defamation serves no legitimate public purpose because society is unconcerned with the reputations of a few individuals. Even if society is concerned with private reputations, the private civil action of defamation is more than sufficient to protect private interests. Further, the danger that current criminal defamation law poses to India’s free speech environment is considerable. Dhavan says: “Defamation cases [are] a weapon by which the rich and powerful silence their critics and censor a democracy.”

    The Subramanian Swamy case highlights several worrying trends in India’s constitutional jurisprudence. The judgment is delivered by one judge speaking for a bench of two. Such critically significant constitutional challenges cannot be left to the whims of two unelected and unaccountable men. Moreover, from its position as the guarantor of individual freedoms, the Supreme Court appears to be in retreat. This will have far-reaching and negative consequences for India’s citizenry. If the court fails to enhance individual freedoms, what is its constitutional role? The judiciary would do well to stay away from policy mundanities and focus on promoting India’s democratic project, lest it injure its own reputation.

    Women's Safety? There is an App for That

    by Rohini Lakshané last modified Jan 10, 2017 02:48 AM
    “After locking ourselves in a room for more than 6 days, this is what we came out [sic] with. Join us in helping make WOMEN feel SAFE,” read a gloating press release about a smartphone app for women to notify their near ones that they were in distress. It was one among many such PRs frequently landing in my mailbox after the rape and murder of a young student on board a private bus in Delhi in 2012.

    The article by Rohini Lakshané was published in GenderIT.org on May 19, 2016. It was also mirrored by Feminism in India on January 9, 2017.


    The incident had spurred protests across the country and made international headlines. Along with all this came a slew of new “women’s safety” apps. Existing ones, many of which had fizzled out, were conveniently relaunched. My own experience of user-testing such apps in India back then was that they were unreliable at best and dangerously counterproductive at worst. Some of them were endorsed by governments and celebrities and ended up being glorified despite their flaws, their technical and systemic handicaps never acknowledged at all.

    There are myriad mobile phone apps meant to be deployed for personal safety, but their basic functioning is more or less the same: the user activates the app (by pressing a button, shaking the device or a similar cue), which sends a distress message containing the user’s location to pre-defined contacts. Some apps include additional artefacts such as a short audio or video recording of the situation. Some others augment this mechanism by alerting the police and other agencies best placed to respond to the emergency. For example, the Companion app for students living on campus notifies the university along with the police. The SOS buttons in taxi-hailing apps such as Uber notify the user’s contacts and the cab company’s “incident response team” of emergencies and enable the contacts to follow the cab’s GPS trail. Apps such as Kitestring would treat the lack of the user’s response within a time-window as the trigger for a distress message. All their technical wizardry perhaps makes it easy to lose sight of the fact that technology is not a saviour but a tool or an enabler, that technology alone cannot be the panacea for a problem that is deeply complex and, in reality, rooted in society and governance.
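
    To make that shared mechanism concrete, the following is a minimal, illustrative sketch (in Python) of the two trigger patterns described above: an explicit panic trigger and a Kitestring-style missed check-in. It is not the code of any particular app; the contact list and the send_sms() and current_location() helpers are hypothetical placeholders for whatever messaging and location APIs a real app would use on the device.

        # A minimal, illustrative sketch of the trigger logic common to such apps.
        # The contact list, send_sms() and current_location() are hypothetical
        # placeholders, not any real app's API.
        import threading
        import time

        TRUSTED_CONTACTS = ["+91XXXXXXXXXX"]  # pre-defined contacts (placeholder)


        def send_sms(number, message):
            # Placeholder transport: a real app would call a messaging or push API.
            print("SMS to {}: {}".format(number, message))


        def current_location():
            # Placeholder: a real app would query the device's GPS / network location.
            return "lat=28.6139, lon=77.2090 (approximate)"


        def send_distress(reason):
            # Send a distress message with the last known location to every contact.
            for number in TRUSTED_CONTACTS:
                send_sms(number, "DISTRESS ({}). Location: {}".format(reason, current_location()))


        class CheckInTimer:
            """Kitestring-style trigger: a missed check-in within the window raises the alert."""

            def __init__(self, window_seconds):
                self.window = window_seconds
                self.timer = None

            def start(self):
                self.timer = threading.Timer(self.window, send_distress, args=("missed check-in",))
                self.timer.start()

            def check_in(self):
                # The user responded in time: cancel the pending alert and restart the window.
                if self.timer:
                    self.timer.cancel()
                self.start()

            def cancel(self):
                if self.timer:
                    self.timer.cancel()


        if __name__ == "__main__":
            # An explicit panic button would simply call send_distress("panic button").
            watcher = CheckInTimer(window_seconds=5)  # short window for demonstration
            watcher.start()
            time.sleep(2)
            watcher.check_in()   # responded in time, so no alert yet
            time.sleep(6)        # no further check-in: the distress message fires
            watcher.cancel()

    Everything beyond this trigger, from location accuracy to delivery of the message and who responds to it, depends on the device, the network, and the responders, which is precisely where the problems described below arise.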

    The Indian government announced last month that every phone sold in the country from January 2017 should be equipped with a panic button that sends distress flares to the police and a trusted set of contacts. Nearly half the phones sold in India cost USD 100 or less. Prices are kept so low by sacrificing features and the quality of the hardware; there are a lot of phones with substandard GPS modules, poor touchscreens, slow processors, bad cameras, tiny memory, and dismal battery life. They run on different versions of different operating systems, some of them outdated. All of these factors would determine whether someone is able to use the app at all and how quickly they and their phone would be able to respond to an emergency. Additionally, mobile phone signals become thin or shaky in areas with a high number of users and buildings located cheek-by-jowl. Even when the mobile hardware is good and the mobile signal usable, GPS accuracy can be spotty and constant location tracking would drain the battery. These issues would affect the efficacy of any app. Besides, there is too much uncertainty for an app developer to factor in. (Two years ago, I learnt about an app called Pukar, then operational in collaboration with police departments in four cities in India. Pukar solved the problem of potential inaccuracy of the GPS location by getting the user’s contacts to tell the police where the person in distress might be.) Designing a one-size-fits-all safety app is almost impossible. An app that rings a loud alarm when triggered may save one person’s life but spoil the chances of another who is trying to get help while hiding. Different people may be vulnerable to different kinds of distress situations and an app can at best be optimised for some target user groups.

    In the end, the “technical” problems may actually be problems of economic disparity. Making it mandatory for people to own phones equipped with certain hardware or requiring them to upgrade to more reliable devices would drive the phones out of the financial reach of many. Indian manufacturers have expressed concerns that the proposed panic button would raise costs for them as well as the end buyers. Popularising a downloadable app and informing its target users how to install and work it correctly needs a marketing blitzkrieg, which is something only the state or well-funded developers can afford. The New Delhi police department runs a dedicated control room for reports arriving from its safety app, Himmat (the word for courage in many Indian languages). It’s an expensive affair.

    An app that does not work in tandem with existing machinery for law enforcement and public safety is a bad idea. It puts the onus of “keeping women safe” on members of their social circles or on intermediaries and private parties such as cab companies, while absolving law enforcement agencies of their failing to provide security. It opens doors to victim blaming in case someone is unable to use the app at the right time in the right way, or if the app fails.

    On the other hand, an app that does loop in the police raises concerns about surveillance and protection of the data available to the police, which is especially problematic in places such as India where there is no law for privacy or data protection. Alwar, one of the cities where Pukar was implemented, is densely populated, covers a large geographical area and has a high crime rate. Police departments in such places tend to be overworked and understaffed. Without significant policing reforms, it is questionable whether they will be able to respond in time. A sting operation done by two media outlets on 30 senior officials of the New Delhi police department in 2012 showed the cops blaming victims of sexual violence with gay abandon. “If girls don't stay within their boundaries, if they don't wear appropriate clothes, then naturally there is attraction. This attraction makes men aggressive, prompting them to just do it [sexual assault]," reads one of their nuggets. “It's never easy for the victim [to complain to the police]. Everyone is scared of humiliation. Everyone's wary of media and society. In reality, the ones who complain are only those who have turned rape into a business," goes another. An app that lets known people monitor someone’s location also poses the risk of abuse, coercion and surveillance by intimate partners or members of the family.

    Unfortunately, there is no app for reforming a morass in law enforcement or dismantling patriarchy.

    Facebook: A Platform with Little Less Sharing of Personal Information

    by Nishant Shah last modified Jun 05, 2016 02:38 AM
    As Facebook becomes less personal, what happens to digital friendship?

    The article was published in the Indian Express on May 8, 2016.


    Facebook is worried. Even though usage is growing, something strange is happening on the social network. For the first time in its journey from a website for rating datable people on college campuses to the global reference point that defines friendship in the connected age, people are sharing less personal information on Facebook. For a social media network that positions itself largely as a space where our everyday, banal doings become newsworthy articulations, this is surprising news. But it is true. On Facebook, traffic is high, but most of it is now the sharing of external information. People are sharing links to news, to listicles, to videos, to blog entries, to pictures and to information that they find interesting, but they are writing less and less about what they are doing and feeling.

    Ironically, this coincides with the latest change in Facebook’s “response” options, where the ubiquitous “Like” button can now expand to other emojis where you can also be appropriately angry, sad, surprised, or happy about the shared content. Even as Facebook is trying to get its users to qualify how they feel and give emotional value to their likes, people seem to be sharing even less of their private lives on Facebook.

    One of the key ways of understanding this drop in people sharing their personal information is through the concept of “context collapse”. It has been a concern since the first instances of disembodied digital communication. In our everyday life, we make sense of information based on the different contexts that surround us. The person who authors the information, the setting within which that information reaches us, the emotional state that we are in when encountering the information, our sense of where we are when processing it, and the preparedness we have for receiving this information are all crucial parameters by which we make sense of the meaning of the information and also our response to it.

    In the case of Facebook, the context within which information and transactions have made sense is “friendship”. The site’s USP was that you could bring in a variety of information, but you were always sharing it with friends. You could have a large audience, but this audience is formed of people you know, people you trust, people you add to your friend groups — there is a sense of intimacy, privacy, and casualness that marks the flow of information. You are able to talk, in an equal breath, about what you had for breakfast, your crush on a celebrity, your random acts of charity, and your strong political rant, one after the other, without having to think about what you are posting and how others will receive it.

    However, Facebook is not really a friendship platform. It is a company interested in selling our interactions and data to advertisers who can target us with content and information based on the patterns of our behaviour. To serve its advertisers better, Facebook started privileging “verified” information, trying to ensure higher attention and more eyeballs for news and content producers. This was further strengthened by its continued integration with third-party vendors, who could push and pull information into the social world of Facebook, and it is seen as one of the biggest reasons for the drop. Any newsfeed in the last few months has had equal amounts of professional and amateur content, leading to a context collapse, where you no longer feel like your Facebook feed is a private and intimate conversation with friends.

    Similarly, Facebook’s expansive integration of its products — WhatsApp chats, Instagram updates, and Tumblr posts can all collapse into one — has produced a confusing space where the personal information that you were once happy to share with your friends is suddenly being shared alongside news and information. Also, digital behaviour works on mirroring, and we often shape our updates to match what we see on our timelines. If we see more and more external content rather than personal updates, we also start sharing more third-party news and links, producing a domino effect in which everybody shies away from sharing extremely personal or intimate moments.

    Facebook, for the millennials, has been the context within which friendship got structured. Its own transitions have now collapsed that context, leading people to think of it as a content aggregator. It is going to be interesting to see what happens to our digital friendships and networks if Facebook is no longer the space where they are housed.

    Online Censorship on the Rise: Why I Prefer to Save Things Offline

    by Nishant Shah last modified Jun 05, 2016 03:26 AM
    As governments use their power to erase what they do not approve of from the web, cloud storage will not be enough.

    The article was published in the Indian Express on April 17, 2016.


    It took me some time to trust the cloud. Growing up with digital technologies that were neither resilient nor reliable — a floppy drive could go kaput without you having done anything, a CD once scratched could not be recovered, hard drives malfunctioned and it was a given that once every few months your PC would crash and need a re-install — I have always been paranoid about making backups and storing information. Once I kicked into my professional years, I developed a foolproof, albeit paranoid, system where I backed up my machines to a common hard drive, made a mirror image of that hard drive, and put absolutely crucial documents on to a separate emergency DVD. It was around 2006 that I discovered the cloud.

    It began with Google’s unlimited email accounts where you could mail information to yourself and then it would stay there for a digital eternity. I noticed that the size of my digital storage began decreasing. I no longer download videos I find on the web. I don’t save information on a device and I have come to think of the web as one large cloud, relying on the fact that if something is online once, it will always be available to me.

    However, over the last couple of months, I have started noticing something different in my usage patterns. These days, when I do come across interesting information, instead of merely indexing it, I find myself making an offline copy of that information. Tweets enter a Storify folder. YouTube videos get downloaded. I make PDF copies of blogs and take screenshots of digital media updates. I have been wondering why I am suddenly so invested in archiving the web when, theoretically, it is always there.

    When I voiced this to a group of young students, I was surprised to hear that I wasn’t alone. The web is becoming a space that is crowded with take-downs, deletions, removals, and retractions which leave no archival memory. The students quickly pointed out that these take-downs are not just personal redactions. In fact, what we personally choose to remove has very little chance of actually disappearing from the web. Instead, these are things that are removed by governments, private companies and intermediaries who are increasingly being held liable for the content that they make available.

    Turkey, recently, demanded that German authorities remove a satirical German video titled Erdowie, Erdowo, Erdogan mocking the Turkish President. In response, Germany reminded Turkish diplomats of that lovely little thing called freedom of speech, and in the meantime, Extra 3, the group that had released the video on YouTube, added English subtitles to the video. Just for perks. I hope you gave a brownie point to Germany, even as you scrambled to see the video.

    On the home front, though, things are not as celebratory. The minister of state for information and broadcasting, Rajyavardhan Rathore, and the head of the BJP’s information and technology cell, Arvind Gupta, have called for action against journalist Raghav Chopra, who tweeted a photoshopped image of PM Narendra Modi bending down to touch the feet of a man dressed in Saudi Arabia’s national dress, as a political comment on the PM’s recent visit to Saudi Arabia.

    The two politicos, who have not had much to say about the doctored videos that were used to implicate innocent students in JNU or the photoshopping that the government’s Press Information Bureau had indulged in to give us that iconic image of the prime minister doing an aerial survey of #ChennaiFloods, have taken umbrage at an image because it seems (obviously) false, and are demanding its takedown.

    My proclivity for saving things offline is perhaps fuelled by this web of partisan censorship and the atmosphere of precarious hostility that governments seem to be supporting. Increasingly, we have seen, in India and around the globe, a rush of political power that exercises its clout to remove information, images and stories that they do not approve of.

    Instinctively, I am reacting to the fact that intellectual questioning and cultural critique are being removed from the web at the behest of these vested powers, and that the cloud, light and airy as it sounds, is prone to some incredible acts of censorship and removal. I have found myself facing so many removal notices and take-down errors when trying to revisit bookmarked sites that I am beginning to feel the only way to keep my information safe might be to archive the whole web on a personal server.

    A Large Byte of Your Life

    by Nishant Shah last modified Jun 05, 2016 03:35 AM
    With the digital, memory becomes equated with storage. We commit to storage to free ourselves from remembering.

    The article was published in Indian Express on April 3, 2016.


    This is the story of a broken Kindle. A friend sent a message to a WhatsApp group I belong to, saying that she was mourning the loss of her second-generation Kindle, which she had bought in 2012 and which had been her regular companion ever since. It is not a story of hardware malfunction or of a device just giving up. Instead, it is a story of how quickly we forget the old technologies that were once new. The friend, on her Easter holiday, was visiting her sister, who has a six-year-old daughter.

    This young one, a true digital native, living her life surrounded by smart screens, tablets, phones, and laptops, instinctively loves all digital devices and plays with them. In her wanderings through her aunt’s things, she came across the old Kindle — unsmart, without a touch interface, studded with keys, not connected to any WiFi, and rendered in greyscale. It was an unfamiliar device. But with all the assurance of somebody who can deal with digital devices, she took it in her hands to play with it.

    Much to her dismay, none of the regular modes of operation worked. The old Kindle did not have a touch-screen-operated lock. It wasn’t responding to scroll, swipe and pinch. It had no voice command functions. As she continued to cajole it to come to life, it only stared at her, a lock on the digital interface, refusing to budge to the learned demands and commands of the new user. After about 20 minutes of trying to wake the Kindle up, she became frustrated, banged it hard on the table, and it cracked; the screen blanked out and that was the end of the story.

    Or rather, it is the beginning of one. As my friend registered the loss of her clunky, clumsy, heavy, non-intuitive Kindle, and messages of grief poured in, with condolences that the new ones are so much better and assurances that at least all her books are safe on the Amazon cloud, I see in this tale the quest for newness that the digital always has on offer.

    In case it has escaped your attention, the digital is always new. Our phones get discarded every few seasons, even as phone companies release new models every few months. Our operating systems are constantly sending us notifications that they need to be updated. Our apps operate in stealth mode, continually adding updates where bugs are fixed and features are added. Most of us wouldn’t know what to do if we were faced with a computer that doesn’t “heal”, “backup” or “restore” itself. If our lives were to be transferred back to dumb phones, or if we had to deal with devices that do not strive to learn and read us, it might lead to some severe anxiety.

    The newness that the digital offers is also found in our socially mediated lives. Our digital memories are short-lived — relationships rise and fall in the span of days as location-based dating apps offer an infinite range of options to choose your customised partner; celebrities are made and unmade overnight as clicks lead to viral growth and then disappear to be replaced by the next new thing; communities find droves of subscribers, only to become a den of lurkers where nothing happens; must-have apps find themselves discarded as trends shift and new must-haves crop up overnight. Breathless, bountiful and boundless, the digital keeps us constantly running, just to be in the same place, always the same and yet, always new.

    We would be hard pressed to remember that magical moment when we first discovered a digital object. For millennials, the digital is such a natural part of their native learning environments that they do not even register the first encounter or the subsequent shifts as they navigate across the connected world. Increasingly, we tune ourselves to the temporality and the acceleration of the digital, tailoring our memories to what is important, what is now, and what is immediately of use, excluding everything else and dropping it into digital storage, assured in our godlike capacities to archive everything.

    This affordance of short digital memories is enabled partly by the fact that we are subject to information overload, but partly also by the fact that our machines can now remember, more accurately and more robustly than the paltry human, prone to error and forgetfulness. With the digital, memory becomes equated with storage, and the more we commit to storage, the more we free ourselves from the task of remembering.

    The broken Kindle is a testimony not only to the ways in which we discard old devices but also our older forms of individual and collective memory — quickly doing away with information that is not of the now, that is not urgent, and that does not have immediate use value. My friend’s Kindle got replaced in two days. All her books were re-loaded and she was set to go. However, as she told me in a chat, she is not going to throw away her old broken Kindle. Because she wants to remember it — remember the joy of reading her favourite books on it. She is scared that if she throws it away, she might forget.

    The Digital is Political

    by Nishant Shah last modified Jun 05, 2016 03:58 AM
    To speak of technology is to speak of human life and living.

    The article was published in the Indian Express on March 20, 2016.


    “You are supposed to write about the internet, why do you keep talking about all this politics?” I was taken aback when I was faced with this question. It is true – since the year began, I have talked about digital education and the ways in which it needs to account for unexpected and underserved communities, and about net neutrality and why the Indian government needs to build a stronger, safer, and more inclusive digital ecosystem. I have written about freedom of speech and expression and how this is going to be the year when we stand together to save the internet from vested interests that seek to convert it from a public commons into a private commodity.

    In my head, all these questions — of inclusion, of access, of presence, of rights — are questions of human life and living, but they are also those that are being hugely restructured by the internet and digital technologies. When faced with the query, I was reminded of a deep-seated division that has been at the heart of digital cultures.

    Way back in the ’90s, when the internet was still a space of science fiction and the World Wide Web was in its nascent stages, there was a distinction made between Virtual Reality (VR) and Real Life (RL). The presumption in the construction of these categories was that the digital is only an escape, the technological is merely a prosthesis, and the internet is just a thing that a few geeks engaged with in their free time. However, the last three decades have made this distinction between VR and RL redundant.

    We live in digital times. The digital is not just something we use strategically and specifically to do a few tasks. Our very perception of who we are, how we connect to the world around us, and the ways in which we define our domains of life, labour, and language is hugely structured by digital technologies. The digital is ubiquitous and hence, like air, invisible. We live within digital systems, we live with intimate gadgets, we interact through digital media, and even though we might not all be equally digital natives, there is no denying the fact that the very presence and imagination of the digital has dramatically restructured our lives. The digital, far from being a tool, is a condition and context that defines the shapes and boundaries of our understanding of the self, the society, and the structures of governance.

    The pervasive nature of digital technologies and the internet can be found at multiple levels. For instance, we do not think about going online anymore, because most of our devices are connected 24×7 to the digital web. Even when we are not online, stuck with a bad network connection or protecting our precious data usage, we know that our avatars and digital identities are online and talking without us.

    So established is this phenomenon that we even have a name for the anxiety it creates: FOMO — the Fear Of Missing Out. Similarly, the digital can be located at the level of human understanding. We are used to thinking of ourselves as digital systems. We talk about our primary identity as one marked by information overload. We often complain, when faced with too many demands on our time and space, that we don’t have enough bandwidth to deal with new problems, and we are not referring to digital connectivity.

    The digital also operates at the level of policy and governance. If you, like many millions of Indians, have registered for an Aadhaar card, you have already been marked by a digital identity whether or not you have broadband access. When our government launches Digital India campaigns, it is not merely about an economic model of growth; it is suggesting that the digital is going to be at the foundations of the new India that we want to build for the future.

    If the digital is so central to our fundamental understanding of the self, the society, and the state, then surely it is time to stop thinking that these technologies have nothing to do with politics? There remains a forced imagination of technologies as devices, as tools, as prostheses which do not have any other role than the performing of a function. However, this is a fallacy, because not only do technologies shape our sense of who we are, but they also prescribe new templates and models of who we are going to be. In the process, these technologies take political action, create social structures, mobilise cultural possibilities, and often, because they are technologies that are still elite and available to the privileged few in the country, they enable decisions which are not always fair, open, and just.

    Hence, technological decisions cannot be read merely as technical decisions; they are human decisions. To speak of technology is to speak of human life and living. To write about technology is to write about politics, because a separation between the two is not only futile but downright dangerous.

    CIS's Comments on the Draft Geospatial Information Regulation Bill, 2016

    by Pranesh Prakash last modified Jun 05, 2016 03:06 PM
    The Centre for Internet and Society is alarmed by the Draft Geospatial Information Regulation Bill, 2016, and has recommended that the proposed law be withdrawn in its entirety. It offered the following detailed comments as its submission.

    Comments on the Draft Geospatial Information Regulation Bill, 2016

    by the Centre for Internet and Society

    1. Preliminary

    1.1. This submission presents comments and recommendations by the Centre for Internet and Society (“CIS”) on the draft Geospatial Information Regulation Bill, 2016 (“the draft bill” / “the proposed bill” / “the bill”).

    2. Centre for Internet and Society

    2.1. The Centre for Internet and Society is a non-profit organisation that undertakes interdisciplinary research on internet and digital technologies from the perspectives of policy and academic research. The areas of focus include accessibility for persons with disabilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, digital privacy, and cyber-security. The academic research at CIS seeks to understand the reconfiguration of social processes and structures through the internet and digital media technologies, and vice versa.

    2.2. This submission is consistent with CIS’ commitment to safeguarding the public interest, and particularly with representing the interests of ordinary citizens and consumers. The comments in this submission aim to further the principles of people’s right to information regarding their own country, openness-by-default in governmental activities, freedom of speech and expression, and the various forms of public good that can emerge from greater availability of open (geospatial) data created by both public and private agencies, and the innovations made possible as a result.

    3. Comments

    3.1. General Remarks

    3.1.1. While CIS welcomes the intentions of the government to prevent use of geospatial information to undermine national security, the proposed bill completely fails to do so, infringes upon Constitutional rights, harms innovation, undermines the national initiatives of Digital India and Startup India, is completely impractical and unworkable, and it will lead to a range of substantial harms if the government actually seeks to enforce it.

    3.1.2. There are already laws in place that prevent the use of geospatial information to undermine national security. For instance, the Official Secrets Act, 1923 (“OSA”) already contains provisions — sections 3(2)(a), (b), and (c) — all of which would prevent a person from creating maps that undermine national security and would penalise their doing so. Section 5 of the OSA contains multiple provisions that penalise the possession and communication of maps that undermine “national security.” The penalties under the OSA range from imprisonment of up to 3 years all the way to imprisonment up to 14 years. Given this, there is absolutely no need to create yet another law to deal with maps that undermine “national security.” Indeed, it is the government’s stated policy to reduce the number of laws in India, whereas the proposed bill introduces a redundant new law that adds multiple layers of bureaucracy.

    3.1.3. The National Mapping Policy, 2005, already puts in place restrictions on wrongful depictions of India’s international boundaries, and as we explain below in section 3.4 of this document, even the National Mapping Policy is over-broad. Even if the government wishes to provide statutory backing to the policy, it should be a very different law that is far more limited in scope, and restricts itself to criminalising those who misrepresent India’s international boundary with an intention to mislead people into thinking that that is the official boundary of India as recognised by the Survey of India. CIS would support a law of such limited scope and mandate, provided it has an appropriate penalty.

    3.1.4. There would be much utility in a law that creates a duty on the Survey of India to make available, in the form of an open standard, an official electronic version of the maps that it creates, and that expressly allows and encourages citizens and startups to reuse such official maps; the Ministry of Home Affairs, however, would not be the appropriate nodal ministry for such a law.

    3.1.5. We recommend that the proposed law be scrapped in its entirety.

    3.1.6. We additionally provide an alternative manner of reducing the harms caused by this bill, in our comments below. By no means should these further comments be seen as a repudiation of our above position, since we do not feel the proposed bill, even with the inclusion of all of our recommendations, would truly further its stated aims. All our below recommendations would do is to reduce the bill’s harmful, and often unintended, consequences.

    3.2. Definition of “Geospatial Information” is over-broad, all-encompassing

    3.2.1. The second part of the definition of “geospatial information” refers to all “graphic or digital data depicting natural or man-made physical features, phenomenon or boundaries of the earth or any information related thereto” that are “referenced to a co-ordinate system and having attributes.” (Section 2(1)(e)) As per the definition, this will include all geo-referenced information and data that is produced by everyday users as an integral part of various everyday uses of digital technologies. This will also include geo-referenced tweets and messages, locations of public and private vehicles shared in real time with agencies tracking them (from public transport authorities to insurance agencies, etc.), location data of mobile phones collected and used by telecommunication service providers, locations of mobile phones shared by users with various kinds of service providers (from taxi companies to delivery agencies), etc.
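
    To illustrate how little it takes for everyday data to meet this definition, consider a hypothetical, simplified record of the kind routinely generated when a user shares a geotagged message or a vehicle reports its position. The field names below are purely illustrative and are not drawn from the bill or from any particular service; the point is only that the record is “referenced to a co-ordinate system” (here, WGS 84 latitude/longitude) and has “attributes.”

        # Hypothetical, simplified geo-referenced record (illustrative only).
        # Under the broad definition in Section 2(1)(e), even data this ordinary
        # appears to qualify as "geospatial information": it is referenced to a
        # co-ordinate system and carries attributes.
        geo_referenced_message = {
            "coordinates": {"latitude": 12.9716, "longitude": 77.5946},  # WGS 84
            "attributes": {
                "timestamp": "2016-06-03T10:15:00+05:30",
                "source": "mobile phone GPS",
                "text": "Reached the venue, see you soon",
            },
        }

        print(geo_referenced_message["coordinates"])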

    3.2.2. We recommend that instead of regulating all kinds of geospatial information, and giving rise to a range of possible harms, the draft bill be revised to specifically address “sensitive geospatial information,” defined as geospatial information related to the “Prohibited Places” as defined in the Official Secrets Act 1923 (section 2(8)) which will allow the bill to effectively respond to its key stated concerns of ensuring “security, sovereignty and integrity of India.” Since the National Map Policy defines “Vulnerable Points” and “Vulnerable Areas” (para 3(b)) as the two main types of geospatial units associated with “Prohibited Places”, these terms should also be referred to in the revised version of the draft bill.

    3.3. Unreasonable regulation of acquiring and end-use of geospatial information

    3.3.1. Section 3 of the draft bill states that “[s]ave as otherwise provided in this Act, rules or regulations made thereunder, or with the general or special permission of the Security Vetting Authority, no person shall acquire geospatial imagery or data including value addition” and “[e]very person who has already acquired any geospatial imagery or data ... including value addition prior to coming of this Act into effect, shall within one year from the commencement of this Act, make an application alongwith requisite fees to the Security Vetting Authority.” This effectively makes it illegal to acquire and maintain ownership of geospatial information that has not been subjected to security vetting.

    3.3.2. This draft bill does not apply only to geospatial information that may undermine national security; it covers all manner of geospatial information. Modern geospatial technologies, embedded in everyday digital devices and intimately connected to various electronic products and services, from cars to mobile phones, result in the creation and acquisition of various kinds of geo-referenced information, ranging from geo-referenced photographs to locations shared with friends. Even ordinary users who are unknowingly looking at maps that contain sensitive geospatial information, which are illegal under the Official Secrets Act, are committing an illegal act under the draft bill, because the user temporarily acquires such sensitive geospatial information on her/his digital device as part of the very act of browsing the map concerned. This clearly cannot be the intention of the bill. Thus we recommend deletion of the word “acquire.”

    3.3.3. Further, the insertion of the phrase “including value addition” in both Section 3(1) and 3(2) appears to suggest that all users who have created derivative products using geospatial information that includes sensitive data (that is data related to Prohibited Places) may be held liable under this draft bill, even if these users have not themselves collected or created such sensitive geospatial information, which was part of the original geospatial information published by the source map agency. This too cannot be the intention of the bill. Thus, we recommend deletion of the phrase “including value addition.”

    3.3.4. In the definition of the “Security Vetting of Geospatial Information” itself, it is mentioned that the process will include “screening of the credentials of the end-users and end-use applications, with the sole objective of protecting national security, sovereignty, safety and integrity.” (Section 2(1)(o)) This appears to indicate that all end-users of all electronic and analog services and products using geospatial information will have to be individually vetted before such services and products are used, which would cover a large proportion of the Indian population. This imposes an enormous and impractical burden on the Indian digital economy in particular, and the entire national economy in general, without improving national security. This too cannot be the intention of the draft bill. Thus, we recommend deleting this phrase and ensuring that end users are not covered by the law.

    3.3.5. Given these specific characteristics of how modern geospatial technologies work, and how they provide a basis for various kinds of everyday use of electronic products and services, we would like to submit that the regulatory focus should be on large-scale and/or commercial dissemination, publication, or distribution of geospatial information, and not on the acts of acquiring, possessing, sharing, and using geospatial information. Further, the regulation in general should be aimed at the party owning the geospatial information in question, and not at the parties involved in its dissemination (say, Internet Service Providers) or in its generation or use (say, end-users).

    3.4. Removal of journalistic, political, artistic, creative, and speculative depictions of India from the scope of Section 6

    3.4.1. Section 6 of the draft bill states that “[n]o person shall depict, disseminate, publish or distribute any wrong or false topographic information of India including international boundaries through internet platforms or online services or in any electronic or physical form.” Section 15 imposes a penalty for such wrong depiction of maps of India.

    3.4.2. Depictions of India, which do not purport to accurately represent the international boundaries as recognised by the Indian government should not be penalised. For instance, a map published in a newspaper article about India’s border disputes that shows the incorrect claims that the Chinese government has made over Indian territory would also be penalised as “wrong or false topographic information of India”, since there is a clear intention to depict the boundary as claimed by China. Criminalising such journalism cannot be the legitimate intent of such a provision.

    3.4.3. There are numerous instances of maps of India whose international borders have been wilfully depicted inaccurately and inauthentically for political ends. For instance, there are often depictions of India which show territories within present-day Pakistan, Bangladesh, Bhutan, Nepal and Sri Lanka as part of an “Akhand Bharat.” Depictions of this sort should not be penalised either. Penalising them would contravene the freedom of expression guaranteed under Article 19(1)(a) without being a reasonable restriction under Article 19(2).

    3.4.4. Even depictions of India for purposes of speculative fiction would be penalised under this proposed bill unless they depict the official borders. This is clearly undesirable and would not be allowed as a reasonable restriction under Article 19(2).

    3.4.5. Even geography students in schools and colleges who mis-draw the official map of India would be liable to penalties under the draft bill. This, plainly, cannot be the intention of the drafters of the bill. The creator of a rough and inaccurate tourist map of an Indian city could also be held to have committed a criminal act under the proposed bill, as she would be depicting “… wrong or false topographic information of India …”

    3.4.6. In brief: merely depicting, disseminating, publishing, or distributing any “wrong or false topographic information of India” should not be penalised. Unless a person publishes and widely circulates an incorrect map of India while claiming that it represents the official international boundaries of India, such depiction should not be penalised.

    3.4.7. CIS recommends that the bill should instead state: “No person shall depict, disseminate, publish, or distribute any topographic information purporting to accurately depict the international boundaries of India as recognised by the Survey of India unless he is authorised to do so by the Surveyor General of India; provided that usage by any person of the international boundaries as made available electronically and in print by the Survey of India shall be deemed to be usage that is authorised by the Surveyor General of India.”

    3.5. Absence of Publicly Available and Openly Reusable Standardised National Boundary of India

    3.5.1. Given the lack of reusable versions of maps of India, including of India’s official boundary as recognised by the Survey of India, it becomes impossible for people to accurately depict the boundary of India. We recommend that the bill require the Survey of India to publish all “Open Series Maps,” as defined in the National Mapping Policy, 2005, including maps depicting the official international and subnational political and administrative boundaries of India, using open geospatial standards and under an open licence allowing such geospatial data to be used by citizens and all companies.

    3.6. Remove Requirement for Prior License for Acquisition, Dissemination, Publication, or Distribution of Geospatial Information

    3.6.1. Section 9 of the draft bill refers to “any person who wants to acquire, disseminate, publish, or distribute any geospatial information of India” (emphasis added), which can be interpreted as the need for a prior license before any person decides to acquire (including creation, collection, generation, and buying) geospatial information. This creates at least two problems:

    • modern digital geospatial technologies have enabled everyday digital devices (like smartphones) to instantaneously acquire, disseminate, publish, and distribute geospatial information all the time when the person holding that device is looking at online digital maps, say Google Maps, or sharing location with their friends, online platforms and services and service providers (both local and foreign); and

    • the requirement of a prior license involves payment of a “requisite fees” to the Security Vetting Authority, which may act as an arbitrary (since the fee might be based upon the volume of geospatial information to be acquired, which one may not be able to fully determine before acquiring it) and effective barrier to the acquisition, dissemination, publication, or distribution of geospatial information even if it does not violate the concerns of “security, sovereignty, and integrity” in any manner. This requirement also impedes competition in the market, because new entrants to the geospatial industry may not have enough upfront capital to procure licenses.

    3.6.2. Further, the requirement of necessary prior license for acquiring geospatial information does not seem to be a crucial component of the security vetting process, since the geospatial information, once acquired by the agency concerned, is in any case directed to be shared with the Security Vetting Authority for undertaking necessary expunging of sensitive or incorrect information.

    3.6.3. We recommend revision of this section so that no prior license and/or permission is required for the collection, acquisition, distribution, and/or use of geospatial information; instead, a framework may be established for monitoring published geospatial information to ensure that geospatial information pertaining to “Prohibited Places,” as defined under the Official Secrets Act, is not made available to the general public by any person or entity under Indian jurisdiction, including, for instance, Indian subsidiaries and branches of foreign corporations. Such a framework must not address the end-users of such geospatial information, but its publishers.

    3.7. Unenforceable jurisdictional scope

    3.7.1. Section 5 of the draft bill states “[s]ave as otherwise provided in any international convention, treaty or agreement of which India is signatory or as provided in this Act, rules or regulations made thereunder, or with the general or special permission of the Security Vetting Authority, no person shall, in any manner, make use of, disseminate, publish or distribute any geospatial information of India, outside India, without prior permission from the Security Vetting Authority.”

    3.7.2. In compliance with this section, domestic and foreign companies and platforms will be required to obtain permission from the Security Vetting Authority of India prior to publishing, distributing, etc., geospatial information. Similarly, in its preliminary provisions, the draft bill brings any person who commits an offence beyond India within its scope. The bill thus proposes extraterritorial applicability of its provisions, yet the extent and method of enforcing this in other jurisdictions are left unclear.

    3.8. Negative implications for rights of citizens

    3.8.1. There are a number of sections in the draft bill which have negative implications for the rights of all users and potentially impinge on the constitutional rights of Indian citizens. These include:

    a. Section 18(2) which empowers the Enforcement Authority to conduct a search without a judicial search order;

    b. Section 17(3) which empowers the Enforcement Authority to conduct undefined surveillance and monitoring to enforce the Act;

    c. Chapter (V) which penalises individuals with fines ranging from Rs. 1 crore to Rs. 100 crore and/or up to seven years in prison for an offence under the Act;

    d. Section 22 which allows the government to take ownership of a person’s land if a financial penalty has not been paid;

    e. Section 30(1) which holds, in the case of the offense being committed by a company, every person in charge of and responsible for the conduct of business of the company, guilty and liable.

    3.9. Overly broad powers and responsibilities of the Apex Committee and Enforcement Authority, and lack of adequate oversight

    3.9.1. Section 7(2) states that “[t]he Apex Committee shall do all such acts and deeds that may be necessary or otherwise desirable to achieve the objectives of the Act, including the following functions:...” The wording in this section is broad and open ended, and allows for the responsibilities of the Apex Committee to be expanded without clear oversight of such expansion.

    3.9.2. Similarly, section 17 establishes an “Enforcement Authority” for the purpose of carrying out surveillance and monitoring for enforcement of the draft bill. The Authority has been given a number of powers including the power of inquiry, the power to adjudicate, and the power to give directions. These powers have direct implications for the rights of individuals, yet the Authority is not subject to oversight or accountability requirements.

    3.9.3. We recommend that the powers and responsibilities of the Apex Committee and the Enforcement Authority be narrowly defined in the draft bill itself, limited by the principle of necessity, and subject to independent oversight and accountability requirements.

    3.10. Remove the Security Vetting Authority’s power of delegation

    3.10.1. Section 8(3) allows the Security Vetting Authority to delegate to any constituent member of the Authority, any subordinate committee, or any officer such powers and functions as it may deem necessary, except the power to grant a licence. In practice, this will allow security vetting to be done by another institution and risks the potential involvement of private agencies and/or quasi-governmental bodies.

    3.10.2. We recommend that the power of delegation should not be granted to the Security Vetting Authority.

    3.11. Negative implications for innovation and India’s digital economy

    3.11.1. Section 3 of the draft bill states “[s]ave as otherwise provided in this Act, rules or regulations made thereunder, or with the general or special permission of the Security Vetting Authority, no person shall acquire geospatial imagery or data including value addition of any part of India either through any space or aerial platforms such as satellite, aircrafts, airships, balloons, unmanned aerial vehicles or terrestrial vehicles, or any other means whatsoever”. This effectively ensures that each and every user of geospatial data, products, services, and solutions — since all of these either include or are derivatives of geospatial information — would require prior permission from the Security Vetting Authority. This will substantially affect the existing and emerging digital economy in particular, and the entire economy in general.

    3.11.2. Further, Section 9 of the draft bill mandates that any person submitting an application for geospatial information to be vetted must pay a fee. As the provisions of the bill mandate that users approach the Security Vetting Authority for license to use geospatial information, this will impose an immense burden on all users of digital devices in and outside of India. CIS submits that imposition of this fee for security vetting be removed.

    3.12. Disproportionate penalty for acquisition of geospatial information

    3.12.1. Section 12 states that “[p]enalty for illegal acquisition of geospatial information of India.- Whoever acquires any geospatial information of India in contravention of section 3, shall be punished with a fine ranging from Rupees one crore to Rupees one hundred crore and/or imprisonment for a period upto seven years.” Seven years in prison is disproportionate to the offense of acquiring geospatial information without vetting by the authority concerned. This is particularly true given the broad and all-encompassing definition of “geospatial information” in the draft bill, and the fact that the bill applies to individuals and companies both within and outside of India.

    3.13. Improper and inconsistent usage of terms in the draft bill

    3.13.1. Section 4 of the draft bill regulates the visualization, publication, dissemination and distribution of geospatial information of India, while section 5 regulates the use, dissemination, publication, and distribution of geospatial information outside of India. The definition of “visualization” remains unclear, and the act is only regulated in section 4. Section 6 of the draft bill uses the term ‘depict’, which is also undefined. We submit that, in this context, these terms are used interchangeably, and the draft bill should either define them expressly to avoid ambiguity in interpretation, or consistently use only one of them throughout.

    3.13.2. Section 11(3) of the draft bill requires licensees to “[d]isplay the insignia of the clearance of the Security Vetting Authority on the security-vetted geospatial information by appropriate means such as water-marking or licence as relevant, while disseminating or distributing of such geospatial information.” We observe that geospatial information includes, inter alia, graphical representations and location coordinates. While the former may be represented visually on an “as is” basis after the completion of the vetting, the latter may be used to perform other complex functions at the “back-end” (i.e., vendor-facing side) of various technologies. Water-marking and/or displaying an insignia would place an undue burden on the licensee, depending on the kind of platform, service, or individual concerned.

    3.14. Lack of reference to technical implementation guidance

    3.14.1. The regulation, harmonisation, and standardisation of the collection, generation, dissemination etc. of geospatial information is a complex process that goes beyond a process of security vetting and that will require extensive technical implementation guidance from the government. At a minimum this could include quality assurance considerations and standard operating procedures, yet the draft bill makes no reference to the need for technical standards or guidance.

    Comments prepared by Sumandro Chattapadhyay, Adya Garg, Pranesh Prakash, Anubha Sinha, and Elonnai Hickok. Submitted by the Centre for Internet and Society, on June 3, 2016.

    Smart City Policies and Standards: Overview of Projects, Data Policies, and Standards across Five International Smart Cities

    by Kiran A. B., Elonnai Hickok and Vanya Rakesh — last modified Jun 11, 2016 01:29 PM
    This blog post reviews five Smart Cities across the globe, namely Singapore, Dubai, New York City, London and Seoul, and the data policies and standards they have adopted. The research also seeks to point out the similarities, differences and best practices in the development of smart cities across jurisdictions.

     

    Download the brief: PDF.


    Introduction

    Smart City as a concept is evolutionary in nature, and the key elements like Information and Communication Technology (ICT), digitization of services, Internet of Things (IoT), open data, big data, social innovation, knowledge, etc., would be intrinsic to defining a Smart City [1].

    A Smart City, as a “system of systems”, can potentially generate vast amounts of data, especially as cities install more sensors, gain access to data from sources such as mobile devices, and government and other agencies make more data accessible. Consequently, Big Data techniques and concepts are highly relevant to the future of Smart Cities. It was noted by Kenneth Cukier, Senior Editor of Digital Products at The Economist, that Big Data techniques can be used to enhance a number of processes essential to cities - for example, big data can be used to spot business trends, determine the quality of research, prevent diseases, track legal citations, combat crime, and determine real-time roadway traffic conditions [2]. Having said this, data is deemed to be the lifeblood of a Smart City, and its availability, use, cost, quality, analysis, associated business models and governance are all areas of interest for a range of actors within a smart city [3].

    This blog reviews five Smart Cities, namely Singapore, Dubai, New York City, London and Seoul. In doing so, the research seeks to point out the similarities, differences and best practices in the development of smart cities across jurisdictions. To achieve this, the research reviews:

    • The definition of a Smart City in a given context or project (if any).
    • Existing policies/regulations around data, or the lack thereof.
    • The city’s adherence to international standards, along with an update on the current status of the Smart City programme.

     

    Singapore

    Introduction

    The Smart Nation programme in Singapore was launched on 24th November, 2014. The programme is being driven by the Infocomm Development Authority of Singapore (IDA), through which Singapore seeks to harness ICT, networks and data to support improved livelihoods, stronger communities and the creation of new opportunities for its residents [4]. According to the IDA, a Smart Nation is a city where “people and businesses are empowered through increased access to data, more participatory through the contribution of innovative ideas and solutions, and a more anticipatory government that utilises technology to better serve citizens’ needs” [5]. The Smart Nation programme is driven by a designated Office in the Prime Minister’s Office [6]. As a core component of the Smart Nation Programme, the Smart Nation Platform has been developed as the technical architecture to support the Programme. This Platform enables greater pervasive connectivity, better situational awareness through data collection, and efficient sharing of and access to collected sensor data, allowing public bodies to use such data to develop policy and practical interventions [7]. Such access would allow for anticipatory governance, a goal of the Smart Nation Programme, as noted by Dr. Yaacob Ibrahim, Minister for Communications and Information: “Insights gained from this data would enable us to better anticipate citizens’ needs and help in better delivery of services” [8].

    Status of the Project

    The Smart Nation Programme is an ongoing initiative built on the earlier Intelligent Nation 2015 (iN2015) masterplan. The plan involves putting in place the infrastructure, policies, ecosystem and capabilities to enable a Smart Nation, by adopting a people-centric approach [9]. Co-creation measures adopted by the Government include:
    • Development of mobile apps to facilitate communication between the public and the providers of public services.
    • Organization of hackathons by government agencies or corporations, in collaboration with schools and industry partners, to ideate and develop solutions to tackle real-world challenges.
    • Adoption of smart mobility measures to create a more seamless transport experience and provide greater access to real-time transport information so that citizens can better plan their journeys.
    • Introduction of smart technologies in housing estates [10].

    Policies and Regulations

    The Smart Nation plan derives its legitimacy from the Constitution of Singapore, which makes the Prime Minister responsible for the ‘Smart Nation’ blueprint through the statutory body, the ‘Smart Nation’ Programme Office [11]. Singapore has a comprehensive data protection law, the Personal Data Protection Act 2012, which lays down rules governing the collection, use, disclosure and care of personal data. The Personal Data Protection Commission of Singapore has committed to working closely with the private sector, and to supporting the Smart Nation vision on data privacy and the cyber security ecosystem [12] [13].

    Towards achieving the Smart Nation vision, the government has also promoted the use of open data. In 2015, the Department of Statistics made a vast amount of data (across multiple themes such as transport, infocomm and population) freely available to the public in order to encourage innovation and facilitate the Smart Nation [14]. Prior to this initiative, the government had adopted the Open Data Policy in 2011, making public data available for analysis, research and application development [15]. The concept of Virtual Singapore, a part of the Smart Nation Initiative, has been developed to simulate solutions on a virtual platform using big data analytics [16].

    Adoption of International Standards

    The Smart Nation initiative follows the standards laid down under the purview of the Singapore Standards Council (SSC). It specifies three types of Internet of Things (IoT) standards – sensor network standards (TR38 for public areas and TR40 for homes), IoT foundational standards (a common set of guidelines for IoT requirements and architecture, information and service interoperability, security and data integrity) and domain-specific standards (healthcare, mobility, urban living, etc.) [17].

    Singapore is part of ISO/IEC JTC 1/WG7 Sensor Networks and ISO/IEC JTC 1/WG10 Internet of Things (IoT) [18]. Singapore's IT standards conform to international standards as defined by the ISO, ITU, etc. Singapore is a member of many international standards forums (see the Singapore International Standards Committee), which include JTC1/WG9 - Big Data; JTC1/WG10 - Internet of Things; and JTC1/WG11 - Smart Cities.

     

    Dubai, United Arab Emirates

    Introduction

    The Dubai Smart City strategy was launched in 2015 as part of the Dubai Plan 2021 vision [19]. Dubai Plan 2021 describes the future of Dubai as evolving through holistic and complementary perspectives, starting with the people and the society, and places the government as the custodian of the city’s development. Within the Plan, the smart city theme envisions a fully connected and integrated infrastructure that enables easy mobility for all residents and tourists and provides easy access to all economic centers and social services, in line with the world’s best cities [20]. Central to the smart city platform are data and data analytics, particularly cross-functional data and big data techniques, to give a complete view of the city [21]. As envisioned, the Dubai Data portal would provide a gateway to empower relevant stakeholders to understand the nuances of the city and pursue questions that will result in the greatest impact from the city’s data [22]. The platform will build on current data and existing services, initiatives, and networks to identify opportunities for a smart city [23]. The Smart City Plan also includes a framework for aligning districts of Dubai with the Smart City vision and dimensions [24].

    The Smart Dubai roadmap 2015 provides a consolidated report of planned smart city services, their status and the stage of their implementation - for example, Smart Grid, Mobile Payment, Smart Water, health applications, public Wi-Fi, municipality and e-traffic solutions [25].

    Status of the Project

    The Smart Dubai strategy is envisioned to be completed by 2020 and is currently ongoing. The first phase of the Smart Dubai masterplan is expected to end by 2016. Between 2017 and 2019, the plan aims to deliver new initiatives and services. The second phase of the masterplan is expected to be completed by 2020 [26].

    Policies and Regulations

    The Smart City Plan is being driven by the Dubai Smart City Office, which has been established under Law No. (29) of 2015 on the establishment of the Dubai Smart City Office; Law No. (30) of 2015 on the establishment of the Dubai Smart City Establishment; Decree No. (37) of 2015 on the formation of the Board of the Dubai Smart City Office; and Decree No. (38) of 2015 appointing a Director General for the Office. The Office will develop overall policies and strategic plans, supervise the smart transformation process and approve joint initiatives, projects and services [27]. An open data law, the Dubai Open Data Law, has also been issued to complete the legislative framework for transforming Dubai into a Smart City [28]. This law will enable the sharing of non-confidential data between public entities and other stakeholders.

    Adoption of International Standards

    In 2015, the Smart Dubai Executive Committee entered into an agreement with the International Telecommunication Union (ITU) to adopt the performance indicators developed by the ITU Focus Group on Smart Sustainable Cities and to evaluate their feasibility [29]. The Focus Group is working towards identifying global best practices for the development of smart cities [30].

     

    New York City, United States of America

    Introduction

    The ‘One New York Plan’, announced in 2015, is a comprehensive plan for a sustainable and resilient city. It includes the adoption of digital technology and considers the importance of the role of data in transforming every aspect of the economy, communications, politics, and individual and family life [31]. Furthermore, through a publication on 'Building a Smart+Equitable City', the Mayor’s Office of Technology and Innovation (MOTI) describes its efforts to leverage new technologies to build a smart city.

    Accordingly, the plan seeks to improve lives by establishing principles and strategic frameworks to guide connected-device and Internet of Things (IoT) implementation; having MOTI serve as the coordinating entity for new technology and IoT deployments across all City agencies; collaborating with academia and the private sector on innovative pilot projects; and partnering with municipal governments and organizations around the world to share best practices and leverage the impact of technological advancements [32].

    Status of the Project

    OneNYC represents a unified vision for a sustainable, resilient, and equitable city developed with cross-cutting interagency collaboration, public engagement, and consultation with leading experts in their respective fields. The Mayor’s Office of Sustainability oversees the development of OneNYC and now shares responsibility with the Mayor’s Office of Recovery and Resiliency for ensuring its implementation [33].

    Policies and Regulations

    As per Local Law 11 of 2012, each City entity must identify and ultimately publish all of its digital public data for citywide aggregation and publication by 2018. In adherence to this law, the NYC Open Data Plan requires annual updates to the published data [34].

    The LinkNYC initiative, one of the key projects to make New York a ‘smart’ city, aims to connect everyone through a city-wide Wi-Fi network. The initiative will retrofit payphones with kiosks that provide high-speed Wi-Fi hotspots and charging stations for increased connectivity [35]. Data privacy in the initiative is addressed through a customer-first privacy policy, which prioritises users' privacy and commits not to sell any personal information or share it with third parties for their own use. LinkNYC will use anonymized, aggregate data to make the system more efficient and to develop insights that improve the Link experience [36].
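    As an illustration of what working with anonymized, aggregate data can mean in practice, the sketch below counts sessions per kiosk and hour while discarding device identifiers. This is purely a hypothetical example of the general technique, not LinkNYC's actual data pipeline; the session log entries are made up.

        # Hypothetical aggregation sketch (not LinkNYC's actual pipeline).
        from collections import Counter

        # Hypothetical raw session log entries: (device_id, kiosk_id, hour_of_day)
        sessions = [
            ("device-001", "kiosk-42", 9),
            ("device-002", "kiosk-42", 9),
            ("device-001", "kiosk-17", 18),
        ]

        # Drop the device identifier entirely; keep only (kiosk, hour) counts.
        usage = Counter((kiosk, hour) for _device, kiosk, hour in sessions)

        for (kiosk, hour), count in sorted(usage.items()):
            print(f"{kiosk} at {hour:02d}:00 -> {count} sessions")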

    Adoption of International Standards

    The ANSI Network on Smart and Sustainable Cities (ANSSC) is a forum for information sharing and coordination on voluntary standards, conformity assessment and related activities for smart and sustainable cities in the US [37]. The US is a signatory to the ISO/ITU-defined standards on smart cities [38].

     

    London, United Kingdom

    Introduction

    The Smart London Plan was unveiled in 2013 by the Mayor of London. The plan is being driven by the Greater London Authority, with the advice of the Smart London Board. The Smart London Plan envisions ‘Using the creative power of new technologies to serve London and improve Londoners’ lives’ [39]. ‘Smart London’ is about harnessing new technology and data so that businesses, Londoners and visitors experience the city in a better way, without bureaucratic hassle and congestion. Smart London seeks to improve the city as a whole and focuses on city macro functions that result from the interplay between city subsystems - from local labour markets to financial markets, and from local government to education, healthcare, transportation and utilities. According to the strategy documents, a smarter London recognises and employs data as a service and will leverage data to enable informed decision-making and the design of new activities.

    Status of the Project

    This project is currently ongoing. Since its formation in March 2013, the Smart London Board has been advising the Greater London Authority. The Plan sits within the overarching framework of the Mayor’s Vision 2020 [40].

    Policies and Regulations

    The Smart London Plan incorporates the existing open data platform called the ‘London DataStore’. The rules and guidelines for this platform are defined by the Greater London Authority, which works with public and private sector organisations to create, maintain and utilise it; enables common data standards; identifies and prioritises which data are needed to address London’s growth challenges; and has established a Smart London Borough Partnership to encourage boroughs to free up London’s local-level data. Privacy is also protected and data use is transparent, to ensure that data is managed in the best interests of the public rather than of private enterprise. The Smart London Plan aims to build on this existing datastore to identify and publish data that addresses specific growth challenges, with an emphasis on working with companies and communities to create, maintain, and use this data [41].

    The Open Data White Paper, issued by the Office of Paymaster General, seeks to build a transparent society by releasing public data through open data platforms and leveraging the potential of emerging technologies [42]. The Greater London Authority processes personal data in accordance with the Data Protection Act 1998 [43].

    Adoption of International Standards

    The British Standards Institution (BSI) has established Smart City standards and is associated with the ISO Advisory Group on smart city standards. The UK subscribes to the BSI standards for smart cities and has adopted them; BSI's standards and publications help address various issues involved in a city becoming a smart city [44].

    Further, the Smart London Plan incorporates open data standards in accordance with the London DataStore [45]. Various government reports – the Smart Cities background paper, the Open Data White Paper, etc. – have suggested the use of standards related to the Internet of Things (IoT), open data standards, etc. [46].

     

    Seoul, Korea

    Introduction

    Smart Seoul 2015 was announced in June 2011 by the Seoul Metropolitan Government, and envisions integrating IT services into every field, including administration, welfare, industry and living. Through this, the Seoul Metropolitan Government plans to create a Seoul that uses smart technologies by 2015 [47]. Towards this, the Seoul Metropolitan Government plans to make use of Big Data in policy development and, through scientific analytics, to provide customized administrative services and reduce wasteful spending. The government is also utilising Big Data to analyse trends emerging from existing services [48]. Examples of projects that leverage big data include the Taxi Matchmaking Project, which analyses data related to taxi stands and passengers, and the Owl Bus [49], which maps bus routes using such data.

    Status of the Project

    Building on Smart Seoul 2015, the Seoul Metropolitan Government plans to realise the 'Global Digital Seoul 2020 – New Connections, Different Experiences' vision over the next five years. Among the objectives of this multi-objective plan is the establishment of a ’Big Data Campus’ to foster win-win cooperation among the public sector, the private sector, industry and universities [50].

    Policies and Regulations

    Smart Seoul 2015 aims to create a ‘Seoul Data Mart’, an open platform that makes public information available for data processing [51]. Furthermore, Seoul has opened the Seoul Open Data Plaza [52], an online channel to share and provide citizens with all of Seoul’s public data, such as real-time bus operation schedules, subway schedules, non-smoking areas, locations of public Wi-Fi services, shoeshine shops, and facilities for disabled people. The information registered in the Seoul Open Data Plaza is provided in an open API format.
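    As an illustration of what consuming such an open API typically looks like, here is a minimal Python sketch. The endpoint and parameters are hypothetical placeholders, not the actual Seoul Open Data Plaza API; real use would follow the portal's own documentation, data formats and authentication rules.

        # Illustrative open-data client (the endpoint below is a hypothetical placeholder).
        import json
        import urllib.request

        API_URL = "https://opendata.example.org/api/bus-schedules?format=json"  # placeholder URL

        def fetch_records(url):
            """Download a JSON payload from an open-data endpoint; assumes the payload is a JSON array."""
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)

        if __name__ == "__main__":
            records = fetch_records(API_URL)
            print(f"Fetched {len(records)} records")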

    South Korea has a comprehensive law governing data privacy – Personal Information Protection Act, 2011. The law includes data protection rules and principles, including obligations on the data controller and the consent of data subjects, rights to access personal data or object to its collection, and security requirements. It also covers cookies and spam, data processing by third parties and the international transfer of data [53].

    Adoption of International Standards

    Smart city standards have been adopted in the development of smart cities in Korea [54]. Korea has adopted ISO/TC 268, which is focused on sustainable development in communities. Korea also has one working group developing city indicators and another working group developing metrics for smart community infrastructures [55].

     

    Conclusion

    The smart city projects studied are at different levels of implementation and have both similarities and differences. Below is an analysis of some of the key similarities and differences between smart city projects, a comparison of these points to India’s 100 Smart City Mission, and a summary of best practices around the development of smart city frameworks.

    Nodal Agency

    All cities studied have nodal agencies driving the smart city initiatives, and many have policies in place backing these initiatives. For example, while the Smart Nation programme in Singapore is being driven by the Infocomm Development Authority, in London the smart city project is governed by the Greater London Authority. The Smart Seoul Project in Korea is governed by the Seoul Metropolitan Government, and New York has the Mayor’s Office of Technology and Innovation serving as the coordinating entity for new technology and IoT deployments across all City agencies. In India, the nodal agency driving the 100 Smart Cities Project is the Ministry of Urban Development. The implementation of the Mission at the city level will be done by a Special Purpose Vehicle (SPV), a limited company that will plan, appraise, approve, release funds for, implement, manage, operate, monitor and evaluate the Smart City development projects.

    Policies

    Many of the cities have open data policies and data protection policies that pertain to their Smart City initiatives. In Dubai, an open data law, the Dubai Open Data Law, has been issued to complete the legislative framework for transforming Dubai into a Smart City, and the Smart City Establishment will develop policies for the project. New York has an Open Data Plan in place, and LinkNYC will use anonymized, aggregate data to address the data privacy of users. In London, the Smart London Plan incorporates the existing open data platform, the ‘London DataStore’, the rules for which are defined by the Greater London Authority, which also ensures privacy and the transparent use of data by processing personal data in accordance with the Data Protection Act 1998. For the regulation of data in Seoul, a ‘Seoul Data Mart’ will be established to make public information available for data processing, and the Seoul Open Data Plaza is an existing online channel to share and provide citizens with all of Seoul’s public data. South Korea also has a comprehensive law governing data privacy. In Singapore, the Personal Data Protection Commission has committed to supporting the Smart Nation vision on data privacy and the cyber security ecosystem, and the government has also promoted the use of open data to achieve the vision of the project.

    It can be said that these cities, with clearly laid out policies to support and guide their projects, have well-planned ecosystems for the regulation and governance of systems, technologies and cities. All cities have incorporated open data into their smart city plans and many have developed guidelines for its use. All cities have similar goals of enhancing the lives of citizens and developing anticipatory governance; however, there appears to be little discussion on the need to amend existing law or enact new law around privacy and data protection in light of data collection through smart cities.

    In India, no enabling legislation or policy has been formulated by the Government apart from the “Mission Statement and Guidelines”, which provides details about the Project and its vision but excludes a definition of a ‘smart city’ and the relevant applicable laws and policies. No information is publicly available regarding the deployment of open data, the use of specific technologies like cloud and big data, or the relevant policies and applicability of laws. Unlike India, all the cities studied recognize the importance of big data techniques in enabling smart city visions, technology and policies. On the lines of these cities, India must work towards an open data framework for the 100 Smart Cities Mission to enable the sharing of non-confidential data between public entities and other stakeholders. This requires the Government to coordinate, incorporate, enable and draw upon open data architecture in the cities, building on the existing open data framework in India, such as the National Data Sharing and Accessibility Policy, 2012. The use of technology in the form of IoT and Big Data entails access to open data, bringing another policy area into its ambit which needs consideration. The identification and development of open standards for IoT must also be looked at. Further, as data in smart cities will be generated, collected, used, and shared by both the public and private sectors, it is essential that India’s existing data protection standards and regime be amended to extend data regulation beyond body corporates and to oversee the collection and use of data by the Government and its agencies.

    Standards

    In Singapore, the Smart Nation initiative follows the standards laid down under the purview of the Singapore Standards Council (SSC), and Singapore's IT standards conform to international standards as defined by the ISO, ITU, etc. The country is also a member of many international standards forums (see the Singapore International Standards Committee), including JTC1/WG9 - Big Data; JTC1/WG10 - Internet of Things; and JTC1/WG11 - Smart Cities. In Dubai, the Smart Dubai Executive Committee has an agreement with the International Telecommunication Union (ITU) to adopt the performance indicators developed by the ITU Focus Group on Smart Sustainable Cities and to evaluate their feasibility. In the US, the ANSI Network on Smart and Sustainable Cities (ANSSC) is a forum for information sharing on standards for smart and sustainable cities, and the US is a signatory to the ISO/ITU-defined standards on smart cities. The British Standards Institution (BSI) has established Smart City standards and is associated with the ISO Advisory Group on smart city standards; the UK subscribes to the BSI standards for smart cities and has adopted them, and the Smart London Plan incorporates open data standards in accordance with the London DataStore. For the development of smart cities, Korea has adopted ISO/TC 268, which is focused on sustainable development in communities, and also has one working group developing city indicators and another developing metrics for smart community infrastructures. In India, by contrast, the Bureau of Indian Standards (BIS) has undertaken the task of formulating standardised guidelines for central and state authorities in the planning, design and construction of smart cities, by setting up a technical committee under the Civil Engineering Department of the Bureau. Adoption of these standards by implementing agencies would be voluntary, and the guidelines are intended to complement internationally available documents in this area. The Global Cities Institute (GCI) also undertook a mission in 2015 to align with the Bureau of Indian Standards on the development of smart city standards and to forge relationships with Indian cities in light of ISO 37120. It can be said that India has not yet adopted international standards, but is in the process of developing national standards and adopting key international standards. Unlike the other cities, which are adopting standards - national, ISO, or ITU - Indian cities are yet to adopt standards for the regulation of future smart cities.

    Notes for India

    India is in the nascent stages of developing smart cities across the country. Drawing from the practices adopted by cities across the world, smart cities in India should adopt strong regulatory and governance frameworks covering technical standards, open data, data security and data protection policies. These policies will be essential in ensuring the sustainability and efficiency of smart cities while safeguarding individual rights. Some of these policies are already in place, such as India’s Open Data Policy and India’s data protection standards under section 43A of the IT Act. It will be important to see how these policies are adopted and applied in the context of smart cities.

     

    References

    [1] Smart Cities and Transparent Evolution, http://www.posterheroes.org/Posterheroes3/_mat/PH3_eng.pdf.

    [2] "Data, Data Everywhere." The Economist, February 25, 2010. Accessed March 17, 2016, http://www.economist.com/node/15557443.

    [3] "Smart Cities." ISO. 2015. Accessed March 17, 2016, http://www.iso.org/iso/smart_cities_report-jtc1.pdf.

    [4] Transcript of Prime Minister Lee Hsien Loong's speech at Smart Nation launch on 24 November, http://www.pmo.gov.sg/mediacentre/transcript-prime-minister-lee-hsien-loongs-speech-smart-nation-launch-24-november.

    [5] Smart Nation Vision, https://www.ida.gov.sg/Tech-Scene-News/Smart-Nation-Vision.

    [6] Smart Nation, http://www.pmo.gov.sg/smartnation.

    [7] Smart Nation Platform, https://www.ida.gov.sg/~/media/Files/About%20Us/Newsroom/Media%20Releases/2014/0617_smartnation/AnnexA_sn.pdf.

    [8] Transcript of Prime Minister Lee Hsien Loong's speech at Smart Nation launch on 24 November, https://www.ida.gov.sg/blog/insg/featured/singapore-lays-groundwork-to-be-worlds-first-smart-nation/.

    [9] Prime Ministers’ Office Singapore-Smart Nation, http://www.pmo.gov.sg/smartnation.

    [10] Prime Ministers’ Office Singapore-Smart Nation, http://www.pmo.gov.sg/smartnation.

    [11] Constitution of the Republic of Singapore (Responsibility of the Prime Minister) Notification 2015, http://statutes.agc.gov.sg/aol/search/display/view.w3p;page=0;query=Status%3Acurinforce%20Type%3Aact,sl%20Content%3A%22smart%22;rec=4;resUrl=http%3A%2F%2Fstatutes.agc.gov.sg%2Faol%2Fsearch%2Fsummary%2Fresults.w3p%3Bquery%3DStatus%253Acurinforce%2520Type%253Aact,sl%2520Content%253A%2522smart%2522;whole=yes.

    [12] Personal Data Protection Singapore-Annual Report 2014-15, https://www.pdpc.gov.sg/docs/default-source/Reports/pdpc-ar-fy14---online.pdf.

    [13] Balancing Innovation and Personal Data Protection, https://www.ida.gov.sg/Tech-Scene-News/Tech-News/Digital-Government/2015/9/Balancing-innovation-and-personal-data-protection.

    [14] Department of Statistics Singapore- Free Access to More Data on the SingStat Website from 1 March 2015, http://www.singstat.gov.sg/docs/default-source/default-document-library/news/press_releases/press27022015.pdf.

    [15] Singapore Marks 50th Birthday With Open Data Contest, https://blog.hootsuite.com/singapore-open-data/.

    [16] Virtual Singapore - a 3D city model platform for knowledge sharing and community collaboration, http://www.sla.gov.sg/News/tabid/142/articleid/572/category/Press%20Releases/parentId/97/year/2014/Default.aspx.

    [17] Internet of Things (IoT) Standards Outline to Support Smart Nation Initiative Unveiled, http://www.spring.gov.sg/NewsEvents/PR/Pages/Internet-of-Things-(IoT)-Standards-Outline-to-Support-Smart-Nation-Initiative-Unveiled-20150812.aspx.

    [18] Information Technology Standards Committee, https://www.itsc.org.sg/technical-committees/internet-of-things-technical-committee-iottc and https://www.ida.gov.sg/~/media/Files/Infocomm%20Landscape/iN2015/Reports/realisingthevisionin2015.pdf.

    [19] Government of Dubai-2021 Dubai Plan-Purpose, http://www.dubaiplan2021.ae/the-purpose/.

    [20] Government of Dubai-2021 Dubai Plan, http://www.dubaiplan2021.ae/dubai-plan-2021/.

    [21] Smart Dubai, http://www.smartdubai.ae/foundation_layers.php.

    [22] The Internet of Things: Connections for People’s happiness, http://www.smartdubai.ae/story021002.php.

    [23] Smart Dubai - Current State, http://www.smartdubai.ae/current_state.php.

    [24] Smart Dubai - District Guidelines, http://smartdubai.ae/districtguidelines/Smart_Dubai_District_Guidelines_Public_Brief.pdf.

    [25] See; http://roadmap.smartdubai.ae/search-services-public.php and http://roadmap.smartdubai.ae/search-initiatives-public.php.

    [26] Smart Dubai-Smart District Guidelines, http://smartdubai.ae/districtguidelines/Smart_Dubai_District_Guidelines_Public_Brief.pdf.

    [27] Dubai Ruler issues new laws to further enhance the organisational structure and legal framework of Dubai Smart City, https://www.wam.ae/en/news/emirates/1395288828473.html.

    [28] See: http://slc.dubai.gov.ae/en/AboutDepartment/News/Lists/NewsCentre/DispForm.aspx?ID=147&ContentTypeId=0x01001D47EB13C23E544893300E8367A23439 and http://www.smartdubai.ae/dubai_data.php.

    [29] Dubai first city to trial ITU key performance indicators for smart sustainable cities, http://www.itu.int/net/pressoffice/press_releases/2015/12.aspx#.VtaYtlt97IU.

    [30] Smart Dubai Benchmark Report 2015 Executive Summary, http://smartdubai.ae/bmr2015/methodology-public.php.

    [31] Building a Smart + Equitable City, http://www1.nyc.gov/assets/forward/documents/NYC-Smart-Equitable-City-Final.pdf

    [32] Building a Smart + Equitable City, http://www1.nyc.gov/site/forward/innovations/smartnyc.page.

    [33] One New York: The Plan for a Strong and Just City, http://www1.nyc.gov/html/onenyc/about.html

    [34] Open Data for All, http://www1.nyc.gov/assets/home/downloads/pdf/reports/2015/NYC-Open-Data-Plan-2015.pdf.

    [35] 7 public projects that are turning New York into a “smart city”, http://www.builtinnyc.com/2015/11/24/7-projects-are-turning-new-york-futuristic-technology-hub.

    [36] LinkNYC, https://www.link.nyc/faq.html#privacy.

    [37] ANSI Network on Smart and Sustainable Cities, http://www.ansi.org/standards_activities/standards_boards_panels/anssc/overview.aspx?menuid=3.

    [38] IoT-Enabled Smart City Framework, http://publicaa.ansi.org/sites/apdl/Documents/News%20and%20Publications/Links%20Within%20Stories/IoT-EnabledSmartCityFrameworkWP20160213.pdf.

    [39] Smart London (UK) Plan: Digital Technologies, London and Londoners, http://munkschool.utoronto.ca/ipl/files/2015/03/KleinmanM_Smart-London-UK-v5_30AP2015.pdf.

    [40] Smart London Plan, http://www.london.gov.uk/sites/default/files/smart_london_plan.pdf.

    [41] Smart London Plan, http://www.london.gov.uk/sites/default/files/smart_london_plan.pdf.

    [42] Open Data White Paper, https://data.gov.uk/sites/default/files/Open_data_White_Paper.pdf.

    [43] London Datastore-Privacy, http://data.london.gov.uk/about/privacy/.

    [44] Future Cities Standards Centre in London, https://eu-smartcities.eu/commitment/5937.

    [45] Smart London Plan, http://www.london.gov.uk/sites/default/files/smart_london_plan.pdf.

    [46] Smart Cities background paper, October 2013, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/246019/bis-13-1209-smart-cities-background-paper-digital.pdf.

    [47] Presentation of 2015 Blueprint of Seoul as ‘State-of-the-art Smart City’, http://english.seoul.go.kr/presentation-of-2015-blueprint-of-seoul-as-%E2%80%98state-of-the-art-smart-city%E2%80%99/.

    [48] “Policy Where There is Demand,” Seoul Utilizes Big Data, http://english.seoul.go.kr/policy-demand-seoul-utilizes-big-data/

    [49] Seoul’s “Owl Bus” Based on Big Data Technology, http://www.citiesalliance.org/sites/citiesalliance.org/files/Seoul-Owl-Bus-11052014.pdf

    [50] Seoul Launches “Global Digital Seoul 2020”, http://english.seoul.go.kr/seoul-launches-global-digital-seoul-2020/

    [51] Smart Seoul 2015, http://english.seoul.go.kr/wp-content/uploads/2014/02/SMART_SEOUL_2015_41.pdf

    [52] Disclosing public data through the Seoul Open Data Plaza, http://english.seoul.go.kr/policy-information/key-policies/informatization/seoul-open-data-plaza/

    [53] Data protection in South Korea: overview, http://uk.practicallaw.com/2-579-7926.

    [54] Smart Cities Seoul: a case study, https://www.itu.int/dms_pub/itu-t/oth/23/01/T23010000190001PDFE.pdf.

    [55] Smart Cities-ISO, http://www.iso.org/iso/livelinkgetfile-isocs?nodeid=16193764.

     

    List of Blocked 'Escort Service' Websites

    by Pranesh Prakash last modified Jun 15, 2016 08:33 AM
    Here is the full list of URLs that Indian ISPs were asked to block on Monday, June 13, 2016.

    On April 20, 2016, DNA carried a report on a PIL seeking action against advertisements for prostitution in newspapers and on websites. That report noted that the Mumbai Police had obtained an order from a magistrate's court to block 174 objectionable websites, and had sent a list to the "Group Coordinator (Cyber Laws)" within the Department of Electronics and IT. On June 13, 2016, some news agencies carried reports about the Ministry of Communications and IT having ordered ISPs to block 240 websites.

    As far as we know, the Mumbai Police has not proceeded against any of the people who run these websites, whose phone numbers are available, and whose names and addresses are also available in many cases through WHOIS queries on the domain names.

    Unfortunately, the government does not make available publicly the list of websites they have ordered ISPs to block. Given that knowledge of what is censored by the government is crucial in a democracy, we are publishing the entire list of blocked websites.

    Those websites that use TLS (i.e., those with 'https') still appear to be available on multiple Indian ISPs, and the others can be accessed by using a proxy or VPN located outside India, or by using Tor.
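    One rough way to check such claims is to probe each URL over both plain HTTP and HTTPS and record the outcome, since ISP-level URL blocking often treats TLS traffic differently. The sketch below is a minimal, illustrative probe, not the method used for this article; the sample hosts are placeholders to be replaced with entries from the list.

        # Minimal reachability probe (illustrative sketch; sample hosts are placeholders).
        import urllib.request
        import urllib.error

        SAMPLE_HOSTS = ["example.com", "example.org"]  # substitute entries from the list below

        def probe(host, scheme):
            """Describe what happens when scheme://host is fetched."""
            url = f"{scheme}://{host}"
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return f"{url} -> HTTP {resp.status}"
            except urllib.error.HTTPError as err:
                return f"{url} -> HTTP {err.code}"
            except Exception as err:  # DNS failure, connection reset, timeout, TLS error
                return f"{url} -> {type(err).__name__}: {err}"

        if __name__ == "__main__":
            for host in SAMPLE_HOSTS:
                for scheme in ("http", "https"):
                    print(probe(host, scheme))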

    Notes:

    • The list circulated to ISPs has two sub-lists, numbered from 1-174 (but containing 175 entries, owing to a numbering mistake) and 1-64, for a total of 239 URLs.
    • 4 URLs are repeated in the list ("www.salini.in/navi-mumbai-independent-escort-service.php", "exmumbai.in", "www.mansimathur.in/pinkyagarwal", "www.mumbaifunclubs.com").
    • For one website, both the domain name and a specific web page within it are listed ("www.mumbaiwali.in" and "www.mumbaiwali.in/navi-mumbai-escort-service.php").
    • One URL is incomplete (No. 214: "www.independentescortservicemumbai.com/mumbai%20escort%20servi..")
    • There are thus 235 unique URLs, targeting 234 websites and web pages (a rough de-duplication sketch follows below).
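    As an illustration of how such counts might be checked, below is a minimal Python sketch that normalises and de-duplicates a list of URLs. The normalisation rules (lower-casing, stripping "http(s)://" prefixes and trailing slashes) are our assumptions for illustration, not the method actually used to compile the figures above.

        # Minimal de-duplication sketch (illustrative only; normalisation rules are assumptions).
        def normalise(url: str) -> str:
            url = url.strip().lower()
            for prefix in ("https://", "http://"):
                if url.startswith(prefix):
                    url = url[len(prefix):]
            return url.rstrip("/")

        def unique_urls(raw_list):
            seen, unique = set(), []
            for raw in raw_list:
                norm = normalise(raw)
                if norm and norm not in seen:
                    seen.add(norm)
                    unique.append(norm)
            return unique

        # Example with two of the repeated entries noted above:
        raw = ["exmumbai.in/", "exmumbai.in",
               "www.mansimathur.in/pinkyagarwal", "www.mansimathur.in/pinkyagarwal"]
        print(len(unique_urls(raw)))  # prints 2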




    Full List of Blocked URLs

    1. www.sterlingbioscience.com
    2. rawpoint.biz
    3. www.onemillionbabes.com
    4. www.mumbaihotcollection.in
    5. simranoberoi.in
    6. rubinakapoor.biz
    7. talita.biz
    8. www.mumbaiescortsagency.net
    9. www.mumbaifunclubs.com
    10. www.alishajain.co.in
    11. www.ankitatalwar.co.in
    12. https://www.jennyarora.ind.in
    13. www.riya-kapoor.com
    14. shneha.in
    15. missinimi.in
    16. www.mumbaiglamour.in
    17. kalyn.in
    18. www.saumyagiri.co.in/city/mumbai/
    19. bookerotic.com
    20. www.divyamalik.in
    21. www.suhanisharma.co.in
    22. www.ruhi.biz
    23. umbaiqueens.in
    24. www.aliyaghosh.com
    25. priyasen.in
    26. www.highprofilemumbaiescorts.co.in
    27. charmingmumbai.com
    28. www.poojamehata.in
    29. kiiran.in/
    30. mansikher.in
    31. www.newmumbaiescorts.in
    32. www.mumbaifunclubs.com
    33. www.punarbas.in
    34. www.discreetbabes.in
    35. www.alisharoy.in
    36. www.arpitarai.in
    37. www.nidhipatel.in
    38. navimumbailescort.com
    39. www.zoyaescorts.com
    40. www.juhioberoi.in
    41. shoniya.in
    42. panchibora.in
    43. rehu.in
    44. www.nehaanand.com
    45. www.aditiray.co.in
    46. www.rakhibajaj.in
    47. www.alianoidaescorts.in
    48. www.sobiya.in
    49. www.alishaparul.in
    50. mumbai-escorts.leathercurrency.com
    51. ankita-ahuja.in
    52. www.yamika.in
    53. mumbailescort.co
    54. www.ranjika.in
    55. www.aditiray.com
    56. www.alinamumbailescort.in
    57. www.sonikaa.com/services/
    58. riyamodel.in
    59. mumbai-escorts.info
    60. soonam.in
    61. www.sejalthakkar.com
    62. www.yomika-tandon.in
    63. www.asika.in
    64. www.siyasharma.org/
    65. www.rubikamathur.in
    66. www.mumbaiescortslady.com
    67. www.sexyshe.in
    68. www.indepandentescorts.com
    69. www.saanvichopra.co.in
    70. www.goswamipatel.in
    71. ojaloberoi.in
    72. www.naincy.in
    73. www.sonyamehra.com
    74. www.pinkgrapes.in
    75. anjalitomar.in/
    76. www.nishakohli.com/
    77. sagentia.co.in
    78. mumbai.vivastreet.co.in/escort+mumbai
    79. www.deseescortgirls.in
    80. guides.wonobo.com/mumbai/mumbai-escorts-service/.4299
    81. jasmineescorts.com
    82. www.shalinisethi.com
    83. www.highclassmumbailescort.com
    84. www.vipescortsinmumbai.com
    85. www.mumbaiescorts69.co.in
    86. monikabas.co.in
    87. www.riyasehgal.com
    88. onlycelebrity.in
    89. www.greatmumbaiescorts.com/escort-service-mumbai.html
    90. www.aishamumbailescort.com
    91. www.jennydsouzaescort.com
    92. www.desifun.in
    93. www.siyaescort.co.in
    94. masti—escort.in
    95. www.sofya.in
    96. www.mumbaiwali.in/navi-mumbai-escort-service.php
    97. www.mumbaiwali.in
    98. www.calldaina.com
    99. www.mumbaiescortsservice.co.in
    100. www.escortsgirlsinmumbai.com
    101. www.passionmumbai.escorts.com
    102. www.nehakapoor.in
    103. meerakapoor.com
    104. www.dianamumbaiescorts.net .in
    105. www.allmumbailescort.in
    106. www.rakhiarora.in
    107. www.ritikasingh.com
    108. www.rekhapatil.com
    109. www.mumbaidolls.com
    110. www.piapandey.com
    111. www.mumbaicuteescorts.in
    112. www.mumbaiescortssevice.com
    113. www.onlycelebrity.com
    114. www.meetescortservice.com
    115. onlyoneescorts.com
    116. simirai.org
    117. www.riyamumbaiescorts.in
    118. www.neharana.in
    119. www.tanyaroy.com
    120. www.mumbaihiprofilegirls.in
    121. www.sexyescortsmumbai.in
    122. www.sexymumbai.escorts.com
    123. www.four-seasons—escort.in
    124. www.mumbaiescortsgirl.com
    125. www.vdreamescorts.com
    126. www.passionatemumbaiescorts.in
    127. www.payalmalhotra.in
    128. www.shrutisinha.com
    129. www.juliemumbaiescorts.com
    130. www.indiasexservices.com/mumbai.html
    131. www.mumbai-escorts.co.in
    132. www.aliyamumbaiescorts.net.in
    133. shivaniarora.co.in/escort–service-mumbai.html
    134. www.pinkisingh.com
    135. soyam.in
    136. www.arpitaray.com
    137. www.localescorts.in
    138. www.jennifermumbaiescorts.com
    139. www.yanaroy.com
    140. escorts18.in/mumbai—escorts.html
    141. www.tinamumbaiescorts.com
    142. www.mumbaijannatescorts.com
    143. www.deepikaroy.com
    144. www.nancy.co.in
    145. www.pearlpatel.in
    146. 30minsmumbaiescorts.in
    147. www.datinghopes.com
    148. https://www.riyaroy.com/services.html
    149. www.sonalikajain.com
    150. www.zainakapoor.co.in
    151. kavyajain.in
    152. www.kinnu.co.in
    153. exmumbai.in/
    154. www.mansimathur.in/pinkyagarwal
    155. exmumbai.in
    156. www.mansimathur.in/pinkyagarwal
    157. www.devikabatra.in
    158. katlin.in
    159. riyaverma.in
    160. escortsinindia.co/
    161. www.snehamumbaiescorts.in
    162. shimi.in
    163. www.mumbaiescortsforu.com/about
    164. www.chetnagaur.co.in/chetna-gaur.html
    165. www.escortspoint.in
    166. www.rupalikakkar.in
    167. www.hemangisinha.co.in
    168. 1escorts.in/location/mumbai.html
    169. www.salini.in/navi-mumbai-independent—escort-service.php
    170. www.salini.in/navi-mumbai-independent-escort-service.php
    171. www.mumbaibella.in
    172. mohitescortservicesmumbai.com
    173. www.anchu.in
    174. www.aliyaroy.co.in
    175. jaanu.co.in/mumbai-escorts-service-call-girls.html
    176. www.andyverma.com
    177. dreams-come-true.biz
    178. feel–better.biz
    179. jellyroll.biz
    180. dreamgirlmumbai.com
    181. role-play.biz
    182. mansi—mathur.com
    183. www.zarinmumbaiescorts.com
    184. mymumbai.escortss.com
    185. www.goldentouchescorts.com
    186. www.mumbaipassion.biz
    187. ishitamalhotra.com
    188. happy-ending.biz
    189. juicylips.biz
    190. www.escortsmumbai.name
    191. www.kirstygbasai.net
    192. www.hiremumbaiescorts.com
    193. www.meeraescorts.com/mumbai-escorts.php
    194. 3–5–7star.biz
    195. www.pranjaltiwari.com
    196. www.richagupta.biz
    197. way2heaven.biz
    198. piya.co/
    199. pinkflowers.info
    200. www.beautifulmumbaiescorts.com
    201. www.bestescortsinmumbai.com/charges-html
    202. www.mumbaiescorts.me
    203. www.tanikatondon.com
    204. www.escortsinmumbai.biz
    205. www.escortgirlmumbai.com
    206. www.mumbaicallgrils.com
    207. www.quickescort4u.com
    208. www.mayamalhotra.com
    209. www.legal-escort.com
    210. escortsbaba.com/mumbai-escorts.html
    211. rupa.biz
    212. www.mumbaiescorts.agency/erotic-service-mumbai.html
    213. www.escortscelebrity.com
    214. www.independentescortservicemumbai.com/mumbai%20escort%20servi..
    215. garimachopra.com
    216. kajalgupta.biz
    217. lipkiss.site
    218. aanu.in
    219. bombayescort.in
    220. hotkiran.co.in
    221. khushikapoor.in
    222. joyapatel.in
    223. rici.in
    224. aaditi.in
    225. andheriescorts.org.in
    226. www.jiyapatel.in
    227. spicymumbai.in
    228. rimpyarora.in
    229. lovemaking.co.in
    230. riyadubey.co.in
    231. escortservicesmumbai.in
    232. mumbaiescorts.co.in
    233. midnightprincess.in/
    234. vashiescorts.co.in/
    235. angee.in/
    236. www.rozakhan.in/
    237. www.mumbaiescortsvilla.in/
    238. kylie.co.in/
    239. escortservicemumbai.co.in

    Jurisdiction: The Taboo Topic at ICANN

    by Pranesh Prakash last modified Jun 29, 2016 07:51 AM
    The "IANA Transition" that is currently underway is a sham since it doesn't address the most important question: that of jurisdiction. This article explores why the issue of jurisdiction is the most important question, and why it remains unaddressed.

    In March 2014, the US government announced that they were going to end the contract they have with ICANN to run the Internet Assigned Numbers Authority (IANA), and hand over control to the “global multistakeholder community”. They insisted that the plan for transition had to come through a multistakeholder process and have stakeholders “across the global Internet community”.

    Why is the U.S. government removing the NTIA contract?

    The main reason for the U.S. government's action is that it will get rid of a political thorn in the U.S. government's side: keeping the contract allows them to be called out as having a special role in Internet governance (with the Affirmation of Commitments between the U.S. Department of Commerce and ICANN, the IANA contract, and the cooperative agreement with Verisign), and engaging in unilateralism with regard to the operation of the root servers of the Internet naming system, while repeatedly declaring that they support a multistakeholder model of Internet governance.

    This contradiction is what they are hoping to address. Doing away with the NTIA contract will also increase — ever so marginally — ICANN’s global legitimacy: this is something that world governments, civil society organizations, and some American academics have been asking for nearly since ICANN’s inception in 1998. For instance, here are some demands made in a declaration by the Civil Society Internet Governance Caucus at WSIS, in 2005:

    “ICANN will negotiate an appropriate host country agreement to replace its California Incorporation, being careful to retain those aspects of its California Incorporation that enhance its accountability to the global Internet user community. "ICANN's decisions, and any host country agreement, must be required to comply with public policy requirements negotiated through international treaties in regard to, inter alia, human rights treaties, privacy rights, gender agreements and trade rules. … "It is also expected that the multi-stakeholder community will observe and comment on the progress made in this process through the proposed [Internet Governance] Forum."

    In short: the objective of the transition is political, not technical. In an ideal world, we should aim at reducing U.S. state control over the core of the Internet's domain name system.1

    It is our contention that U.S. state control over the core of the Internet's domain name system is not being removed by the transition that is currently underway.

    Why is the Transition Happening Now?

    Despite the U.S. government having given commitments in the past that the IANA transition would be finished by "September 30, 2000" (the White Paper on Management of Internet Names and Addresses states: "The U.S. Government would prefer that this transition be complete before the year 2000. To the extent that the new corporation is established and operationally stable, September 30, 2000 is intended to be, and remains, an 'outside' date.") and later by the "fall of 2006",2 those turned out to be empty promises. This time, however, the transition seems to be going through, unless the U.S. Congress manages to halt it.

    However, in order to answer the question of "why now?" fully, one has to look a bit at the past.

    In 1998, through the White Paper on Management of Internet Names and Addresses the U.S. government asserted its control over the root, and asserted — some would say arrogated to itself — the power to put out contracts for both the IANA functions as well as the 'A' Root (i.e., the Root Zone Maintainer function that Network Solutions Inc. then performed, and continues to perform to date in its current avatar as Verisign). The IANA functions contract — a periodically renewable contract — was awarded to ICANN, a California-based non-profit corporation that was set up exclusively for this purpose, but which evolved around the existing IANA (to placate the Internet Society).

    Meanwhile, of course, there were criticisms of ICANN from multiple foreign governments and civil society organizations. Further, despite it being a California-based non-profit on contract with the government, domestically within the U.S., there was pushback from constituencies that felt that more direct U.S. control of the DNS was important.

    As Goldsmith and Wu summarize:

    "Milton Mueller and others have shown that ICANN’s spirit of “self-regulation” was an appealing label for a process that could be more accurately described as the U.S. government brokering a behind-the-scenes deal that best suited its policy preferences ... the United States wanted to ensure the stability of the Internet, to fend off the regulatory efforts of foreign governments and international organizations, and to maintain ultimate control. The easiest way to do that was to maintain formal control while turning over day-to-day control of the root to ICANN and the Internet Society, which had close ties to the regulation-shy American technology industry." [footnotes omitted]

    And that brings us to the first reason that the NTIA announced the transition in 2014, rather than earlier.

    ICANN Adjudged Mature Enough

    The NTIA now sees ICANN as being mature enough: the final transition was announced 16 years after ICANN's creation, and complaints about ICANN and its legitimacy had largely died down in the international arena in that time. Nowadays, governments across the world send their representatives to ICANN, thus legitimizing it. States have largely been satisfied by participating in the Governmental Advisory Committee, which, as its name suggests, only has advisory powers. Further, unlike in the early days, there is no serious push for states assuming control of ICANN. Of course they grumble about the ICANN Board not following their advice, but no government, as far as I am aware, has walked out or refused to participate.

    L'affaire Snowden

    Many within the United States, and some outside it, believe that the United States not only plays an exceptional role in the running of the Internet, by dint of historical development and the dominance of American companies, but that it ought to have an exceptional role because it is the best country to exercise 'oversight' over 'the Internet'. This belief often comes from clueless commentators, from dinosaurs of the Internet era like American IP lawyers and American 'homeland' security hawks, from Jones Day, who are ICANN's lawyers, and from other jingoists and the policymakers who are controlled by these narrow-minded interests.

    The Snowden revelations were, in that way, a godsend for the NTIA, as they allowed it a fig-leaf of international criticism with which to counter these domestic critics and carry on with a transition that it had been seeking to put into motion for a while. The Snowden revelations led Dilma Rousseff, President of Brazil, to state in September 2013, at the 68th U.N. General Assembly, that Brazil would "present proposals for the establishment of a civilian multilateral framework for the governance and use of the Internet", and, as Diego Canabarro points out, this catalysed the U.S. government and the technical community into taking action.

    Given this context, a few months after the Snowden revelations, the so-called I* organizations met — seemingly with the blessing of the U.S. government3 — in Montevideo, and put out a 'Statement on the Future of Internet Governance' that sought to link the Snowden revelations on pervasive surveillance with the need to urgently transition the IANA stewardship role away from the U.S. government. Of course, the signatories to that statement knew fully well, as did most of the readers of that statement, that there is no linkage between the Snowden revelations about pervasive surveillance and the operations of the DNS root, but still they, and others, linked them together. Specifically, the I* organizations called for "accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing."

    One could posit the existence of two other contributing factors as well.

    Given political realities in the United States, a transition of this sort is probably best done before an ultra-jingoistic President steps into office.

    Lastly, the ten-yearly review of the World Summit on the Information Society (WSIS) was underway. At the original WSIS (as seen from the civil society declaration quoted above), the issue of US control over the root was a major issue of contention, and it was during that period that the 2006 date for the globalization of ICANN was emphasized by the US government.

    Why Jurisdiction is Important

    Jurisdiction has a great many aspects. Inter alia, these are:

    • Legal sanctions applicable to changes in the root zone (for instance, what happens if a country under US sanctions requests a change to the root zone file?)
    • Law applicable to resolution of contractual disputes with registries, registrars, etc.
    • Law applicable to labour disputes.
    • Law applicable to competition / antitrust law that applies to ICANN policies and regulations.
    • Law applicable to disputes regarding ICANN decisions, such as allocation of gTLDs, or non-renewal of a contract.
    • Law applicable to consumer protection concerns.
    • Law applicable to financial transparency of the organization.
    • Law applicable to corporate condition of the organization, including membership rights.
    • Law applicable to data protection-related policies & regulations.
    • Law applicable to trademark and other speech-related policies & regulations.
    • Law applicable to legal sanctions imposed by a country against another.

    Some of these, but not all, depend on where bodies like ICANN [the policy-making body], the IANA functions operator [the proposed "Post-Transition IANA"], and the root zone maintainer are incorporated or maintain their primary office; others depend on the location of a particular office [for instance, Turkish labour law applies to the ICANN office in Istanbul]; and yet others depend on what's decided by ICANN in contracts (for instance, the resolution of contractual disputes with ICANN, filing of suits with regard to disputes over new generic TLDs, etc.).

    However, an issue like sanctions, for instance, depends on where ICANN, the PTI, and the Root Zone Maintainer are incorporated and maintain their primary offices.

    As Milton Mueller notes, the current IANA contract "requires ICANN to be incorporated in, maintain a physical address in, and perform the IANA functions in the U.S. This makes IANA subject to U.S. law and provides America with greater political influence over ICANN."

    He further notes that:

    While it is common to assert that the U.S. has never abused its authority and has always taken the role of a neutral steward, this is not quite true. During the controversy over the .xxx domain, the Bush administration caved in to domestic political pressure and threatened to block entry of the domain into the root if ICANN approved it (Declaration of the Independent Review Panel, 2010). It took five years, an independent review challenge and the threat of litigation from a businessman willing to spend millions to get the .xxx domain into the root.

    Thus it is clear that even if the NTIA's role in the IANA contract goes away, jurisdiction remains an important issue.

    U.S. Doublespeak on Jurisdiction

    In March 2014, the NTIA finally announced that it would hand over the reins to “the global multistakeholder community”. It laid down two procedural conditions: that the transition proposal be developed by stakeholders across the global Internet community, and that it have broad community consensus. It also proposed five substantive conditions that any proposal must meet:

    • Support and enhance the multistakeholder model;
    • Maintain the security, stability, and resiliency of the Internet DNS;
    • Meet the needs and expectations of the global customers and partners of the IANA services; and,
    • Maintain the openness of the Internet.
    • Must not replace the NTIA role with a solution that is government-led or an inter-governmental organization.

    In that announcement there is no explicit restriction on the jurisdiction of ICANN (whether it relates to its incorporation, the resolution of contractual disputes, resolution of labour disputes, antitrust/competition law, tort law, consumer protection law, privacy law, or speech law, and more, all of which impact ICANN and many, but not all, of which are predicated on the jurisdiction of ICANN’s incorporation), the jurisdiction(s) of the IANA Functions Operator(s) (i.e., which executive, court, or legislature’s orders it would need to obey), or the jurisdiction of the Root Zone Maintainer (i.e., which executive, court, or legislature’s orders it would need to obey).

    However, Mr. Larry Strickling, the head of the NTIA, in his testimony before the U.S. House Subcommittee on Communications and Technology, made it clear that,

    “Frankly, if [shifting ICANN or IANA jurisdiction] were being proposed, I don't think that such a proposal would satisfy our criteria, specifically the one that requires that security and stability be maintained.”

    Possibly, that argument made sense in 1998, due to the significant concentration of DNS expertise in the United States. However, in 2015, that argument is hardly convincing, and is frankly laughable.4

    Targeting that remark, at ICANN 54 in Dublin, we asked Mr. Strickling:

    "So as we understand it, the technical stability of the DNS doesn't necessarily depend on ICANN's jurisdiction being in the United States. So I wanted to ask would the US Congress support a multistakeholder and continuing in the event that it's shifting jurisdiction."

    Mr. Strickling's response was:

    "No. I think Congress has made it very clear and at every hearing they have extracted from Fadi a commitment that ICANN will remain incorporated in the United States. Now the jurisdictional question though, as I understand it having been raised from some other countries, is not so much jurisdiction in terms of where ICANN is located. It's much more jurisdiction over the resolution of disputes.

    "And that I think is an open issue, and that's an appropriate one to be discussed. And it's one I think where ICANN has made some movement over time anyway.

    "So I think you have to ... when people use the word jurisdiction, we need to be very precise about over what issues because where disputes are resolved and under what law they're resolved, those are separate questions from where the corporation may have a physical headquarters."

    As we have shown above, jurisdiction is not only about the jurisdiction of "resolution of disputes", but also, as Mueller reminds us, about the requirement that ICANN (and now, the PTI) be "incorporated in, maintain a physical address in, and perform the IANA functions in the U.S. This makes IANA subject to U.S. law and provides America with greater political influence over ICANN."

    In essence, the U.S. government has said that it would veto the transition if the jurisdiction of ICANN's or PTI's incorporation were to move out of the U.S., and it can prevent such a move even after the transition, since as things stand ICANN and PTI will still come within the U.S. Congress's jurisdiction.

    Why Has the ICG Failed to Consider Jurisdiction?

    Will the ICG proposal or the proposed new ICANN by-laws reduce existing U.S. control? No, they won't. (In fact, as we will argue below, the proposed new ICANN by-laws make this problem even worse.) The proposal by the names community ("the CWG proposal") still has a requirement (in Annex S) that the Post-Transition IANA (PTI) be incorporated in the United States, and a similar suggestion is hidden away in a footnote. Further, the proposed by-laws for ICANN include the requirement that PTI be a California corporation. There was no specific discussion of this issue, nor any documented community agreement on the jurisdiction of PTI's incorporation.

    Why wasn't there greater discussion and consideration of this issue? For two reasons: first, many argued that the transition would be vetoed by the U.S. government and the U.S. Congress if ICANN and PTI were not to remain in the U.S.; second, the ICANN-formed ICG saw the US government’s actions very narrowly, as though the government were acting in isolation, ignoring the rich dialogue and debate about the transition that has gone on since the incorporation of ICANN itself.

    While it would be no one’s case that political considerations should be given greater weightage than technical considerations such as security, stability, and resilience of the domain name system, it is shocking that political considerations have been completely absent in the discussions in the number and protocol parameters communities, and have been extremely limited in the discussions in the names community. This is even more shocking considering that the main reason for this transition is, as has been argued above, political.

    It can also be argued that certain IANA functions, such as the Root Zone Management function, have considerable political implications. It is imperative that the political nature of these functions is duly acknowledged and dealt with, in accordance with the wishes of the global community. In the current process, the political aspects of the IANA functions have been completely overlooked and sidelined. It is important to note that this transition has not been necessitated by any technical considerations; it is primarily motivated by political and legal considerations. However, the questions that the ICG asked the customer communities to consider were solely technical. The communities could have chosen to look beyond those questions, but they did not. For instance, while the IANA customer community proposals reflected on existing jurisdictional arrangements, they did not reflect on what the jurisdictional arrangements should be post-transition, even though this is one of the questions at the heart of the entire transition. There were no discussions or decisions as to the jurisdiction of the Post-Transition IANA: the Accountability CCWG's lawyers, Sidley Austin, recommended that the PTI ought to be a California non-profit corporation, and this finds mention in a footnote without even having been debated by the "global multistakeholder community", and subsequently in the proposed new by-laws for ICANN.

    Why the By-Laws Make Things Worse & Why "Work Stream 2" Can't Address Most Jurisdiction Issues

    The by-laws could simply have stayed silent on the matter of what law PTI would be incorporated under, but instead they make the requirement that PTI be a California non-profit public benefit corporation part of the fundamental by-laws, which are close to impossible to amend.

    While "Work Stream 2" (the post-transition work related to improving ICANN's accountability) has jurisdiction as a topic of consideration, its scope must necessarily exclude any consideration of shifting the jurisdiction of ICANN's incorporation, since all of the work done as part of the CCWG Accountability's "Work Stream 1", which is now reflected in the proposed new by-laws, assumes Californian jurisdiction (including the legal model of the "Empowered Community"). Is ICANN prepared to re-do all the work done in WS1 in WS2 as well? If the answer is yes, then the issue of jurisdiction can actually be addressed in WS2. If the answer is no (and realistically it is), then the issue of jurisdiction can only be very partially addressed in WS2.

    Keeping this in mind, we recommended specific changes to the by-laws, all of which were rejected by the CCWG's lawyers.

    The Transition Plan Fails the NETmundial Statement

    The NETmundial Multistakeholder Document, which was an outcome of the NETmundial process, states:

    In the follow up to the recent and welcomed announcement of US Government with regard to its intent to transition the stewardship of IANA functions, the discussion about mechanisms for guaranteeing the transparency and accountability of those functions after the US Government role ends, has to take place through an open process with the participation of all stakeholders extending beyond the ICANN community

    [...]

    It is expected that the process of globalization of ICANN speeds up leading to a truly international and global organization serving the public interest with clearly implementable and verifiable accountability and transparency mechanisms that satisfy requirements from both internal stakeholders and the global community.

    The active representation from all stakeholders in the ICANN structure from all regions is a key issue in the process of a successful globalization.

    As our past analysis has shown, the IANA transition process and the discussions on the mailing lists that shaped it were neither global nor multistakeholder. The DNS industry represented in ICANN is largely US-based. 3 in 5 registrars are from the United States of America, whereas less than 1% of ICANN-registered registrars are from Africa. Two-thirds of the Business Constituency in ICANN is from the USA. While ICANN-the-corporation has sought to become more global, the ICANN community has remained insular, and this will not change until the commercial interests involved in ICANN can become more diverse, reflecting the diversity of users of the Internet, and a TLD like .COM can be owned by a non-American corporation and the PTI can be a non-American entity.

    What We Need: Jurisdictional Resilience

    It is no one's case that the United States is less fit than any other country as a base for ICANN, PTI, or the Root Zone Maintainer, or even as the headquarters for 9 of the world's 12 root server operators (Verisign runs both the A and J root servers). However, just as a multiplicity of root servers is important for ensuring the technical resilience of the DNS (as the uptake of anycast by root server operators shows), it is equally important to insulate core DNS functions from the political pressures of the country or countries where core DNS infrastructure is legally situated, and to ensure diversity in terms of legal jurisdiction.

    Towards this end, we at CIS have pushed for the concept of "jurisdictional resilience", encompassing three crucial points:

    • Legal immunity for core technical operators of Internet functions (as opposed to policymaking venues) from legal sanctions or orders from the state in which they are legally situated;
    • Division of core Internet operators among multiple jurisdictions; and
    • Jurisdictional division of policymaking functions from technical implementation functions.

    Of these, the most important is the limited legal immunity (akin to a greatly limited form of the immunity that UN organizations enjoy from the laws of their host countries). This kind of immunity could be provided through a variety of means: a host-country agreement, a law passed by the legislature, a U.N. General Assembly resolution, a U.N.-backed treaty, or other such instruments. We are currently investigating which of these options would be best.

    And apart from limited legal immunity, distribution of jurisdictional control is also valuable. As we noted in our submission to the ICG in September 2015:

    Following the above precepts would, for instance, mean that the entity that performs the role of the Root Zone Maintainer should not be situated in the same legal jurisdiction as the entity that functions as the policymaking venue. This would in turn mean that either the Root Zone Maintainer function be taken up by Netnod (Sweden-headquartered) or the WIDE Project (Japan-headquartered) [or RIPE-NCC, headquartered in the Netherlands], or that if the IANA Functions Operator(s) is to be merged with the RZM, then the IFO be relocated to a jurisdiction other than those of ISOC and ICANN. This, as has been stated earlier, has been a demand of the Civil Society Internet Governance Caucus. Further, it would also mean that root zone server operators be spread across multiple jurisdictions (which the creation of mirror servers in multiple jurisdictions will not address).

    However, the issue of jurisdiction seems to be dead-on-arrival, having been killed by the United States government.

    Unfortunately, despite the primary motivation for demands for the IANA transition being the removal of the power the U.S. government exercises over the core of the Internet's operations in the form of the DNS, what has ended up happening through the IANA transition is that these powers have not only not been removed, but in some ways have been entrenched further! While earlier the U.S. had to specify that the IANA functions operator had to be located in the U.S., now ICANN's by-laws themselves will state that the post-transition IANA will be a California corporation. Notably, while the Montevideo Statement speaks of "globalization" of ICANN and of the IANA functions, as does the NETmundial statement, the NTIA announcement accepting the transition proposals speaks of "privatization" of ICANN, and not "globalization".

    All in all, the "independence" that IANA is gaining from the U.S. is akin to the "independence" that Brazil gained from Portugal in 1822. Dom Pedro was then ruling Brazil as Prince Regent, since his father Dom João VI, King of the United Kingdom of Portugal, Brazil and the Algarves, had returned to Portugal. In 1822, Brazil declared independence from Portugal (which was formally recognized through a treaty in 1825). Even after this "independence", Dom Pedro continued to rule Brazil just as he had before independence, and Dom João VI was given the title of "Emperor of Brazil", aside from being King of the United Kingdom of Portugal and the Algarves. The "independence" didn't make a whit of difference to the self-sufficiency of Brazil: Portugal continued to be its largest trading partner. The "independence" didn't change anything for the nearly 1 million slaves in Brazil, or for the lot of the indigenous peoples of Brazil, none of whom were recognized as "free". It had very little consequence not just in terms of the ground conditions of day-to-day living, but even in political terms.

    Such is the case with the IANA transition: U.S. power over the core functioning of the Domain Name System does not stand diminished after the transition, and it can arguably be said to have become even more entrenched. Meet the new boss: same as the old boss.


    1. It is an allied but logically distinct issue that U.S. businesses — registries and registrars — dominate the global DNS industry, and as a result hold the reins at ICANN.

    2. As Goldsmith & Wu note in their book Who Controls the Internet: "Back in 1998 the U.S. Department of Commerce promised to relinquish root authority by the fall of 2006, but in June 2005, the United States reversed course. “The United States Government intends to preserve the security and stability of the Internet’s Domain Name and Addressing System (DNS),” announced Michael D. Gallagher, a Department of Commerce official. “The United States” he announced, will “maintain its historic role in authorizing changes or modifications to the authoritative root zone file.”

    3. Mr. Fadi Chehadé revealed in an interaction with Indian participants at ICANN 54 that he had a meeting "at the White House" about the U.S. plans for the transition of the IANA contract before he spoke about it during his visit to India in October 2013, which would place his White House visit around the time of the Montevideo Statement.

    4. As an example, NSD, software that is used on multiple root servers, is funded by a Dutch foundation and a Dutch corporation, and written mostly by European coders.

    CIS Submission to TRAI Consultation on Free Data

    by Pranesh Prakash last modified Jul 01, 2016 04:04 PM
    The Telecom Regulatory Authority of India (TRAI) held a consultation on Free Data, for which CIS sent in the following comments.

     

    The Telecom Regulatory Authority of India (TRAI) asked for public comments on free data. Below are the comments that CIS submitted to the four questions that it posed.

     

    Question 1

    Is there a need to have TSP agnostic platform to provide free data or suitable reimbursement to users, without violating the principles of Differential Pricing for Data laid down in TRAI Regulation? Please suggest the most suitable model to achieve the objective.

    Is There a Need for Free Data?

    No, there is no need for free data, just as there is no need for telephony or Internet. However, making provisions for free data would increase the amount of innovation in the Internet and telecom sector, and there is a good probability that it would lead to faster adoption of the Internet, and thus be beneficial in terms of commerce, freedom of expression, freedom of association, and many other ways.

    Thus the question that a telecom regulator should ask is not whether there is a need for TSP agnostic platforms, but whether such platforms are harmful for competition, for consumers, and for innovation. The telecom regulator ought not undertake regulation unless there is evidence to show that harm has been caused or that harm is likely to be caused. In short, TRAI should not follow the precautionary principle, since the telecom and Internet sectors are greatly divergent from environmental protection: the burden of proof for showing that something ought to be prohibited ought to be on those calling for prohibition.

    Goal: Regulating Gatekeeping

    TRAI wouldn’t need to regulate price discrimination or Net neutrality if ISPs were not “gatekeepers” for last-mile access. “Gatekeeping” occurs when a single entity establishes itself as an exclusive route to reach a large number of people and businesses or, in network terms, nodes. It is not possible for Internet services to reach their end customers without passing through ISPs (generally telecom networks). The situation is very different in the middle-mile and for backhaul. Even though anti-competitive terms may exist in the middle-mile, especially given the opacity of terms in “transit agreements”, a packet is usually able to travel through multiple routes if one route is too expensive (even if that is not the shortest network path, and is thus inefficient in a way). However, this multiplicity of routes is generally not possible in the last mile.1 This leaves last mile telecom operators (ISPs) in a position to unfairly discriminate between different Internet services or destinations or applications, while harming consumer choice.

    However, the aim of regulation by TRAI cannot be to prevent gatekeeping, since that is not possible as long as there are a limited number of ISPs. For instance, even by the very act of charging money for access to the Internet, ISPs are guilty of “gatekeeping” since they are controlling who can and cannot access an Internet service that way. Instead, the aim of regulation by TRAI should be to “regulate gatekeepers to ensure they do not use their gatekeeping power to unjustly discriminate between similarly situated persons, content or traffic”, as we proposed in our submission to TRAI (on OTTs) last year.

    Models for Free Data

    There are multiple models possible for free data, none of which TRAI should prohibit unless a model would enable TSPs to abuse their gatekeeping powers.

    Government Incentives For Non-Differentiated Free Data

    The government may opt to require all ISPs to provide free Internet to all at a minimum QoS in exchange for an exemption from paying part of their Universal Service Obligation (USO) contributions, or the government may pay ISPs for such access out of the USO contributions collected.

    TRAI should recommend to DoT that it set up a committee to study the feasibility of this model.

    ISP subsidies

    ISP subsidies of Internet access only make economic sense for the ISP when the following ‘Goldilocks’ condition is met: the experience with the subsidised service is ‘good enough’ for consumers to want to continue to use such services, but ‘bad enough’ for a large number of them to want to move to unsubsidised, paid access. Such subsidies could take forms such as:

    1. Providing free Internet to all at a low speed.
      1. This naturally discriminates against services and applications such as video streaming, but does not technically bar access to them.
    2. Providing free access to the Internet with other restrictions on quality that aren’t discriminatory with respect to content, services, or applications.

    Rewards model

    A TSP-agnostic rewards platform will only come within the scope of TRAI regulation if the platform has some form of agreement with the TSPs, even if that agreement is a collective one. If the rewards platform doesn’t have any agreement with any TSP, then TRAI does not have the power to regulate it. However, if the rewards platform has an agreement with any TSP, it is unclear whether it would be allowed under the Differential Data Tariff Regulation, since clause 3(2) read with paragraph 30 of the Explanatory Memorandum might disallow such an agreement.

    Assuming for the sake of argument that platforms with such agreements are not disallowed, such platforms can engage in either post-purchase credits or pre-purchase credits, or both. In other words, it could be a situation where a person has to purchase a data pack, engage in some activity relating to the platform (answer surveys, use particular apps, etc.) and thereupon get credit of some form transferred to one’s SIM, or it could be a situation where even without purchasing a data pack, a consumer can earn credits and thereupon use those credits towards data.

    The former kind of rewards platform is not as useful when it comes to encouraging people to use the Internet, since only those who already see worth in using the Internet (and can afford it) will purchase a data pack in the first place. The second form, on the other hand, is quite useful, and could be encouraged. However, this second model is not as easily workable, economically, for fixed-line connections, since there is a higher initial investment involved.
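
    To make the distinction concrete, the following sketch models the two flows. It is purely illustrative: the class, the field names, and the assumption that credits are denominated in megabytes are ours, and do not describe any existing rewards platform.

    # A minimal illustrative sketch of the two reward flows discussed above.
    # The class, the field names, and the "credits in megabytes" assumption are
    # purely illustrative; they do not describe any existing platform.
    from dataclasses import dataclass

    @dataclass
    class Subscriber:
        has_paid_data_pack: bool = False
        reward_credits_mb: int = 0  # credits earned on the platform, in megabytes

    def earn_credits(user: Subscriber, credits_mb: int, post_purchase_only: bool) -> None:
        """Credit a subscriber for platform activity (surveys, app usage, etc.)."""
        if post_purchase_only and not user.has_paid_data_pack:
            return  # post-purchase model: only existing data-pack buyers accrue credits
        user.reward_credits_mb += credits_mb

    # The pre-purchase model reaches someone who has never bought a data pack...
    newcomer = Subscriber(has_paid_data_pack=False)
    earn_credits(newcomer, 50, post_purchase_only=False)
    assert newcomer.reward_credits_mb == 50

    # ...whereas under the post-purchase model the same person earns nothing further.
    earn_credits(newcomer, 50, post_purchase_only=True)
    assert newcomer.reward_credits_mb == 50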

    Recharge API

    A recharge API could be fashioned in one of two ways: (1) via the operating system on the phone, allowing a TSP or third parties (whether OTTs or other intermediaries) to transfer credit, bought wholesale, to the SIM card on the phone; or (2) all TSPs providing a recharge API for the use of third parties. Only the second model is likely to result in a “toll-free” experience, since in the first model, as with a rewards platform that requires up-front purchase of data packs, an investment has to be made before that amount is recouped. This is likely to hamper the utility of such a model.

    Further, in the first case, TRAI would probably not have the powers to regulate such transactions, as there would be no need for any involvement by the TSP. If anti-competitive agreements or abuse of dominant position seems to be taking place, it would be up to the Competition Commission of India to investigate.

    However, the second model would have to be overseen by TRAI to ensure that the recharge APIs don’t impose additional costs on OTTs, or unduly harm competition and innovation. For instance, there ought to be an open specification for such an API, which all the TSPs should use in order to reduce the costs on OTTs; a sketch of what such a specification might look like follows. Further, there should be no exclusivity, and no preferential treatment provided to a TSP’s sister concerns or partners.
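
    As a purely hypothetical illustration of what such an open specification could look like, consider the sketch below. The endpoint URL, field names, and authentication scheme are all our assumptions; no such common API currently exists. The point is only that a single, openly specified interface would let an OTT integrate once and reach subscribers on any network, leaving no room for TSP-specific, preferential variants.

    # Hypothetical sketch of a common recharge API; the endpoint, field names and
    # authentication scheme are assumptions, not an existing TSP specification.
    import requests  # widely-used third-party HTTP client library

    RECHARGE_ENDPOINT = "https://api.tsp.example/v1/data-credits"  # hypothetical URL

    def credit_data(subscriber_msisdn: str, megabytes: int,
                    sponsor_id: str, api_key: str) -> dict:
        """Ask a TSP to credit sponsored data to a subscriber's account."""
        payload = {
            "msisdn": subscriber_msisdn,  # the subscriber's mobile number
            "credit_mb": megabytes,       # amount of data being sponsored
            "sponsor_id": sponsor_id,     # identifies the paying OTT or intermediary
        }
        response = requests.post(
            RECHARGE_ENDPOINT,
            json=payload,
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()  # e.g. {"status": "credited", "reference": "..."}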

    “0.example” sites

    Other forms of free data, for instance TSPs choosing not to charge for low-bandwidth traffic, should be allowed, as long as they are not discriminatory and do not impose increased barriers to entry for OTTs. For instance, if a website self-certifies that it is low-bandwidth and optimized for Internet-enabled feature phones, and uses 0.example.tld to signal this (just as wap.* was used for WAP sites and m.* is used for mobile-optimized versions of many sites), then there is no reason why TSPs should be prohibited from not charging for the data consumed by such websites, as long as the TSP does so uniformly and without discrimination. In such cases, the TSP is not harming competition, not harming consumers, and not abusing its gatekeeping powers.
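
    A minimal sketch of how a TSP might apply such a self-certification signal uniformly follows; the check on a "0." hostname prefix is our own assumption about how the convention described above could be implemented, not an existing standard.

    # Illustrative sketch of the self-certification convention described above.
    # The "0." hostname prefix check is an assumption, not an existing standard.
    from urllib.parse import urlsplit

    def is_self_certified_low_bandwidth(url: str) -> bool:
        """True if the site signals a low-bandwidth version via a 0.* hostname,
        analogous to the older wap.* and m.* naming conventions."""
        host = urlsplit(url).hostname or ""
        return host.startswith("0.")

    # Applied uniformly, a TSP would zero-rate any site that self-certifies,
    # without maintaining a list of preferred partners.
    assert is_self_certified_low_bandwidth("http://0.news.example/article")
    assert not is_self_certified_low_bandwidth("http://www.news.example/article")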

    OTT-agnostic free data

    If a TSP decides not to charge for specific forms of traffic (for example, video, or locally-peered traffic) regardless of the Internet service from which that traffic emanates, and as long as it does so with the end customer’s consent, then there is no question of the TSP harming competition, harming consumers, or abusing its gatekeeping powers. There is no reason such schemes should be prohibited by TRAI unless they distort markets and harm innovation.

    Unified marketplace

    One other way to do what is proposed as the “recharge API” model is to create a highly-regulated market where the gatekeeping powers of the ISP are diminished, and the ISP’s ability to leverage its exclusive access to its customers is curtailed. A comparison may be drawn here to the rules that are often set by standard-setting bodies where patents are involved: given that these patents are essential inputs, access to them must be allowed through fair, reasonable, and non-discriminatory licences. Access to the Internet and to common carriers like telecom networks, being even more important (since alternatives exist to particular standards, but not to the Internet itself), must be placed on an even higher pedestal and thus subjected to even stricter regulation to ensure fair competition.

    A marketplace of this sort would impose some regulatory burden on TRAI and constrain innovation by ISPs, but a regulated marketplace harms ISP innovation less than not allowing a market at all.

    At a minimum, such a marketplace must ensure non-exclusivity, non-discrimination, and transparency. Thus, a telecom provider cannot discriminate between any OTTs who want similar access to zero-rating, and a telecom provider cannot prevent any OTT from zero-rating with any other telecom provider. To ensure that telecom providers are actually following these stipulations, transparency is needed.

    Transparency can take one of two forms: transparency to the regulator alone, or transparency to the public. Transparency to the regulator alone would enable OTTs and ISPs to keep the terms of their commercial transactions secret from their competitors, but would enable the regulator, upon request, to ensure that this doesn’t lead to anti-competitive practices. This model would increase the burden on the regulator, but would be more palatable to OTTs and ISPs, and more comparable to the wholesale data market, where the terms of such agreements are strictly-guarded commercial secrets. On the other hand, requiring transparency to the public would reduce the burden on the regulator, and despite coming at the cost of secrecy of commercial terms, it is far preferable.

    Beyond transparency, a regulation could take the form of insisting on standard rates and terms for all OTT players, with differential usage tiers if need be, to ensure that access is truly non-discriminatory. This is how the market is structured on the retail side.

    Since there are transaction costs in individually approaching each telecom provider for such zero-rating, the market would greatly benefit from a single marketplace where OTTs can come and enter into agreements with multiple telecom providers.

    Even in this model, telecom networks will be charging not only on the basis of the number of customers they have, but also on the basis of their exclusive routing to those customers. Further, even under the standard-rates-based single-market model, a particular zero-rated site may be accessible for free from one network but not across all networks, unlike the situation with a toll-free number, in which no such distinction exists.

    To resolve this, the regulator may propose that if an OTT wishes to engage in paid zero-rating, it will need to do so across all networks, since if it doesn’t, there is a risk of providing an unfair advantage to one network over another and increasing the gatekeeper effect rather than decreasing it.

    Question 2

    Whether such platforms need to be regulated by the TRAI or market be allowed to develop these platforms?

    In many cases, TRAI would have no powers over such platforms, so the question of TRAI regulating them does not arise. In all other cases, TRAI can allow the market to develop such platforms, and then see if any of them violates the Discriminatory Data Tariffs Regulation. For the government-incentivised schemes proposed above, TRAI should take proactive measures to get their feasibility evaluated.

    Question 3

    Whether free data or suitable reimbursement to users should be limited to mobile data users only or could it be extended through technical means to subscribers of fixed line broadband or leased line?

    Spectrum is naturally a scarce resource, though technological advances (as described by Cooper’s Law) and more efficient management of spectrum make it less so. We have seen that fixed-line broadband has more or less stagnated for the past many years, while mobile access has increased, so the market-distorting power of fixed-line providers is far less than that of mobile providers. At the same time, competition is far lower in fixed-line Internet access services than in mobile Internet access, and switching costs in fixed-line services are also far higher than in mobile services. Given these differences, the regulation with regard to price discrimination might justifiably differ.

    All in all, for this particular issue, it is unclear why different rules should apply to mobile users and fixed line users.

    Question 4

    Any other issue related to the matter of Consultation.

    None.


    1. In India’s mobile telecom sector, according to a Nielsen study, an estimated 15% of mobile users are multi-SIM users, meaning the “gatekeeping” effect is significantly reduced in both directions: Internet services can reach them via multiple ISPs, and conversely they can reach Internet services via multiple ISPs. See Nielsen, ‘Telecom Transitions: Tracking the Multi-SIM Phenomena in India’, http://www.nielsen.com/in/en/insights/reports/2015/telecom-transitions-tracking-the-multi-sim-phenomena-in-india.html
